Dataset fields and value ranges:
survey_title: string (lengths 19–197)
section_num: int64 (values 3–56)
references: string (lengths 4–1.34M)
section_outline: string (lengths 531–9.08k)
survey_title: User Interface Declarative Models and Development Environments: A Survey
section_num: 14
--- paper_title: ITS: a tool for rapidly developing interactive applications paper_content: The ITS architecture separates applications into four layers. The action layer implements back-end application functions. The dialog layer defines the content of the user interface, independent of its style. Content specifies the objects included in each frame of the interface, the flow of control among frames, and what actions are associated with each object. The style rule layer defines the presentation and behavior of a family of interaction techniques. Finally, the style program layer implements primitive toolkit objects that are composed by the rule layer into complete interaction techniques. This paper describes the architecture in detail, compares it with previous User Interface Management systems and toolkits, and describes how ITS is being used to implement the visitor information system for EXPO '92. --- paper_title: User interface software tools paper_content: Almost as long as there have been user interfaces, there have been special software systems and tools to help design and implement the user interface software. Many of these tools have demonstrated significant productivity gains for programmers, and have become important commercial products. Others have proven less successful at supporting the kinds of user interfaces people want to build. This article discusses the different kinds of user interface software tools, and investigates why some approaches have worked and others have not. Many examples of commercial and research systems are included. Finally, current research directions and open issues in the field are discussed. --- paper_title: Declarative interface models for user interface construction tools: the MASTERMIND approach paper_content: Currently, building a user interface involves creating a large procedural program. Model-based programming provides an alternative new paradigm. In the model-based paradigm, developers create a declarative model that describes the tasks that users are expected to accomplish with a system, the functional capabilities of a system, the style and requirements of the interface, the characteristics and preferences of the users, and the I/O techniques supported by the delivery platform. Based on the model, a much smaller procedural program then determines the behavior of the system. --- paper_title: From OOA to GUI — The Janus System paper_content: The demand of usable, ergonomic, Graphic User Interfaces (GUI) from end users is growing rapidly today. On one hand, User Interface Management Systems (UIMS) are used to construct the user interface components. On the other hand, there is a need for representing the problem domain for the future user interface, leading to a gap between user interface design and problem domain. The JANUS system addresses this problem by introducing a model using Object Oriented Analysis (OOA) from which a GUI can be almost automatically generated. --- paper_title: Adept - A task based design environment paper_content: Modern user interface development environments are based on fast prototyping, which as a methodology does not incorporate any theory or design principles. Adept (Advanced design environment for prototyping with tasks) incorporates a theory of modelling users and user task knowledge known as Task Knowledge Structures [11], and extends it to a theoretical framework for modelling user, task and interface characteristics.
This paper introduces the underlying framework, and discusses how this can be used to support task based user interface design. --- paper_title: Template-based mapping of application data to interactive displays paper_content: Abstract This paper describes a template-based method for constructing interactive displays with the building-blocks (widgets) provided in a user interface toolkit. Templates specify how to break down complex application objects into smaller pieces, specify the graphical components (widgets) to be used for displaying each piece, and specify their layout. Complex interfaces are constructed by recursively applying templates, thus constructing a tree of widgets to display a complex application object. The template-based method is more general than the interactive, WYSIWYG interface builders in that it can specify dynamic displays for application data that changes at run time. The template-based method also leads to more consistent, extendable and modifiable interfaces. 1.0 Introduction User interface toolkits such as the X Toolkit [10] and the Macintosh Tool-Box [5] provide abstractions that make the construction of user interfaces significantly easier than programming using graphics primitives. Unfortunately the toolkits do not make the construction of user interfaces easy enough. The tasks of assembling the widgets to construct complex displays, and of tying the widgets to application data structures, remain difficult and time consuming. Interactive user interface builder systems such as Prototyper [14] provide interactive what-you-see-is-what-you-get interfaces to assemble tool-box widgets into more complex interfaces. These tools are excellent for a restricted class of interfaces, which typically includes only menus and dialogue boxes. However, these tools do not help with the construction of the “main windows” of an application, which display application objects that typically change at run time. This paper describes a template-based method for assembling widgets into complex interfaces and tying them to application objects. Templates specify how to break down complex application objects into smaller pieces, specify the widgets to be used for displaying each piece, and specify their layout. Complex interfaces are constructed by recursively applying templates, thus constructing a tree of widgets to display a composite application object. The template-based method supports the construction of dynamic displays, and also encourages the design of consistent, extendable and modifiable interfaces. The paper is organized as follows. Section 2.0 presents an overview of a template-based UIMS named Humanoid, the High-level UIMS for Manufacturing Applications Needing Organized Iterative Development. Section 3.0 compares the template-based method with other methods, sections 4.0 and 5.0 describe the template-based method in detail, and finally section 6.0 discusses our experience with Humanoid. --- paper_title: Bridging the Generation Gap: From Work Tasks to User Interface Designs paper_content: Abstract Task and model-based design techniques support the design of interactive systems by focusing on the use of integrated modelling notations to support design at various levels of abstraction. However, they are less concerned with examining the nature of the design activities that progress the design from one level of abstraction to another. This paper examines the distinctions between task and model-based approaches.
Further, it discusses the role of design activities in such approaches, based on experience with one task-based technique, and the resulting implications for tool support and design guidelines. The discussion is contextualised by exam-ples drawn from a number of case studies where designers applied a task-based approach to solve one particular design problem: that of developing an airline flight query and booking system. Keywords Automatic generation, design guidelines, model-based design, task-based design, task models, Introduction Current interest in task and model-based approaches to design signifies a trend to-wards placing greater emphasis on what an interactive system should do and how people might use it rather than how the system itself works. Designers are encour-aged to conceptualise designs at a higher level of abstraction than is the case when working with standard prototyping tools; in particular, they are encouraged to fo-cus on the behaviour and structure of the user interface rather than on specific de-tails of low-level interaction objects. This interest is reflected in papers presented at the DSV-IS workshops [DSV-IS94, DSV-IS95]. Task and model-based approaches to design have many features in common. Most notably, they both focus on the use of models to represent the various sorts of in- --- paper_title: Management of interface design in humanoid paper_content: Today's interface design tools either force designers to handle a tremendous number of design details, or limit their control over design decisions. Neither of these approaches taps the true strengths of either human designers or computers in the design process. This paper presents a human-computer collaborative system that uses a model-based approach for interface design to help designers on decision making, and utilizes the bookkeeping capabilities of computers for regular and tedious tasks. We describe (a) the underlying modeling technique and an execution environment that allows even incompletely-specified designs to be executed for evaluation and testing purposes, and (b) a tool that decomposes high-level design goals into the necessary implementation steps, and helps designers manage the myriad of details that arise during design. --- paper_title: Generating user interfaces from data models and dialogue net specifications paper_content: A method and a set of supporting tools have been developed for an improved integration of user interface design with software engineering methods and tools. Animated user interfaces for database-oriented applications are generated from an extended data model and a new graphical technique for specifying dialogues. Based on views defined for the data model, an expert system uses explicit design rules derived from existing guidelines for producing the static layout of the user interface. A petri net based technique called dialogue nets is used for specifying the dynamic behaviour. Output is generated for an existing user interface management system. The approach supports rapid prototyping while using the advantages of standard software engineering methods. --- paper_title: Software Life Cycle Automation for Interactive Applications: The AME Design Environment paper_content: The model-based design environment AME offers CASE-tool support for all life cycle activities in the development process for interactive applications. 
The system allows the rapid automatic construction of interactive software from objectoriented analysis models (OOA) and/or OO-modelling information specified at later design stages. AME provides functionality for UI-structure generation, interaction object selection, layout prototype generation, dynamic behaviour generation, adaptation to user-specific requirements, integration of domain-methods and target code generation. Object-oriented and knowledge-based components provide automatic transition from one refinement stage to the next. System decisions can be visualised before code generation and may be revised by the designer. --- paper_title: The BOSS System: Coupling Visual Programming with Model Based Interface Design paper_content: Due to the limitations of WYSIWYG User Interface Builders and User Interface Management Systems model based user interface construction tools gain rising research interest. The paper describes the BOSS system, a model based tool which employs an encompassing specification model (HIT, Hierarchic Interaction graph Templates) for setting up all parts of the model of an interactive application (application interface, user interaction task space, presentation design rules) in a declarative, designer oriented manner. BOSS offers an integrated development environment in which specifications are elaborated in a graphical, visual-programming-like fashion. Through a refinement component a specification can be transformed according to high-level design goals. From a refined specification BOSS generates automatically user interfaces using modified techniques from compiler construction. --- paper_title: Facilitating the exploration of interface design alternatives: the HUMANOID model of interface design paper_content: HUMANOID is a user interface design tool that lets designers express abstract conceptualizations of an interface in an executable form, allowing designers to experiment with scenarios and dialogues even before the application model is completely worked out. Three properties of the HUMANOID approach allow it to do so: a modularization of design issues into independent dimensions, support for multiple levels of specificity in mapping application models to user interface constructs, and mechanisms for constructing executable default user interface implementations from whatever level of specificity has been provided by the designer. --- paper_title: Adept - A task based design environment paper_content: Modern user interface development environments are based on fast prototyping which as a methodology does not incorporate any theory or design principles. Adept (Advanced design environment for prototyping with tasks) incorporates a theory of modelling users and user task knowledge known as Task Knowledge Structures 1111, and extends it to a theoretical framework for modelling user, task and interface characteristics. This paper introduces the underlying framework, and discusses how this can be used to support task based user interface design. --- paper_title: The DIANE+ Method. paper_content: The DIANE method has been created to solve malfunctions in the use of interactive software, leading to trouble in the information systems and difficulties in the user learning and memorisation. The DIANE method aims to integrate the user and his interaction capability into the current process of designing an interactive software. DIANE+ extends the DIANE method to make possible the automatic generation of user interface. 
This extension concerns the model of dialogue control, and the integration of an OPAC object data model extending the PAC model. This work is based upon a key concept: the control sharing between man and machine. Our approach complements the object methods by integrating aspects relating to tasks and work stations, and concepts such as the user's level and activity. --- paper_title: Template-based mapping of application data to interactive displays paper_content: Abstract This paper describes a template-based method for constructing interactive displays with the building-blocks (widgets) provided in a user interface toolkit. Templates specify how to break down complex application objects into smaller pieces, specify the graphical components (widgets) to be used for displaying each piece, and specify their layout. Complex interfaces are constructed by recursively applying templates, thus constructing a tree of widgets to display a complex application object. The template-based method is more general than the interactive, WYSIWYG interface builders in that it can specify dynamic displays for application data that changes at run time. The template-based method also leads to more consistent, extendable and modifiable interfaces. 1.0 Introduction User interface toolkits such as the X Toolkit [10] and the Macintosh Tool-Box [5] provide abstractions that make the construction of user interfaces significantly easier than programming using graphics primitives. Unfortunately the toolkits do not make the construction of user interfaces easy enough. The tasks of assembling the widgets to construct complex displays, and of tying the widgets to application data structures, remain difficult and time consuming. Interactive user interface builder systems such as Prototyper [14] provide interactive what-you-see-is-what-you-get interfaces to assemble tool-box widgets into more complex interfaces. These tools are excellent for a restricted class of interfaces, which typically includes only menus and dialogue boxes. However, these tools do not help with the construction of the “main windows” of an application, which display application objects that typically change at run time. This paper describes a template-based method for assembling widgets into complex interfaces and tying them to application objects. Templates specify how to break down complex application objects into smaller pieces, specify the widgets to be used for displaying each piece, and specify their layout. Complex interfaces are constructed by recursively applying templates, thus constructing a tree of widgets to display a composite application object. The template-based method supports the construction of dynamic displays, and also encourages the design of consistent, extendable and modifiable interfaces. The paper is organized as follows. Section 2.0 presents an overview of a template-based UIMS named Humanoid, the High-level UIMS for Manufacturing Applications Needing Organized Iterative Development. Section 3.0 compares the template-based method with other methods, sections 4.0 and 5.0 describe the template-based method in detail, and finally section 6.0 discusses our experience with Humanoid. --- paper_title: Bridging the Generation Gap: From Work Tasks to User Interface Designs paper_content: Abstract Task and model-based design techniques support the design of interactive systems by focusing on the use of integrated modelling notations to support design at various levels of abstraction.
However, they are less concerned with examining the na-ture of the design activities that progress the design from one level of abstraction to another. This paper examines the distinctions between task and model-based approaches. Further, it discusses the role of design activities in such approaches, based on experience with one task-based technique, and the resulting implications for tool support and design guidelines. The discussion is contextualised by exam-ples drawn from a number of case studies where designers applied a task-based approach to solve one particular design problem: that of developing an airline flight query and booking system. Keywords Automatic generation, design guidelines, model-based design, task-based design, task models, Introduction Current interest in task and model-based approaches to design signifies a trend to-wards placing greater emphasis on what an interactive system should do and how people might use it rather than how the system itself works. Designers are encour-aged to conceptualise designs at a higher level of abstraction than is the case when working with standard prototyping tools; in particular, they are encouraged to fo-cus on the behaviour and structure of the user interface rather than on specific de-tails of low-level interaction objects. This interest is reflected in papers presented at the DSV-IS workshops [DSV-IS94, DSV-IS95]. Task and model-based approaches to design have many features in common. Most notably, they both focus on the use of models to represent the various sorts of in- --- paper_title: Declarative interface models for user interface construction tools: the MASTERMIND approach paper_content: Currently, building a user interface involves creating a large procedural program. Model-based programming provides an alternative new paradigm. In the model-based paradigm, developers create a declarative model that describes the tasks that users are expected to accomplish with a system, the functional capabilities of a system, the style and requirements of the interface, the characteristics and preferences of the users, and the I/O techniques supported by the delivery platform. Based on the model, a much smaller procedural program then determines the behavior of the system. --- paper_title: Model-based User Interface Software Tools Current state of declarative models paper_content: The Interface Model is central to all model-based user interface software tools. This report investigates the different use of declarative models as a part of the Interface Model in modelbased interface development environments. Furthermore, we introduce definitions for the different declarative models. The report concludes with a description of an ontology of declarative models for future model-based interface development environments. --- paper_title: A Programming Language Basis for User Interface Management paper_content: The Mickey UIMS maps the user interface style and techniques of the Apple Macintosh onto the declarative constructs of Pascal. The relationships between user interfaces and the programming language control the interface generation. This imposes some restrictions on the possible styles of user interfaces but greatly enhances the usability of the UIMS. --- paper_title: Software Life Cycle Automation for Interactive Applications: The AME Design Environment paper_content: The model-based design environment AME offers CASE-tool support for all life cycle activities in the development process for interactive applications. 
The system allows the rapid automatic construction of interactive software from objectoriented analysis models (OOA) and/or OO-modelling information specified at later design stages. AME provides functionality for UI-structure generation, interaction object selection, layout prototype generation, dynamic behaviour generation, adaptation to user-specific requirements, integration of domain-methods and target code generation. Object-oriented and knowledge-based components provide automatic transition from one refinement stage to the next. System decisions can be visualised before code generation and may be revised by the designer. --- paper_title: Exploiting Model-Based Techniques for User Interfaces to Databases paper_content: Model-based systems provide methods for supporting the systematic and efficient development of application interfaces. This paper examines how model-based technologies can be exploited to develop user interfaces to databases. To this end five model-based systems, namely Adept, HUMANOID, Mastermind, TADEUS and DRIVE are discussed through the use of a unifying case study which allows the examination of the approaches followed by the different systems. --- paper_title: Teallach: a model-based user interface development environment for object databases paper_content: Abstract Model-based user interface development environments show promise for improving the productivity of user interface developers, and possibly for improving the quality of developed interfaces. While model-based techniques have previously been applied to the area of database interfaces, they have not been specifically targeted at the important area of object database applications. Such applications make use of models that are semantically richer than their relational counterparts in terms of both data structures and application functionality. In general, model-based techniques have not addressed how the information referenced in such applications is manifested within the described models, and is utilised within the generated interface itself. This lack of experience with such systems has led to many model-based projects providing minimal support for certain features that are essential to such data intensive applications, and has prevented object database interface developers in particular from benefiting from model-based techniques. This paper presents the Teallach model-based user interface development environment for object databases, describing the models it supports, the relationships between these models, the tool used to construct interfaces using the models and the generation of Java programs from the declarative models. Distinctive features of Teallach include comprehensive facilities for linking models, a flexible development method, an open architecture, and the generation of running applications based on the models constructed by designers. --- paper_title: Design alternatives for user interface management sytems based on experience with COUSIN paper_content: User interface management systems (UIMSs) provide user interfaces to application systems based on an abstract definition of the interface required. This approach can provide higher-quality interfaces at a lower construction cost. In this paper we consider three design choices for UIMSs which critically affect the quality of the user interfaces built with a UIMS, and the cost of constructing the interfaces. The choices are examined in terms of a general model of a UIMS. 
They concern the sharing of control between the UIMS and the applications it provides interfaces to, the level of abstraction in the definition of the information exchanged between user and application, and the level of abstraction in the definition of the sequencing of the dialogue. For each choice, we argue for a specific alternative. We go on to present COUSIN, a UIMS that provides graphical interfaces for a variety of applications based on highly abstracted interface definitions. COUSIN'S design corresponds to the alternatives we argued for in two out of three cases, and partially satisfies the third. An interface developed through, and run by COUSIN is described in some detail. --- paper_title: A high-level user interface management system paper_content: A high-level UIMS which automatically generates the lexical and syntactic design of graphical user interfaces is presented. The interfaces generated by the UIMS can easily and rapidly be refined by the designer by using highly interactive and graphical facilities. The UIMS accepts a high-level description of the semantic commands supported by the application, a description of the implementation device, and optionally, the end user's preferences. Based on these inputs the UIMS generates graphical user interfaces in which the commands are selected from menus and command arguments are provided through interaction with graphical interaction techniques. --- paper_title: Modelling and Generation of Graphical User Interfaces in the TADEUS Approach paper_content: The goal of the TADEUS-approach (TAsk-based DEvelopment of USer interface software) is the task-oriented and user-centred development of graphical user interfaces (GUI). For this reason TADEUS is a methodology as well as a supporting environment for GUI development. An overview about the TADEUS approach is given in this paper. The TADEUS Dialogue graph, a new specification technique for GUI, and the generation of GUI based on Dialogue graphs are described. --- paper_title: Human computer interaction: Psychology, task analysis, and software engineering paper_content: Introducing Human Computer Interaction. An introduction to Human Memory. Memory Structures. Knowledge and Representation. Expertise. Skill and Skill Acquisition. Organisation and early Attempts at Modelling Human-Computer Interaction. Interaction and User Modelling in HCI. Task Analysis and Task Modelling. Developing Interface Designs. Evaluations of Interactive systems. User Interface Design. Environments. Management System and Toolkits. Task Analysis. Knowledge Analysis of Tasks. Design. Applied to Interactive Informal and Formal Specifications of User Interaction Task Scenarios. Appendix. --- paper_title: Bridging the Generation Gap: From Work Tasks to User Interface Designs paper_content: Abstract Task and model-based design techniques support the design of interactive systems by focusing on the use of integrated modelling notations to support design at vari-ous levels of abstraction. However, they are less concerned with examining the na-ture of the design activities that progress the design from one level of abstraction to another. This paper examines the distinctions between task and model-based approaches. Further, it discusses the role of design activities in such approaches, based on experience with one task-based technique, and the resulting implications for tool support and design guidelines. 
The discussion is contextualised by exam-ples drawn from a number of case studies where designers applied a task-based approach to solve one particular design problem: that of developing an airline flight query and booking system. Keywords Automatic generation, design guidelines, model-based design, task-based design, task models, Introduction Current interest in task and model-based approaches to design signifies a trend to-wards placing greater emphasis on what an interactive system should do and how people might use it rather than how the system itself works. Designers are encour-aged to conceptualise designs at a higher level of abstraction than is the case when working with standard prototyping tools; in particular, they are encouraged to fo-cus on the behaviour and structure of the user interface rather than on specific de-tails of low-level interaction objects. This interest is reflected in papers presented at the DSV-IS workshops [DSV-IS94, DSV-IS95]. Task and model-based approaches to design have many features in common. Most notably, they both focus on the use of models to represent the various sorts of in- --- paper_title: Declarative interface models for user interface construction tools: the MASTERMIND approach paper_content: Currently, building a user interface involves creating a large procedural program. Model-based programming provides an alternative new paradigm. In the model-based paradigm, developers create a declarative model that describes the tasks that users are expected to accomplish with a system, the functional capabilities of a system, the style and requirements of the interface, the characteristics and preferences of the users, and the I/O techniques supported by the delivery platform. Based on the model, a much smaller procedural program then determines the behavior of the system. --- paper_title: The Amulet Environment: New Models for Effective User Interface Software Development paper_content: The Amulet user interface development environment makes it easier for programmers to create highly interactive, graphical user interface software for Unix, Windows and the Macintosh. Amulet uses new models for objects, constraints, animation, input, output, commands, and undo. The object system is a prototype instance model in which there is no distinction between classes and instances or between methods and data. The constraint system allows any value of any object to be computed by arbitrary code and supports multiple constraint solvers. Animations can be attached to existing objects with a single line of code. Input from the user is handled by "interactor" objects which support reuse of behavior objects. The output model provides a declarative definition of the graphics and supports automatic refresh. Command objects encapsulate all of the information needed about operations, including support for various ways to undo them. A key feature of the Amulet design is that all graphical objects and behaviors of those objects are explicitly represented at run time, so the system can provide a number of high level built-in functions, including automatic display and editing of objects, and external analysis and control of interfaces. Amulet integrates these capabilities in a flexible and effective manner. --- paper_title: ITS: a tool for rapidly developing interactive applications paper_content: The ITS architecture separates applications into four layers. The action layer implements back-end application functions. 
The dialog layer defines the content of the user interface, independent of its style. Content specifies the objects included in each frame of the interface, the flow of control among frames, and what actions are associated with each object. The style rule layer defines the presentation and behavior of a family of interaction techniques. Finally, the style program layer implements primitive toolkit objects that are composed by the rule layer into complete interaction techniques. This paper describes the architecture in detail, compares it with previous User Interface Management systems and toolkits, and describes how ITS is being used to implement the visitor information system for EXPO '92. --- paper_title: CORBA fundamentals and programming paper_content: Introducing CORBA and the OMA. Technical Overview. Introducing OMG IDL. Understanding the ORB, Part 1: Client Side. Understanding the ORB, Part 2: Object Implementation (Server) Side, Including the CORBA Component Model. Architecting and Programming for CORBA Interoperability. Language Mappings, Part 1: C++. Language Mappings, Part 2: Java. Language Mappings, Part 3: COBOL Designing with CORBAservices and CORBAfacilities. CORBAservices, Part 1: Naming and Trader Services. CORBAservices, Part 2: Event and Notification Services. CORBAservices, Part 3: Transaction and Concurrency Services. CORBRservices, Part 4: Security and Licensing Services. CORBAservices, Part 5: Introduction to the Other CORBAservices and the COBRAfacilities. CORBAservices, Part 6: LifeCycle and Relationship Services. CORBAservices, Part 7: Persistent State and Externalization Services. CORBAservices, Part 8: Property and Query Services. Introducing the CORBA Domains. Some CORBAdomain Specifications. Modeling CORBA Applications with UML. Implementing Metamodels and Repositories Using the MOF. The Tutorial Example: Overview and Scenario. The Tutorial Example: Analysis and Design. ORB Product Descriptions. Coding and Compiling the IDL. The Depot. Depot Implementation in Java. Depot: Overview and COBOL Language Coding. The Store. Coding the Store in Java. Store: COBOL Coding. Programming the POSTerminal in C++. Coding the POS in Java. POS: COBOL Coding. Running the Example. Appendices. About the Web Site. What's on the CD-ROM? Index. --- paper_title: Communicating sequential processes paper_content: This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises. --- paper_title: A Specification Language for Direct-Manipulation User Interfaces paper_content: A direct-manipulation user interface presents a set of visual representations on a display and a repertoire of manipulations that can be performed on any of them. Such representations might include screen buttons, scroll bars, spreadsheet cells, or flowchart boxes. Interaction techniques of this kind were first seen in interactive graphics systems; they are now proving effective in user interfaces for applications that are not inherently graphical. Although they are often easy to learn and use, these interfaces are also typically difficult to specify and program clearly. 
Examination of direct-manipulation interfaces reveals that they have a coroutine-like structure and, despite their surface appearance, a peculiar, highly moded dialogue. This paper introduces a specification technique for direct-manipulation interfaces based on these observations. In it, each locus of dialogue is described as a separate object with a single-thread state diagram, which can be suspended and resumed, but retains state. The objects are then combined to define the overall user interface as a set of coroutines, rather than inappropriately as a single highly regular state transition diagram. An inheritance mechanism for the interaction objects is provided to avoid repetitiveness in the specifications. A prototype implementation of a user-interface management system based on this approach is described, and example specifications are given. --- paper_title: Towards a general computational framework for model-based interface development systems paper_content: Model-based interface development systems have not been able to progress beyond producing narrowly focused interface designs of restricted applicability. We identify a level-of-abstraction mismatch in interface models, which we call the mapping problem, as the cause of the limitations in the usefulness of model-based systems. We propose a general computational framework for solving the mapping problem in model-based systems. We show an implementation of the framework within the MOBI-D (Model-Based Interface Designer) interface development environment. The MOBI-D approach to solving the mapping problem enables for the first time with modelbased technology the design of a wide variety of types of user interfaces. --- paper_title: ITS: a tool for rapidly developing interactive applications paper_content: The ITS architecture separates applications into four layers. The action layer implements back-end application functions. The dialog layer defines the content of the user interface, independent of its style. Content specifies the objects included in each frame of the interface, the flow of control among frames, and what actions are associated with each object. The style rule layer defines the presentation and behavior of a family of interaction techniques. Finally, the style program layer implements primitive toolkit objects that are composed by the rule layer into complete interaction techniques. This paper describes the architecture in detail, compares it with previous User Interface Management systems and toolkits, and describes how ITS is being used to implement the visitor information system for EXPO '92. --- paper_title: User interface software tools paper_content: Almost as long as there have been user interfaces, there have been special software systems and tools to help design and implement the user interface software. Many of these tools have demonstrated significant productivity gains for programmers, and have become important commercial products. Others have proven less successful at supporting the kinds of user interfaces people want to build. This article discusses the different kinds of user interface software tools, and investigates why some approaches have worked and others have not. Many examples of commercial and research systems are included. Finally, current research directions and open issues in the field are discussed. ---
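The template-based mapping idea that recurs in the references above (the Humanoid entries) can be illustrated with a small sketch: templates recursively decompose an application object into a tree of widget descriptions. The sketch below is purely hypothetical Python under assumed names (map_to_widgets, the widget vocabulary, the flight_query example); it is not the notation or API of Humanoid, ITS, MASTERMIND, or any other system surveyed here.

```python
# Minimal, hypothetical sketch of template-based mapping: application data is
# recursively decomposed into a tree of abstract widget descriptions. All names
# and the widget vocabulary are illustrative only, not any surveyed system's API.

def map_to_widgets(value, label=None):
    """Recursively select a widget description for an application value."""
    if isinstance(value, dict):          # composite object -> container of parts
        children = [map_to_widgets(v, label=k) for k, v in value.items()]
        return {"widget": "group", "label": label, "children": children}
    if isinstance(value, list):          # collection -> scrolling list of items
        children = [map_to_widgets(v) for v in value]
        return {"widget": "list", "label": label, "children": children}
    if isinstance(value, bool):          # boolean -> checkbox (checked before int)
        return {"widget": "checkbox", "label": label, "state": value}
    if isinstance(value, (int, float)):  # number -> editable numeric field
        return {"widget": "number_field", "label": label, "value": value}
    return {"widget": "text_field", "label": label, "value": str(value)}

# Example: a small "application object" rendered as a widget-description tree.
flight_query = {"origin": "LHR", "destination": "JFK", "passengers": 2, "return_trip": True}
print(map_to_widgets(flight_query, label="Flight query"))
```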
Title: User Interface Declarative Models and Development Environments: A Survey Section 1: Introduction Description 1: Introduce the motivations, goals, and advantages of model-based user interface development technology using declarative models, and outline the challenges faced by MB-UIDEs. Section 2: MB-UIDE Description 2: Introduce the various MB-UIDEs referenced in the paper, their origins, and their unique contributions. Section 3: Background Description 3: Provide an overview of the evolution of MB-UIDEs, from first-generation systems to more modern iterations, and highlight key papers and contributions in the field. Section 4: User Interface Model Framework Description 4: Describe the framework used for comparing and analyzing the architectural components of user interface models (UIMs). Section 5: User Interface Development in a MB-UIDE Description 5: Explain the UI development process in a MB-UIDE, including design and implementation steps. Section 6: User Interface Design Process Description 6: Detail the processes involved in UI design within a MB-UIDE, including tools for editing models and design assistants. Section 7: Automated Tasks in User Interface Design Process Description 7: Discuss how automated tasks support the UI design process and present different approaches to generating and executing user interfaces. Section 8: User Interface Implementation Process Description 8: Describe the various tools and methods used for implementing UIs within a MB-UIDE framework. Section 9: Declarative Models Description 9: Detail the composition and notation of declarative models, identifying the key models and constructors used in MB-UIDEs. Section 10: Model Integration Description 10: Analyze how different user interface models are integrated within MB-UIDEs and discuss the relationships between model components. Section 11: Environments Description 11: Compare the design and implementation environments provided by different MB-UIDEs and their respective tools. Section 12: Design Environment Description 12: Provide a classification of tools used in the design environment of MB-UIDEs and their functionalities. Section 13: Implementation Environment Description 13: Summarize the tools and methods used in the implementation environment of MB-UIDEs, including code generation and runtime systems. Section 14: Conclusions Description 14: Summarize the key findings of the survey, highlight current challenges, and suggest future directions for MB-UIDE technology.
survey_title: Tsallis Entropy Theory for Modeling in Water Engineering: A Review
section_num: 17
--- paper_title: Entropy Theory in Hydraulic Engineering: An Introduction paper_content: Entropy Theory in Hydraulic Engineering: An Introduction is the first book to explain the basic concepts of entropy theory from a hydraulic perspective and demonstrate the theory’s application in solving practical engineering problems. In the hydraulic context, entropy is valuable as a way of measuring uncertainty or surprise —or even disorder or chaos— as a type of information. As hydraulic systems become more complex, entropy theory enables hydraulic engineers to quantify uncertainty, determine risk and reliability, estimate parameters, model processes, and design more robust and dependable water hydraulic systems. --- paper_title: THE USE OF ENTROPY IN HYDROLOGY AND WATER RESOURCES paper_content: Since the development of the entropy theory by Shannon in the late 1940s and of the principle of maximum entropy (POME) by Jaynes in the late 1950s, there has been a proliferation of applications of entropy in a wide spectrum of areas, including hydrological and environmental sciences. The real impetus to entropy-based hydrological modelling was provided by Amorocho and Espildora in 1972. A great variety of entropy applications in hydrology and water resources have since been reported, and new applications continue to unfold. This paper reviews the recent contributions on entropy applications in hydrology and water resources, discusses the usefulness and versatility of the entropy concept, and reflects on the strengths and limitations of this concept. The paper concludes with comments on its implications in developing countries. © 1997 by John Wiley & Sons, Ltd. --- paper_title: Entropy Theory in Hydrologic Science and Engineering paper_content: 1. Entropy Theory 2. Morphological Analysis 3. Evaluation and Design of Sampling and Measurement Networks 4. Precipitation Variability 5. Rainfall Frequency Distributions 6. Assessment of Potential Water Resources Availability 7. Evaluation of Precipitation Forecasting Schemes 8. Infiltration 9. Soil Moisture 10. Groundwater Flow 11. Rainfall-Runoff Modeling 12. Hydrologic Frequency Analysis 13. Streamflow Simulation 14. Long-Term Streamflow Forecasting 15. Flood Forecasting: Bivariate Analysis 16. River Regime Classification 17. Sediment Yield Eco Index --- paper_title: A toy model of climatic variability with scaling behaviour paper_content: Abstract It is demonstrated that a simple deterministic model in discrete time can reproduce the scaling behaviour of hydroclimatic processes at timescales coarser than annual, a behaviour more widely known in hydrology as the Hurst phenomenon. This toy model is based on a generalised ‘chaotic tent map’, which may be considered as the compound result of a positive and a negative feedback mechanism, and involves two degrees of freedom. The model is not a realistic representation of a climatic system, but rather a radical simplification of real climatic dynamics. However, its simplicity helps understand the physical mechanisms that cause the scaling behaviour and simultaneously enables easy implementation and convenient experimentation. Application of the toy model gives traces that can resemble historical time series of hydroclimatic variables, such as temperature and river flow. In particular, such traces exhibit scaling behaviour with a Hurst coefficient greater than 0.5 and their statistical properties are similar to that of observed time series.
Moreover, application demonstrates that large-scale synthetic ‘climatic’ fluctuations (like upward or downward trends) can emerge without any specific reason and their evolution is unpredictable, even when they are generated by this simple fully deterministic model with only two degrees of freedom. Thus, the model emphasises the large uncertainty associated with the scaling behaviour, rather than enhances the prediction capability, despite the simple deterministic dynamics it uses, which obviously, are only a caricature of the much more complex dynamics of the real climatic system. --- paper_title: Possible generalization of Boltzmann-Gibbs statistics paper_content: With the use of a quantity normally scaled in multifractals, a generalized form is postulated for entropy, namely $S_q \equiv k\,[1 - \sum_{i=1}^{W} p_i^{q}]/(q-1)$, where $q \in \mathbb{R}$ characterizes the generalization and $p_i$ are the probabilities associated with $W$ (microscopic) configurations ($W \in \mathbb{N}$). The main properties associated with this entropy are established, particularly those corresponding to the microcanonical and canonical ensembles. The Boltzmann-Gibbs statistics is recovered as the $q \to 1$ limit. --- paper_title: The constrained entropy and cross-entropy functions paper_content: This study introduces the constrained forms of the Shannon information entropy and Kullback–Leibler cross-entropy functions. Applicable to a closed system, these functions incorporate the constraints on the system in a generic fashion, irrespective of the form or even the number of the constraints. Since they are fully constrained, the constrained functions may be “pulled apart” to give their partial or local constrained forms, providing the means to examine the probability of each outcome or state relative to its local maximum entropy (or minimum cross-entropy) position, independently of the other states of the system. The Shannon entropy and Kullback–Leibler cross-entropy functions do not possess this functionality. The features of each function are examined in detail, and the similarities between the constrained and Tsallis entropy functions are examined. The continuous constrained entropy and cross-entropy functions are also introduced, the integrands of which may be used to independently examine each infinitesimal state (physical element) of the system. --- paper_title: On the fractal dimension of orbits compatible with Tsallis statistics paper_content: In a previous paper [A. Carati, Physica A 348 (2005) 110–120] it was shown how, for a dynamical system, the probability distribution function of sojourn-times in phase-space, defined in terms of the dynamical orbits (up to a given observation time), induces unambiguously a statistical ensemble in phase-space. In the present paper, the p.d.f. of the sojourn-times corresponding to a Tsallis ensemble is obtained (this, by the way, requires the solution of a problem of a general character, disregarded in paper [A. Carati, Physica A 348 (2005) 110–120]). In particular some qualitative properties, such as the fractal dimension, of the dynamical orbits compatible with the Tsallis ensembles are indicated. --- paper_title: Entropic nonextensivity: a possible measure of complexity paper_content: An updated review [1] of nonextensive statistical mechanics and thermodynamics is colloquially presented.
Quite naturally the possibility emerges for using the value of q-1 (entropic nonextensivity) as a simple and efficient manner to provide, at least for some classes of systems, some characterization of the degree of what is currently referred to as complexity [2]. A few historical digressions are included as well. --- paper_title: I. Nonextensive Statistical Mechanics and Thermodynamics: Historical Background and Present Status paper_content: The domain of validity of standard thermodynamics and Boltzmann-Gibbs statistical mechanics is focused on along a historical perspective. It is then formally enlarged in order to hopefully cover a variety of anomalous systems. The generalization concerns nonextensive systems, where nonextensivity is understood in the thermodynamical sense. This generalization was first proposed in 1988 inspired by the probabilistic description of multifractal geometry, and has been intensively studied during this decade. In the present effort, we describe the formalism, discuss the main ideas, and then exhibit the present status in what concerns theoretical, experimental and computational evidences and connections, as well as some perspectives for the future. The whole review can be considered as an attempt to clarify our current understanding of the foundations of statistical mechanics and its thermodynamical implications. --- paper_title: Two-Dimensional Velocity Distribution in Open Channels Using the Tsallis Entropy paper_content: Abstract Assuming time-averaged velocity as a random variable, a two-dimensional (2D) velocity distribution in open channels was derived by maximizing the Tsallis entropy, subject to mass conservation. The derived distribution was tested using field data and was also compared with other two-dimensional velocity distributions. The Tsallis entropy–based velocity distribution was found to predict the velocity near the boundary well. --- paper_title: Possible generalization of Boltzmann-Gibbs statistics paper_content: With the use of a quantity normally scaled in multifractals, a generalized form is postulated for entropy, namely $S_q \equiv k\,[1 - \sum_{i=1}^{W} p_i^{q}]/(q-1)$, where $q \in \mathbb{R}$ characterizes the generalization and $p_i$ are the probabilities associated with $W$ (microscopic) configurations ($W \in \mathbb{N}$). The main properties associated with this entropy are established, particularly those corresponding to the microcanonical and canonical ensembles. The Boltzmann-Gibbs statistics is recovered as the $q \to 1$ limit. --- paper_title: Entropy Theory in Hydrologic Science and Engineering paper_content: 1. Entropy Theory 2. Morphological Analysis 3. Evaluation and Design of Sampling and Measurement Networks 4. Precipitation Variability 5. Rainfall Frequency Distributions 6. Assessment of Potential Water Resources Availability 7. Evaluation of Precipitation Forecasting Schemes 8. Infiltration 9. Soil Moisture 10. Groundwater Flow 11. Rainfall-Runoff Modeling 12. Hydrologic Frequency Analysis 13. Streamflow Simulation 14. Long-Term Streamflow Forecasting 15. Flood Forecasting: Bivariate Analysis 16. River Regime Classification 17. Sediment Yield Eco Index --- paper_title: Entropy based derivation of probability distributions: A case study to daily rainfall paper_content: The principle of maximum entropy, along with empirical considerations, can provide a consistent basis for constructing a consistent probability distribution model for highly varying geophysical processes.
Here we examine the potential of using this principle with the Boltzmann-Gibbs-Shannon entropy definition in the probabilistic modeling of rainfall in different areas worldwide. We define and theoretically justify specific simple and general entropy maximization constraints which lead to two flexible distributions, i.e., the three-parameter Generalized Gamma (GG) and the four-parameter Generalized Beta of the second kind (GB2), with the former being a particular limiting case of the latter. We test the theoretical results in 11,519 daily rainfall records across the globe. The GB2 distribution seems to be able to describe all empirical records while two of its specific three-parameter cases, the GG and the Burr Type XII distributions perform very well by describing the 97.6% and 87.7% of the empirical records, respectively. --- paper_title: Just two moments! A cautionary note against use of high-order moments in multifractal models in hydrology paper_content: The need of understanding and modelling the space-time variability of natural processes in hydrological sciences produced a large body of literature over the last thirty years. In this context, a multifractal framework provides parsimonious models which can be applied to a wide-scale range of hydrological processes, and are based on the empirical detection of some patterns in observational data, i.e. a scale invariant mechanism repeating scale after scale. Hence, multifractal analyses heavily rely on available data series and their statistical processing. In such analyses, high order moments are often estimated and used in model identification and fitting as if they were reliable. This paper warns practitioners against the blind use in geophysical time series analyses of classical statistics, which is based upon independent samples typically following distributions of exponential type. Indeed, the study of natural processes reveals scaling behaviours in state (departure from exponential distribution tails) and in time (departure from independence), thus implying dramatic increase of bias and uncertainty in statistical estimation. Surprisingly, all these differences are commonly unaccounted for in most multifractal analyses of hydrological processes, which may result in inappropriate modelling, wrong inferences and false claims about the properties of the processes studied. Using theoretical reasoning and Monte Carlo simulations, we find that the reliability of multifractal methods that use high order moments (> 3) is questionable. In particular, we suggest that, because of estimation problems, the use of moments of order higher than two should be avoided, either in justifying or fitting models. Nonetheless, in most problems the first two moments provide enough information for the most important characteristics of the distribution. --- paper_title: Downstream hydraulic geometry relations: 1. Theoretical development: DOWNSTREAM HYDRAULIC GEOMETRY RELATIONS, 1 paper_content: An edited version of this paper was published by AGU. Copyright 2003 American Geophysical Union. --- paper_title: Entropy Theory in Hydraulic Engineering: An Introduction paper_content: Entropy Theory in Hydraulic Engineering: An Introduction is the first book to explain the basic concepts of entropy theory from a hydraulic perspective and demonstrate the theory’s application in solving practical engineering problems.
In the hydraulic context, entropy is valuable as a way of measuring uncertainty or surprise —or even disorder or chaos— as a type of information. As hydraulic systems become more complex, entropy theory enables hydraulic engineers to quantify uncertainty, determine risk and reliability, estimate parameters, model processes, and design more robust and dependable water hydraulic systems. --- paper_title: Downstream hydraulic geometry relations: 1. Theoretical development: DOWNSTREAM HYDRAULIC GEOMETRY RELATIONS, 1 paper_content: An edited version of this paper was published by AGU. Copyright 2003 American Geophysical Union. --- paper_title: An entropy-based morphological analysis of river basin networks paper_content: Under the assumption that the only information available on a drainage basin is its mean elevation, the connection between entropy and potential energy is explored to analyze drainage basins morphological characteristics. The mean basin elevation is found to be linearly related to the entropy of the drainage basin. This relation leads to a linear relation between the mean elevation of a subnetwork and the logarithm of its topological diameter. Furthermore, the relation between the fall in elevation from the source to the outlet of the main channel and the entropy of its drainage basin is found to be linear and so is also the case between the elevation of a node and the logarithm of its distance from the source. When a drainage basin is ordered according to the Horton-Strahler ordering scheme, a linear relation is found between the drainage basin entropy and the basin order. This relation can be characterized as a measure of the basin network complexity. The basin entropy is found to be linearly related to the logarithm of the magnitude of the basin network. This relation leads to a nonlinear relation between the network diameter and magnitude, where the exponent is found to be related to the fractal dimension of the drainage network. Also, the exponent of the power law relating the channel slope to the network magnitude is found to be related to the fractal dimension of the network. These relationships are verified on three drainage basins in southern Italy, and the results are found to be promising. --- paper_title: Tsallis Entropy Theory for Derivation of Infiltration Equations paper_content: An entropy theory is formulated for deriving infiltration equations for the potential rate (or capacity) of infiltration in unsaturated soils. The theory is comprised of five parts: (1) Tsallis entropy, (2) principle of maximum entropy (POME), (3) specification of information on the potential rate of infiltration in terms of constraints, (4) maximization of entropy in accordance with POME, and (5) derivation of the probability distribution of infiltration and its maximum entropy. The theory is illustrated with the derivation of six infiltration equations commonly used in hydrology, watershed management, and agricultural irrigation, including Horton, Kostiakov, Philip two-term, Green-Ampt, Overton, and Holtan, and the determination of the least biased probability distributions underlying these infiltration equations and the entropies thereof. The theory leads to the expression of parameters of the derived infiltration equations in terms of three measurable quantities: initial infiltration capacity (potential rate), steady infiltration rate, and soil moisture retention capacity. In this sense, these derived equations are rendered nonparametric. 
With parameters thus obtained, infiltration capacity rates are computed using these six infiltration equations and are compared with field experimental observations reported in the hydrologic literature as well as the capacity rates computed using parameters of these equations obtained by calibration. It is found that infiltration capacity rates computed using parameter values yielded by the entropy theory are in reasonable agreement with observed as well as calibrated infiltration capacity rates. --- paper_title: Entropy and 2-D Velocity Distribution in Open Channels paper_content: Equations based on the entropy concept have been derived for describing the two‐dimensional velocity distribution in an openchannel cross section. The velocity equation derived is capable of describing the variation of velocity in both the vertical and transverse directions, with the maximum velocity occurring on or below the water surface. Equations for determining the location of mean velocity have also been derived along with those, such as the entropy function, that can be used as measures of the homogeneity of velocity distribution in a channel cross section. A dimensionless parameter of the entropy function named the M number has been found useful as an index for characterizing and comparing various patterns of velocity distribution and states of open‐channel flow systems. The definition and demonstrated usefulness of this parameter indicate the importance and value of the information given by the location and magnitude of maximum velocity in a cross section, and suggest the need for future experime... --- paper_title: Entropy-based parameter estimation in hydrology paper_content: preface. 1. Entropy and Principle of Maximum Entropy. 2. Methods of Parameter Estimation. 3. Uniform Distribution. 4. Exponential Distribution. 5. Normal Distribution. 6. Two-Parameter Lognormal Distribution. 7. Three-Parameter Lognormal Distribution. 8. Extreme Value Type I Distribution. 9. Log-Extreme Value Type I Distribution. 10. Extreme Value Type III Distribution. 11. Generalized Extreme Value Distribution. 12. Weibull Distribution. 13. Gamma Distribution. 14. Pearson Type III Distribution. 15. Log-Pearson Type III Distribution. 16. Beta Distribution. 17. Two-Parameter Log-Logistic Distribution. 18. Three-Parameter Log-Logistic Distribution. 19. Two-Parameter Pareto Distribution. 20. Two-Parameter Generalized Pareto Distribution. 21. Three-Parameter Generalized Pareto Distribution. 22. Two-Component Extreme Value Distribution. Subject Index. --- paper_title: A real-time flood forecasting model based on maximum-entropy spectral analysis. II: Application paper_content: The MESA-based model, developed in the first paper, for real-time flood forecasting was verified on five watersheds from different regions of the world. The sampling time interval and forecast lead time varied from several minutes to one day. The model was found to be superior to a state-space model for all events where it was difficult to obtain prior information about model parameters. The mathematical form of the model was found to be similar to a bivariate autoregressive (AR) model, and under certain conditions, these two models became equivalent. --- paper_title: Entropy theory for movement of moisture in soils paper_content: [1] An entropy theory is formulated for one-dimensional movement of moisture in unsaturated soils in the vertically downward direction. 
The theory is composed of five parts: (1) Tsallis entropy, (2) principle of maximum entropy, (3) specification of information on soil moisture in terms of constraints, (4) maximization of the Tsallis entropy, and (5) derivation of the probability distributions of soil moisture. The theory is applied to determine the soil moisture profile under three conditions: (1) the moisture is maximum at the soil surface and decreases downward to a minimum value at the bottom of the soil column (it may be near the water table); (2) the moisture is minimum at the soil surface and increases downward to a maximum value at the end of the soil column (this case is the opposite of case 1); and (3) the moisture at the soil surface is low and increases downward up to a distance and then decreases up to the bottom (this case combines case 2 and case 1). The entropy-based soil moisture profiles are tested using experimental observations reported in the literature, and properties of these profiles are enumerated. --- paper_title: Downstream hydraulic geometry relations: 2. Calibration and testing paper_content: An edited version of this paper was published by AGU. Copyright 2003 American Geophysical Union. --- paper_title: A model of evapotranspiration based on the theory of maximum entropy production paper_content: [1] Building on a proof-of-concept study of energy balance over dry soil, a model of evapotranspiration is proposed based on the theory of maximum entropy production (MEP). The MEP formalism leads to an analytical solution of evaporation rate (latent heat flux), together with sensible and ground heat fluxes, as a function of surface soil temperature, surface humidity, and net radiation. The model covers the entire range of soil wetness from dry to saturation. The MEP model of transpiration is formulated as a special case of bare soil evaporation. Test of the MEP model using field observations indicates that the model performs well over bare soil and canopy. --- paper_title: Downstream hydraulic geometry relations: 1. Theoretical development: DOWNSTREAM HYDRAULIC GEOMETRY RELATIONS, 1 paper_content: An edited version of this paper was published by AGU. Copyright 2003 American Geophysical Union. --- paper_title: Entropy-based grouping of river flow regimes paper_content: Abstract In the context of changing environment, and specifically a possible climate change, a river flow regime type turns from being a merely general, average, descriptive characteristic to a tool for monitoring changes in flow seasonality both in time and space. Utilization of river flow regimes as a diagnostic tool for the output of climate models and in flow sensitivity studies demands an objective grouping of flow series into regime types. A hierarchical aggregation of monthly flow series into flow regime types, satisfying chosen discriminating criteria, is effectively performed by means of minimization of an entropy-based objective function. This function is based on the concept of an `information loss' resulting from such an aggregation and describes the difference between the series aggregated into one group, i.e. inaccuracy of aggregation. The main advantage of the approach, operating on river flow series for individual years (and not only on long-term means), is its ability to consider also the temporal regularity of the seasonal flow patterns, neglected, as a rule, in other approaches. Meanwhile, both flow volumes and seasonal patterns of flow are sensitive to climate fluctuations. 
A strict formulation of the `stopping rules' in the hierarchical flow regime grouping, directly related to the aggregation principles, is suggested. The approach allows different formulations of criteria for discriminating flow regime types. It is illustrated on a regional river flow sample for Scandinavia for two different formulations of discriminating criteria. --- paper_title: Unit Stream Power and Sediment Transport paper_content: A thorough study of the existing applicable data reveals the basic reason that previous equations often provide misleading predictions of the sediment transport rates. The error stems from the unrealistic assumptions made in their derivations. Unit stream power, defined as the time rate of potential energy expenditure per unit weight of water in an alluvial channel, is shown to dominate the total sediment concentration. Statistical analyses of 1,225 sets of laboratory flume data and 50 sets of field data indicate the existence and the generality of the linear relationship between the logarithm of total sediment concentration and the logarithm of the effective unit stream power. The coefficients in the proposed equation are shown to be related to particle size and water depth, or particle size and width-depth ratio. An equation generalized from Gilbert's data can be applied to natural streams for the prediction of total sediment discharge with good accuracy. --- paper_title: Physics of uncertainty, the Gibbs paradox and indistinguishable particles paper_content: Abstract The idea that, in the microscopic world, particles are indistinguishable, interchangeable and without identity has been central in quantum physics. The same idea has been enrolled in statistical thermodynamics even in a classical framework of analysis to make theoretical results agree with experience. In thermodynamics of gases, this hypothesis is associated with several problems, logical and technical. For this case, an alternative theoretical framework is provided, replacing the indistinguishability hypothesis with standard probability and statistics. In this framework, entropy is a probabilistic notion applied to thermodynamic systems and is not extensive per se. Rather, the extensive entropy used in thermodynamics is the difference of two probabilistic entropies. According to this simple view, no paradoxical behaviors, such as the Gibbs paradox, appear. Such a simple probabilistic view within a classical physical framework, in which entropy is none other than uncertainty applicable irrespective of particle size, enables generalization of mathematical descriptions of processes across any type and scale of systems ruled by uncertainty. --- paper_title: Entropy based derivation of probability distributions: A case study to daily rainfall paper_content: The principle of maximum entropy, along with empirical considerations, can provide consistent basis for constructing a consistent probability distribution model for highly varying geophysical processes. Here we examine the potential of using this principle with the Boltzmann-Gibbs-Shannon entropy definition in the probabilistic modeling of rainfall in different areas worldwide. We define and theoretically justify specific simple and general entropy maximization constraints which lead to two flexible distributions, i.e., the three-parameter Generalized Gamma (GG) and the four-parameter Generalized Beta of the second kind (GB2), with the former being a particular limiting case of the latter. 
We test the theoretical results in 11,519 daily rainfall records across the globe. The GB2 distribution seems to be able to describe all empirical records while two of its specific three-parameter cases, the GG and the Burr Type XII distributions perform very well by describing the 97.6% and 87.7% of the empirical records, respectively. --- paper_title: Entropic nonextensivity: a possible measure of complexity paper_content: An updated review [1] of nonextensive statistical mechanics and thermodynamics is colloquially presented. Quite naturally the possibility emerges for using the value of q-1 (entropic nonextensivity) as a simple and efficient manner to provide, at least for some classes of systems, some characterization of the degree of what is currently referred to as complexity [2]. A few historical digressions are included as well. --- paper_title: Application of Information Theory to Groundwater Quality Monitoring Networks paper_content: Using the criteria of maximizing information and minimizing cost,a methodology is developed for design of an optimal groundwater-monitoring network for water resources management. A monitoring system is essentially an information collection system. Therefore, its technical design requires a quantifiablemeasure of information which can be achieved through applicationof the information (or entropy) theory. The theory also providesinformation-based statistical measures to evaluate the efficiencyof the monitoring network. The methodology is applied to groundwater monitoring wells in a portion of Gaza Strip in Palestine. --- paper_title: An entropy approach to data collection network design paper_content: Abstract A new methodology is developed for data collection network design. The approach employs a measure of the information flow between gauging stations in the network which is referred to as the directional information transfer. The information flow measure is based on the entropy of gauging stations and pairs of gauging stations. Non-parametric estimation is used to approximate the multivariate probability density functions required in the entropy calculations. The potential application of the approach is illustrated using extreme flow data from a collection of gauging stations located in southern Manitoba, Canada. --- paper_title: Assessing the Reliability of Water Distribution Networks Using Entropy Based Measures of Network Redundancy paper_content: A qualitative approach to assessing redundancy of water distribution networks using entropy theory is proposed. The redundancy measures derived from this approach are able to assess redundancy of supply for individual nodes and for the network as a whole. The measures themselves are based on the fundamental entropy function of Shannon modified to include such network features relevant to redundancy as the number of paths available to supply flow to each demand node, the capacities of these alternate paths, the degree of dependence among the paths, the possibility of flow reversal, and the desirability of having links of equal capacity incident on demand node. The measures are demonstrated by application to eight network layouts subject to the same demand conditions. 
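The redundancy measures described in the preceding abstract build on Shannon's entropy of the flows supplying a demand node. A minimal sketch of that core idea is given below in Python; it is illustrative only (the function name and example flows are hypothetical), and the cited measures additionally account for path dependence, flow reversal, and the balance of link capacities, which are omitted here.

import numpy as np

def node_redundancy(flows):
    # Shannon entropy of the fractions of a node's demand met by each incoming link.
    # Higher values mean the supply is spread more evenly over alternate paths,
    # the qualitative notion of redundancy discussed above. Flows must be positive.
    q = np.asarray(flows, dtype=float)
    p = q / q.sum()
    return float(-np.sum(p * np.log(p)))

print(node_redundancy([10.0, 10.0, 10.0]))  # ln(3) ~ 1.10: supply spread evenly over three paths
print(node_redundancy([28.0, 1.0, 1.0]))    # ~ 0.29: supply dominated by a single path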
--- paper_title: Cross Entropy multiobjective optimization for water distribution systems design paper_content: [1] A methodology extending the Cross Entropy combinatorial optimization method originating from an adaptive algorithm for rare events simulation estimation, to multiobjective optimization of water distribution systems design is developed and demonstrated. The single objective optimal design problem of a water distribution system is commonly to find the water distribution system component characteristics that minimize the system capital and operational costs such that the system hydraulics is maintained and constraints on quantities and pressures at the consumer nodes are fulfilled. The multiobjective design goals considered herein are the minimization of the network capital and operational costs versus the minimization of the maximum pressure deficit of the network demand nodes. The proposed methodology is demonstrated using two sample applications from the research literature and is compared to the NSGA-II multiobjective scheme. The method was found to be robust in that it produced very similar Pareto fronts in almost all runs. The suggested methodology provided improved results in all trails compared to the NSGA-II algorithm. --- paper_title: Current and future use of systems analysis in water distribution network design paper_content: Abstract Computer use in the design of water distribution networks was inititated through the use of network analysis techniques to determine system performance in terms of heads and flows. The last fifteen years, however, have seen the introduction of systems analysis optimization techniques to the range of computer models available for network design purposes. These optimization models differ markedly from the ‘traditional’ network analysis models in that they ‘design’ systems for specified loading conditions rather than just analysing the performance of predetermined systems under given loading conditions. Cost was the primary or only objective in almost all these early optimization models. Water distribution network design has, however, a number of other important objectives, such as maximizing reliability. Issues related to reliability concern include probability of component failure, probability of actual demands being greater than design values, and the system redundancy inherent within the layout ... --- paper_title: The constrained entropy and cross-entropy functions paper_content: This study introduces the constrained forms of the Shannon information entropy and Kullback–Leibler cross-entropy functions. Applicable to a closed system, these functions incorporate the constraints on the system in a generic fashion, irrespective of the form or even the number of the constraints. Since they are fully constrained, the constrained functions may be “pulled apart” to give their partial or local constrained forms, providing the means to examine the probability of each outcome or state relative to its local maximum entropy (or minimum cross-entropy) position, independently of the other states of the system. The Shannon entropy and Kullback–Leibler cross-entropy functions do not possess this functionality. The features of each function are examined in detail, and the similarities between the constrained and Tsallis entropy functions are examined. The continuous constrained entropy and cross-entropy functions are also introduced, the integrands of which may be used to independently examine each infinitesimal state (physical element) of the system. 
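For reference, the three quantities that recur throughout the abstracts above can be written in their standard discrete forms; the Tsallis entropy of order q recovers the Shannon entropy in the limit q -> 1. This is stated only as background, with r denoting the reference distribution of the cross-entropy:

\begin{align}
  H(p) &= -\sum_{i=1}^{n} p_i \ln p_i, &
  D(p \,\|\, r) &= \sum_{i=1}^{n} p_i \ln \frac{p_i}{r_i}, &
  S_q(p) &= \frac{1}{q-1}\Bigl(1 - \sum_{i=1}^{n} p_i^{\,q}\Bigr),
\end{align}
with \(\lim_{q \to 1} S_q(p) = H(p)\).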
--- paper_title: Geometry of river channels paper_content: Hydraulic principles, as developed in firm boundary channels, appear insufficient to explain the form and profile of river channels. In nature, stream channels attain a most probable state that must fulfill the necessary hydraulic laws, but in addition, fulfills its degrees of freedom by tendency to equal distribution among velocity, depth, width, and slope. This principle is tested by the use of three examples. Another example explores the accommodation of a river channel to changing discharge. The last example is that of a river free to adjust its profile, velocities, depths, and widths to accommodate the downstream increase in discharge. Each of these examples appears to satisfy the postulate of this paper. The adjustment is toward minimum variance among the components of stream power. --- paper_title: Hydraulic geometry and minimum rate of energy dissipation paper_content: The theory of minimum rate of energy dissipation states that a system is in an equilibrium condition when its rate of energy dissipation is at its minimum value. This minimum value depends on the constraints applied to the system. When a system is not at equilibrium, it will adjust in such a manner that the rate of energy dissipation can be reduced until it reaches the minimum and regains equilibrium. A river system constantly adjusts itself in response to varying constraints in such a manner that the rate of energy dissipation approaches a minimum value and thus moves toward an equilibrium. It is shown that the values of the exponents of the hydraulic geometry relationships proposed by Leopold and Maddock for rivers can be obtained from the application of the theory of minimum rate of energy dissipation in conjunction with the Manning-Strickler equation and the dimensionless unit stream power equation for sediment transport proposed by Yang. Theoretical analysis is limited to channels which are approximately rectangular in shape. The width and depth exponents thus derived agree very well with those measured in laboratory experiments by Barr et al. Although the theoretical width and depth exponents are within the range of variations of measured data from river gaging stations, the at-station width adjustment of natural rivers may also depend on constraints other than water discharge and sediment load. --- paper_title: Downstream hydraulic geometry relations: 2. Calibration and testing paper_content: An edited version of this paper was published by AGU. Copyright 2003 American Geophysical Union. --- paper_title: Downstream hydraulic geometry relations: 1. Theoretical development: DOWNSTREAM HYDRAULIC GEOMETRY RELATIONS, 1 paper_content: An edited version of this paper was published by AGU. Copyright 2003 American Geophysical Union. --- paper_title: An extremum principle of evaporation paper_content: [1] It is proposed, on the basis of an argument of thermodynamic equilibrium, that land-atmosphere interactive processes lead to thermal and hydrologic states of the land surface that maximize evaporation in a given meteorological environment. The extremum principle leads to general equations linking surface energy fluxes to surface temperature and soil moisture. The hypothesis of maximum evaporation has been tested with data from three field experiments. 
We found strong evidence suggesting that evaporation is maximized and furthermore that it is determined by the state variables (temperature, soil moisture, and sensible heat flux into the atmosphere) and relatively insensitive to water vapor pressure deficit. The theory allows an independent estimate of the coefficient in the Priestley-Taylor formula for potential evaporation, which is consistent with the widely accepted value of 1.26. --- paper_title: On the Cumulative Distribution Function for Entropy-Based Hydrologic Modeling paper_content: In spatial or temporal physically based entropy-based modeling in hydrology and water resources, the cumulative distribution function (CDF) of a design variable (e.g., flux, say discharge) is hypothesized in terms of concentration (e.g., stage of flow). Thus far, a linear hypothesis has been employed when applying entropy to derive relationships for design variables, but without empirical evidence or physical justification. Examples of such relationships include velocity distribution as a function of flow depth, wind velocity as a function of height, sediment concentration profile along the flow depth, rating curve, infiltration capacity rate as a function of time, soil moisture profile along the depth below the soil surface, runoff as a function of rainfall amount, unit hydrograph, and groundwater discharge along the horizontal direction of flow. This study proposes a general nonlinear form of the CDF that specializes into commonly used linear forms. The general form is tested using empirical data on velocity, sediment concentration, soil moisture, and stage-discharge and compared with those reported in the literature. It is found that a simpler form of the general nonlinear hypothesis seems satisfactory for the data tested, and it is quite likely that this simple form will suffice for other data as well. The linear hypothesis does not seem to hold for the data employed in the study. --- paper_title: Entropy and 2-D Velocity Distribution in Open Channels paper_content: Equations based on the entropy concept have been derived for describing the two‐dimensional velocity distribution in an openchannel cross section. The velocity equation derived is capable of describing the variation of velocity in both the vertical and transverse directions, with the maximum velocity occurring on or below the water surface. Equations for determining the location of mean velocity have also been derived along with those, such as the entropy function, that can be used as measures of the homogeneity of velocity distribution in a channel cross section. A dimensionless parameter of the entropy function named the M number has been found useful as an index for characterizing and comparing various patterns of velocity distribution and states of open‐channel flow systems. The definition and demonstrated usefulness of this parameter indicate the importance and value of the information given by the location and magnitude of maximum velocity in a cross section, and suggest the need for future experime... --- paper_title: Entropy Theory for Distribution of One-Dimensional Velocity in Open Channels paper_content: Assuming time-averaged velocity as a random variable, this study develops an entropy theory for deriving the one-dimensional distribution of velocity in open channels. 
The theory includes five parts: (1) Tsallis entropy; (2) the principle of maximum entropy (POME); (3) the specification of information on velocity for constraints; (4) the maximization of entropy; and (5) the probability distribution of velocity and its entropy. An application of the entropy theory is illustrated by deriving a one-dimensional velocity distribution in open channels in which the dimension is vertical or the flow depth. The derived distribution is tested with field and laboratory observations and is compared to Chiu’s velocity distribution derived from Shannon entropy. The agreement between velocity values computed with the entropy-based distribution and observed values is found to be satisfactory. --- paper_title: On the Cumulative Distribution Function for Entropy-Based Hydrologic Modeling paper_content: In spatial or temporal physically based entropy-based modeling in hydrology and water resources, the cumulative distribution function (CDF) of a design variable (e.g., flux, say discharge) is hypothesized in terms of concentration (e.g., stage of flow). Thus far, a linear hypothesis has been employed when applying entropy to derive relationships for design variables, but without empirical evidence or physical justification. Examples of such relationships include velocity distribution as a function of flow depth, wind velocity as a function of height, sediment concentration profile along the flow depth, rating curve, infiltration capacity rate as a function of time, soil moisture profile along the depth below the soil surface, runoff as a function of rainfall amount, unit hydrograph, and groundwater discharge along the horizontal direction of flow. This study proposes a general nonlinear form of the CDF that specializes into commonly used linear forms. The general form is tested using empirical data on velocity, sediment concentration, soil moisture, and stage-discharge and compared with those reported in the literature. It is found that a simpler form of the general nonlinear hypothesis seems satisfactory for the data tested, and it is quite likely that this simple form will suffice for other data as well. The linear hypothesis does not seem to hold for the data employed in the study. --- paper_title: Entropy and 2-D Velocity Distribution in Open Channels paper_content: Equations based on the entropy concept have been derived for describing the two-dimensional velocity distribution in an open-channel cross section. The velocity equation derived is capable of describing the variation of velocity in both the vertical and transverse directions, with the maximum velocity occurring on or below the water surface. Equations for determining the location of mean velocity have also been derived along with those, such as the entropy function, that can be used as measures of the homogeneity of velocity distribution in a channel cross section. A dimensionless parameter of the entropy function named the M number has been found useful as an index for characterizing and comparing various patterns of velocity distribution and states of open-channel flow systems. The definition and demonstrated usefulness of this parameter indicate the importance and value of the information given by the location and magnitude of maximum velocity in a cross section, and suggest the need for future experime...
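The five-part scheme listed in the velocity-distribution abstracts above can be summarized generically as follows; this is a sketch under a simple sign convention, and the cited papers add further constraints and a CDF hypothesis linking velocity to flow depth. Maximize the Tsallis entropy of the velocity PDF f(u) subject to normalization and a mean-velocity constraint, then solve the stationarity condition of the Lagrangian:

\begin{align}
  \max_{f}\; S_q[f] &= \frac{1}{q-1}\Bigl(1 - \int_0^{u_{\max}} f(u)^q \, du\Bigr)
  \quad\text{s.t.}\quad
  \int_0^{u_{\max}} f(u)\,du = 1, \qquad
  \int_0^{u_{\max}} u\, f(u)\,du = \bar{u}, \\
  0 &= -\frac{q}{q-1}\, f(u)^{\,q-1} + \lambda_0 + \lambda_1 u
  \quad\Longrightarrow\quad
  f(u) = \Bigl[\tfrac{q-1}{q}\bigl(\lambda_0 + \lambda_1 u\bigr)\Bigr]^{1/(q-1)},
\end{align}
with the Lagrange multipliers \(\lambda_0, \lambda_1\) fixed by substituting the solution back into the two constraints.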
--- paper_title: Entropy Theory for Two-Dimensional Velocity Distribution paper_content: Assuming time-averaged velocity as a random variable, this study develops an entropy theory for deriving two-dimensional (2D) distribution of velocity in open channels. The theory comprises five parts: (1) Tsallis entropy; (2) principle of maximum entropy (POME); (3) specification of information on velocity in terms of constraints; (4) maximization of entropy; and (5) derivation of the probability distribution of velocity. The entropy theory is then combined with a hypothesis on the cumulative distribution function of velocity in terms of flow depth to derive a 2D velocity distribution. The derived distribution is tested using field as well as laboratory observations reported in the literature and is compared with known velocity distributions. Agreement between velocity values computed using the entropy-based distribution and observed values is found satisfactory. Also, the derived distribution compares favorably with known distributions. --- paper_title: Entropy Theory in Hydrologic Science and Engineering paper_content: 1. Entropy Theory 2. Morphological Analysis 3. Evaluation and Design of Sampling and Measurement Networks 4. Precipitation Variability 5. Rainfall Frequency Distributions 6. Assessment of Potential Water Resources Availability 7. Evaluation of Precipitation Forecasting Schemes 8. Infiltration 9. Soil Moisture 10. Groundwater Flow 11. Rainfall-Runoff Modeling 12. Hydrologic Frequency Analysis 13. Steamflow Simulation 14. Long-Term Streamflow Forecasting 15. Flood Forecasting: Bivariate Analysis 16. River Regime Classification 17. Sediment Yield Eco Index --- paper_title: Entropy and 2-D Velocity Distribution in Open Channels paper_content: Equations based on the entropy concept have been derived for describing the two‐dimensional velocity distribution in an openchannel cross section. The velocity equation derived is capable of describing the variation of velocity in both the vertical and transverse directions, with the maximum velocity occurring on or below the water surface. Equations for determining the location of mean velocity have also been derived along with those, such as the entropy function, that can be used as measures of the homogeneity of velocity distribution in a channel cross section. A dimensionless parameter of the entropy function named the M number has been found useful as an index for characterizing and comparing various patterns of velocity distribution and states of open‐channel flow systems. The definition and demonstrated usefulness of this parameter indicate the importance and value of the information given by the location and magnitude of maximum velocity in a cross section, and suggest the need for future experime... --- paper_title: Entropy Theory for Two-Dimensional Velocity Distribution paper_content: Assuming time-averaged velocity as a random variable, this study develops an entropy theory for deriving two-dimensional (2D) distribution of velocity in open channels. The theory comprises five parts: (1) Tsallis entropy; (2) principle of maximum entropy (POME); (3) specification of information on velocity in terms of constraints; (4) maximization of entropy; and (5) derivation of the probability distribution of velocity. The entropy theory is then combined with a hypothesis on the cumulative distribution function of velocity in terms of flow depth to derive a 2D velocity distribution. 
The derived distribution is tested using field as well as laboratory observations reported in the literature and is compared with known velocity distributions. Agreement between velocity values computed using the entropy-based distribution and observed values is found satisfactory. Also, the derived distribution compares favorably with known distributions. --- paper_title: Two-Dimensional Velocity Distribution in Open Channels Using the Tsallis Entropy paper_content: AbstractAssuming time-averaged velocity as a random variable, a two-dimensional (2D) velocity distribution in open channels was derived by maximizing the Tsallis entropy, subject to mass conservation. The derived distribution was tested using field data and was also compared with other two-dimensional velocity distributions. The Tsallis entropy–based velocity distribution was found to predict the velocity near the boundary well. --- paper_title: Unit Stream Power and Sediment Transport paper_content: A thorough study of the existing applicable data reveals the basic reason that previous equations often provide misleading predictions of the sediment transport rates. The error stems from the unrealistic assumptions made in their derivations. Unit stream power, defined as the time rate of potential energy expenditure per unit weight of water in an alluvial channel, is shown to dominate the total sediment concentration. Statistical analyses of 1,225 sets of laboratory flume data and 50 sets of field data indicate the existence and the generality of the linear relationship between the logarithm of total sediment concentration and the logarithm of the effective unit stream power. The coefficients in the proposed equation are shown to be related to particle size and water depth, or particle size and width-depth ratio. An equation generalized from Gilbert's data can be applied to natural streams for the prediction of total sediment discharge with good accuracy. --- paper_title: Suspended Sediment Concentration in Open Channels Using Tsallis Entropy paper_content: AbstractConcentration of suspended sediment is of fundamental importance in environmental management, assessment of best management practices, water quality evaluation, reservoir ecosystem integrity, and fluvial hydraulics. Assuming time-averaged sediment concentration along a vertical as a random variable, a probability distribution of suspended sediment concentration is derived by maximizing the Tsallis entropy subject to the constraint given by the mean concentration and under the assumption that the sediment concentration is zero at the water surface. For deriving the sediment concentration profile along the vertical, a nonlinear cumulative distribution function is hypothesized and verified with observed data. The derived sediment concentration profile is tested using experimental and field data; however, the clear water surface assumption does not seem to be valid for field data. The Tsallis entropy-based concentration profile method is compared with three sediment concentration profile methods. Comp... --- paper_title: Current and future use of systems analysis in water distribution network design paper_content: Abstract Computer use in the design of water distribution networks was inititated through the use of network analysis techniques to determine system performance in terms of heads and flows. The last fifteen years, however, have seen the introduction of systems analysis optimization techniques to the range of computer models available for network design purposes. 
These optimization models differ markedly from the ‘traditional’ network analysis models in that they ‘design’ systems for specified loading conditions rather than just analysing the performance of predetermined systems under given loading conditions. Cost was the primary or only objective in almost all these early optimization models. Water distribution network design has, however, a number of other important objectives, such as maximizing reliability. Issues related to reliability concern include probability of component failure, probability of actual demands being greater than design values, and the system redundancy inherent within the layout ... --- paper_title: Two-Dimensional Velocity Distribution in Open Channels Using the Tsallis Entropy paper_content: AbstractAssuming time-averaged velocity as a random variable, a two-dimensional (2D) velocity distribution in open channels was derived by maximizing the Tsallis entropy, subject to mass conservation. The derived distribution was tested using field data and was also compared with other two-dimensional velocity distributions. The Tsallis entropy–based velocity distribution was found to predict the velocity near the boundary well. --- paper_title: Computation of Suspended Sediment Discharge in Open Channels by Combining Tsallis Entropy–Based Methods and Empirical Formulas paper_content: AbstractSediment discharge is computed by using different combinations of entropy-based and empirical methods of channel cross-section velocity and suspended sediment concentration distribution, and the results of these different methods are compared. The comparison shows that the entropy-based methods are more accurate than the empirical method and that the Tsallis entropy–based method is more accurate than the one based on Shannon entropy. The accuracy of the computation for all methods can generally be improved by introducing a correction factor; however, the fully entropy-based methods still remain the most accurate. --- paper_title: Suspended Sediment Concentration in Open Channels Using Tsallis Entropy paper_content: AbstractConcentration of suspended sediment is of fundamental importance in environmental management, assessment of best management practices, water quality evaluation, reservoir ecosystem integrity, and fluvial hydraulics. Assuming time-averaged sediment concentration along a vertical as a random variable, a probability distribution of suspended sediment concentration is derived by maximizing the Tsallis entropy subject to the constraint given by the mean concentration and under the assumption that the sediment concentration is zero at the water surface. For deriving the sediment concentration profile along the vertical, a nonlinear cumulative distribution function is hypothesized and verified with observed data. The derived sediment concentration profile is tested using experimental and field data; however, the clear water surface assumption does not seem to be valid for field data. The Tsallis entropy-based concentration profile method is compared with three sediment concentration profile methods. Comp... --- paper_title: On the Cumulative Distribution Function for Entropy-Based Hydrologic Modeling paper_content: In spatial or temporal physically based entropy-based modeling in hydrology and water resources, the cumulative distribution function (CDF) of a design variable (e.g., flux, say discharge) is hypothesized in terms of concentration (e.g., stage of flow). 
Thus far, a linear hypothesis has been employed when applying entropy to derive relationships for design variables, but without empirical evidence or physical justification. Examples of such relationships include velocity distribution as a function of flow depth, wind velocity as a function of height, sediment concentration profile along the flow depth, rating curve, infiltration capacity rate as a function of time, soil moisture profile along the depth below the soil surface, runoff as a function of rainfall amount, unit hydrograph, and groundwater discharge along the horizontal direction of flow. This study proposes a general nonlinear form of the CDF that specializes into commonly used linear forms. The general form is tested using empirical data on velocity, sediment concentration, soil moisture, and stage-discharge and compared with those reported in the literature. It is found that a simpler form of the general nonlinear hypothesis seems satisfactory for the data tested, and it is quite likely that this simple form will suffice for other data as well. The linear hypothesis does not seem to hold for the data employed in the study. --- paper_title: Flow Duration Curve Using Entropy Theory paper_content: AbstractUsing the entropy theory, this study derives a function for modeling the flow duration curve (FDC) based on two equations that form simple constraints: (1) the total probability of all flow discharges, and (2) the mean discharge. Parameters of the derived curve are determined from the entropy theory with the use of these two constraints. For deriving the flow duration curve by the entropy theory, a nonlinear cumulative distribution function is assumed, which is tested using observed flow data. The derived flow duration curves are tested using field data and are found to be in agreement with observed curves. With the entropy parameter determined for each station, the flow duration curve can also be forecasted for different recurrence intervals. The main advantage of the use of the entropy theory–based FDCs is that the parameters are based on observations, and hence no fitting is needed. Second, the theory permits a probabilistic characterization of the flow duration curve and hence the probability ... --- paper_title: Intermittent Motion, Nonlinear Diffusion Equation and Tsallis Formalism paper_content: We investigate an intermittent process obtained from the combination of a nonlinear diffusion equation and pauses. We consider the porous media equation with reaction terms related to the rate of switching the particles from the diffusive mode to the resting mode or switching them from the resting to the movement. The results show that in the asymptotic limit of small and long times, the spreading of the system is essentially governed by the diffusive term. The behavior exhibited for intermediate times depends on the rates present in the reaction terms. In this scenario, we show that, in the asymptotic limits, the distributions for this process are given by in terms of power laws which may be related to the q-exponential present in the Tsallis statistics. Furthermore, we also analyze a situation characterized by different diffusive regimes, which emerges when the diffusive term is a mixing of linear and nonlinear terms. 
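The flow duration curve entry above fits an entropy-based form to the empirical exceedance curve of a discharge record. The Python sketch below computes only that standard empirical curve using the Weibull plotting position; the entropy-derived expression itself is in the cited paper, and the function name and synthetic record are illustrative.

import numpy as np

def empirical_fdc(discharge):
    # Empirical flow duration curve: sort flows in descending order and assign
    # each the exceedance probability m/(n+1) (Weibull plotting position).
    q = np.sort(np.asarray(discharge, dtype=float))[::-1]
    n = q.size
    p_exc = np.arange(1, n + 1) / (n + 1.0)
    return p_exc, q

rng = np.random.default_rng(0)
flows = rng.lognormal(mean=3.0, sigma=0.8, size=365)   # hypothetical daily flows, m^3/s
p, q = empirical_fdc(flows)
print(f"Discharge exceeded 95% of the time: {q[np.searchsorted(p, 0.95)]:.1f} m^3/s")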
--- paper_title: On some properties of Tsallis hypoentropies and hypodivergences paper_content: Both the Kullback–Leibler and the Tsallis divergence have a strong limitation: if the value zero appears in probability distributions (p1, ··· , pn) and (q1, ··· , qn), it must appear in the same positions for the sake of significance. In order to avoid that limitation in the framework of Shannon statistics, Ferreri introduced in 1980 hypoentropy: “such conditions rarely occur in practice”. The aim of the present paper is to extend Ferreri’s hypoentropy to the Tsallis statistics. We introduce the Tsallis hypoentropy and the Tsallis hypodivergence and describe their mathematical behavior. Fundamental properties, like nonnegativity, monotonicity, the chain rule and subadditivity, are established. --- paper_title: Measures of Qualitative Variation in the Case of Maximum Entropy paper_content: Asymptotic behavior of qualitative variation statistics, including entropy measures, can be modeled well by normal distributions. In this study, we test the normality of various qualitative variation measures in general. We find that almost all indices tend to normality as the sample size increases, and they are highly correlated. However, for all of these qualitative variation statistics, maximum uncertainty is a serious factor that prevents normality. Among these, we study the properties of two qualitative variation statistics; VarNC and StDev statistics in the case of maximum uncertainty, since these two statistics show lower sampling variability and utilize all sample information. We derive probability distribution functions of these statistics and prove that they are consistent. We also discuss the relationship between VarNC and the normalized form of Tsallis (α = 2) entropy in the case of maximum uncertainty. --- paper_title: The Legendre Transform in Non-additive Thermodynamics and Complexity paper_content: We present an argument which purports to show that the use of the standard Legendre transform in non-additive Statistical Mechanics is not appropriate. For concreteness, we use as paradigm, the case of systems which are conjecturally described by the (non-additive) Tsallis entropy. We point out the form of the modified Legendre transform that should be used, instead, in the non-additive thermodynamics induced by the Tsallis entropy. We comment on more general implications of this proposal for the thermodynamics of “complex systems”. --- paper_title: Tsallis Wavelet Entropy and Its Application in Power Signal Analysis paper_content: As a novel data mining approach, a wavelet entropy algorithm is used to perform entropy statistics on wavelet coefficients (or reconstructed signals) at various wavelet scales on the basis of wavelet decomposition and entropy statistic theory. Shannon wavelet energy entropy, one kind of wavelet entropy algorithm, has been taken into consideration and utilized in many areas since it came into being. However, as there is wavelet aliasing after the wavelet decomposition, and the information set of different-scale wavelet decomposition coefficients (or reconstructed signals) is non-additive to a certain extent, Shannon entropy, which is more adaptable to extensive systems, couldn’t do accurate uncertainty statistics on the wavelet decomposition results. Therefore, the transient signal features are extracted incorrectly by using Shannon wavelet energy entropy. 
From these two aspects, namely the theoretical limitations and the negative effects of wavelet aliasing on extraction accuracy, the problems that arise when Shannon wavelet energy entropy is used to extract features of transient signals are discussed in depth. Considering the defects of Shannon wavelet energy entropy, a novel wavelet entropy named Tsallis wavelet energy entropy is proposed by using Tsallis entropy instead of Shannon entropy, and it is applied to the feature extraction of transient signals in power systems. Theoretical derivation and experimental results show that, compared with Shannon wavelet energy entropy, Tsallis wavelet energy entropy reduces the negative effects of wavelet aliasing on the accuracy of feature extraction and extracts the transient signal features of power systems accurately. ---
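A plausible minimal reading of the Tsallis wavelet energy entropy described in the last abstract is sketched below in Python: decompose the signal over several scales, take each scale's energy fraction as a probability, and apply the Tsallis entropy of order q instead of the Shannon entropy. A plain Haar decomposition is used here only for self-containment; the wavelet, scales, and normalization used in the cited work may differ, and the example signals are hypothetical.

import numpy as np

def tsallis_wavelet_energy_entropy(signal, q=2.0, levels=4):
    # Haar multiresolution split: collect detail energy at each level plus the
    # final approximation energy, normalize to probabilities, apply Tsallis entropy.
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if x.size % 2:                            # keep lengths even for pairing
            x = x[:-1]
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail ** 2))
        x = approx
    energies.append(np.sum(x ** 2))               # remaining approximation energy
    p = np.array(energies) / np.sum(energies)
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)                 # energy confined to coarse scales
noise = rng.standard_normal(1024)                  # energy spread over many scales
print(tsallis_wavelet_energy_entropy(smooth))      # low: energy concentrated
print(tsallis_wavelet_energy_entropy(noise))       # higher: energy spread across scales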
Title: Tsallis Entropy Theory for Modeling in Water Engineering: A Review
Section 1: Introduction
Description 1: Introduce the significance of water resources systems, their historical development, and the contemporary integration of engineering and non-engineering aspects. Provide an overview of the paper's objectives and structure.
Section 2: Definition of Entropy
Description 2: Define the concept of entropy and its interpretations in various fields, particularly in water engineering. Introduce Shannon and Tsallis entropy formulations.
Section 3: Properties of Tsallis Entropy
Description 3: Summarize the key properties of Tsallis entropy, including m-entropy, maximum value, concavity, additivity, composability, and interaction between sub-systems.
Section 4: Principle of Maximum Entropy
Description 4: Discuss the principle of maximum entropy (POME) and its application steps in deriving probability distributions for random variables.
Section 5: Specification of Constraints
Description 5: Explain how to define appropriate constraints for deriving probability density functions using POME, focusing on considerations like simplicity, physical meaningfulness, and statistical moments.
Section 6: Entropy Maximizing Using Lagrange Multipliers
Description 6: Outline the process of maximizing Tsallis entropy subject to constraints using the method of Lagrange multipliers, leading to the determination of probability distributions and Lagrange multipliers.
Section 7: Applications in Water Engineering: Overview
Description 7: Provide an overview of the types of problems in water engineering that can be addressed using Tsallis entropy, categorized into three groups: those requiring entropy maximization, coupling with another theory, and deriving physical relations.
Section 8: Problems Requiring Entropy Maximization
Description 8: Detail specific water engineering problems that involve entropy maximization, such as deriving frequency distributions and network evaluation.
Section 9: Problems Requiring Coupling with another Theory
Description 9: Describe problems where Tsallis entropy must be coupled with other theories, such as hydraulic geometry and evaporation, to achieve solutions.
Section 10: Problems Involving Physical Relations
Description 10: Explain how to derive physical relationships in water engineering by concatenating the probability and physical domains, illustrated with examples like velocity distribution and sediment concentration.
Section 11: Hypotheses on Cumulative Probability Distribution Function
Description 11: Discuss the formulation of cumulative probability distribution functions (CDFs) for different types of design variables and their applications in deriving relationships.
Section 12: One-Dimensional Velocity Distribution
Description 12: Explain the derivation of one-dimensional velocity distribution using Tsallis entropy, considering the constraints and assumed CDFs.
Section 13: Two-Dimensional Velocity Distribution
Description 13: Describe the derivation of two-dimensional velocity distribution, transforming coordinates and utilizing the Tsallis entropy for characterizing flow in channel cross-sections.
Section 14: Suspended Sediment Concentration
Description 14: Illustrate the process of deriving suspended sediment concentration distributions using Tsallis entropy, including defining constraints and determining dimensionless parameters.
Section 15: Sediment Discharge
Description 15: Discuss the computation of sediment discharge using combinations of entropy-based and empirical methods for velocity and sediment concentration distributions.
Section 16: Flow-Duration Curve
Description 16: Detail the derivation of flow-duration curves (FDC) using Tsallis entropy, defining discharge as a random variable and determining the PDF and dimensionless parameters.
Section 17: Conclusions
Description 17: Summarize the potential and advantages of Tsallis entropy theory in water engineering, citing its ability to combine statistical information with physical laws and derive probability distributions and physical relations. Discuss the scope for future research.
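Sections 11 and 12 of this outline rest on the "CDF hypothesis" step that links the probability domain to the physical domain. In its simplest (linear) form, hypothesizing that the cumulative probability of velocity equals the relative depth gives

\begin{equation}
  F\bigl(u(y)\bigr) = \frac{y}{D}
  \quad\Longrightarrow\quad
  f\bigl(u(y)\bigr)\,\frac{du}{dy} = \frac{1}{D},
\end{equation}

where y is the distance from the channel bed and D the flow depth; substituting the entropy-based density f(u) and integrating in y yields the velocity profile u(y). The review also considers general nonlinear CDF hypotheses, so this is only the simplest illustrative case.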
An overview of MIMO communications - a key to gigabit wireless
9
--- paper_title: Space-Time Block Codes from Orthogonal Designs paper_content: We introduce space-time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space-time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space-time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space-time block codes. It is shown that space-time block codes constructed in this way only exist for few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space-time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space-time block codes are designed that achieve 1/2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space-time block codes are designed that achieve, respectively, all, 3/4, and 3/4 of maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well. --- paper_title: Signal design for transmitter diversity wireless communication systems over Rayleigh fading channels paper_content: Transmitter diversity wireless communication systems over Rayleigh fading channels using pilot symbol assisted modulation (PSAM) are studied. Unlike conventional transmitter diversity systems with PSAM that estimate the superimposed fading process, we are able to estimate each individual fading process corresponding to the multiple transmitters by using appropriately designed pilot symbol sequences. With such sequences, special coded modulation schemes can then be designed to access the diversity provided by the multiple transmitters without having to use an interleaver or expand the signal bandwidth. The code matrix notion is introduced for the coded modulation scheme, and its design criteria are also established. In addition to the reduction in receiver complexity, simulation results are compared to, and shown to be superior to, that of an intentional frequency offset system over a wide range of system parameters. 
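The simplest instance of the orthogonal-design construction described in the first abstract above is the rate-one code for two transmit antennas (the Alamouti scheme). The Python sketch below, with illustrative symbol values, channel draws, and noise level, shows why linear processing suffices at the receiver: after combining, each symbol estimate is scaled by |h1|^2 + |h2|^2 and decoupled from the other.

import numpy as np

rng = np.random.default_rng(1)

# Two QPSK symbols to transmit
bits = rng.integers(0, 2, size=(2, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Flat Rayleigh gains from the two transmit antennas to one receive antenna,
# assumed constant over the two symbol periods and known at the receiver.
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

# Alamouti transmission: period 1 sends (s1, s2), period 2 sends (-s2*, s1*)
r1 = h[0] * s[0] + h[1] * s[1] + noise[0]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + noise[1]

# Linear combining; each estimate sees both channel gains (diversity order 2)
s1_hat = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
s2_hat = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
gain = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2

print(np.round(s, 3))                                   # transmitted symbols
print(np.round(np.array([s1_hat, s2_hat]) / gain, 3))   # decoupled estimates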
--- paper_title: On the capacity of OFDM-based spatial multiplexing systems paper_content: This paper deals with the capacity behavior of wireless orthogonal frequency-division multiplexing (OFDM)-based spatial multiplexing systems in broad-band fading environments for the case where the channel is unknown at the transmitter and perfectly known at the receiver. Introducing a physically motivated multiple-input multiple-output (MIMO) broad-band fading channel model, we study the influence of physical parameters such as the amount of delay spread, cluster angle spread, and total angle spread, and system parameters such as the number of antennas and antenna spacing on ergodic capacity and outage capacity. We find that, in the MIMO case, unlike the single-input single-output (SISO) case, delay spread channels may provide advantages over flat fading channels not only in terms of outage capacity but also in terms of ergodic capacity. Therefore, MIMO delay spread channels will in general provide both higher diversity gain and higher multiplexing gain than MIMO flat fading channels --- paper_title: A Simple Transmit Diversity Technique for Wireless Communications paper_content: This paper presents a simple two-branch transmit diversity scheme. Using two transmit antennas and one receive antenna the scheme provides the same diversity order as maximal-ratio receiver combining (MRRC) with one transmit antenna, and two receive antennas. It is also shown that the scheme may easily be generalized to two transmit antennas and M receive antennas to provide a diversity order of 2M. The new scheme does not require any bandwidth expansion or any feedback from the receiver to the transmitter, and its computation complexity is similar to MRRC. --- paper_title: Space-Time Block Codes from Orthogonal Designs paper_content: We introduce space-time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space-time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space-time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space-time block codes. It is shown that space-time block codes constructed in this way only exist for few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space-time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space-time block codes are designed that achieve 1/2 of the maximum possible transmission rate for any number of transmit antennas.
For the specific cases of two, three, and four transmit antennas, space-time block codes are designed that achieve, respectively, all, 3/4, and 3/4 of maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well. --- paper_title: Space-time codes for high data rate wireless communication: Performance criterion and code construction paper_content: We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by a channel code and the encoded data is split into n streams that are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. We derive performance criteria for designing such codes under the assumption that the fading is slow and frequency nonselective. Performance is shown to be determined by matrices constructed from pairs of distinct code sequences. The minimum rank among these matrices quantifies the diversity gain, while the minimum determinant of these matrices quantifies the coding gain. The results are then extended to fast fading channels. The design criteria are used to design trellis codes for high data rate wireless communication. The encoding/decoding complexity of these codes is comparable to trellis codes employed in practice over Gaussian channels. The codes constructed here provide the best tradeoff between data rate, diversity advantage, and trellis complexity. Simulation results are provided for 4 and 8 PSK signal sets with data rates of 2 and 3 bits/symbol, demonstrating excellent performance that is within 2-3 dB of the outage capacity for these channels using only 64 state encoders. --- paper_title: On the capacity of OFDM-based spatial multiplexing systems paper_content: This paper deals with the capacity behavior of wireless orthogonal frequency-division multiplexing (OFDM)-based spatial multiplexing systems in broad-band fading environments for the case where the channel is unknown at the transmitter and perfectly known at the receiver. Introducing a physically motivated multiple-input multiple-output (MIMO) broad-band fading channel model, we study the influence of physical parameters such as the amount of delay spread, cluster angle spread, and total angle spread, and system parameters such as the number of antennas and antenna spacing on ergodic capacity and outage capacity. We find that, in the MIMO case, unlike the single-input single-output (SISO) case, delay spread channels may provide advantages over flat fading channels not only in terms of outage capacity but also in terms of ergodic capacity. Therefore, MIMO delay spread channels will in general provide both higher diversity gain and higher multiplexing gain than MIMO flat fading channels --- paper_title: Measurement and characterization of broadband MIMO fixed wireless channels at 2.5 GHz paper_content: We study the channel typical for cellular broadband fixed wireless applications. A measurement system for a two-element-transmit by two-element-receive antenna configuration was built. Measurements were conducted in a suburban environment with dual antenna polarization and transmit separation.
We present results on K-factor, cross-polarization discrimination (XPD) and Doppler spectrum. Our results address the influence of distance and antenna height for K-factor and XPD. We also comment on the properties of a fixed wireless channel and describe its Doppler spectrum. --- paper_title: Introduction to Space-Time Wireless Communications paper_content: Wireless networks are under constant pressure to provide ever-higher data rates to increasing numbers of users with greater reliability. This book is an accessible introduction to every fundamental aspect of space-time wireless communications. Space-time processing technology is a powerful tool for improving system performance that already features in the UMTS and CDMA2000 mobile standards. The ideal volume for graduate students and professionals, it features homework problems and other supporting material on a companion website. --- paper_title: Spectral efficiency of wireless systems with multiple transmit and receive antennas paper_content: Recent information-theory results have shown the enormous capacity potential of wireless techniques that use transmit and receive antenna arrays. As a result, a number of layered space-time (BLAST) architectures have been proposed wherein multiple data streams are transmitted in parallel and separated at the receiver on account of their distinct spatial signatures. While extremely promising, all analysis of BLAST to date were restricted to the context of a single-user link. In this paper, the system-level benefit of using BLAST in multicell scenarios is evaluated in comparison with other directive- and adaptive-array techniques. --- paper_title: Multiple-input - multiple-output measurements and modeling in Manhattan paper_content: Narrowband multiple-input-multiple-output (MIMO) measurements using 16 transmitters and 16 receivers at 2.11 GHz were carried out in Manhattan. High capacities were found for full, as well as smaller array configurations, all within 80% of the fully scattering channel capacity. Correlation model parameters are derived from data. Spatial MIMO channel capacity statistics are found to be well represented by the separate transmitter and receiver correlation matrices, with a median relative error in capacity of 3%, in contrast with the 18% median relative error observed by assuming the antennas to be uncorrelated. A reduced parameter model, consisting of 4 parameters, has been developed to statistically represent the channel correlation matrices. These correlation matrices are, in turn, used to generate H matrices with capacities that are consistent within a few percent of those measured in New York. The spatial channel model reported allows simulations of H matrices for arbitrary antenna configurations. These channel matrices may be used to test receiver algorithms in system performance studies. These results may also be used for antenna array design, as the decay of mobile antenna correlation with antenna separation has been reported here. An important finding for the base transmitter array was that the antennas were largely uncorrelated even at antenna separations as small as two wavelengths. --- paper_title: On the capacity of OFDM-based spatial multiplexing systems paper_content: This paper deals with the capacity behavior of wireless orthogonal frequency-division multiplexing (OFDM)-based spatial multiplexing systems in broad-band fading environments for the case where the channel is unknown at the transmitter and perfectly known at the receiver. 
Introducing a physically motivated multiple-input multiple-output (MIMO) broad-band fading channel model, we study the influence of physical parameters such as the amount of delay spread, cluster angle spread, and total angle spread, and system parameters such as the number of antennas and antenna spacing on ergodic capacity and outage capacity. We find that, in the MIMO case, unlike the single-input single-output (SISO) case, delay spread channels may provide advantages over flat fading channels not only in terms of outage capacity but also in terms of ergodic capacity. Therefore, MIMO delay spread channels will in general provide both higher diversity gain and higher multiplexing gain than MIMO flat fading channels --- paper_title: Elements of Information Theory paper_content: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 
9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index. 
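Several of the capacity entries in this group evaluate the ergodic capacity of a MIMO link that is known at the receiver but not at the transmitter, which for an i.i.d. Rayleigh flat-fading channel with equal power allocation across transmit antennas is C = E[log2 det(I + (rho/Mt) H H^H)]. The short Python sketch below is an illustrative Monte Carlo estimate of this quantity; the function name ergodic_capacity and all parameter choices are our own assumptions, not code from any of the cited papers.

```python
# Illustrative Monte Carlo estimate of the equal-power ergodic MIMO capacity
#   C = E[ log2 det( I + (rho / Mt) * H * H^H ) ]
# for an i.i.d. Rayleigh flat-fading channel known only at the receiver.
import numpy as np

def ergodic_capacity(Mt, Mr, snr_db, trials=10_000, seed=None):
    """Average log-det capacity in bit/s/Hz over random CN(0,1) channel draws."""
    rng = np.random.default_rng(seed)
    rho = 10.0 ** (snr_db / 10.0)              # linear SNR per receive antenna
    caps = np.empty(trials)
    for t in range(trials):
        H = (rng.standard_normal((Mr, Mt)) +
             1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2.0)
        G = np.eye(Mr) + (rho / Mt) * (H @ H.conj().T)
        caps[t] = np.linalg.slogdet(G)[1] / np.log(2.0)   # log2 det, numerically safe
    return caps.mean()

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} x {n}: {ergodic_capacity(n, n, snr_db=21.0, seed=0):.1f} bit/s/Hz")
```

With equal antenna counts at both ends the estimate grows roughly linearly with the number of antennas at a fixed SNR, which is the scaling behavior the next entry quantifies for n = 2, 4 and 16 at 21 dB.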
--- paper_title: On limits of wireless communications in a fading environment when using multiple antennas paper_content: This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building to building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bit/cycle at the 99% level. For say a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised. --- paper_title: Outdoor MIMO Wireless Channels: Models and Performance Prediction paper_content: We present a new model for multiple-input-multiple-output (MIMO) outdoor wireless fading channels and their capacity performance. The proposed model is more general and realistic than the usual independent and identically distributed (i.i.d.) model, and allows us to investigate the behavior of channel capacity as a function of the scattering radii at transmitter and receiver, distance between the transmit and receive arrays, and antenna beamwidths and spacing. We show how the MIMO capacity is governed by spatial fading correlation and the condition number of the channel matrix through specific sets of propagation parameters. The proposed model explains the existence of "pinhole" channels which exhibit low spatial fading correlation at both ends of the link but still have poor rank properties, and hence, low ergodic capacity. In fact, the model suggests the existence of a more general family of channels spanning continuously from full rank i.i.d. to low-rank pinhole cases.
We suggest guidelines for predicting high rank (and hence, high ergodic capacity) in MIMO channels, and show that even at long ranges, high channel rank can easily be sustained under mild scattering conditions. Finally, we validate our results by simulations using ray tracing techniques. Connections with basic antenna theory are made. --- paper_title: Capacity of a Mobile Multiple-Antenna Communication Link in Rayleigh Flat Fading paper_content: We analyze a mobile wireless link comprising M transmitter and N receiver antennas operating in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are statistically independent and unknown; they remain constant for a coherence interval of T symbol periods, after which they change to new independent values which they maintain for another T symbol periods, and so on. Computing the link capacity, associated with channel coding over multiple fading intervals, requires an optimization over the joint density of T·M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater than the length of the coherence interval: the capacity for M>T is equal to the capacity for M=T. Capacity is achieved when the T×M transmitted signal matrix is equal to the product of two statistically independent matrices: a T×T isotropically distributed unitary matrix times a certain T×M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity for many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence interval increases, the capacity approaches the capacity obtained as if the receiver knew the propagation coefficients. --- paper_title: Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel paper_content: We study the capacity of multiple-antenna fading channels. We focus on the scenario where the fading coefficients vary quickly; thus an accurate estimation of the coefficients is generally not available to either the transmitter or the receiver. We use a noncoherent block fading model proposed by Marzetta and Hochwald (see ibid. vol.45, p.139-57, 1999). The model does not assume any channel side information at the receiver or at the transmitter, but assumes that the coefficients remain constant for a coherence interval of length T symbol periods. We compute the asymptotic capacity of this channel at high signal-to-noise ratio (SNR) in terms of the coherence time T, the number of transmit antennas M, and the number of receive antennas N. While the capacity gain of the coherent multiple antenna channel is min{M, N} bits per second per Hertz for every 3-dB increase in SNR, the corresponding gain for the noncoherent channel turns out to be M* (1 - M*/T) bits per second per Hertz, where M*=min{M, N, [T/2]}. The capacity expression has a geometric interpretation as sphere packing in the Grassmann manifold. --- paper_title: Measurement and characterization of broadband MIMO fixed wireless channels at 2.5 GHz paper_content: We study the channel typical for cellular broadband fixed wireless applications. A measurement system for a two-element-transmit by two-element-receive antenna configuration was built. Measurements were conducted in a suburban environment with dual antenna polarization and transmit separation.
We present results on K-factor, cross-polarization discrimination (XPD) and Doppler spectrum. Our results address the influence of distance and antenna height for K-factor and XPD. We also comment on the properties of a fixed wireless channel and describe its Doppler spectrum. --- paper_title: Unitary space-time modulation for multiple-antenna communications in Rayleigh flat fading paper_content: Motivated by information-theoretic considerations, we propose a signaling scheme, unitary space-time modulation, for multiple-antenna communication links. This modulation is ideally suited for Rayleigh fast-fading environments, since it does not require the receiver to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T×M space-time signals (Φ_l, l=1, ..., L), where T represents the coherence interval during which the fading is approximately constant, and M<T is the number of transmitter antennas. The columns of each Φ_l are orthonormal. When the receiver does not know the propagation coefficients, which between pairs of transmitter and receiver antennas are modeled as statistically independent, this modulation performs very well either when the signal-to-noise ratio (SNR) is high or when T ≫ M. We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit-error probability with maximum-likelihood decoding. We demonstrate that two antennas have a 6-dB diversity gain over one antenna at 15-dB SNR. --- paper_title: Fading Channels: Information-Theoretic and Communications Aspects paper_content: In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels. --- paper_title: Information theoretic considerations for cellular mobile radio paper_content: We present some information-theoretic considerations used to determine upper bounds on the information rates that can be reliably transmitted over a two-ray propagation path mobile radio channel model, operating in a time division multiplex access (TDMA) regime, under given decoding delay constraints. The sense in which reliability is measured is addressed, and in the interesting cases where the decoding delay constraint plays a significant role, the maximal achievable rate (capacity) is specified in terms of capacity versus outage. In this case, no coding capacity in the strict Shannon sense exists. Simple schemes for time and space diversity are examined, and their potential benefits are illuminated from an information-theoretic standpoint. In our presentation, we chose to specialize to the TDMA protocol for the sake of clarity and convenience. Our main arguments and results extend directly to certain variants of other multiple access protocols such as code division multiple access (CDMA) and frequency division multiple access (FDMA), provided that no fast feedback from the receiver to the transmitter is available.
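The fixed wireless measurement entries above summarize the channel partly through its Ricean K-factor. A standard way to extract K from narrowband channel samples is the moment-based estimator K = sqrt(Ga^2 - Gv) / (Ga - sqrt(Ga^2 - Gv)), where Ga and Gv are the mean and variance of the received power. The sketch below is an illustrative application of that estimator to synthetic Ricean samples; the function names and sample sizes are our own assumptions and the code is not taken from the cited measurement papers.

```python
# Illustrative moment-based Ricean K-factor estimation from narrowband channel
# samples: K = sqrt(Ga^2 - Gv) / (Ga - sqrt(Ga^2 - Gv)), with Ga and Gv the
# mean and variance of the received power |h|^2.
import numpy as np

def estimate_k_factor(h):
    """Moment-based K-factor estimate from complex channel samples h."""
    power = np.abs(h) ** 2
    ga = power.mean()                          # first moment of the power
    gv = power.var()                           # variance of the power
    root = np.sqrt(max(ga ** 2 - gv, 0.0))
    return root / (ga - root) if ga > root else float("inf")

def ricean_samples(k, n, rng):
    """Unit-power Ricean fading samples with the requested K-factor."""
    los = np.sqrt(k / (k + 1.0))                               # fixed LOS part
    nlos = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) \
        * np.sqrt(1.0 / (2.0 * (k + 1.0)))                     # diffuse part
    return los + nlos

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for k_true in (0.5, 3.0, 10.0):
        h = ricean_samples(k_true, 200_000, rng)
        print(f"true K = {k_true:4.1f}   estimated K = {estimate_k_factor(h):.2f}")
```

On synthetic unit-power samples the estimate converges to the K-factor used to generate them, which is a useful sanity check before applying the estimator to measured data.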
--- paper_title: On the capacity of OFDM-based spatial multiplexing systems paper_content: This paper deals with the capacity behavior of wireless orthogonal frequency-division multiplexing (OFDM)-based spatial multiplexing systems in broad-band fading environments for the case where the channel is unknown at the transmitter and perfectly known at the receiver. Introducing a physically motivated multiple-input multiple-output (MIMO) broad-band fading channel model, we study the influence of physical parameters such as the amount of delay spread, cluster angle spread, and total angle spread, and system parameters such as the number of antennas and antenna spacing on ergodic capacity and outage capacity. We find that, in the MIMO case, unlike the single-input single-output (SISO) case, delay spread channels may provide advantages over flat fading channels not only in terms of outage capacity but also in terms of ergodic capacity. Therefore, MIMO delay spread channels will in general provide both higher diversity gain and higher multiplexing gain than MIMO flat fading channels. --- paper_title: Multiple-input - multiple-output measurements and modeling in Manhattan paper_content: Narrowband multiple-input-multiple-output (MIMO) measurements using 16 transmitters and 16 receivers at 2.11 GHz were carried out in Manhattan. High capacities were found for full, as well as smaller array configurations, all within 80% of the fully scattering channel capacity. Correlation model parameters are derived from data. Spatial MIMO channel capacity statistics are found to be well represented by the separate transmitter and receiver correlation matrices, with a median relative error in capacity of 3%, in contrast with the 18% median relative error observed by assuming the antennas to be uncorrelated. A reduced parameter model, consisting of 4 parameters, has been developed to statistically represent the channel correlation matrices. These correlation matrices are, in turn, used to generate H matrices with capacities that are consistent within a few percent of those measured in New York. The spatial channel model reported allows simulations of H matrices for arbitrary antenna configurations. These channel matrices may be used to test receiver algorithms in system performance studies. These results may also be used for antenna array design, as the decay of mobile antenna correlation with antenna separation has been reported here. An important finding for the base transmitter array was that the antennas were largely uncorrelated even at antenna separations as small as two wavelengths. --- paper_title: Signal design for transmitter diversity wireless communication systems over Rayleigh fading channels paper_content: Transmitter diversity wireless communication systems over Rayleigh fading channels using pilot symbol assisted modulation (PSAM) are studied. Unlike conventional transmitter diversity systems with PSAM that estimate the superimposed fading process, we are able to estimate each individual fading process corresponding to the multiple transmitters by using appropriately designed pilot symbol sequences. With such sequences, special coded modulation schemes can then be designed to access the diversity provided by the multiple transmitters without having to use an interleaver or expand the signal bandwidth. The code matrix notion is introduced for the coded modulation scheme, and its design criteria are also established.
In addition to the reduction in receiver complexity, simulation results are compared to, and shown to be superior to, those of an intentional frequency offset system over a wide range of system parameters. --- paper_title: Capacity of a Mobile Multiple-Antenna Communication Link in Rayleigh Flat Fading paper_content: We analyze a mobile wireless link comprising M transmitter and N receiver antennas operating in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are statistically independent and unknown; they remain constant for a coherence interval of T symbol periods, after which they change to new independent values which they maintain for another T symbol periods, and so on. Computing the link capacity, associated with channel coding over multiple fading intervals, requires an optimization over the joint density of T·M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater than the length of the coherence interval: the capacity for M>T is equal to the capacity for M=T. Capacity is achieved when the T×M transmitted signal matrix is equal to the product of two statistically independent matrices: a T×T isotropically distributed unitary matrix times a certain T×M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity for many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence interval increases, the capacity approaches the capacity obtained as if the receiver knew the propagation coefficients. --- paper_title: Unitary space-time modulation for multiple-antenna communications in Rayleigh flat fading paper_content: Motivated by information-theoretic considerations, we propose a signaling scheme, unitary space-time modulation, for multiple-antenna communication links. This modulation is ideally suited for Rayleigh fast-fading environments, since it does not require the receiver to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T×M space-time signals (Φ_l, l=1, ..., L), where T represents the coherence interval during which the fading is approximately constant, and M<T is the number of transmitter antennas. The columns of each Φ_l are orthonormal. When the receiver does not know the propagation coefficients, which between pairs of transmitter and receiver antennas are modeled as statistically independent, this modulation performs very well either when the signal-to-noise ratio (SNR) is high or when T ≫ M. We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit-error probability with maximum-likelihood decoding. We demonstrate that two antennas have a 6-dB diversity gain over one antenna at 15-dB SNR. --- paper_title: Introduction to Space-Time Wireless Communications paper_content: Wireless networks are under constant pressure to provide ever-higher data rates to increasing numbers of users with greater reliability. This book is an accessible introduction to every fundamental aspect of space-time wireless communications. Space-time processing technology is a powerful tool for improving system performance that already features in the UMTS and CDMA2000 mobile standards.
The ideal volume for graduate students and professionals, it features homework problems and other supporting material on a companion website. --- paper_title: Systematic design of unitary space-time constellations paper_content: We propose a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation-an oblong complex-valued matrix whose columns are orthonormal-and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. We demonstrate its efficacy through examples involving one, two, and three transmitter antennas. --- paper_title: Two signaling schemes for improving the error performance of frequency division duplex (FDD) transmission systems using transmitter antenna diversity paper_content: We propose two signaling schemes that exploit the availability of multiple (N) antennas at the transmitter to provide diversity benefit to the receiver. This is typical of cellular radio systems where a mobile is equipped with only one antenna while the base station is equipped with multiple antennas. We further assume that the mobile-to-base and base-to-mobile channel variations are statistically independent and that the base station has no knowledge of the base-to-mobile channel characteristics. In the first scheme, a channel code of length N and minimum Hamming distance d_min ≤ N is used to encode a group of K information bits. Channel code symbol c_i is transmitted with the i-th antenna. At the receiver, a maximum likelihood decoder for the channel code provides a diversity of d_min as long as each transmitted code symbol is subjected to independent fading. This can be achieved by spacing the transmit antennas several wavelengths apart. The second scheme introduces deliberate resolvable multipath distortion by transmitting the data-bearing signal with antenna 1, and N−1 delayed versions of it with antennas 2 through N. The delays are unique to each antenna and are chosen to be multiples of the symbol interval. At the receiver, a maximum likelihood sequence estimator resolves the multipath in an optimal manner to realize a diversity benefit of N. Both schemes can suppress co-channel interference. We provide code constructions and simulation results for scheme 1 to demonstrate its merit. We derive the receiver structure and provide a bound on the error probability for scheme 2 which we show to be tight, by means of simulations, for the nontrivial and perhaps the most interesting case N=2 antennas. The second scheme is backward-compatible with two of the proposed digital cellular system standards, viz., GSM for Europe and IS-54 for North America. --- paper_title: A Simple Transmit Diversity Technique for Wireless Communications paper_content: This paper presents a simple two-branch transmit diversity scheme.
Using two transmit antennas and one receive antenna the scheme provides the same diversity order as maximal-ratio receiver combining (MRRC) with one transmit antenna, and two receive antennas. It is also shown that the scheme may easily be generalized to two transmit antennas and M receive antennas to provide a diversity order of 2M. The new scheme does not require any bandwidth expansion or any feedback from the receiver to the transmitter, and its computation complexity is similar to MRRC. --- paper_title: Space-frequency coded broadband OFDM systems paper_content: Space-time coding for fading channels is a communication technique that realizes the diversity benefits of multiple transmit antennas. Previous work in this area has focused on the narrowband flat fading case where spatial diversity only is available. We investigate the use of space-time coding in OFDM-based broadband systems where both spatial and frequency diversity are available. We consider a strategy which basically consists of coding across OFDM tones and is therefore called space-frequency coding. For a spatial broadband channel model taking into account physical propagation parameters and antenna spacing, we derive the design criteria for space-frequency codes and we show that space-time codes designed to achieve full spatial diversity in the narrowband case will in general not achieve full space-frequency diversity. Specifically, we show that the Alamouti (see IEEE J. Sel. Areas Comm., vol.16, p.1451-58, 1998) scheme across tones fails to exploit frequency diversity. For a given set of propagation parameters and given antenna spacing, we establish the maximum achievable diversity order. Finally, we provide simulation results studying the influence of delay spread, propagation parameters, and antenna spacing on the performance of space-frequency codes. --- paper_title: On limits of wireless communications in a fading environment when using multiple antennas paper_content: This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building to building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB.
For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bit/cycle at the 99% level. For say a symbol rate equal to the channel bandwith, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised. --- paper_title: Space-Time Block Codes from Orthogonal Designs paper_content: We introduce space-time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space-time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space-time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space-time block codes. It is shown that space-time block codes constructed in this way only exist for few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space-time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space-time block codes are designed that achieve 1/2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space-time block codes are designed that achieve, respectively, all, 3/4, and 3/4 of maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well. --- paper_title: Space-time codes for high data rate wireless communication: Performance criterion and code construction paper_content: We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by a channel code and the encoded data is split into n streams that are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. We derive performance criteria for designing such codes under the assumption that the fading is slow and frequency nonselective. 
Performance is shown to be determined by matrices constructed from pairs of distinct code sequences. The minimum rank among these matrices quantifies the diversity gain, while the minimum determinant of these matrices quantifies the coding gain. The results are then extended to fast fading channels. The design criteria are used to design trellis codes for high data rate wireless communication. The encoding/decoding complexity of these codes is comparable to trellis codes employed in practice over Gaussian channels. The codes constructed here provide the best tradeoff between data rate, diversity advantage, and trellis complexity. Simulation results are provided for 4 and 8 PSK signal sets with data rates of 2 and 3 bits/symbol, demonstrating excellent performance that is within 2-3 dB of the outage capacity for these channels using only 64 state encoders. --- paper_title: Impact of the propagation environment on the performance of space-frequency coded MIMO-OFDM paper_content: Previous work on space-frequency coded multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) has been restricted to idealistic propagation conditions. In this paper, using a broadband MIMO channel model taking into account Ricean K-factor, transmit and receive angle spread, and antenna spacing, we study the impact of the propagation environment on the performance of space-frequency coded MIMO-OFDM. For a given space-frequency code, we quantify the achievable diversity order and coding gain as a function of the propagation parameters. We find that while the presence of spatial receive correlation affects all space-frequency codes equally, spatial fading correlation at the transmit array can result in widely varying performance losses. High-rate space-frequency codes such as spatial multiplexing are typically significantly more affected by transmit correlation than low-rate codes such as space-frequency block codes. We show that in the MIMO Ricean case the presence of frequency-selectivity typically results in improved performance compared to the frequency-flat case. --- paper_title: Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas paper_content: This paper addresses digital communication in a Rayleigh fading environment when the channel characteristic is unknown at the transmitter but is known (tracked) at the receiver. Inventing a codec architecture that can realize a significant portion of the great capacity promised by information theory is essential to a standout long-term position in highly competitive arenas like fixed and indoor wireless. Use (n_T, n_R) to express the number of antenna elements at the transmitter and receiver. An (n, n) analysis shows that despite the n received waves interfering randomly, capacity grows linearly with n and is enormous. With n = 8 at 1% outage and 21-dB average SNR at each receiving element, 42 b/s/Hz is achieved. The capacity is more than 40 times that of a (1, 1) system at the same total radiated transmitter power and bandwidth. Moreover, in some applications, n could be much larger than 8. In striving for significant fractions of such huge capacities, the question arises: Can one construct an (n, n) system whose capacity scales linearly with n, using as building blocks n separately coded one-dimensional (1-D) subsystems of equal capacity? With the aim of leveraging the already highly developed 1-D codec technology, this paper reports just such an invention.
In this new architecture, signals are layered in space and time as suggested by a tight capacity bound. --- paper_title: Two signaling schemes for improving the error performance of frequency division duplex (FDD) transmission systems using transmitter antenna diversity paper_content: We propose two signaling schemes that exploit the availability of multiple (N) antennas at the transmitter to provide diversity benefit to the receiver. This is typical of cellular radio systems where a mobile is equipped with only one antenna while the base station is equipped with multiple antennas. We further assume that the mobile-to-base and base-to-mobile channel variations are statistically independent and that the base station has no knowledge of the base-to-mobile channel characteristics. In the first scheme, a channel code of length N and minimum Hamming distance d_min ≤ N is used to encode a group of K information bits. Channel code symbol c_i is transmitted with the i-th antenna. At the receiver, a maximum likelihood decoder for the channel code provides a diversity of d_min as long as each transmitted code symbol is subjected to independent fading. This can be achieved by spacing the transmit antennas several wavelengths apart. The second scheme introduces deliberate resolvable multipath distortion by transmitting the data-bearing signal with antenna 1, and N−1 delayed versions of it with antennas 2 through N. The delays are unique to each antenna and are chosen to be multiples of the symbol interval. At the receiver, a maximum likelihood sequence estimator resolves the multipath in an optimal manner to realize a diversity benefit of N. Both schemes can suppress co-channel interference. We provide code constructions and simulation results for scheme 1 to demonstrate its merit. We derive the receiver structure and provide a bound on the error probability for scheme 2 which we show to be tight, by means of simulations, for the nontrivial and perhaps the most interesting case N=2 antennas. The second scheme is backward-compatible with two of the proposed digital cellular system standards, viz., GSM for Europe and IS-54 for North America. --- paper_title: Space-time block coding for channels with intersymbol interference paper_content: Stoica, P., and Lindskog, E., Space–Time Block Coding for Channels with Intersymbol Interference, Digital Signal Processing 12 (2002) 616–627. The downlink of many wireless communication systems can be a MISO channel. An important problem for a MISO channel is how to code across space and time to obtain the same ML receiver as for the corresponding SIMO channel. For flat fading channels, space–time block coding (STBC) is a recent breakthrough solution to this problem. In Lindskog and Paulraj (in Proceedings of ICC'2000, New Orleans, LA, June 18–22, 2000), STBC has been generalized to channels with intersymbol interference (ISI) for the case of two transmit antennas and one receive antenna. In this paper we first revisit the generalized STBC scheme of Lindskog and Paulraj and show that it has the same appealing properties as the standard STBC for flat fading channels. Then we go on to present an extension of this scheme to ISI channels with any number of transmit and receive antennas. --- paper_title: Space-Time Block Codes from Orthogonal Designs paper_content: We introduce space-time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas.
Data is encoded using a space-time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space-time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space-time block codes. It is shown that space-time block codes constructed in this way only exist for few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space-time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space-time block codes are designed that achieve 1/2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space-time block codes are designed that achieve, respectively, all, 3/4, and 3/4 of maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well. --- paper_title: Space-time codes for high data rate wireless communication: Performance criterion and code construction paper_content: We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by a channel code and the encoded data is split into n streams that are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. We derive performance criteria for designing such codes under the assumption that the fading is slow and frequency nonselective. Performance is shown to be determined by matrices constructed from pairs of distinct code sequences. The minimum rank among these matrices quantifies the diversity gain, while the minimum determinant of these matrices quantifies the coding gain. The results are then extended to fast fading channels. The design criteria are used to design trellis codes for high data rate wireless communication. The encoding/decoding complexity of these codes is comparable to trellis codes employed in practice over Gaussian channels. The codes constructed here provide the best tradeoff between data rate, diversity advantage, and trellis complexity. Simulation results are provided for 4 and 8 PSK signal sets with data rates of 2 and 3 bits/symbol, demonstrating excellent performance that is within 2-3 dB of the outage capacity for these channels using only 64 state encoders. 
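The rank and determinant criteria described in the entry above can be checked mechanically for small codes. The sketch below is an illustrative check on the two-antenna Alamouti code restricted to BPSK symbols; the helper name alamouti and the BPSK restriction are our own assumptions, not constructions from the cited papers. It enumerates all codeword pairs, forms the difference matrices, and reports the minimum rank (the transmit diversity gain per receive antenna) and the minimum determinant over full-rank pairs (a measure of the coding gain).

```python
# Illustrative check of the rank and determinant criteria on the two-antenna
# Alamouti code with BPSK symbols: minimum rank of the codeword-difference
# matrices -> diversity gain, minimum determinant -> coding gain measure.
import itertools
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti codeword: rows are time slots, columns are transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

bpsk = [-1.0, 1.0]
codewords = [alamouti(a, b) for a, b in itertools.product(bpsk, repeat=2)]

min_rank, min_det = None, None
for C1, C2 in itertools.combinations(codewords, 2):
    D = C1 - C2
    A = D.conj().T @ D                         # codeword distance matrix
    r = int(np.linalg.matrix_rank(A))
    min_rank = r if min_rank is None else min(min_rank, r)
    if r == A.shape[0]:                        # determinant criterion: full-rank pairs
        d = float(np.linalg.det(A).real)
        min_det = d if min_det is None else min(min_det, d)

print("minimum rank (diversity gain per receive antenna):", min_rank)
print("minimum determinant over full-rank pairs (coding gain measure):", min_det)
```

Every difference matrix of this code is full rank, which is consistent with the Alamouti scheme achieving full transmit diversity.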
--- paper_title: Capacity of a Mobile Multiple-Antenna Communication Link in Rayleigh Flat Fading paper_content: We analyze a mobile wireless link comprising M transmitter and N receiver antennas operating in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are statistically independent and unknown; they remain constant for a coherence interval of T symbol periods, after which they change to new independent values which they maintain for another T symbol periods, and so on. Computing the link capacity, associated with channel coding over multiple fading intervals, requires an optimization over the joint density of T·M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater than the length of the coherence interval: the capacity for M>T is equal to the capacity for M=T. Capacity is achieved when the T×M transmitted signal matrix is equal to the product of two statistically independent matrices: a T×T isotropically distributed unitary matrix times a certain T×M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity for many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence interval increases, the capacity approaches the capacity obtained as if the receiver knew the propagation coefficients. --- paper_title: Detection algorithm and initial laboratory results using V-BLAST space-time communication architecture paper_content: The signal detection algorithm of the vertical BLAST (Bell Laboratories Layered Space-Time) wireless communications architecture is briefly described. Using this joint space-time approach, spectral efficiencies ranging from 20-40 bit/s/Hz have been demonstrated in the laboratory under flat fading conditions at indoor fading rates. Early results are presented. --- paper_title: Fading Channels: Information-Theoretic and Communications Aspects paper_content: In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels. --- paper_title: The impact of antenna diversity on the capacity of wireless communication systems paper_content: For a broad class of interference-dominated wireless systems including mobile, personal communications, and wireless PBX/LAN networks, the authors show that a significant increase in system capacity can be achieved by the use of spatial diversity (multiple antennas), and optimum combining. This is explained by the following observation: for independent flat-Rayleigh fading wireless systems with N mutually interfering users, they demonstrate that with K+N antennas, N-1 interferers can be nulled out and K+1 path diversity improvement can be achieved by each of the N users. Monte Carlo evaluations show that these results also hold with frequency-selective fading when optimum equalization is used at the receiver.
Thus an N-fold increase in user capacity can be achieved, allowing for modular growth and improved performance by increasing the number of antennas. The interferers can also be users in other cells, users in other radio systems, or even other types of radiating devices, and thus interference cancellation also allows radio systems to operate in high interference environments. As an example of the potential system gain, the authors show that with 2 or 3 antennas the capacity of the mobile radio system IS-54 can be doubled, and with 5 antennas a 7-fold capacity increase (frequency reuse in every cell) can be achieved. > --- paper_title: Elements of Information Theory paper_content: Preface to the Second Edition. Preface to the First Edition. Acknowledgments for the Second Edition. Acknowledgments for the First Edition. 1. Introduction and Preview. 1.1 Preview of the Book. 2. Entropy, Relative Entropy, and Mutual Information. 2.1 Entropy. 2.2 Joint Entropy and Conditional Entropy. 2.3 Relative Entropy and Mutual Information. 2.4 Relationship Between Entropy and Mutual Information. 2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information. 2.6 Jensen's Inequality and Its Consequences. 2.7 Log Sum Inequality and Its Applications. 2.8 Data-Processing Inequality. 2.9 Sufficient Statistics. 2.10 Fano's Inequality. Summary. Problems. Historical Notes. 3. Asymptotic Equipartition Property. 3.1 Asymptotic Equipartition Property Theorem. 3.2 Consequences of the AEP: Data Compression. 3.3 High-Probability Sets and the Typical Set. Summary. Problems. Historical Notes. 4. Entropy Rates of a Stochastic Process. 4.1 Markov Chains. 4.2 Entropy Rate. 4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph. 4.4 Second Law of Thermodynamics. 4.5 Functions of Markov Chains. Summary. Problems. Historical Notes. 5. Data Compression. 5.1 Examples of Codes. 5.2 Kraft Inequality. 5.3 Optimal Codes. 5.4 Bounds on the Optimal Code Length. 5.5 Kraft Inequality for Uniquely Decodable Codes. 5.6 Huffman Codes. 5.7 Some Comments on Huffman Codes. 5.8 Optimality of Huffman Codes. 5.9 Shannon-Fano-Elias Coding. 5.10 Competitive Optimality of the Shannon Code. 5.11 Generation of Discrete Distributions from Fair Coins. Summary. Problems. Historical Notes. 6. Gambling and Data Compression. 6.1 The Horse Race. 6.2 Gambling and Side Information. 6.3 Dependent Horse Races and Entropy Rate. 6.4 The Entropy of English. 6.5 Data Compression and Gambling. 6.6 Gambling Estimate of the Entropy of English. Summary. Problems. Historical Notes. 7. Channel Capacity. 7.1 Examples of Channel Capacity. 7.2 Symmetric Channels. 7.3 Properties of Channel Capacity. 7.4 Preview of the Channel Coding Theorem. 7.5 Definitions. 7.6 Jointly Typical Sequences. 7.7 Channel Coding Theorem. 7.8 Zero-Error Codes. 7.9 Fano's Inequality and the Converse to the Coding Theorem. 7.10 Equality in the Converse to the Channel Coding Theorem. 7.11 Hamming Codes. 7.12 Feedback Capacity. 7.13 Source-Channel Separation Theorem. Summary. Problems. Historical Notes. 8. Differential Entropy. 8.1 Definitions. 8.2 AEP for Continuous Random Variables. 8.3 Relation of Differential Entropy to Discrete Entropy. 8.4 Joint and Conditional Differential Entropy. 8.5 Relative Entropy and Mutual Information. 8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information. Summary. Problems. Historical Notes. 9. Gaussian Channel. 9.1 Gaussian Channel: Definitions. 9.2 Converse to the Coding Theorem for Gaussian Channels. 9.3 Bandlimited Channels. 
9.4 Parallel Gaussian Channels. 9.5 Channels with Colored Gaussian Noise. 9.6 Gaussian Channels with Feedback. Summary. Problems. Historical Notes. 10. Rate Distortion Theory. 10.1 Quantization. 10.2 Definitions. 10.3 Calculation of the Rate Distortion Function. 10.4 Converse to the Rate Distortion Theorem. 10.5 Achievability of the Rate Distortion Function. 10.6 Strongly Typical Sequences and Rate Distortion. 10.7 Characterization of the Rate Distortion Function. 10.8 Computation of Channel Capacity and the Rate Distortion Function. Summary. Problems. Historical Notes. 11. Information Theory and Statistics. 11.1 Method of Types. 11.2 Law of Large Numbers. 11.3 Universal Source Coding. 11.4 Large Deviation Theory. 11.5 Examples of Sanov's Theorem. 11.6 Conditional Limit Theorem. 11.7 Hypothesis Testing. 11.8 Chernoff-Stein Lemma. 11.9 Chernoff Information. 11.10 Fisher Information and the Cram-er-Rao Inequality. Summary. Problems. Historical Notes. 12. Maximum Entropy. 12.1 Maximum Entropy Distributions. 12.2 Examples. 12.3 Anomalous Maximum Entropy Problem. 12.4 Spectrum Estimation. 12.5 Entropy Rates of a Gaussian Process. 12.6 Burg's Maximum Entropy Theorem. Summary. Problems. Historical Notes. 13. Universal Source Coding. 13.1 Universal Codes and Channel Capacity. 13.2 Universal Coding for Binary Sequences. 13.3 Arithmetic Coding. 13.4 Lempel-Ziv Coding. 13.5 Optimality of Lempel-Ziv Algorithms. Compression. Summary. Problems. Historical Notes. 14. Kolmogorov Complexity. 14.1 Models of Computation. 14.2 Kolmogorov Complexity: Definitions and Examples. 14.3 Kolmogorov Complexity and Entropy. 14.4 Kolmogorov Complexity of Integers. 14.5 Algorithmically Random and Incompressible Sequences. 14.6 Universal Probability. 14.7 Kolmogorov complexity. 14.9 Universal Gambling. 14.10 Occam's Razor. 14.11 Kolmogorov Complexity and Universal Probability. 14.12 Kolmogorov Sufficient Statistic. 14.13 Minimum Description Length Principle. Summary. Problems. Historical Notes. 15. Network Information Theory. 15.1 Gaussian Multiple-User Channels. 15.2 Jointly Typical Sequences. 15.3 Multiple-Access Channel. 15.4 Encoding of Correlated Sources. 15.5 Duality Between Slepian-Wolf Encoding and Multiple-Access Channels. 15.6 Broadcast Channel. 15.7 Relay Channel. 15.8 Source Coding with Side Information. 15.9 Rate Distortion with Side Information. 15.10 General Multiterminal Networks. Summary. Problems. Historical Notes. 16. Information Theory and Portfolio Theory. 16.1 The Stock Market: Some Definitions. 16.2 Kuhn-Tucker Characterization of the Log-Optimal Portfolio. 16.3 Asymptotic Optimality of the Log-Optimal Portfolio. 16.4 Side Information and the Growth Rate. 16.5 Investment in Stationary Markets. 16.6 Competitive Optimality of the Log-Optimal Portfolio. 16.7 Universal Portfolios. 16.8 Shannon-McMillan-Breiman Theorem (General AEP). Summary. Problems. Historical Notes. 17. Inequalities in Information Theory. 17.1 Basic Inequalities of Information Theory. 17.2 Differential Entropy. 17.3 Bounds on Entropy and Relative Entropy. 17.4 Inequalities for Types. 17.5 Combinatorial Bounds on Entropy. 17.6 Entropy Rates of Subsets. 17.7 Entropy and Fisher Information. 17.8 Entropy Power Inequality and Brunn-Minkowski Inequality. 17.9 Inequalities for Determinants. 17.10 Inequalities for Ratios of Determinants. Summary. Problems. Historical Notes. Bibliography. List of Symbols. Index. 
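The V-BLAST and zero-forcing entries grouped here are built around nulling-and-cancelling detection of spatially multiplexed streams. The following sketch is a simplified illustration of that idea, not the detection algorithm exactly as specified in the cited papers: zero-forcing nulling with ordering by smallest noise enhancement and successive cancellation, for an uncoded QPSK transmission over a flat-fading channel known perfectly at the receiver. The constellation, the ordering rule, and all function names are our own assumptions.

```python
# Illustrative zero-forcing detector with ordered successive interference
# cancellation (the nulling-and-cancelling idea behind V-BLAST) for uncoded
# QPSK spatial multiplexing over a flat-fading channel known at the receiver.
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2.0)

def slice_qpsk(x):
    """Map each soft estimate in x to the nearest QPSK constellation point."""
    x = np.atleast_1d(x)
    return QPSK[np.argmin(np.abs(x[:, None] - QPSK[None, :]), axis=1)]

def zf_sic_detect(H, y):
    """Zero-forcing nulling with ordering and successive cancellation."""
    y = y.astype(complex).copy()
    Mt = H.shape[1]
    s_hat = np.zeros(Mt, dtype=complex)
    remaining = list(range(Mt))                    # indices of undetected streams
    while remaining:
        W = np.linalg.pinv(H[:, remaining])        # ZF nulling matrix
        k_local = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))  # least noise enhancement
        k = remaining[k_local]
        s_hat[k] = slice_qpsk(W[k_local] @ y)[0]   # null, then slice
        y -= H[:, k] * s_hat[k]                    # cancel the detected stream
        remaining.pop(k_local)
    return s_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Mt, Mr, snr_db = 4, 4, 20.0
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2.0)
    s = QPSK[rng.integers(0, 4, size=Mt)]          # one uncoded QPSK vector symbol
    noise_std = np.sqrt(10.0 ** (-snr_db / 10.0) / 2.0)
    y = H @ s + noise_std * (rng.standard_normal(Mr) + 1j * rng.standard_normal(Mr))
    print("symbol errors:", int(np.sum(zf_sic_detect(H, y) != s)))
```

Replacing the pseudo-inverse with an MMSE filter, or the hard QPSK slicer with soft decisions, changes only the nulling and slicing steps; the ordering and cancellation structure stays the same.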
--- paper_title: Performance analysis of the V-BLAST algorithm: an analytical approach paper_content: An analytical approach to the performance analysis of the V-BLAST (vertical Bell Labs layered space-time) algorithm is presented. The approach is based on the analytical model of the Gram-Schmidt process. Closed-form analytical expressions of the vector signal at the i-th processing step and its power are presented. A rigorous proof is given that the diversity order at the i-th step (without optimal ordering) is (n-m+i), where n and m are the number of receiver and transmitter antennas respectively. It is shown that the optimal ordering is based on the least correlation criterion and that the after-processing signal power is determined by the channel correlation matrices in a fashion similar to the channel capacity. --- paper_title: On performance of the zero forcing receiver in presence of transmit correlation paper_content: Spatial multiplexing (SM) is an emerging spatial signaling technique that achieves high spectral efficiency in MIMO wireless communication links. We study the performance of the zero forcing receiver (used to recover the transmitted data stream) in flat fading channels with transmit correlation. The results for interference rejection in i.i.d. environments presented by Salz, Winters and Gitlin (1994) may be interpreted in a MIMO single-user context to conclude that the zero forcing receiver achieves (M_r - M_t + 1)-order diversity on each stream. We provide an alternative derivation based on Wishart matrix analysis for the same result. In addition, our formulation allows the inclusion of transmit correlation in the channel. We show that the SNR on each decoded stream is a weighted chi-squared variable with 2(M_r - M_t + 1) degrees of freedom. The weight characterizes the degradation in SNR and is accurately quantified by the corresponding diagonal element of the inverse of the transmit correlation matrix. --- paper_title: On limits of wireless communications in a fading environment when using multiple antennas paper_content: This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth-efficient delivery of higher bit-rates in digital wireless communications, and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is, processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building-to-building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic, which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon's classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs the scaling is almost like n more bits/cycle for each 3 dB increase in SNR.
To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bit/cycle at the 99% level. For say a symbol rate equal to the channel bandwith, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised. --- paper_title: On the expected complexity of sphere decoding paper_content: The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the matrix coefficient and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary, but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore in this paper, rather than dwell on the worst-case complexity of the integer-least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the "sphere decoding" algorithm of Fincke and Pohst (1995) we find a closed-form expression for the expected complexity and show that for a wide range of noise variances the expected complexity is polynomial, in fact often sub-cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can in fact be implemented in real-time-a result with many practical implications. --- paper_title: Improved methods for calculating vectors of short length in a lattice, including a complexity analysis paper_content: The standard methods for calculating vectors of short length in a lattice use a reduction procedure followed by enumerating all vectors of Z'.. in a suitable box. However, it suffices to consider those x E Z'" which lie in a suitable ellipsoid having a much smaller volume than the box. We show in this paper that searching through that ellipsoid is in many cases much more efficient. If combined with an appropriate reduction procedure our method allows to do computations in lattices of much higher dimensions. Several randomly constructed numerical examples illustrate the superiority of our new method over the known ones. --- paper_title: V-BLAST: an architecture for realizing very high data rates over the rich-scattering wireless channel paper_content: Information theory research has shown that the rich-scattering wireless channel is capable of enormous theoretical capacities if the multipath is properly exploited. In this paper, we describe a wireless communication architecture known as vertical BLAST (Bell Laboratories Layered Space-Time) or V-BLAST, which has been implemented in real-time in the laboratory. 
Using our laboratory prototype, we have demonstrated spectral efficiencies of 20-40 bps/Hz in an indoor propagation environment at realistic SNRs and error rates. To the best of our knowledge, wireless spectral efficiencies of this magnitude are unprecedented and are furthermore unattainable using traditional techniques. --- paper_title: Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel paper_content: We study the capacity of multiple-antenna fading channels. We focus on the scenario where the fading coefficients vary quickly; thus an accurate estimation of the coefficients is generally not available to either the transmitter or the receiver. We use a noncoherent block fading model proposed by Marzetta and Hochwald (see ibid. vol.45, p.139-57, 1999). The model does not assume any channel side information at the receiver or at the transmitter, but assumes that the coefficients remain constant for a coherence interval of length T symbol periods. We compute the asymptotic capacity of this channel at high signal-to-noise ratio (SNR) in terms of the coherence time T, the number of transmit antennas M, and the number of receive antennas N. While the capacity gain of the coherent multiple antenna channel is min{M, N} bits per second per Hertz for every 3-dB increase in SNR, the corresponding gain for the noncoherent channel turns out to be M* (1 - M*/T) bits per second per Hertz, where M*=min{M, N, [T/2]}. The capacity expression has a geometric interpretation as sphere packing in the Grassmann manifold. --- paper_title: Diversity and multiplexing: a fundamental tradeoff in multiple-antenna channels paper_content: Multiple antennas can be used for increasing the amount of diversity or the number of degrees of freedom in wireless communication systems. We propose the point of view that both types of gains can be simultaneously obtained for a given multiple-antenna channel, but there is a fundamental tradeoff between how much of each any coding scheme can get. For the richly scattered Rayleigh-fading channel, we give a simple characterization of the optimal tradeoff curve and use it to evaluate the performance of existing multiple antenna schemes. --- paper_title: Microwave Mobile Communications paper_content: From the Publisher: IEEE Press is pleased to bring back into print this definitive text and reference covering all aspects of microwave mobile systems design. Encompassing ten years of advanced research in the field, this invaluable resource reviews basic microwave theory, explains how cellular systems work, and presents useful techniques for effective systems development. The return of this classic volume should be welcomed by all those seeking the original authoritative and complete source of information on this emerging technology.
An in-depth and practical guide, Microwave Mobile Communications will provide you with a solid understanding of the microwave propagation techniques essential to the design of effective cellular systems. --- paper_title: On the Capacity of Certain Space-Time Coding Schemes paper_content: We take a capacity view of a number of different space-time coding (STC) schemes. While the Shannon capacity of multiple-input multiple-output (MIMO) channels has been known for a number of years now, the attainment of these capacities remains a challenging issue in many cases. The introduction of space-time coding schemes in the last 2–3 years has, however, begun paving the way towards the attainment of the promised capacities. In this work we attempt to describe what are the attainable information rates of certain STC schemes, by quantifying their inherent capacity penalties. The obtained results, which are validated for a number of typical cases, cast some interesting light on the merits and tradeoffs of different techniques. Further, they point to future work needed in bridging the gap between the theoretically expected capacities and the performance of practical systems. --- paper_title: Space-time block codes: a capacity perspective paper_content: Space-time block codes are a remarkable modulation scheme discovered recently for the multiple antenna wireless channel. They have an elegant mathematical solution for providing full diversity over the coherent, flat-fading channel. In addition, they require extremely simple encoding and decoding. Although these codes provide full diversity at low computational costs, we show that they incur a loss in capacity because they convert the matrix channel into a scalar AWGN channel whose capacity is smaller than the true channel capacity. In this letter the loss in capacity is quantified as a function of channel rank, code rate, and number of receive antennas. --- paper_title: Two signaling schemes for improving the error performance of frequency division duplex (FDD) transmission systems using transmitter antenna diversity paper_content: We propose two signaling schemes that exploit the availability of multiple (N) antennas at the transmitter to provide diversity benefit to the receiver. This is typical of cellular radio systems where a mobile is equipped with only one antenna while the base station is equipped with multiple antennas. We further assume that the mobile-to-base and base-to-mobile channel variations are statistically independent and that the base station has no knowledge of the base-to-mobile channel characteristics. In the first scheme, a channel code of lengthN and minimum Hamming distanced ::: min≤N is used to encode a group ofK information bits. Channel code symbolc ::: ::: i ::: is transmitted with thei th antenna. At the receiver, a maximum likelihood decoder for the channel code provides a diversity ofd ::: min as long as each transmitted code symbol is subjected to independent fading. This can be achieved by spacing the transmit antennas several wavelengths apart. The second scheme introduces deliberate resolvable multipath distortion by transmitting the data-bearing signal with antenna 1, andN−1 delayed versions of it with antennas 2 throughN. The delays are unique to each antenna and are chosen to be multiples of the symbol interval. At the receiver, a maximum likelihood sequence estimator resolves the multipath in an optimal manner to realize a diversity benefit ofN. Both schemes can suppress co-channel interference. 
We provide code constructions and simulation results for scheme 1 to demonstrate its merit. We derive the receiver structure and provide a bound on the error probability for scheme 2 which we show to be tight, by means of simulations, for the nontrivial and perhaps the most interesting caseN=2 antennas. The second scheme is backward-compatible with two of the proposed digital cellular system standards, viz., GSM for Europe and IS-54 for North America. --- paper_title: On the Capacity of Certain Space-Time Coding Schemes paper_content: We take a capacity view of a number of different space-time coding (STC) schemes. While the Shannon capacity of multiple-input multiple-output (MIMO) channels has been known for a number of years now, the attainment of these capacities remains a challenging issue in many cases. The introduction of space-time coding schemes in the last 2–3 years has, however, begun paving the way towards the attainment of the promised capacities. In this work we attempt to describe what are the attainable information rates of certain STC schemes, by quantifying their inherent capacity penalties. The obtained results, which are validated for a number of typical cases, cast some interesting light on the merits and tradeoffs of different techniques. Further, they point to future work needed in bridging the gap between the theoretically expected capacities and the performance of practical systems. --- paper_title: Space-time coded OFDM for high data-rate wireless communication over wideband channels paper_content: There has been an increasing interest in providing high data-rate services such as video-conferencing, multimedia Internet access and wide area network over wideband wireless channels. Wideband wireless channels available in the PCS band (2 GHz) have been envisioned to be used by mobile (high Doppler) and stationary (low Doppler) units in a variety of delay spread profiles. This is a challenging task, given the limited link budget and severity of wireless environment, and calls for the development of novel robust bandwidth efficient techniques which work reliably at low SNRs. To this end, we design a space-time coded orthogonal frequency division multiplexing (OFDM) modulated physical layer. This combines coding and modulation. Space-time codes were previously proposed for narrowband wireless channels. These codes have high spectral efficiency and operate at very low SNR (within 2-3 dB of the capacity). On the other hand, OFDM has matured as a modulation scheme for wideband channels. We combine these two in a natural manner and propose a system achieving data rates of 1.5-3 Mbps over a 1 MHz bandwidth channel. This system requires 18-23 dB (resp. 9-14 dB) receive SNR at a frame error probability of 10/sup -2/ with two transmit and one receive antennas (resp. two transmit and two receive antennas). As space-time coding does not require any form of interleaving, the proposed system is attractive for delay-sensitive applications. --- paper_title: On the capacity of OFDM-based spatial multiplexing systems paper_content: This paper deals with the capacity behavior of wireless orthogonal frequency-division multiplexing (OFDM)-based spatial multiplexing systems in broad-band fading environments for the case where the channel is unknown at the transmitter and perfectly known at the receiver. 
Introducing a physically motivated multiple-input multiple-output (MIMO) broad-band fading channel model, we study the influence of physical parameters such as the amount of delay spread, cluster angle spread, and total angle spread, and system parameters such as the number of antennas and antenna spacing on ergodic capacity and outage capacity. We find that, in the MIMO case, unlike the single-input single-output (SISO) case, delay spread channels may provide advantages over flat fading channels not only in terms of outage capacity but also in terms of ergodic capacity. Therefore, MIMO delay spread channels will in general provide both higher diversity gain and higher multiplexing gain than MIMO flat fading channels --- paper_title: Spatio-Temporal Coding for Wireless Communication paper_content: Multipath signal propagation has long been viewed as an impairment to reliable communication in wireless channels. This paper shows that the presence of multipath greatly improves achievable data rate if the appropriate communication structure is employed. A compact model is developed for the multiple-input multiple-output (MIMO) dispersive spatially selective wireless communication channel. The multivariate information capacity is analyzed. For high signal-to-noise ratio (SNR) conditions, the MIMO channel can exhibit a capacity slope in bits per decibel of power increase that is proportional to the minimum of the number multipath components, the number of input antennas, or the number of output antennas. This desirable result is contrasted with the lower capacity slope of the well-studied case with multiple antennas at only one side of the radio link. A spatio-temporal vector-coding (STVC) communication structure is suggested as a means for achieving MIMO channel capacity. The complexity of STVC motivates a more practical reduced-complexity discrete matrix multitone (DMMT) space-frequency coding approach. Both of these structures are shown to be asymptotically optimum. An adaptive-lattice trellis-coding technique is suggested as a method for coding across the space and frequency dimensions that exist in the DMMT channel. Experimental examples that support the theoretical results are presented. --- paper_title: Space-frequency coded broadband OFDM systems paper_content: Space-time coding for fading channels is a communication technique that realizes the diversity benefits of multiple transmit antennas. Previous work in this area has focused on the narrowband flat fading case where spatial diversity only is available. We investigate the use of space-time coding in OFDM-based broadband systems where both spatial and frequency diversity are available. We consider a strategy which basically consists of coding across OFDM tones and is therefore called space-frequency coding. For a spatial broadband channel model taking into account physical propagation parameters and antenna spacing, we derive the design criteria for space-frequency codes and we show that space-time codes designed to achieve full spatial diversity in the narrowband case will in general not achieve full space-frequency diversity. Specifically, we show that the Alamouti (see IEEE J. Sel. Areas Comm., vol.16, p.1451-58, 1998) scheme across tones fails to exploit frequency diversity. For a given set of propagation parameters and given antenna spacing, we establish the maximum achievable diversity order. 
Finally, we provide simulation results studying the influence of delay spread, propagation parameters, and antenna spacing on the performance of space-frequency codes. --- paper_title: A Simple Transmit Diversity Technique for Wireless Communications paper_content: This paper presents a simple two-branch transmit diversity scheme. Using two transmit antennas and one receive antenna, the scheme provides the same diversity order as maximal-ratio receiver combining (MRRC) with one transmit antenna and two receive antennas. It is also shown that the scheme may easily be generalized to two transmit antennas and M receive antennas to provide a diversity order of 2M. The new scheme does not require any bandwidth expansion or any feedback from the receiver to the transmitter, and its computation complexity is similar to MRRC.
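As a concrete companion to the transmit diversity entry above: a minimal sketch of the Alamouti transmission and combining rule for two transmit antennas and one receive antenna, assuming a flat channel that is constant over each pair of symbol periods and perfectly known at the receiver. This is illustrative only and is not code from the referenced paper.

import numpy as np

def alamouti_detect(s1, s2, h1, h2, noise_std, rng):
    """Send (s1, s2) over two antennas in two symbol periods using the
    Alamouti code, then apply the linear combining rule at a single
    receive antenna. Returns the two decision statistics."""
    n1 = noise_std * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    n2 = noise_std * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    # period 1: antennas send ( s1,  s2); period 2: antennas send (-s2*, s1*)
    r1 = h1 * s1 + h2 * s2 + n1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2
    # combining: each estimate sees the two-branch gain (|h1|^2 + |h2|^2)
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

rng = np.random.default_rng(0)
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
print(alamouti_detect(1 + 0j, -1 + 0j, h1, h2, noise_std=0.1, rng=rng))

Expanding the combining expressions shows that each decision statistic equals (|h1|^2 + |h2|^2) times the desired symbol plus noise, which is exactly where the two-branch diversity gain comes from.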
--- paper_title: Space-time code design in OFDM systems paper_content: We consider a space-time coded (STC) orthogonal frequency-division multiplexing (OFDM) system in frequency-selective fading channels. By analyzing the pairwise error probability (PEP), we show that STC-OFDM systems can potentially provide a diversity order as the product of the number of transmitter antennas, the number of receiver antennas and the frequency selectivity order, and that the large effective length and the ideal interleaving are two most important principles in designing STCs for OFDM systems. Following these principles, we propose a new class of trellis-structured STCs. Compared with the conventional space-time trellis codes, our proposed STC's significantly improve the performance by efficiently exploiting both the spatial diversity and the frequency-selective-fading diversity. --- paper_title: LDPC-based space-time coded OFDM systems over correlated fading channels: performance analysis and receiver design paper_content: We consider the performance analysis and the receiver design of low-density parity-check (LDPC) code based space-time coded (STC) orthogonal frequency-division multiplexing (OFDM) system over correlated frequency- and time-selective fading channels. --- paper_title: A transmit diversity scheme for frequency selective fading channels paper_content: We propose a transmit diversity scheme for frequency selective fading channels using orthogonal frequency division multiplexing (OFDM). The transmit diversity scheme proposed by Alamouti (see IEEE JSAC, vol.16, no.8, p.1451-58, 1998) for flat fading channels is extended to the case when the channel has a delay spread. There is no loss in receive SNR due to the delay spread in the channel. A diversity order of 2N can be achieved by using two transmit and N receive antennas. We discuss several interesting aspects of the approach and compare it with other extensions of the Alamouti scheme for delay spread channels. --- paper_title: Space-frequency codes for broadband fading channels paper_content: The use of space-frequency coding in orthogonal frequency division multiplexing (OFDM)-based broadband multi-antenna systems has been proposed. The design criteria for space-frequency codes derived by Bolcskei and Paulraj (see Proc. IEEE WCNC-2000, Chicago, IL, vol.1, p.1-6, 2000) are vastly different from the well-known design criteria for space-time codes. In this paper, we provide an explicit construction of a class of space-frequency codes which achieve full spatial and frequency diversity. --- paper_title: Space-frequency coded broadband OFDM systems paper_content: Space-time coding for fading channels is a communication technique that realizes the diversity benefits of multiple transmit antennas. Previous work in this area has focused on the narrowband flat fading case where spatial diversity only is available. We investigate the use of space-time coding in OFDM-based broadband systems where both spatial and frequency diversity are available. We consider a strategy which basically consists of coding across OFDM tones and is therefore called space-frequency coding. For a spatial broadband channel model taking into account physical propagation parameters and antenna spacing, we derive the design criteria for space-frequency codes and we show that space-time codes designed to achieve full spatial diversity in the narrowband case will in general not achieve full space-frequency diversity. Specifically, we show that the Alamouti (see IEEE J. Sel. 
Areas Comm., vol.16, p.1451-58, 1998) scheme across tones fails to exploit frequency diversity. For a given set of propagation parameters and given antenna spacing, we establish the maximum achievable diversity order. Finally, we provide simulation results studying the influence of delay spread, propagation parameters, and antenna spacing on the performance of space-frequency codes. --- paper_title: Space-frequency coded MIMO-OFDM with variable multiplexing-diversity tradeoff paper_content: Space-frequency coded orthogonal frequency division multiplexing (OFDM) is capable of realizing both spatial and frequency-diversity gains in multipath multiple-input multiple-output (MIMO) fading channels. This naturally leads to the question of variable allocation of the channel's degrees of freedom to multiplexing and diversity transmission modes. In this paper, we provide a systematic method for the design of space-frequency codes with variable multiplexing-diversity tradeoffs. Simulation results illustrate the performance of the proposed codes. --- paper_title: Turbo codes for OFDM with antenna diversity paper_content: OFDM with diversity and coding has been proposed as an effective means for achieving high-rates in wireless environments. Turbo codes have been shown to give near-capacity performance in additive white Gaussian noise (AWGN) channels and is being considered to enhance mobile wireless channel performance. In this paper, we present simulation results of an OFDM system with turbo coding. Comparisons with systems employing convolutional and Reed-Solomon (RS) codes are made. We first study diversity, interleaving and soft decoding for convolutional codes. The same structure is employed for turbo codes of similar complexity and varying word size. Even with a low constraint length, convolutional codes have the potential to outperform RS codes, provided that the interleaver and soft decoder are properly designed. We evaluate Turbo code performance under slow fading conditions and study the effects of changing word size. Increasing word size theoretically provides better interleaving between the two component codes. However, this advantage is less clear when the fading rate is significantly lower than the symbol rate, which is typical of the high data-rate system considered here. Under such conditions, the advantage of using two component convolutional codes in turbo codes is limited. A single convolutional code with higher constraint length may be a better choice. --- paper_title: Coding strategies for OFDM with antenna diversity high-bit-rate mobile data applications paper_content: In this paper, we investigate coding strategies for an OFDM system with antenna diversity. To maximize coding efficiency, interleaving in the two-dimensional frequency-time domain is studied. In particular, transmission over multiple antennas, which achieves an interleaving effect by randomizing the fading over the subchannels, is employed. Based on the best interleaving method, we consider convolutional coding and compare its performance with that of Reed-Solomon coding. Simulation results indicate that a 1/2-rate convolutional code requires a constraint length of at least 6, with soft decisions, to achieve better performance than a 1/2-rate (40,20) Reed-Solomon code with 6-bit symbols and a combination of erasure and error correction. In addition, a concatenated coding scheme, combining convolutional and Reed-Solomon coding, is presented and is shown to achieve improved performance with lower complexity. 
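Several of the entries above rely on the standard OFDM mechanism by which a cyclic prefix turns a frequency-selective channel into parallel flat subchannels (the tones across which space-frequency codes are then designed). A minimal single-antenna, noise-free sketch of that mechanism follows; the FFT size, prefix length and channel taps are illustrative assumptions, not values from the referenced papers.

import numpy as np

rng = np.random.default_rng(1)
n_fft, cp_len = 64, 16                        # subcarriers and cyclic prefix length
h = np.array([1.0, 0.4 + 0.3j, 0.2j])         # example frequency-selective channel taps

# QPSK symbols, one per tone (space-frequency codes would map coded symbols here)
bits = rng.integers(0, 2, (n_fft, 2))
x_freq = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# transmitter: IFFT, then prepend the cyclic prefix
x_time = np.fft.ifft(x_freq) * np.sqrt(n_fft)
tx = np.concatenate([x_time[-cp_len:], x_time])

# channel: linear convolution (delay spread shorter than the cyclic prefix)
rx = np.convolve(tx, h)[: len(tx)]

# receiver: drop cyclic prefix, FFT, one-tap equalizer per tone
y_freq = np.fft.fft(rx[cp_len:cp_len + n_fft]) / np.sqrt(n_fft)
h_freq = np.fft.fft(h, n_fft)
x_hat = y_freq / h_freq

print("max per-tone error:", np.max(np.abs(x_hat - x_freq)))   # ~0 in this noise-free sketch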
--- paper_title: Impact of the propagation environment on the performance of space-frequency coded MIMO-OFDM paper_content: Previous work on space-frequency coded multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) has been restricted to idealistic propagation conditions. In this paper, using a broadband MIMO channel model taking into account Ricean K-factor, transmit and receive angle spread, and antenna spacing, we study the impact of the propagation environment on the performance of space-frequency coded MIMO-OFDM. For a given space-frequency code, we quantify the achievable diversity order and coding gain as a function of the propagation parameters. We find that while the presence of spatial receive correlation affects all space-frequency codes equally, spatial fading correlation at the transmit array can result in widely varying performance losses. High-rate space-frequency codes such as spatial multiplexing are typically significantly more affected by transmit correlation than low-rate codes such as space-frequency block codes. We show that in the MIMO Ricean case the presence of frequency-selectivity typically results in improved performance compared to the frequency-flat case. ---
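The transmit/receive correlation effects discussed in the entry above are commonly studied with a separable (Kronecker) correlation model, H = Rr^{1/2} Hw Rt^{1/2}. The sketch below draws such correlated channels and compares their singular values; the exponential correlation matrices and all parameter values are illustrative assumptions, not taken from the referenced paper.

import numpy as np

def exp_corr(n, rho):
    """Exponential correlation matrix: [R]_{ij} = rho^{|i-j|} (illustrative model)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def kronecker_channel(n_rx, n_tx, rho_rx, rho_tx, rng):
    """Draw H = Rr^{1/2} Hw Rt^{1/2} with i.i.d. CN(0,1) entries in Hw."""
    rr = np.linalg.cholesky(exp_corr(n_rx, rho_rx))
    rt = np.linalg.cholesky(exp_corr(n_tx, rho_tx))
    hw = (rng.standard_normal((n_rx, n_tx)) +
          1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    return rr @ hw @ rt.conj().T

rng = np.random.default_rng(2)
# strong transmit correlation typically shrinks the smaller singular values,
# i.e. fewer useful spatial modes for multiplexing
for rho_t in (0.0, 0.95):
    sv = np.linalg.svd(kronecker_channel(4, 4, 0.2, rho_t, rng), compute_uv=False)
    print("rho_tx =", rho_t, "singular values:", np.round(sv, 2))

The collapse of the smaller singular values under strong transmit correlation is consistent with the observation above that high-rate spatial multiplexing suffers more from transmit correlation than low-rate space-frequency block codes.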
Title: An overview of MIMO communications - a key to gigabit wireless Section 1: INTRODUCTION Description 1: Discuss the motivation for high data rate wireless communication and introduce the concept of MIMO technology. Section 2: BUILDING GIGABIT WIRELESS LINKS Description 2: Explore design tradeoffs and limitations in building high-speed wireless systems, highlighting the benefits of MIMO technology. Section 3: MIMO CHANNEL MODEL Description 3: Introduce the MIMO channel model suitable for NLOS environments, including physical scattering models and real-world channel measurements. Section 4: CAPACITY OF MIMO CHANNELS Description 4: Examine the capacity benefits of MIMO channels through theoretical models, providing both ergodic and outage capacity analysis for various fading conditions. Section 5: MIMO SIGNALING Description 5: Review basic signaling techniques for MIMO systems, focusing on space-time diversity coding and spatial multiplexing schemes. Section 6: MIMO RECEIVER ARCHITECTURES Description 6: Discuss receiver architectures, including ML, linear, and successive cancellation receivers, highlighting their performance and complexity tradeoffs. Section 7: FUNDAMENTAL PERFORMANCE LIMITS Description 7: Examine the fundamental tradeoffs between transmission rate, error rate, and SNR for MIMO systems, providing insights into optimal and suboptimal coding schemes. Section 8: MIMO-OFDM Description 8: Discuss the integration of MIMO with OFDM modulation, addressing signaling techniques and receiver strategies for frequency-selective fading channels. Section 9: CONCLUSION Description 9: Summarize the key points discussed in the paper and provide a brief outlook on future research directions and practical implementation challenges.
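Relating to Section 6 of the outline above, and to the V-BLAST and zero-forcing entries earlier in this reference list, the following is an illustrative sketch of zero-forcing detection with ordered successive interference cancellation for an uncoded spatial multiplexing transmission. It is a simplified stand-in, not the implementation from any referenced paper.

import numpy as np

def zf_sic_detect(h, y, constellation):
    """Zero-forcing detection with ordered successive interference cancellation
    (V-BLAST-style nulling and cancelling); uncoded, illustrative only."""
    n_tx = h.shape[1]
    s_hat = np.zeros(n_tx, dtype=complex)
    remaining = list(range(n_tx))
    while remaining:
        g = np.linalg.pinv(h[:, remaining])            # ZF nulling matrix
        # detect the stream with the least post-nulling noise amplification
        k = int(np.argmin(np.sum(np.abs(g) ** 2, axis=1)))
        idx = remaining[k]
        z = g[k] @ y
        s_hat[idx] = constellation[np.argmin(np.abs(constellation - z))]  # slicing
        y = y - h[:, idx] * s_hat[idx]                 # cancel the detected stream
        remaining.pop(k)
    return s_hat

rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = qpsk[rng.integers(0, 4, 4)]
h = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
y = h @ s + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.allclose(zf_sic_detect(h, y, qpsk), s))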
Overview of measurement-based connection admission control methods in ATM networks
8
--- paper_title: A simple multi-QoS ATM buffer management scheme based on adaptive admission control paper_content: This paper proposes simple and effective multi-QoS ATM buffer management scheme with admission control. The proposed scheme supports three QoS-service classes by two separate buffers with strict priority and measurement-based as well as individual cell loss rate (CLR)-based admission control. Two guaranteed classes for video and reliable data are multiplexed into a high priority buffer and individual CLR management is employed. We show that the proposed individual CLR management is simple and effective compared to separate buffers with WRR (weighted round robin). One best-effort class is assigned to the low priority buffer, and includes LAN intercommunication using an upper-layer congestion control mechanism. Since measurement-based adaptive admission control is employed for best-effort service, the proposed adaptive admission control achieves high bandwidth efficiency while keeping the target CLR. This multi-QoS management scheme can be applied to future multimedia ATM networks that integrate video, data, and other services. --- paper_title: Traffic Models in Broadband Networks paper_content: Traffic models are at the heart of any performance evaluation of telecommunications networks. An accurate estimation of network performance is critical for the success of broadband networks. Such networks need to guarantee an acceptable quality of service (QoS) level to the users. Therefore, traffic models need to be accurate and able to capture the statistical characteristics of the actual traffic. We survey and examine traffic models that are currently used in the literature. Traditional short-range and non-traditional long-range dependent traffic models are presented. The number of parameters needed, parameter estimation, analytical tractability, and ability of traffic models to capture marginal distribution and auto-correlation structure of the actual traffic are discussed. --- paper_title: A call admission control scheme in ATM networks paper_content: In an ATM (asynchronous transfer mode) network, call admission control decides whether or not to accept a new call, so as to ensure the service quality of both individual existing calls and of the new call itself. The measure of service quality used for the call admission control considered here is virtual cell loss probability, PV. PV may be formulated simply from the maximum and the average cell rates alone, and it provides a good estimate of cell loss probability without respect to the burst lengths. PV computational complexity increases rapidly with an increase either in the number of call types, K, or in the number of existing calls of individual call types; and for this reason, the author proposes a method for calculating a PV approximation on the upper side, with the aim of achieving real-time call admission control. Its complexity of computation is a linear order of K. Also proposed is a call admission algorithm which utilizes this approximation method and for which, in some cases, the computational complexity is independent of K. > --- paper_title: Connection admission control in ATM networks paper_content: A connection admission control scheme based on the Gaussian traffic model is proposed. According to the scheme every call declares only its peak rate. Its statistics are then measured during its holding time. 
The connection admission criteria is based on traffic measurements and loss probability calculations based on the the Gaussian traffic model. Simulation tests show that the scheme is somewhat more efficient than schemes proposed earlier. --- paper_title: High-speed connection admission control in ATM networks by generating virtual requests for connection paper_content: This paper proposes a high-speed connection admission control scheme, named PERB CAC (CAC based on prior estimation for residual bandwidth). This scheme estimates the residual bandwidth in advance by generating a series of virtual requests for connection. When an actual connection request occurs, PERB CAC can instantaneously judge if the required bandwidth is larger than the estimated residual bandwidth, so the connection set-up time can be greatly reduced. Therefore, PERB CAC can realize high-speed connection set-up. --- paper_title: On the theory of general on-off sources with applications in high-speed networks paper_content: A general theory of on-off sources is provided. The basic source model is characterized by alternating independent on (burst) and off (silence) periods, which may have general distributions. Other more complex sources are constructed, and their behavior is characterized in terms of the basic source model. Heterogeneous and homogeneous statistical multiplexers fed by such sources are considered. In the heterogeneous environment, a simple result on the tail behavior of the multiplexer queue length distribution in the heavy traffic is provided. In the homogeneous environment, asymptotic results on the tail behavior of the queue length distribution are provided for all levels of utilization. The results for the heterogeneous environment suggest a new call admission control policy for general on-off sources in high-speed networks, which depends only on the first two moments of the on and off periods of individual sources and their respective peak rates. > --- paper_title: Managing bandwidth in ATM networks with bursty traffic paper_content: Three approaches to the bandwidth management problem that have been proposed and studied by various groups are reviewed to illustrate three distinctly different approaches and identify their strengths and weaknesses. Based on these approaches, a bandwidth management and congestion control scheme for asynchronous transfer mode (ATM) networks that supports both point-to-point and one-to-many multicast virtual circuits is proposed. It is shown that the method can handle fully heterogeneous traffic and can be effectively implemented. The algorithm for making virtual circuit acceptance decisions is straightforward and fast, and the hardware mechanisms needed to implement buffer allocation and traffic monitoring at the user-network interface have acceptable complexities. It is also shown, through numerical examples, that the approach can achieve reasonable link efficiencies even in the presence of very bursty traffic. No advance reservation required, simplifying the interface between the network and the user and avoiding an initial network round trip delay before data can be transmitted. > --- paper_title: Loss performance analysis of an ATM multiplexer loaded with high-speed on-off sources paper_content: The performance of an asynchronous transfer mode (ATM) multiplexer whose input consists of the superposition of a multiplicity of homogeneous on-off sources modeled by a two-state Markovian process is studied. 
The approach is based on the approximation of the actual input process by means of a suitably chosen two-state Markov modulated Poisson process (MMPP), as a simple and effective choice for the representation of superposition arrival streams. To evaluate the cell loss performance, a new matching procedure that leads to accurate results compared to simulation is developed. The application limits of the proposed method are also discussed. The outstanding physical meaning of this procedure permits a deep insight into the multiplexer performance behavior as the source parameters and the multiplexer buffer size are varied. > --- paper_title: On the asymptotic behavior of heterogeneous statistical multiplexer with applications paper_content: The author combines a heterogeneous statistical multiplexer in heavy traffic with different characteristics in discrete time which is representative of the asynchronous transfer mode (ATM) environment at the cell level. An exact formulation of the queuing model for the multiplexer is presented. Using spectral decomposition method and asymptotic analysis, it is shown that for fixed source average utilization and peak rate, as the burst size of the individual sources increase the tail behavior of the distribution of the number of cells queued in the multiplexer has a simple characterization. This characterization provides a simple approximation of the queuing behavior of the multiplexer, where the impact of each source is quite evident. The accuracy of this approximation is examined. Some applications are considered where both buffer sizing and admission control are discussed. > --- paper_title: Call admission control in an ATM network using upper bound of cell loss probability paper_content: Call admission control in ATM networks without monitoring network load is proposed. Traffic parameters specified by users are employed to obtain the upper bound of cell loss probability, thereby determining call admission. A cell loss probability standard is guaranteed to be satisfied under this control without assumptions of a cell arrival process. Implementation of this control to quickly evaluate cell loss probability after acceptance of a new call is discussed. The result is also applicable to ATM network dimensioning based on traffic parameters. > --- paper_title: Adaptive Connection Admission Control Using Real-time Traffic Measurements in ATM Networks paper_content: The invention disclosed is a device capable of sensing and recording the total thermal energy received by an object in a given location. The device can sense the energy received from any direction by any source of thermal energy. The sensors are attached to a recorder so that variations over time can be observed. --- paper_title: A Framework for Bandwidth Management in ATM Networks--Aggregate Equivalent Bandwidth Estimation Approach paper_content: A unified framework for traffic control and bandwidth management in ATM networks is proposed. It bridges algorithms for real-time and data services. The central concept of this framework is adaptive connection admission. It employs an estimation of the aggregate equivalent bandwidth required by connections carried in each output port of the ATM switches. The estimation process takes into account both the traffic source declarations and the connection superposition process measurements in the switch output ports. This is done in an optimization framework based on a linear Kalman filter. 
To provide a required quality of service guarantee, bandwidth is reserved for possible estimation error. The algorithm is robust and copes very well with unpredicted changes in source parameters, thereby resulting in high bandwidth utilization while providing the required quality of service. The proposed approach can also take into account the influence of the source policing mechanism. The tradeoff between strict and relaxed source policing is discussed. --- paper_title: A simple bandwidth management strategy based on measurements of instantaneous virtual path utilization in ATM networks paper_content: A new connection admission control method based on actual virtual path traffic measurements is proposed to achieve high bandwidth efficiency for various types of traffic. The proposed method is based on the measurement of instantaneous virtual path utilization, which is defined as the total cell rate of the active virtual channels normalized by the virtual path capacity. A low-pass filter is used to determine the instantaneous virtual path utilization from crude measurements. A smoothing coefficient formula is derived as a function of the peak rate of the virtual channel. The residual bandwidth is derived from the maximum instantaneous utilization observed during a monitoring period. Simulation shows that the proposed method achieves statistical multiplexing gains of up to 80% of the limit possible with optimum control for similar traffic sources. It can be implemented with very simple hardware. The admission decision is simple: the requested bandwidth is compared with the residual bandwidth. This method is therefore well suited for practical asynchronous transfer mode switching systems. --- paper_title: A Decision-Theoretic Approach to Call Admission Control in ATM Networks paper_content: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-off between cell loss and call rejection, and of the consequences of estimation error. Further topics discussed include the robustness of the control to departures from model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types. > --- paper_title: Call admission control in an ATM network using upper bound of cell loss probability paper_content: Call admission control in ATM networks without monitoring network load is proposed. Traffic parameters specified by users are employed to obtain the upper bound of cell loss probability, thereby determining call admission. A cell loss probability standard is guaranteed to be satisfied under this control without assumptions of a cell arrival process. Implementation of this control to quickly evaluate cell loss probability after acceptance of a new call is discussed. The result is also applicable to ATM network dimensioning based on traffic parameters. 
> --- paper_title: Equivalent capacity and its application to bandwidth allocation in high-speed networks paper_content: The authors propose a computationally simple approximate expression for the equivalent capacity or bandwidth requirement of both individual and multiplexed connections, based on their statistical characteristics and the desired grade-of-service (GOS). The purpose of such an expression is to provide a unified metric to represent the effective bandwidth used by connections and the corresponding effective load of network links. These link metrics can then be used for efficient bandwidth management, routing, and call control procedures aimed at optimizing network usage. While the methodology proposed can provide an exact approach to the computation of the equivalent capacity, the associated complexity makes it infeasible for real-time network traffic control applications. Hence, an approximation is required. The validity of the approximation developed is verified by comparison to both exact computations and simulation results. > --- paper_title: Effectiveness of dynamic bandwidth management mechanisms in ATM networks paper_content: The leaky bucket, a credit management system that allows cells to enter the network only if sufficient credit is available, is considered. Motivation for an in-depth study of dynamic control algorithms, to avoid congestion in high-speed networks under time varying traffic statistics, is provided. The need for real-time traffic measurements and dynamic control actions for long-lived connections is demonstrated. In particular, it is shown through simulations that implementing access control algorithms based on the bandwidth allocation procedures only at connection setup is not sufficient for congestion-free operation of the network. It is further shown that if the bandwidth allocation and the leaky bucket parameters are not dynamically adjusted, the performances of the connections depend on their initial parameters and can be very undesirable as the traffic parameters change over time. Moreover, if corrective actions are not taken, congestion may build up in the network even in the presence of leaky buckets. The simple dynamic approach proposed limits the congestion periods in the network to the time scales of the connection-level controls. Therefore, the probability of many connections sending excess traffic into the network at the same time is greatly reduced, alleviating the congestion problem. > --- paper_title: Effective bandwidth of general Markovian traffic sources and admission control of high speed networks paper_content: A prime instrument for controlling congestion in high-speed broadband ISDN (BISDN) networks is admission control, which limits call and guarantees a grade of service determined by delay and loss probability in the multiplexer. It is shown, for general Markovian traffic sources, that it is possible to assign a notational effective bandwidth to each source which is an explicitly identified, simply computing quantity with provably correct properties in the natural asymptotic regime of small loss probabilities. It is the maximal real eigenvalue of a matrix which is directly obtained from the source characteristics and the admission criterion, and for several sources it is simply additive. Both fluid and point process models are considered, and parallel results are obtained. 
Numerical results show that the acceptance set for heterogeneous classes of sources is closely approximated and conservatively bounded by the set obtained from the effective bandwidth approximation. > --- paper_title: Fundamental Bounds and Approximations for ATM Multiplexers with Applications to Video Teleconferencing paper_content: The main contributions of this paper are two-fold. First, we prove fundamental, similarly behaving lower and upper bounds, and give an approximation based on the bounds, which is effective for analyzing ATM multiplexers, even when the traffic has many, possibly heterogeneous, sources and their models are of high dimension. Second, we apply our analytic approximation to statistical models of video teleconference traffic, obtain the multiplexing system's capacity as determined by the number of admissible sources for given cell-loss probability, buffer size and trunk bandwidth, and, finally, compare with results from simulations, which are driven by actual data from coders. The results are surprisingly close. Our bounds are based on large deviations theory. The main assumption is that the sources are Markovian and time-reversible. Our approximation to the steady-state buffer distribution is called Chenoff-dominant eigenvalue since one parameter is obtained from Chernoffs theorem and the other is the system's dominant eigenvalue. Fast, effective techniques are given for their computation. In our application we process the output of variable bit rate coders to obtain DAR(1) source models which, while of high dimension, require only knowledge of the mean, variance, and correlation. We require cell-loss probability not to exceed 10/sup -6/, trunk bandwidth ranges from 45 to 150 Mb/s, buffer sizes are such that maximum delays range from 1 to 60 ms, and the number of coder-sources ranges from 15 to 150. Even for the largest systems, the time for analysis is a fraction of a second, while each simulation takes many hours. Thus, the real-time administration of admission control based on our analytic techniques is feasible. > --- paper_title: Real-time cell loss ratio estimation and its applications to ATM traffic controls paper_content: The asymptotics of cell-loss ratio (CLR) in the regime of large buffers are characterized by two parameters, the asymptotic constant and asymptotic decay rate. Both parameters can be expanded in powers of one minus the link utilization (heavy traffic expansion). Thus, once the coefficients in this expansion are obtained, the CLR can be very easily estimated from the measured link utilization by using CLR asymptotics. This paper proposes an algorithm for estimating these coefficients in real time. For this purpose, the notion of state-space representation for a single-server queue is introduced: the elements of the state vector are the coefficients in the expansion of the asymptotic constant and asymptotic decay rate. Bayesian regression analysis is applied to estimate the state vector based on the buffer measurement. Our approach does not require any models describing the statistics of the traffic other than the asymptotic behavior of the CLR in the regime of large buffers. In addition, it allows us to estimate the effective bandwidth of the aggregated process easily, so it is applicable to a wide range of ATM traffic control methods, such as connection admission control and VP bandwidth control. 
We describe how this method of CLR estimation will be applied to connection admission control and VP bandwidth control by using results from simulation experiments. --- paper_title: Effectiveness of dynamic bandwidth management mechanisms in ATM networks paper_content: The leaky bucket, a credit management system that allows cells to enter the network only if sufficient credit is available, is considered. Motivation for an in-depth study of dynamic control algorithms, to avoid congestion in high-speed networks under time varying traffic statistics, is provided. The need for real-time traffic measurements and dynamic control actions for long-lived connections is demonstrated. In particular, it is shown through simulations that implementing access control algorithms based on the bandwidth allocation procedures only at connection setup is not sufficient for congestion-free operation of the network. It is further shown that if the bandwidth allocation and the leaky bucket parameters are not dynamically adjusted, the performances of the connections depend on their initial parameters and can be very undesirable as the traffic parameters change over time. Moreover, if corrective actions are not taken, congestion may build up in the network even in the presence of leaky buckets. The simple dynamic approach proposed limits the congestion periods in the network to the time scales of the connection-level controls. Therefore, the probability of many connections sending excess traffic into the network at the same time is greatly reduced, alleviating the congestion problem. > --- paper_title: Effective bandwidth of general Markovian traffic sources and admission control of high speed networks paper_content: A prime instrument for controlling congestion in high-speed broadband ISDN (BISDN) networks is admission control, which limits call and guarantees a grade of service determined by delay and loss probability in the multiplexer. It is shown, for general Markovian traffic sources, that it is possible to assign a notational effective bandwidth to each source which is an explicitly identified, simply computing quantity with provably correct properties in the natural asymptotic regime of small loss probabilities. It is the maximal real eigenvalue of a matrix which is directly obtained from the source characteristics and the admission criterion, and for several sources it is simply additive. Both fluid and point process models are considered, and parallel results are obtained. Numerical results show that the acceptance set for heterogeneous classes of sources is closely approximated and conservatively bounded by the set obtained from the effective bandwidth approximation. > --- paper_title: A Decision-Theoretic Approach to Call Admission Control in ATM Networks paper_content: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-off between cell loss and call rejection, and of the consequences of estimation error. 
Further topics discussed include the robustness of the control to departures from model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types. > --- paper_title: Loss performance analysis of an ATM multiplexer loaded with high-speed on-off sources paper_content: The performance of an asynchronous transfer mode (ATM) multiplexer whose input consists of the superposition of a multiplicity of homogeneous on-off sources modeled by a two-state Markovian process is studied. The approach is based on the approximation of the actual input process by means of a suitably chosen two-state Markov modulated Poisson process (MMPP), as a simple and effective choice for the representation of superposition arrival streams. To evaluate the cell loss performance, a new matching procedure that leads to accurate results compared to simulation is developed. The application limits of the proposed method are also discussed. The outstanding physical meaning of this procedure permits a deep insight into the multiplexer performance behavior as the source parameters and the multiplexer buffer size are varied. > --- paper_title: Adaptive Connection Admission Control Using Real-time Traffic Measurements in ATM Networks paper_content: The invention disclosed is a device capable of sensing and recording the total thermal energy received by an object in a given location. The device can sense the energy received from any direction by any source of thermal energy. The sensors are attached to a recorder so that variations over time can be observed. --- paper_title: Equivalent capacity and its application to bandwidth allocation in high-speed networks paper_content: The authors propose a computationally simple approximate expression for the equivalent capacity or bandwidth requirement of both individual and multiplexed connections, based on their statistical characteristics and the desired grade-of-service (GOS). The purpose of such an expression is to provide a unified metric to represent the effective bandwidth used by connections and the corresponding effective load of network links. These link metrics can then be used for efficient bandwidth management, routing, and call control procedures aimed at optimizing network usage. While the methodology proposed can provide an exact approach to the computation of the equivalent capacity, the associated complexity makes it infeasible for real-time network traffic control applications. Hence, an approximation is required. The validity of the approximation developed is verified by comparison to both exact computations and simulation results. > --- paper_title: A Framework for Bandwidth Management in ATM Networks--Aggregate Equivalent Bandwidth Estimation Approach paper_content: A unified framework for traffic control and bandwidth management in ATM networks is proposed. It bridges algorithms for real-time and data services. The central concept of this framework is adaptive connection admission. It employs an estimation of the aggregate equivalent bandwidth required by connections carried in each output port of the ATM switches. The estimation process takes into account both the traffic source declarations and the connection superposition process measurements in the switch output ports. This is done in an optimization framework based on a linear Kalman filter. 
To provide a required quality of service guarantee, bandwidth is reserved for possible estimation error. The algorithm is robust and copes very well with unpredicted changes in source parameters, thereby resulting in high bandwidth utilization while providing the required quality of service. The proposed approach can also take into account the influence of the source policing mechanism. The tradeoff between strict and relaxed source policing is discussed. --- paper_title: A simple bandwidth management strategy based on measurements of instantaneous virtual path utilization in ATM networks paper_content: A new connection admission control method based on actual virtual path traffic measurements is proposed to achieve high bandwidth efficiency for various types of traffic. The proposed method is based on the measurement of instantaneous virtual path utilization, which is defined as the total cell rate of the active virtual channels normalized by the virtual path capacity. A low-pass filter is used to determine the instantaneous virtual path utilization from crude measurements. A smoothing coefficient formula is derived as a function of the peak rate of the virtual channel. The residual bandwidth is derived from the maximum instantaneous utilization observed during a monitoring period. Simulation shows that the proposed method achieves statistical multiplexing gains of up to 80% of the limit possible with optimum control for similar traffic sources. It can be implemented with very simple hardware. The admission decision is simple: the requested bandwidth is compared with the residual bandwidth. This method is therefore well suited for practical asynchronous transfer mode switching systems. --- paper_title: A Decision-Theoretic Approach to Call Admission Control in ATM Networks paper_content: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-off between cell loss and call rejection, and of the consequences of estimation error. Further topics discussed include the robustness of the control to departures from model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types. > --- paper_title: Link capacity allocation and network control by filtered input rate in high-speed networks paper_content: We study link capacity allocation for a finite buffer system to transmit multimedia traffic. The queueing process is simulated with real video traffic. Two key concepts are explored in this study. First, the link capacity requirement at each node is essentially captured by its low-frequency input traffic (filtered at a properly selected cut-off frequency). Second, the low-frequency traffic stays intact as it travels through a finite-buffer system without significant loss. Hence, one may overlook the queueing process at each node for network-wide traffic flow in the low-frequency band. We propose a simple, effective method for link capacity allocation and network control using on-line observation of traffic flow in the low-frequency band. The study explores a new direction for measurement-based traffic control in high-speed networks. 
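The effective-bandwidth and equivalent-capacity abstracts above reduce admission control to an additive test: assign each source a single number between its mean and peak rate, and accept a new connection only if the sum of these numbers stays below the link capacity. The sketch below is a minimal Python illustration, not any cited paper's exact procedure; it uses the closed-form effective bandwidth of a two-state on-off Markov fluid source (obtained by inverting the single-source buffer decay rate mu/(R-c) - lambda/c), and all parameter names and the numerical values at the bottom are illustrative assumptions.

```python
import math

def onoff_effective_bandwidth(peak_rate, mean_rate, mean_burst_time, delta):
    """Effective bandwidth of a two-state on-off Markov fluid source.

    peak_rate       : cell rate while the source is 'on' (e.g. Mb/s)
    mean_rate       : long-run average rate (Mb/s)
    mean_burst_time : average duration of an 'on' period (s)
    delta           : QoS decay rate, delta = -ln(CLR_target) / buffer_size,
                      so that P(loss) ~ exp(-delta * B) for a large buffer B.
    Returns a value between mean_rate and peak_rate.
    """
    mu = 1.0 / mean_burst_time                      # on -> off transition rate
    lam = mu * mean_rate / (peak_rate - mean_rate)  # off -> on transition rate
    a = delta * peak_rate - (lam + mu)
    return (a + math.sqrt(a * a + 4.0 * lam * delta * peak_rate)) / (2.0 * delta)

def admit(existing, new_source, link_capacity, buffer_size, clr_target):
    """Additive effective-bandwidth admission test."""
    delta = -math.log(clr_target) / buffer_size
    total = sum(onoff_effective_bandwidth(*s, delta) for s in existing + [new_source])
    return total <= link_capacity

# Illustrative numbers only: 150 Mb/s link, 1 Mb buffer, CLR target 1e-6,
# sources with 10 Mb/s peak rate, 1 Mb/s mean rate, 10 ms mean burst length.
src = (10.0, 1.0, 0.01)          # (peak Mb/s, mean Mb/s, mean burst s)
print(admit([src] * 40, src, link_capacity=150.0, buffer_size=1.0, clr_target=1e-6))
```

The decay rate delta ties the loss target and buffer size together: a large delta (small buffer or strict loss target) pushes each effective bandwidth towards the peak rate, while a small delta pushes it towards the mean rate.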
> --- paper_title: Effectiveness of dynamic bandwidth management mechanisms in ATM networks paper_content: The leaky bucket, a credit management system that allows cells to enter the network only if sufficient credit is available, is considered. Motivation for an in-depth study of dynamic control algorithms, to avoid congestion in high-speed networks under time varying traffic statistics, is provided. The need for real-time traffic measurements and dynamic control actions for long-lived connections is demonstrated. In particular, it is shown through simulations that implementing access control algorithms based on the bandwidth allocation procedures only at connection setup is not sufficient for congestion-free operation of the network. It is further shown that if the bandwidth allocation and the leaky bucket parameters are not dynamically adjusted, the performances of the connections depend on their initial parameters and can be very undesirable as the traffic parameters change over time. Moreover, if corrective actions are not taken, congestion may build up in the network even in the presence of leaky buckets. The simple dynamic approach proposed limits the congestion periods in the network to the time scales of the connection-level controls. Therefore, the probability of many connections sending excess traffic into the network at the same time is greatly reduced, alleviating the congestion problem. > --- paper_title: A call admission control scheme in ATM networks paper_content: In an ATM (asynchronous transfer mode) network, call admission control decides whether or not to accept a new call, so as to ensure the service quality of both individual existing calls and of the new call itself. The measure of service quality used for the call admission control considered here is virtual cell loss probability, PV. PV may be formulated simply from the maximum and the average cell rates alone, and it provides a good estimate of cell loss probability without respect to the burst lengths. PV computational complexity increases rapidly with an increase either in the number of call types, K, or in the number of existing calls of individual call types; and for this reason, the author proposes a method for calculating a PV approximation on the upper side, with the aim of achieving real-time call admission control. Its complexity of computation is a linear order of K. Also proposed is a call admission algorithm which utilizes this approximation method and for which, in some cases, the computational complexity is independent of K. > --- paper_title: A Framework for Bandwidth Management in ATM Networks--Aggregate Equivalent Bandwidth Estimation Approach paper_content: A unified framework for traffic control and bandwidth management in ATM networks is proposed. It bridges algorithms for real-time and data services. The central concept of this framework is adaptive connection admission. It employs an estimation of the aggregate equivalent bandwidth required by connections carried in each output port of the ATM switches. The estimation process takes into account both the traffic source declarations and the connection superposition process measurements in the switch output ports. This is done in an optimization framework based on a linear Kalman filter. To provide a required quality of service guarantee, bandwidth is reserved for possible estimation error. 
The algorithm is robust and copes very well with unpredicted changes in source parameters, thereby resulting in high bandwidth utilization while providing the required quality of service. The proposed approach can also take into account the influence of the source policing mechanism. The tradeoff between strict and relaxed source policing is discussed. --- paper_title: A simple bandwidth management strategy based on measurements of instantaneous virtual path utilization in ATM networks paper_content: A new connection admission control method based on actual virtual path traffic measurements is proposed to achieve high bandwidth efficiency for various types of traffic. The proposed method is based on the measurement of instantaneous virtual path utilization, which is defined as the total cell rate of the active virtual channels normalized by the virtual path capacity. A low-pass filter is used to determine the instantaneous virtual path utilization from crude measurements. A smoothing coefficient formula is derived as a function of the peak rate of the virtual channel. The residual bandwidth is derived from the maximum instantaneous utilization observed during a monitoring period. Simulation shows that the proposed method achieves statistical multiplexing gains of up to 80% of the limit possible with optimum control for similar traffic sources. It can be implemented with very simple hardware. The admission decision is simple: the requested bandwidth is compared with the residual bandwidth. This method is therefore well suited for practical asynchronous transfer mode switching systems. --- paper_title: Managing bandwidth in ATM networks with bursty traffic paper_content: Three approaches to the bandwidth management problem that have been proposed and studied by various groups are reviewed to illustrate three distinctly different approaches and identify their strengths and weaknesses. Based on these approaches, a bandwidth management and congestion control scheme for asynchronous transfer mode (ATM) networks that supports both point-to-point and one-to-many multicast virtual circuits is proposed. It is shown that the method can handle fully heterogeneous traffic and can be effectively implemented. The algorithm for making virtual circuit acceptance decisions is straightforward and fast, and the hardware mechanisms needed to implement buffer allocation and traffic monitoring at the user-network interface have acceptable complexities. It is also shown, through numerical examples, that the approach can achieve reasonable link efficiencies even in the presence of very bursty traffic. No advance reservation required, simplifying the interface between the network and the user and avoiding an initial network round trip delay before data can be transmitted. > --- paper_title: A Decision-Theoretic Approach to Call Admission Control in ATM Networks paper_content: This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-off between cell loss and call rejection, and of the consequences of estimation error. 
Further topics discussed include the robustness of the control to departures from model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types. > --- paper_title: On the relevance of long-range dependence in network traffic paper_content: There is much experimental evidence that network traffic processes exhibit ubiquitous properties of self-similarity and long-range dependence, i.e., of correlations over a wide range of time scales. However, there is still considerable debate about how to model such processes and about their impact on network and application performance. In this paper, we argue that much previous modeling work has failed to consider the impact of two important parameters, namely the finite range of time scales of interest in performance evaluation and prediction problems, and the first-order statistics such as the marginal distribution of the process. We introduce and evaluate a model in which these parameters can be controlled. Specifically, our model is a modulated fluid traffic model in which the correlation function of the fluid rate matches that of an asymptotically second-order self-similar process with given Hurst parameter up to an arbitrary cutoff time lag, then drops to zero. We develop a very efficient numerical procedure to evaluate the performance of a single-server queue fed with the above fluid input process. We use this procedure to examine the fluid loss rate for a wide range of marginal distributions, Hurst (1950) parameters, cutoff lags, and buffer sizes. Our main results are as follows. First, we find that the amount of correlation that needs to be taken into account for performance evaluation depends not only on the correlation structure of the source traffic, but also on time scales specific to the system under study. For example, the time scale associated with a queueing system is a function of the maximum buffer size. Thus, for finite buffer queues, we find that the impact on loss of the correlation in the arrival process becomes nil beyond a time scale we refer to as the correlation horizon. This means, in particular, that for performance-modeling purposes, we may choose any model among the panoply of available models (including Markovian and self-similar models) as long as the chosen model captures the correlation structure of the source traffic up to the correlation horizon. Second, we find that loss can depend in a crucial way on the marginal distribution of the fluid rate process. Third, our results suggest that reducing loss by buffering is hard for traffic with correlation over many time scales. We advocate the use of source traffic control and statistical multiplexing instead. --- paper_title: The importance of long-range dependence of VBR video traffic in ATM traffic engineering: myths and realities paper_content: There has been a growing concern about the potential impact of long-term correlations (second-order statistic) in variable-bit-rate (VBR) video traffic on ATM buffer dimensioning. Previous studies have shown that video traffic exhibits long-range dependence (LRD) (Hurst parameter large than 0.5). We investigate the practical implications of LRD in the context of realistic ATM traffic engineering by studying ATM multiplexers of VBR video sources over a range of desirable cell loss rates and buffer sizes (maximum delays). 
Using results based on large deviations theory, we introduce the notion of Critical Time Scale (CTS). For a given buffer size, link capacity, and the marginal distribution of frame size, the CTS of a VBR video source is defined as the number of frame correlations that contribute to the cell loss rate. In other words, second-order behavior at the time scale beyond the CTS does not significantly affect the network performance. We show that whether the video source model is Markov or has LRD, its CTS is finite, attains a small value for small buffer, and is a non-decreasing function of buffer size. Numerical results show that (i) even in the presence of LRD, long-term correlations do not have significant impact on the cell loss rate; and (ii) short-term correlations have dominant effect on cell loss rate, and therefore, well-designed Markov traffic models are effective for predicting Quality of Service (QOS) of LRD VBR video traffic. Therefore, we conclude that it is unnecessary to capture the long-term correlations of a real-time VBR video source under realistic ATM buffer dimensioning scenarios as far as the cell loss rates and maximum buffer delays are concerned. --- paper_title: Long-range dependence in variable-bit-rate video traffic paper_content: We analyze 20 large sets of actual variable-bit-rate (VBR) video data, generated by a variety of different codecs and representing a wide range of different scenes. Performing extensive statistical and graphical tests, our main conclusion is that long-range dependence is an inherent feature of VBR video traffic, i.e., a feature that is independent of scene (e.g., video phone, video conference, motion picture video) and codec. In particular, we show that the long-range dependence property allows us to clearly distinguish between our measured data and traffic generated by VBR source models currently used in the literature. These findings give rise to novel and challenging problems in traffic engineering for high-speed networks and open up new areas of research in queueing and performance analysis involving long-range dependent traffic models. A small number of analytic queueing results already exist, and we discuss their implications for network design and network control strategies in the presence of long-range dependent traffic. > --- paper_title: Queue Response to Input Correlation Functions: Discrete Spectral Analysis paper_content: The authors explore a new concept of spectral characterization of wide-band input process in high speed networks. It helps them to localize wide-band sources in a subspace, especially in the low-frequency band, which has a dominant impact on queueing performance. They choose simple periodic-chains for the input rate process construction. Analogous to input functions in signal processing, they use elements of DC, sinusoidal, rectangular pulse, triangle pulse, and their superpositions, to represent various input correlation properties. The corresponding input power spectrum is defined in the discrete-frequency domain. In principle, a continuous spectral function of stationary random input process can be asymptotically approached by its discrete version as one sufficiently reduces the discrete-frequency intervals. An understanding of the queue response to the input spectrum will provide a great deal of knowledge to develop advanced network traffic measurement theory, and help to introduce effective network resource allocation policies. 
The new relation between queue length and input spectrum is a fruitful starting point for further research. > --- paper_title: Queue Response to Input Correlation Functions: Continuous Spectral Analysis paper_content: Queueing performance in a richer, heterogeneous input environment is studied. A unique way to understand the effect of second- and higher-order input statistics on queues is offered, and new concepts of traffic measurement, network control, and resource allocation are developed for high-speed networks in the frequency domain. The technique applies to the analysis of queue response to the individual effects of input power spectrum, bispectrum, trispectrum, and input-rate steady-state distribution. The study provides clear evidence that of the four input statistics, the input power spectrum is most essential to queueing analysis. Furthermore, input power in the low-frequency band has a dominant impact on queueing performance, whereas high-frequency power to a large extent can be neglected. > --- paper_title: Link capacity allocation and network control by filtered input rate in high-speed networks paper_content: We study link capacity allocation for a finite buffer system to transmit multimedia traffic. The queueing process is simulated with real video traffic. Two key concepts are explored in this study. First, the link capacity requirement at each node is essentially captured by its low-frequency input traffic (filtered at a properly selected cut-off frequency). Second, the low-frequency traffic stays intact as it travels through a finite-buffer system without significant loss. Hence, one may overlook the queueing process at each node for network-wide traffic flow in the low-frequency band. We propose a simple, effective method for link capacity allocation and network control using on-line observation of traffic flow in the low-frequency band. The study explores a new direction for measurement-based traffic control in high-speed networks. > --- paper_title: A low-pass filter design for ATM traffic measurement and its application to bandwidth management paper_content: A low-pass filter (LPF) for ATM traffic measurement is presented and an admission control method based on it is proposed. Instantaneous rate is estimated with the LPF and the maximum instantaneous rate (MIR) observed over the monitoring period is used to determine the residual bandwidth in the method. Hence we call it the MIR method. We developed an analytical model to investigate the performance of the MIR method. We confirmed that the MIR method regulates the number of connections properly even for connections with long holding time. In addition, the MIR method is robust against long range dependent traffic, which degrades the robustness of many previously proposed methods. Because the MIR method does not assume any mathematical model and can be implemented with simple hardware, it is well suited for practical ATM switching systems. ---
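The filtered-input-rate and low-pass-filter (MIR) abstracts immediately above estimate the bandwidth actually in use from a smoothed measurement of link utilization and admit a new connection only if its peak rate fits into the residual bandwidth. The following sketch is a simplified illustration of that idea with a first-order exponential-smoothing filter; the smoothing coefficient, safety margin, and synthetic traffic trace are invented for the example and do not reproduce the cited papers' filter designs.

```python
import numpy as np

def smoothed_utilization(samples, alpha):
    """First-order low-pass (exponential smoothing) of measured utilization.

    samples : sequence of raw utilization measurements in [0, 1]
              (aggregate cell rate / link capacity per measurement slot)
    alpha   : smoothing coefficient in (0, 1]; smaller means heavier filtering
    """
    u = 0.0
    out = []
    for x in samples:
        u = (1.0 - alpha) * u + alpha * x
        out.append(u)
    return np.array(out)

def admit_by_mir(samples, alpha, link_capacity, peak_rate_new, margin=0.05):
    """Accept a new connection if its peak rate fits in the residual bandwidth
    left by the maximum smoothed (instantaneous) utilization seen so far."""
    mir = smoothed_utilization(samples, alpha).max()
    residual = link_capacity * (1.0 - mir - margin)
    return peak_rate_new <= residual

# Illustrative example: noisy utilization around 0.6 on a 150 Mb/s link.
rng = np.random.default_rng(0)
raw = np.clip(0.6 + 0.2 * rng.standard_normal(1000), 0.0, 1.0)
print(admit_by_mir(raw, alpha=0.05, link_capacity=150.0, peak_rate_new=10.0))
```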
Title: Overview of measurement-based connection admission control methods in ATM networks
Section 1: Introduction
Description 1: Introduce the concept of ATM networks and the necessity of connection admission control (CAC) methods.
Section 2: Connection Admission Control (CAC)-Related Issues
Description 2: Discuss CAC-related issues, including switch architecture, burst traffic model, traffic descriptor, and requirements for effective CAC methods.
Section 3: Overview of CAC Methods
Description 3: Provide an overview of CAC methods and present a taxonomy to classify the various approaches.
Section 4: Taxonomy
Description 4: Explain the taxonomy used to classify CAC methods, detailing the distinctions between rate-sharing multiplexing (RSM) methods, rate-envelope multiplexing (REM) methods, CLR methods, and effective bandwidth (EB) methods.
Section 5: Brief Overview of Proposed Methods
Description 5: Offer a brief survey of proposed CAC methods, categorized based on the taxonomy introduced earlier, and highlight specific methods and their key features.
Section 6: Description of Measurement-Based Admission Control Methods
Description 6: Examine the need for measurement-based CAC methods and provide detailed descriptions of selected measurement-based methods.
Section 7: Comparison
Description 7: Compare the discussed measurement-based CAC methods based on various criteria such as bandwidth efficiency, implementation complexity, and dependency on traffic models.
Section 8: Closing Remarks
Description 8: Summarize the findings and discuss the potential of measurement-based CAC methods in high-speed backbone ATM networks.
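Several of the measurement-based methods catalogued above, notably the real-time CLR estimation abstract, rest on the large-buffer asymptotic CLR ≈ alpha * exp(-eta * B) characterized by an asymptotic constant and a decay rate. The snippet below is a purely illustrative sketch, not the Bayesian regression or Kalman-filter estimators described in those abstracts: it estimates the two parameters from queue-occupancy samples by a log-linear fit of the empirical tail and extrapolates the loss estimate to the actual buffer size. The synthetic trace and the threshold grid are assumptions made for the example.

```python
import numpy as np

def fit_clr_asymptote(queue_samples, thresholds):
    """Fit P(Q > x) ~ alpha * exp(-eta * x) from observed queue lengths.

    queue_samples : 1-D array of sampled queue occupancies (cells)
    thresholds    : increasing buffer thresholds x at which to measure the tail
    Returns (alpha, eta).
    """
    q = np.asarray(queue_samples, dtype=float)
    tail = np.array([np.mean(q > x) for x in thresholds])
    mask = tail > 0                      # keep thresholds that were actually exceeded
    slope, intercept = np.polyfit(np.asarray(thresholds)[mask],
                                  np.log(tail[mask]), 1)
    return np.exp(intercept), -slope     # alpha, eta

def estimated_clr(alpha, eta, buffer_size):
    """Extrapolated cell-loss-ratio estimate for a buffer of 'buffer_size' cells."""
    return alpha * np.exp(-eta * buffer_size)

# Illustrative synthetic data standing in for measured queue lengths.
rng = np.random.default_rng(1)
samples = rng.exponential(scale=20.0, size=50_000)
a, e = fit_clr_asymptote(samples, thresholds=np.arange(10, 120, 10))
print(estimated_clr(a, e, buffer_size=500))
```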
A Review of Localization Systems for Robotic Endoscopic Capsules
6
--- paper_title: At a watershed? Technical developments in wireless capsule endoscopy. paper_content: This article reviews some of the technical developments that allowed the introduction of the wireless capsule 10 years ago into human usage. Technical advances and commercial competition have substantially improved the performance of clinical capsule endoscopy, especially in optical quality. Optical issues including the airless environment, depth of focus, dome reflection, the development of white light light-emitting diodes, exposure length and the advent of adaptive illumination are discussed. The competition between charge coupled devices and complementary metal oxide silicone technologies for imaging, lens improvements and the requirements for different frame rates and their associated power management strategies and battery type choices and the introduction of field enhancement methods into commercial capsule technology are considered. Capsule technology stands at a watershed. It is mainly confined to diagnostic small intestinal imaging. It might overtake other forms of conventional diagnostic endoscopy, especially colonoscopy but also gastroscopy and esophagoscopy but has to improve both technically and compete in price. It might break out of its optical diagnostic confinement and become a therapeutic modality. To make this leap there have to be several technical advances especially in biopsy, command, micromechanical internal movements, remote controlled manipulation and changes in power management, which may include external power transmission. --- paper_title: Recent Patents on Wireless Capsule Endoscopy paper_content: Wireless capsule endoscopy is a medical procedure which has revolutionized endoscopy as it has enabled for the first time a painless inspection of the small intestine. The procedure was unveiled in 2000 and is based on a vitamin- size pill which captures images of the digestive tract while it is transported passively by peristalsis. The device consists of an image sensor, an illumination module, a radio-frequency transmitter and a battery. Wireless capsule endoscopy is a novel breakthrough in the biomedical industry and future progresses in key technologies are expected to drive the development of the next generation of such devices. Therefore, the purpose of this review is to illustrate the most recent and significant inventions patented from 2005 to present in those areas concerning measurement of human body parameters, advanced imaging features, localization, energy management and active propulsion. Finally, the manuscript reports a discussion on current and future developments in wireless capsule endoscopy. --- paper_title: Capsule Endoscopy: From Current Achievements to Open Challenges paper_content: Wireless capsule endoscopy (WCE) can be considered an example of disruptive technology since it represents an appealing alternative to traditional diagnostic techniques. This technology enables inspection of the digestive system without discomfort or need for sedation, thus preventing the risks of conventional endoscopy, and has the potential of encouraging patients to undergo gastrointestinal (GI) tract examinations. However, currently available clinical products are passive devices whose locomotion is driven by natural peristalsis, with the drawback of failing to capture the images of important GI tract regions, since the doctor is unable to control the capsule's motion and orientation. 
To address these limitations, many research groups are working to develop active locomotion devices that allow capsule endoscopy to be performed in a totally controlled manner. This would enable the doctor to steer the capsule towards interesting pathological areas and to accomplish medical tasks. This review presents a research update on WCE and describes the state of the art of the basic modules of current swallowable devices, together with a perspective on WCE potential for screening, diagnostic, and therapeutic endoscopic procedures. --- paper_title: Wireless capsule endoscopy paper_content: With the introduction of the flexible fiber optic endoscope in 1950s visualization of the esophagus, stomach, upper small bowel and colon became possible. The flexible shaft of the instrument carried the fiber optic light bundles, power and the optical elements. It also contained cables, which allowed for control over the direction of the instrument. Therefore, the instrument was of relatively large diameter, making gastroscopy, small bowel endoscopy and colonoscopy an uncomfortable procedure requiring sedation. Recent advances in development of low power complementary metal‐oxide silicon (CMOS) imagers, mixed signal application specific integrated circuit (ASICs) and white light emitting diodes (LEDs) made possible development of a new type of endoscope – the swallowable video capsule. We describe the development of a video‐telemetry capsule endoscope that is small enough to swallow (11×26 mm) and has no external wires, fiber optic bundles or cables. Extensive clinical and healthy volunteer trials have p... --- paper_title: Wireless capsule endoscopy paper_content: Gastroscopy, small bowel endoscopy, and colonoscopy are uncomfortable because they require comparatively large diameter flexible cables to be pushed into the bowel, which carry light by fibreoptic bundles, power, and video signals. Small bowel endoscopy is currently especially limited by problems of discomfort and failure to advance enteroscopes far into the small bowel. There is a clinical need for better methods to examine the small bowel especially in patients with recurrent gastrointestinal bleeding from this site. --- paper_title: Upper Gastrointestinal Endoscopy: Current Status paper_content: Esophagogastroduodenoscopy occupies a predominant position in the diagnostic evaluation and therapeutic management of foregut disease. The safety, anatomic refinement, and tissue sampling capabilities offered by endoscopic examination support its use as a premier diagnostic tool. An increasingly diverse and ingenious set of endoscopically delivered tools are available to expand the diagnostic capability, and extend the therapeutic application of esophagogastroduodenoscopy to a wide range of pathology, both benign and neoplastic. Comparative outcome data support the utility of therapeutic esophagogastroduodenoscopy in the management of upper gastrointestinal bleeding and the palliative management of foregut neoplasia. Endoscopically delivered therapies may have an increasing role in the management of gastroesophageal reflux disease in the future, and the development of endoluminal ultrasound has added a whole new dimension to endoscopic diagnostic and, potentially, therapeutic capability. This review highlights the current status of esophagogastroduodenoscopy in the diagnosis and management of upper gastrointestinal pathology. 
--- paper_title: Capsule endoscopy—A mechatronics perspective paper_content: The recent advances in integrated circuit technology, wireless communication, and sensor technology have opened the door for development of miniature medical devices that can be used for enhanced monitoring and treatment of medical conditions. Wireless capsule endoscopy is one of such medical devices that has gained significant attention during the past few years. It is envisaged that future wireless capsule endoscopies replace traditional endoscopy procedures by providing advanced functionalities such as active locomotion, body fluid/tissue sampling, and drug delivery. Development of energy-efficient miniaturized actuation mechanisms is a key step toward achieving this goal. Here, we review some of the actuators that could be integrated into future wireless capsules and discuss the existing challenges. --- paper_title: A further step beyond wireless capsule endoscopy paper_content: Purpose – Aims to report on a new trend of research and development in the wireless capsule endoscope.Design/methodology/approach – Presents a conceptual design of a wireless capsule endoscope having new features like navigation control, self‐propulsion, higher rate image transmission, acquisition of samples, application of medications and so forth.Findings – The basic principle has been verified by experiments. Seems promising.Research limitations/implications – Yet to need a lot of effort for commercialization.Practical implications – If successful, it will provide another way of least invasive medical treatment that must reduce pain of patients drastically.Originality/value – Taking advantage of the state‐of‐the‐art micro‐technology it suggests a further step beyond the current wireless capsule endoscope. --- paper_title: Capsule endoscopy: progress update and challenges ahead paper_content: Capsule endoscopy (CE) enables remote diagnostic inspection of the gastrointestinal tract without sedation and with minimal discomfort. Initially intended for small-bowel endoscopy, modifications to the original capsule have since been introduced for imaging of the esophagus and the colon. This review presents a research update on CE. Emphasis is placed on PillCam SB, PillCam ESO, and PillCam COLON (Given Imaging, Yoqneam, Israel) since the majority of published studies have investigated these devices. Discussion of initial reports on competing devices, such as EndoCapsule (Olympus, Tokyo, Japan) and MiroCam (IntroMedic Co., Seoul, Republic of Korea) are also included. The last section of this review outlines ongoing research and development directed at the identification of capsule location, control of capsule movement and expansion of the capability of microcameras to enhance the diagnostic power of CE. Research efforts aimed at endowing the capsule with a range of functionalities are also discussed, from tissue sampling for biopsy to optical biopsy and, in some cases, actual treatment (interventional CE), so that CE may ultimately replace both diagnostic and interventional flexible endoscopy. --- paper_title: Wireless capsule endoscopy: from diagnostic devices to multipurpose robotic systems paper_content: In the recent past, the introduction of miniaturised image sensors with low power consumption, based on complementary metal oxide semiconductor (CMOS) technology, has allowed the realisation of an ingestible wireless capsule for the visualisation of the small intestine mucosa. 
The device has received approval from Food and Drug Administration and has gained momentum since it has been more successful than traditional techniques in the diagnosis of small intestine disorders. In 2004 an esophagus specific capsule was launched, while a solution for colon is still under development. However, present solutions suffer from several limitations: they move passively by exploiting peristalsis, are not able to stop intentionally for a prolonged diagnosis, they receive power from an internal battery with short length, and their usage is restricted to one organ, either small bowel or esophagus. However the steady progresses in many branches of engineering, including microelectromechanical systems (MEMS), are envisaged to affect the performances of capsular endoscopy. The near future foreshadows capsules able to pass actively through the whole gastrointestinal tract, to retrieve views from all organs and to perform drug delivery and tissue sampling. In the long term, the advent of robotics could lead to autonomous medical platforms, equipped with the most advanced solutions in terms of MEMS for therapy and diagnosis of the digestive tract. In this review, we discuss the state of the art of wireless capsule endoscopy (WCE): after a description on the current status, we present the most promising solutions. --- paper_title: Swallowable medical devices for diagnosis and surgery: The state of the art paper_content: Abstract The first wireless camera pills created a revolutionary new perspective for engineers and physicians, demonstrating for the first time the feasibility of achieving medical objectives deep within the human body from a swallowable, wireless platform. The approximately 10 years since the first camera pill has been a period of great innovation in swallowable medical devices. Many modules and integrated systems have been devised to enable and enhance the diagnostic and even robotic capabilities of capsules working within the gastrointestinal (GI) tract. This article begins by reviewing the motivation and challenges of creating devices to work in the narrow, winding, and often inhospitable GI environment. Then the basic modules of modern swallowable wireless capsular devices are described, and the state of the art in each is discussed. This article is concluded with a perspective on the future potential of swallowable medical devices to enable advanced diagnostics beyond the capability of human visual perception, and even to directly deliver surgical tools and therapy non-invasively to interventional sites deep within the GI tract. --- paper_title: Novel electromagnetic actuation system for three-dimensional locomotion and drilling of intravascular microrobot paper_content: Various types of actuation methods for microrobots have been proposed. Among the actuation methods, electromagnetic based actuation (EMA) has been considered a promising actuation mechanism. In this paper, a new EMA system for three-dimensional (3D) locomotion and drilling of the microrobot is proposed. The proposed system consists of four fixed coil pairs and one rotating coil pair. In detail, the coil system has three pairs of stationary Helmholtz coil, a pair of stationary Maxwell coil and a pair of rotating Maxwell coil. The Helmholtz coil pairs can magnetize and align the microrobot to the desired direction and the two pairs of Maxwell coil can generate the propulsion force of the microrobot. In addition, the Helmholtz coil pairs can rotate the microrobot about a desired axis. 
The rotation of the microrobot is a drilling action through an occlusion in a vessel. Through various experiments, the 3D locomotion and drilling of the microrobot by using the proposed EMA system are demonstrated. Compared with other EMA systems, the proposed system can provide the advantages of consecutive locomotion and drilling of the microrobot. --- paper_title: Position and orientation detection of capsule endoscopes in spiral motion paper_content: In this paper, a position and orientation detection method for the capsule endoscopes devised to move through the human digestive organs in spiral motion, is introduced. The capsule is equipped with internal magnets and flexible threads on their outer shell. It is forced to rotate by an external rotating magnetic field that produces a spiral motion. As the external magnetic field is generated by rotating a permanent magnet, the 3-axes Cartesian coordinate position and 3-axes orientation of the capsule endoscopes can be measured by using only 3 hall-effect sensors orthogonally installed inside the capsule. However, in this study, an additional hall-effect sensor is employed along the rotating axis at a symmetrical position inside the capsule body to enhance measurement accuracy. In this way, the largest position detection error appearing along the rotating axis of the permanent magnet could be reduced to less than 15mm, when the relative position of the capsule endoscope to the permanent magnet is changed from 0mm to 50mm in the X-direction, from −50mm to +50mm in the Y-direction and from 200mm to 300mm in the Z-direction. The maximum error of the orientation detection appearing in the pitching direction ranged between −4° and +15°. --- paper_title: A flexible chain-based screw propeller for capsule endoscopes paper_content: In this paper, a new type of screw propeller for capsule endoscopes is introduced whose rotation motion is enforced by magnetic forces between an armature magnet inside the capsule and an external rotating permanent magnet working like an AC stator. The screw propeller consists of flexible threads attached on the capsule body. The spiral angle of the flexible threads switches automatically due to the friction force between intestine wall and the capsule body when the rotating direction of the capsule body is reversed. Therefore, the moving direction of the capsule is independent of the rotating direction of the capsule body. This will help the capsule advance especially in curved tracts by changing its effective thread angle for reduction of friction torque. In order to investigate main parameters that influence the performance of the new screw propeller, we evaluated the speed of the capsule with various shapes and sizes of the flexible threads, changing the parameters such as rotational speed of the external permanent magnet, distance between the external magnet and the armature magnet inside the capsule, and viscosity of lubrication oil in artificial intestine tract. The speed of the capsule with the flexible screw propeller was mainly dependent on the shape of threads. Also, the distance between the magnets highly influenced the speed of the capsule since thrust force of the capsule was dependent on it. On the contrary, rotational speed and oil viscosity insignificantly contributed to the speed of the capsule. Especially in curved tracts, the effective thread angle of the screw propeller which could be changed by its rotation direction played a great role in advancing the capsule efficiently. 
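The position-and-orientation detection abstract above, and most of the magnetic localization papers later in this reference list, model the permanent magnet inside the capsule as a point dipole and compare the field it would produce at an external sensor array with the measured values. The snippet below is a minimal sketch of that standard forward model; the sensor grid geometry and the dipole moment value are illustrative assumptions, not taken from any cited system.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(sensor_pos, magnet_pos, moment):
    """Magnetic flux density (T) of a point dipole evaluated at a sensor position.

    sensor_pos, magnet_pos : 3-vectors in metres
    moment                 : dipole moment vector in A*m^2
    Implements B(r) = mu0/(4*pi) * (3*(m.r_hat)*r_hat - m) / |r|^3.
    """
    r = np.asarray(sensor_pos, float) - np.asarray(magnet_pos, float)
    d = np.linalg.norm(r)
    r_hat = r / d
    m = np.asarray(moment, float)
    return MU0 / (4.0 * np.pi) * (3.0 * np.dot(m, r_hat) * r_hat - m) / d**3

def predict_array(sensor_positions, magnet_pos, moment):
    """Stack the predicted field at every sensor of an external array."""
    return np.array([dipole_field(p, magnet_pos, moment) for p in sensor_positions])

# Illustrative 4 x 4 planar sensor grid, 5 cm spacing, magnet 10 cm above the plane.
grid = [(0.05 * i, 0.05 * j, 0.0) for i in range(4) for j in range(4)]
print(predict_array(grid, magnet_pos=(0.08, 0.08, 0.10), moment=(0.0, 0.0, 0.1)))
```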
--- paper_title: Volumetric characterization of the Aurora magnetic tracker system for image-guided transorbital endoscopic procedures. paper_content: In some medical procedures, it is difficult or impossible to maintain a line of sight for a guidance system. For such applications, people have begun to use electromagnetic trackers. Before a localizer can be effectively used for an image-guided procedure, a characterization of the localizer is required. The purpose of this work is to perform a volumetric characterization of the fiducial localization error (FLE) in the working volume of the Aurora magnetic tracker by sampling the magnetic field using a tomographic grid. Since the Aurora magnetic tracker will be used for image-guided transorbital procedures we chose a working volume that was close to the average size of the human head. A Plexiglass grid phantom was constructed and used for the characterization of the Aurora magnetic tracker. A volumetric map of the magnetic space was performed by moving the flat Plexiglass phantom up in increments of 38.4 mm from 9.6 mm to 201.6 mm. The relative spatial and the random FLE were then calculated. Since the target of our endoscopic guidance is the orbital space behind the optic nerve, the maximum distance between the field generator and the sensor was calculated depending on the placement of the field generator from the skull. For the different field generator placements we found the average random FLE to be less than 0.06 mm for the 6D probe and 0.2 mm for the 5D probe. We also observed an average relative spatial FLE of less than 0.7 mm for the 6D probe and 1.3 mm for the 5D probe. We observed that the error increased as the distance between the field generator and the sensor increased. We also observed a minimum error occurring between 48 mm and 86 mm from the base of the tracker. --- paper_title: Diamagnetically-stabilized levitation control of an intraluminal magnetic capsule paper_content: Controlled navigation promotes full utilization of capsule endoscopy for reliable real-time diagnosis in the gastrointestinal (GI) tract, but intermittent natural peristalsis can disturb the navigational control, destabilize the capsule and take it out of levitation. A real-size magnetic navigation system that can handle peristaltic forces of up to 1.5 N was designed utilizing the computer-aided design (CAD) system Maxwell 3D (Ansoft, Pittsburg, PA), and was verified using a small-size physical experimental setup. The proposed system contains a pair of 50-cm in diameter, 10,000-turns copper electromagnets with a 10-cm by 10-cm ferrous core driven by currents of up to 300 Amperes and can successfully maintain position control over the levitating capsule during peristalsis. The addition of Bismuth diamagnetic casing for stabilizing the levitating capsule was also studied. --- paper_title: Capsule Endoscopy: From Current Achievements to Open Challenges paper_content: Wireless capsule endoscopy (WCE) can be considered an example of disruptive technology since it represents an appealing alternative to traditional diagnostic techniques. This technology enables inspection of the digestive system without discomfort or need for sedation, thus preventing the risks of conventional endoscopy, and has the potential of encouraging patients to undergo gastrointestinal (GI) tract examinations. 
However, currently available clinical products are passive devices whose locomotion is driven by natural peristalsis, with the drawback of failing to capture the images of important GI tract regions, since the doctor is unable to control the capsule's motion and orientation. To address these limitations, many research groups are working to develop active locomotion devices that allow capsule endoscopy to be performed in a totally controlled manner. This would enable the doctor to steer the capsule towards interesting pathological areas and to accomplish medical tasks. This review presents a research update on WCE and describes the state of the art of the basic modules of current swallowable devices, together with a perspective on WCE potential for screening, diagnostic, and therapeutic endoscopic procedures. --- paper_title: Remote magnetic manipulation of a wireless capsule endoscope in the esophagus and stomach of humans (with videos). paper_content: BACKGROUND: Remote manipulation of wireless capsule endoscopes might improve diagnostic accuracy and facilitate therapy. OBJECTIVE: To test a new capsule-manipulation system. SETTING: University hospital. DESIGN AND INTERVENTIONS: A first-in-human study tested a new magnetic maneuverable wireless capsule in a volunteer. A wireless capsule endoscope was modified to include neodymium-iron-boron magnets. The capsule's magnetic switch was replaced with a thermal one and turned on by placing it in hot water. One imager was removed from the PillCam colon-based capsule, and the available space was used to house the magnets. A handheld external magnet was used to manipulate this capsule in the esophagus and stomach. The capsule was initiated by placing it in a mug of hot water. The capsule was swallowed and observed in the esophagus and stomach by using a gastroscope. Capsule images were viewed on a real-time viewer. MAIN OUTCOME MEASUREMENTS: The capsule was manipulated in the esophagus for 10 minutes. It was easy to make the capsule turn somersaults and to angulate at the cardioesophageal junction. In the stomach, it was easy to move the capsule back from the pylorus to the cardioesophageal junction and hold/spin the capsule at any position in the stomach. The capsule in the esophagus and stomach did not cause discomfort. LIMITATIONS: Magnetic force varies with the fourth power of distance. CONCLUSIONS: This study suggests that remote manipulation of a capsule in the esophagus and stomach of a human is feasible and might enhance diagnostic endoscopy as well as enable therapeutic wireless capsule endoscopy. --- paper_title: Novel MIMO 4-DOF position control for Capsule Endoscope paper_content: In this paper, a novel actuation system for Wireless Capsule Endoscopes (WCE) based on magnetic levitation is proposed. This study focuses on the design of a multi-input, multi-output (MIMO) controller to maintain a desired position and orientation of the capsule relative to the movable electromagnet frame so that it can navigate the intestine by moving this frame and/or the patient. Tracking algorithms for the linear controller based on pole placement, entire eigenstructure assignment (EEA), and linear quadratic regulator (LQR) techniques are designed and simulated using Matlab/Simulink. Simulation results suggest that the LQR controller can be used for capsule actuation.
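The MIMO position-control abstract above mentions pole-placement, eigenstructure-assignment, and LQR designs for keeping a levitated capsule at a desired pose. As a hedged illustration only, the snippet below computes a continuous-time LQR gain for a crude single-axis double-integrator stand-in for the capsule dynamics using SciPy's Riccati solver; the mass, weighting matrices, and the model itself are assumptions and not the paper's plant.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: minimise the integral of x'Qx + u'Ru for xdot = Ax + Bu."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)      # K = R^-1 B' P

# Crude single-axis model: capsule of mass m treated as a double integrator,
# state x = [position, velocity], input u = net magnetic force (N).
m = 0.005                                   # 5 g capsule (illustrative value)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([1e4, 1.0])                     # penalise position error strongly
R = np.array([[1.0]])

K = lqr_gain(A, B, Q, R)
print("state-feedback gain K =", K)

# Closed-loop check: eigenvalues of (A - B K) should have negative real parts.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```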
--- paper_title: Robotic magnetic steering and locomotion of capsule endoscope for diagnostic and surgical endoluminal procedures paper_content: This paper describes a novel approach to capsular endoscopy that takes advantage of active magnetic locomotion in the gastrointestinal tract guided by an anthropomorphic robotic arm. Simulations were performed to select the design parameters allowing an effective and reliable magnetic link between the robot end-effector (endowed with a permanent magnet) and the capsular device (endowed with small permanent magnets). In order to actively monitor the robotic endoluminal system and to efficiently perform diagnostic and surgical medical procedures, a feedback control based on inertial sensing was also implemented. The proposed platform demonstrated to be a reliable solution to move and steer a capsular device in a slightly insufflated gastrointestinal lumen. --- paper_title: A Cubic 3-Axis Magnetic Sensor Array for Wirelessly Tracking Magnet Position and Orientation paper_content: In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm. --- paper_title: Perspective of active capsule endoscope: actuation and localisation paper_content: The self-contained wireless capsule endoscope provides non-invasive, painless and effective diagnosis of the diseases in the small intestine. One anticipation is to embed the capsule with therapeutic tools and enable not only diagnosis but also treatment of the GI diseases. Prior to the accomplishment of this objective, controlled movement of the capsule should be achieved first. In this paper, we focused on the actuation and localisation issues of the active capsule endoscope. After a survey on the related work, the challenges were presented and the possible solutions were discussed. --- paper_title: A novel method of three-dimensional localization based on a neural network algorithm paper_content: In order to track the three-dimensional position of a non-invasive detecting microcapsule in the alimentary tract, a novel localization method has been developed. Three coils were respectively energized by square wave signals to generate an electromagnetic field. A 3-axis magnetic sensor was sealed in the microcapsule to measure the electromagnetic field strength. Based on the principle of magnetic dipole a corresponding localization model was established. 
By resolving the localization equations by means of the neural network algorithm, the spatial position of the microcapsule can be obtained. The experiment shows that the localization principle is correct and has high precision. Compared with other methods, this novel method needs no special equipment and after integration the system can be made into a portable type. --- paper_title: Sensor Arrangement Optimization of Magnetic Localization and Orientation system paper_content: A magnetic localization and orientation system is used to track the movement of the capsule endoscope during the gastrointestinal examination process. The system is made of a magnetic sensors array collecting the intensity of the magnetic field from a magnet in the capsule. In this paper, we try to optimize the sensor arrangement in the 3D space around the human body, to improve the tracking precision. Different sensor arrangement schemes are evaluated, and the tracking accuracy can be significantly increased with the appropriate arrangement. --- paper_title: Magnetic markers as a noninvasive tool to monitor gastrointestinal transit paper_content: A novel method to monitor gastrointestinal transit of solid oral dosage forms or nutrients is presented, providing a simultaneous recording of gastrointestinal motility of the traversed section. Based on the measurement of the magnetic field of an ingested magnetized marker, its location is found by fitting a magnetic dipole field to the measured data. > --- paper_title: An electromagnetic localization method for medical micro-devices based on adaptive particle swarm optimization with neighborhood search paper_content: Abstract In order to non-invasively track a medical micro-device in gastrointestinal tract, an alternating electromagnetic tracking method was presented and a prototype was developed. In the tracking method, several energizing coils were excited by time-sharing sinusoidal signal to generate varying magnetic fields by one coil and then another coil. A wireless magnetic sensor measured the magnetic field strength at the location of the micro-device. The root-mean-square value of the magnetic field strength is a high-order nonlinear system of equations with respect to the position and orientation of the micro-device. Based on the adaptive particle swarm optimization (PSO) with neighborhood search, the position and orientation of the micro-device could be obtained. The experimental results show that the tracking method is valid and the modified algorithm succeeds in dealing with the nonlinear system of equations in localization. Comparing to the standard PSO algorithm, it does not require a good initial guess to guarantee convergence. Furthermore, it has high precision and fast convergence. --- paper_title: High-resolution monitoring of the gastrointestinal transit of a magnetically marked capsule. paper_content: The purpose of this study was to demonstrate that it is possible to continuously monitor the gastrointestinal transit of magnetically marked, solid, oral dosage forms with multichannel biomagnetic measuring equipment and by magnetic source imaging (MSI) methods. For the investigations presented, a sucrose pellet was coated with powdered magnetite (Fe3O4) in poly(methyl methacrylate). Then, the pellet was enclosed in a capsule prepared from silicone rubber and magnetized to obtain a net magnetic dipole moment. 
After ingestion of the capsule, its magnetic field distribution over the abdomen was recorded for several time intervals with a 37-channel superconducting quantum interference device (SQUID) magnetometer. At each time point, the position of the capsule within the gastrointestinal tract was calculated from the measured field distribution, assuming a magnetic dipole model. The data presented here demonstrate that with this noninvasive method of magnetic marker monitoring it is possible to investigate the gastrointestinal transit of a solid oral dosage form with a temporal resolution in the order of milliseconds and a spatial resolution within a range of millimeters. --- paper_title: A magnetic tracking system based on highly sensitive integrated hall sensors paper_content: A tracking system with five degrees of freedom based on a 2D-array of 16 Hall sensors and a permanent magnet is presented in this paper. The sensitivity of the Hall sensors is increased by integrated micro- and external macro-flux-concentrators. Detection distance larger than 20cm (during one hour without calibration) is achieved using a magnet of 0.2cm3. This corresponds to a resolution of the sensors of 0.05µTrms. The position and orientation of the marker is displayed in real time at least 20 times per second. The sensing system is small enough to be hand-held and can be used in a normal environment. This presented tracking system has been successfully applied to follow a small swallowed magnet through the entire human digestive tube. This approach is extremely promising as a new non-invasive diagnostic technique in gastro-enterology. --- paper_title: A new calibration method for magnetic sensor array for tracking capsule endoscope paper_content: To track the movement of wireless capsule endoscope in the human body, we design a magnetic localization and orientation system. In this system, capsule contains a permanent magnet as the movable object. A wearable magnetic sensor array is arranged out of the human body to capture the magnetic signal. This sensor array is composed of magnetic sensors, Honeywell product HMC1043. The variations of magnet field intensity and direction are related to the capsule position and orientation. Therefore, the 3D localization information and 2D orientation parameters of capsule can be computed based on the captured magnetic signals and by applying an appropriate algorithm. In order to initialize the system and improve the tracking accuracy, we propose a calibration technique based on high-accurate localization equipment, FASTRAK. The calibration method includes two steps. Firstly, we acquire the accurate reference data from FASTRAK tracking equipment, and transform them into the position and orientation parameters of the magnet. Secondly, we calculate three important parameters for the sensor calibration: the sensitivity, the center position, and the orientation. Based on the calibration, we can adjust the magnetic localization and orientation system quickly and accurately. The experimental results prove that the calibration method used in our system can improve the system with satisfactory tracking accuracy. --- paper_title: Magnetism and Magnetic Materials paper_content: 1. Introduction 2. Magnetostatics 3. Magnetism of electrons 4. Magnetism of localized electrons on the atom 5. Ferromagnetism and exchange 6. Antiferromagnetism and other magnetic order 7. Micromagnetism, domains and hysteresis 8. Nanoscale magnetism 9. Magnetic resonance 10. Experimental methods 11. 
Magnetic materials 12. Applications of soft magnets 13. Applications of hard magnets 14. Spin electronics and magnetic recording 15. Special topics Appendixes Index. --- paper_title: An hybrid localization system based on optics and magnetics paper_content: Magnetic tracking systems have been increasing in popularity due to their lower expense and small size of their tracked objects, but they are sensitive to ferromagnetic objects within their work volumes. Optical tracking systems can overcome this disadvantage; however, occlusion problems occur in optical systems. Hence, we present a novel hybrid localization system combining an optical one and a magnetic one. The hybrid localization algorithm will be introduced in this paper. Experiments show that the localization accuracy of the optical and magnetic hybrid localizer can be enhanced to some extent. --- paper_title: Efficient magnetic localization and orientation technique for capsule endoscopy paper_content: To build a new wireless robotic capsule endoscope with external guidance for controllable and interactive GI tract examination, a sensing system is needed for tracking 3D location and 2D orientation of the capsule endoscope movement. An appropriate sensing method is to enclose a small permanent magnet in the capsule. The intensities of the magnetic field produced by the magnet in different spatial points can be measured by the magnetic sensors outside the patient's body. With the sensing data of the magnetic sensor array, the 3D location and 2D orientation of the capsule can be calculated. Higher calculation accuracy can be obtained if more sensors and an optimal algorithm are applied. In this paper, different nonlinear optimization algorithms were evaluated to solve the magnet's 5D parameters, e.g. Powell's, Downhill Simplex, DIRECT, Multilevel Coordinate Search, and the Levenberg-Marquardt method. We have found that the Levenberg-Marquardt method provides satisfactory calculation accuracy and faster speed. Simulations were done for investigating the denoising ability of this algorithm based on different sensor arrays. The real experiment also shows that the results are satisfactory, with high accuracy (average localization error is 5.6 mm). --- paper_title: Tracking system with five degrees of freedom using a 2D-array of Hall sensors and a permanent magnet paper_content: Abstract Based on a 2D-array of 16 cylindrical Hall sensors and a permanent magnet, a tracking system with five degrees of freedom is analysed in this paper. The system accuracy is studied, including offset drifts, sensitivity mismatches and the number of sensors. A detection distance as large as 14 cm (during 1 h without calibration) is achieved using a magnet of 0.2 cm3. The position and orientation of the marker is displayed in real time with a sampling frequency up to 50 Hz. The sensing system is small enough to be hand-held and can be used in a normal environment. --- paper_title: 3D magnetic tracking of a single subminiature coil with a large 2D-array of uniaxial transmitters paper_content: A novel system and method for magnetic tracking of a single subminiature coil is described. The novelty of the method consists in employing a large, 8 × 8 array of coplanar transmitting coils. This allows us to always keep the receiving coil not far from the wide, flat transmitting array, to increase the signal-to-noise ratio, and to decrease the retransmitted interference.
The whole transmitting array, 64 coils, is sequentially activated only at the initiation stage to compute the initial position of the receiving coil. The redundancy in the transmitters number provides fast and unambiguous convergence of the optimization algorithm. At the following tracking stages, a small (8 coils) transmitting subarray is activated. The relatively small subarray size allows us to keep a high update rate and resolution of tracking. For a 50-Hz update rate, the tracking resolution is not worse than 0.25 mm, 0.2° rms at a 200-mm height above the transmitting array's center. This resolution corresponds to a ~1-mm, 0.6° tracking accuracy. The novelty of the method consists as well in optimizing the transmitting coils' geometry to substantially (down to 0.5 mm) reduce the systematic error caused by the inaccuracy of the dipole field approximation. --- paper_title: Development of a small wireless position sensor for medical capsule devices paper_content: Medical capsule devices such as video capsule endoscopes are finding increasing use in clinical applications. At present, technologies capable of measuring capsule position in the digestive tract have not yet been established. The present study aims to develop a small wireless position sensor capable of measuring capsule position based on the phenomenon of mutual induction. Currents into primary coils are adjusted to maintain electromotive force induced in secondary coils at a constant level. Electromotive forces induced in the secondary coils are modulated to FM signals using an astable multivibrator, and the signals are passed directly through living tissue at low current and then demodulated by detectors on the surface of the body. A prototype wireless sensor was developed and evaluated in vitro. The sensor was capable of accurately measuring capsule position up to 500 mm from the primary coils with an accuracy of 5 mm. Miniaturization of the sensor is necessary for commercialization. --- paper_title: Capsule Endoscopy: From Current Achievements to Open Challenges paper_content: Wireless capsule endoscopy (WCE) can be considered an example of disruptive technology since it represents an appealing alternative to traditional diagnostic techniques. This technology enables inspection of the digestive system without discomfort or need for sedation, thus preventing the risks of conventional endoscopy, and has the potential of encouraging patients to undergo gastrointestinal (GI) tract examinations. However, currently available clinical products are passive devices whose locomotion is driven by natural peristalsis, with the drawback of failing to capture the images of important GI tract regions, since the doctor is unable to control the capsule's motion and orientation. To address these limitations, many research groups are working to develop active locomotion devices that allow capsule endoscopy to be performed in a totally controlled manner. This would enable the doctor to steer the capsule towards interesting pathological areas and to accomplish medical tasks. This review presents a research update on WCE and describes the state of the art of the basic modules of current swallowable devices, together with a perspective on WCE potential for screening, diagnostic, and therapeutic endoscopic procedures.
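Most of the magnet-in-capsule systems summarized above share the same computational core: the permanent magnet is approximated as a point dipole, the field this dipole would produce is predicted at every external sensor, and the capsule pose is recovered by nonlinear least squares (Levenberg-Marquardt is the solver singled out in the efficient magnetic localization paper). The Python sketch below illustrates only that generic formulation, with a hypothetical 4 x 4 planar sensor array and simulated, noise-free measurements; it is not a reproduction of any of the cited implementations.

import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(pos, moment, sensors):
    # Point-dipole model: B(r) = mu0/(4*pi) * (3*(r_hat . m)*r_hat - m) / |r|^3
    r = sensors - pos                              # (N, 3) vectors from dipole to sensors
    d = np.linalg.norm(r, axis=1, keepdims=True)   # (N, 1) distances
    r_hat = r / d
    return MU0 / (4 * np.pi) * (3 * (r_hat @ moment)[:, None] * r_hat - moment) / d**3

def residuals(params, sensors, measured):
    # params = [x, y, z, mx, my, mz]: position plus full dipole moment vector
    return (dipole_field(params[:3], params[3:], sensors) - measured).ravel()

# Hypothetical planar 4 x 4 array of 3-axis sensors (coordinates in meters).
gx, gy = np.meshgrid(np.linspace(-0.12, 0.12, 4), np.linspace(-0.12, 0.12, 4))
sensors = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

true_pos = np.array([0.03, -0.02, 0.15])             # assumed capsule position
true_mom = np.array([0.0, 0.1, 0.48])                # assumed dipole moment (A*m^2)
measured = dipole_field(true_pos, true_mom, sensors)  # simulated sensor readings

fit = least_squares(residuals, x0=[0.0, 0.0, 0.10, 0.0, 0.0, 0.3],
                    args=(sensors, measured), method='lm')
est_pos, est_mom = fit.x[:3], fit.x[3:]
print(est_pos, est_mom / np.linalg.norm(est_mom))     # position and unit orientation

Each 3-axis sensor contributes three equations, so even this small 16-sensor array over-determines the six unknowns; in practice the calibration, sensor-placement, and noise issues discussed in the surrounding papers dominate the achievable accuracy.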
--- paper_title: A Real-Time Tracking System for an Endoscopic Capsule using Multiple Magnetic Sensors paper_content: A tracking system for visualizing the location of a magnetically marked diagnostic capsule in real time is proposed. The intended application is the gastrointestinal (GI) tract, which is regarded by gastroenterologists as a black box. An endoscopic capsule capable of releasing diagnostic biomarker probes to areas of the gastrointestinal tract that are inaccessible via conventional means will enable gastroenterologists to accurately determine the changes in functionality of the GI tract due to diseases. This requires a tracking system that can show the location of the capsule in real time as it travels down the digestive tract. This paper presents the design and implementation of such a tracking system using multiple magnetic sensors. It is free from the disadvantages most radio-frequency tracking systems suffer from. We report on the developments of the magnetic sensing hardware, the tracking algorithm based on empirically developed models, and experimental results. The results reveal the suitability of the proposed system for in vivo applications. --- paper_title: Wearable magnetic locating and tracking system for MEMS medical capsule paper_content: Abstract This work focuses on a wearable magnetic tracking technology for the MEMS medical capsule. As a booming non-invasive medical diagnostic technology, the MEMS medical capsule provides effective solutions in deep intestines as wireless endoscopes, wireless parameter detectors or drug deliverers. To provide a precise, easy-to-operate and low-cost tracking solution for the capsule, we develop a wearable magnetic tracking system based on the Hall effect and a magnetic dipole model. The system basically consists of the wearable tracking vest and operating software. To accomplish the real-time data collection and prevent the interference from the earth's magnetic field, some effective multi-channel data collecting and processing solutions are implemented. Several trial tests have been performed, with relatively accurate results, to verify the precision of the system. The volunteer experiments showed that the system can be adjusted for suitable wearing. The effective reduction of the interference from the earth's magnetic field allows the volunteer to make basic movements while wearing the system. Moreover, the results compared with the X-ray image of the small intestine demonstrate that this system can successfully trace the motion of the capsule inside the twisted intestine. --- paper_title: Position and Orientation Accuracy Analysis for Wireless Endoscope Magnetic Field Based Localization System Design paper_content: This paper focuses on wireless capsule endoscope magnetic field based localization by using a linear algorithm, an unconstrained optimization method and a constrained optimization method. Eight sensor populations are employed for performance evaluation. For each of five sensor populations, four different sensor configurations are investigated, which represent potential sensor placements in practice. Accuracy is evaluated over a range of noise standard deviations, and the position area is set on a solid cylinder which well represents the realistic scenario of the human body. It is observed that the optimization method greatly outperforms the linear algorithm, which should not be used alone in general. The constrained optimization approach outperforms the unconstrained optimization method in the presence of large noise.
Simulation results show that best position accuracy is achieved when the sensors are uniformly deployed on a 2D plane with some sensors on the boundary of the position area. For the sensor populations considered, when increasing sensor population by one, the accuracy improves by about 0.45 divided by the sensor population. The results provide useful information for the design of wireless endoscope localization systems. --- paper_title: An improved magnetic localization and orientation algorithm for wireless capsule endoscope paper_content: In this paper, we propose a novel localization algorithm for tracking a magnet inside the capsule endoscope by 3-axis magnetic sensors array. In the algorithm, we first use an improved linear algorithm to obtain the localization parameters by finding the eigenvector corresponding to the minimum eigenvalue of the objective matrix. These parameters are used as the initial guess of the localization parameters in the nonlinear localization algorithm, and the nonlinear algorithm searches for more appropriate parameters that can minimize the objective error function. As the results, we obtain more robust and accurate localization results than those by using linear algorithm only. Nevertheless, the time efficiency of the nonlinear algorithm is enhanced. The real experimental data show that the average localization accuracy is about 2mm and the average orientation accuracy is about 1.6° when the magnet moves within the sensing area of 240mm ×240mm square. --- paper_title: Improved Modeling of Electromagnetic Localization for Implantable Wireless Capsules paper_content: Abstract An electromagnetic localization method for implantable wireless capsules has been developed that employs a three-axial magnetic sensor embedded in the capsules and three energized coils attached on the abdomen. In order to further improve the localization accuracy, a novel localization model has been derived based on the Biot-Savart Law. For simplicity of the calculation without increasing the position error, the method of truncated series expansion has been used in modeling. The experiment showed that the improved model had higher precision than the original dipole model. Using the improved model, the localization error can be greatly reduced. The improved model is an elementary math function and suitable for resolving some inverse magnetic problems in engineering. --- paper_title: A further step beyond wireless capsule endoscopy paper_content: Purpose – Aims to report on a new trend of research and development in the wireless capsule endoscope.Design/methodology/approach – Presents a conceptual design of a wireless capsule endoscope having new features like navigation control, self‐propulsion, higher rate image transmission, acquisition of samples, application of medications and so forth.Findings – The basic principle has been verified by experiments. Seems promising.Research limitations/implications – Yet to need a lot of effort for commercialization.Practical implications – If successful, it will provide another way of least invasive medical treatment that must reduce pain of patients drastically.Originality/value – Taking advantage of the state‐of‐the‐art micro‐technology it suggests a further step beyond the current wireless capsule endoscope. 
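The improved electromagnetic localization model above replaces the point-dipole approximation with a field expression derived from the Biot-Savart law for the energizing coils. As an illustration of what such a forward model looks like, the sketch below numerically integrates the Biot-Savart law around a circular loop discretized into short segments and checks the result against the textbook on-axis formula; the coil radius, current, and evaluation point are arbitrary illustrative values, not parameters from the cited work.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_field(point, radius=0.05, current=1.0, segments=720):
    # Field (T) of a circular current loop centered at the origin in the z = 0 plane,
    # evaluated at 'point' by direct Biot-Savart summation over short segments.
    theta = np.linspace(0.0, 2 * np.pi, segments, endpoint=False)
    mid = np.column_stack([radius * np.cos(theta), radius * np.sin(theta), np.zeros(segments)])
    dl = np.column_stack([-radius * np.sin(theta), radius * np.cos(theta), np.zeros(segments)]) * (2 * np.pi / segments)
    r = point - mid
    d = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, r) / d**3
    return dB.sum(axis=0)

# On the coil axis the summation should match B_z = mu0*I*R^2 / (2*(R^2 + z^2)^(3/2)).
z = 0.10
print(loop_field(np.array([0.0, 0.0, z]))[2])
print(MU0 * 1.0 * 0.05**2 / (2 * (0.05**2 + z**2) ** 1.5))

In a tracking system of the kind described above, this coil model (or its truncated series expansion) simply replaces the dipole expression inside the same least-squares pose estimation loop.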
--- paper_title: A New Calibration Procedure for Magnetic Tracking Systems paper_content: In this study, we suggest a new approach for the calibration of magnetic tracking systems that allows us to calibrate the entire system in a single setting. The suggested approach is based on solving a system of equations involving all the system parameters. These parameters include: 1) the magnetic positions of the transmitting coils; 2) their magnetic moments; 3) the magnetic position of the sensor; 4) its sensitivity; and 5) the gain of the sensor output amplifier. We choose a set of parameters that define the origin, orientation, and scale of the reference coordinate system and consider them as constants in the above system of equations. Another set of constants is the sensor output measured at a number of arbitrary positions. The unknowns in the above equations are all the other system parameters. To define the origin and orientation of the reference coordinate system, we first relate it to a physical object, e.g., to the transmitter housing. We then use special supports to align the sensor with the edges of the transmitter housing and measure the sensor output at a number of aligned positions. To define the scale of the reference coordinate system, we measure the distance between two arbitrary sensor locations with a precise instrument (a caliper). This is the only parameter that should be calibrated with the help of an external measurement tool. To illustrate the efficiency of the new approach, we applied the calibration procedure to a magnetic tracking system employing 64 transmitting coils. We have measured the systematic tracking errors before and after applying the calibration. The systematic tracking errors were reduced by an order of magnitude due to applying the new calibration procedure. --- paper_title: Estimating Body Segment Orientation by Applying Inertial and Magnetic Sensing Near Ferromagnetic Materials paper_content: Inertial and magnetic sensors are very suitable for ambulatory monitoring of human posture and movements. However, ferromagnetic materials near the sensor disturb the local magnetic field and, therefore, the orientation estimation. A Kalman-based fusion algorithm was used to obtain dynamic orientations and to minimize the effect of magnetic disturbances. This paper compares the orientation output of the sensor fusion using three-dimensional inertial and magnetic sensors against a laboratory bound opto-kinetic system (Vicon) in a simulated work environment. With the tested methods, the difference between the optical reference system and the output of the algorithm was 2.6 degrees root mean square (rms) when no metal was near the sensor module. Near a large metal object instant errors up to 50 degrees were measured when no compensation was applied. Using a magnetic disturbance model, the error reduced significantly to 3.6 degrees rms. --- paper_title: A Cubic 3-Axis Magnetic Sensor Array for Wirelessly Tracking Magnet Position and Orientation paper_content: In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. 
When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm. --- paper_title: Perspective of active capsule endoscope: actuation and localisation paper_content: The self-contained wireless capsule endoscope provides non-invasive, painless and effective diagnosis of the diseases in the small intestine. One anticipation is to embed the capsule with therapeutic tools and enable not only diagnosis but also treatment of the GI diseases. Prior to the accomplishment of this objective, controlled movement of the capsule should be achieved first. In this paper, we focused on the actuation and localisation issues of the active capsule endoscope. After a survey on the related work, the challenges were presented and the possible solutions were discussed. --- paper_title: Numerical Study on the Improvement of Detection Accuracy for a Wireless Motion Capture System paper_content: A detection technique having an accuracy of better than 1 mm is required for body motion analysis in the field of medical treatment. A wireless magnetic motion capture system is one such effective detection technique. We propose a candidate system using an LC resonant magnetic marker (LC marker). Previous studies have showed that the system is capable of repeatable position detection accuracy of better than 1 mm if the system has an adequate signal-to-noise (S/N ratio). However, there are some cases in which the detection results include unignorable errors because some approximations, e.g. a magnetic dipole assumption of the LC marker, are applied to solve the inverse problem to determine the position and orientation of the LC marker. Therefore, a numerical analysis is employed to realize a motion capture system having a high detection accuracy. To elucidate the problem of detection error, the influence of variations in the sizes of the LC marker and the pick-up coil are considered in the numerical simulation. After studying the analysis, the main cause of detection error is determined to be the size of the pick-up coil rather than the size of the LC marker. It was also is found that a pick-up coil measuring 10 mm in diameter with a wound coil width of 1 mm achieves a detection accuracy of better than 0.1 mm. --- paper_title: Robotic magnetic steering and locomotion of capsule endoscope for diagnostic and surgical endoluminal procedures paper_content: This paper describes a novel approach to capsular endoscopy that takes advantage of active magnetic locomotion in the gastrointestinal tract guided by an anthropomorphic robotic arm. Simulations were performed to select the design parameters allowing an effective and reliable magnetic link between the robot end-effector (endowed with a permanent magnet) and the capsular device (endowed with small permanent magnets). 
In order to actively monitor the robotic endoluminal system and to efficiently perform diagnostic and surgical medical procedures, a feedback control based on inertial sensing was also implemented. The proposed platform demonstrated to be a reliable solution to move and steer a capsular device in a slightly insufflated gastrointestinal lumen. --- paper_title: Wireless Magnetic Position-Sensing System Using Optimized Pickup Coils for Higher Accuracy paper_content: With the aim of improving the detection accuracy of a wireless magnetic position-sensing system using an LC resonant magnetic marker, a pickup coil with an optimal size (10 mm in diameter × mm thick), as calculated by a previous simulation study, was used and tested in this paper. Our study confirmed that positional errors were reduced to a submillimeter order in the area within y=120 mm from the pickup coil array. On the contrary, in the area outside y=130 mm from the pickup coil array, the errors increased by about 0.5-2 mm compared to the results for the previous pickup coil size (25 mm in diameter × 2 mm thick). Regardless of the size of the pickup coil, however, compensation can be made for these positional deviations, including the influence of the mutual inductance between the LC marker and the exciting coil. After application of the compensation process, the detection results were corrected approximately to the actual positions of the LC marker. --- paper_title: Position and orientation detection of capsule endoscopes in spiral motion paper_content: In this paper, a position and orientation detection method for the capsule endoscopes devised to move through the human digestive organs in spiral motion, is introduced. The capsule is equipped with internal magnets and flexible threads on their outer shell. It is forced to rotate by an external rotating magnetic field that produces a spiral motion. As the external magnetic field is generated by rotating a permanent magnet, the 3-axes Cartesian coordinate position and 3-axes orientation of the capsule endoscopes can be measured by using only 3 hall-effect sensors orthogonally installed inside the capsule. However, in this study, an additional hall-effect sensor is employed along the rotating axis at a symmetrical position inside the capsule body to enhance measurement accuracy. In this way, the largest position detection error appearing along the rotating axis of the permanent magnet could be reduced to less than 15mm, when the relative position of the capsule endoscope to the permanent magnet is changed from 0mm to 50mm in the X-direction, from −50mm to +50mm in the Y-direction and from 200mm to 300mm in the Z-direction. The maximum error of the orientation detection appearing in the pitching direction ranged between −4° and +15°. --- paper_title: Swallowable medical devices for diagnosis and surgery: The state of the art paper_content: Abstract The first wireless camera pills created a revolutionary new perspective for engineers and physicians, demonstrating for the first time the feasibility of achieving medical objectives deep within the human body from a swallowable, wireless platform. The approximately 10 years since the first camera pill has been a period of great innovation in swallowable medical devices. Many modules and integrated systems have been devised to enable and enhance the diagnostic and even robotic capabilities of capsules working within the gastrointestinal (GI) tract. 
This article begins by reviewing the motivation and challenges of creating devices to work in the narrow, winding, and often inhospitable GI environment. Then the basic modules of modern swallowable wireless capsular devices are described, and the state of the art in each is discussed. This article is concluded with a perspective on the future potential of swallowable medical devices to enable advanced diagnostics beyond the capability of human visual perception, and even to directly deliver surgical tools and therapy non-invasively to interventional sites deep within the GI tract. --- paper_title: Adaptive Linearized Methods for Tracking a Moving Telemetry Capsule paper_content: In this paper, we discuss system and method of determining the real-time location of an omni-directional diagnostic radio frequency (RF) system while the object (transmitter) is moving freely inside an inaccessible organ. A specific application to the human gastrointestinal (GI) organ is presented, showing the importance of the method in accessing a specific site for drug administration or for extracting fluid or tissue samples for biopsy and similar medical investigations. For practical purposes, omnidirectional antenna on the transmitter at 433 MHz, normalized transmitter power 1 W was modeled for simplicity, and Es/No = 20 dB (corresponding to the linear region of the target transceiver). A brief discussion of how the original analogue signals, after conversion to voltage, was adapted for position tracking. In the tracking algorithm, we employed a path loss scenario based on the popular log-normal model to simulate the effects of organs on signal quality between transmitter and receiver at various distances. --- paper_title: A Review and Adaptation of Methods of Object Tracking to Telemetry Capsules paper_content: Abstract This review considers techniques employing radio frequency (RF) as well as ultrasound signals for tracking. Medical capsules have been employed since the SOs to measure various physiological parameters in the human body. Examples are temperature, pH, or pressure inside the gastrointestinal (GI) tract. The development and subsequent incorporation of new technology into reasonably priced, commercially available devices have made ultrasound and RF devices readily accessible for medical diagnosis. Some applications for telemetry capsules are drug delivery, and collection of tissue/fluid samples. Samples are taken from the GI to understand or treat diseases where diagnosis can only be made by taking a biopsy from the intestinal walls. Such biopsies have traditionally been performed using customized endoscopes. In order for a telemetry capsule to be effective in the above named tasks, accurate knowledge of the location of the capsule within the body during tests is necessary. As such, methods for calcu... --- paper_title: Ground-Based Wireless Positioning paper_content: Ground Based Wireless Positioning provides an in-depth treatment of non-GPS based wireless positioning techniques, with a balance between theory and engineering practice. The book presents the architecture, design and testing of a variety of wireless positioning systems based on the time-of-arrival, signal strength, and angle-of-arrival measurements. These techniques are essential for developing accurate wireless positioning systems which can operate reliably in both indoor and outdoor environments where the Global Positioning System (GPS) proves to be inadequate. 
The book covers a wide range of issues including radio propagation, parameter identification, statistical signal processing, optimization, and localization in large and multi-hop networks. A comprehensive study on the state-of-the-art techniques and methodologies in wireless positioning and tracking is provided, including anchor-based and anchor-free localisation in wireless sensor networks (WSN). The authors address real world issues such as multipath, non-line-of-sight (NLOS) propagation, accuracy limitations and measurement errors. Presenting the latest advances in the field, Ground Based Wireless Positioning is one of the first books to cover non-GPS based technologies for wireless positioning. It serves as an indispensable reference for researchers and engineers specialising in the fields of localization and tracking, and wireless sensor networks. The book provides a comprehensive treatment of methodologies and algorithms for positioning and tracking; includes practical issues and case studies in designing real wireless positioning systems; explains non-line-of-sight (NLOS) radio propagation and NLOS mitigation techniques; and balances solid theory with engineering practice of non-GPS wireless systems. --- paper_title: A novel RF-based propagation model with tissue absorption for location of the GI tract paper_content: In order to accurately estimate (build) the radio signal propagation attenuation model, especially inside the gastrointestinal (GI) tract of the human body, the Radio Frequency (RF) absorption characterization in the human body is investigated. This characterization provides a criterion to design the Received Signal Strength (RSS) based localization system for an object inside the human body. In this paper, the Specific Absorption Rate (SAR), E-field, H-field of the near and far field are investigated at frequencies of 434 MHz, 868 MHz, 1.2 GHz and 2.4 GHz respectively. Then, the numerical electromagnetic analysis with the finite-difference time-domain (FDTD) method is applied to model the in vivo radio propagation channels by using a dipole antenna.
Finally, simulation experiments are carried out in homogeneous and heterogeneous media. The results show that the electromagnetic (EM) propagation is not only distance and orientation dependent, but also tissue absorption dependent in the human body. The proposed model is in agreement with measurements in the simulation experiments. --- paper_title: Performance bounds for RF positioning of endoscopy camera capsules paper_content: In this paper, we evaluate the factors affecting the accuracy achievable in localization of an endoscopic wireless capsule as it passes through the digestive system of the human body. Using a three-dimensional full electromagnetic wave simulation model, we obtain bounds on the capsule-location estimation errors when the capsule is in each of three individual organs: stomach, small intestine and large intestine. The simulations assume two different external sensor array topologies. We compare these performance bounds and draw the conclusion that location-estimation errors are different for different organs and for various topologies of the external sensor arrays. --- paper_title: Capsule endoscopy: the localization system paper_content: Capsule endoscopy (CE) has become a valuable tool in the armamentarium used to investigate the small bowel. It has been shown to be superior to push enteroscopy in patients with occult gastrointestinal bleeding for the detection of distal lesions, and to barium follow-through in patients with suspected small bowel pathology or Crohn's disease [1–3]. One of the obstacles with CE was its inability to precisely determine the location of the pathologies found. The Localization Module is an additional feature incorporated into the RAPID Workstation Software of the Given Imaging system (Yoqneam, Israel) to assist the physician in determining the relative location of pathologies found by the M2A capsule. The localization is based on the strength of the capsule-emitted signal received by the eight sensors (antennas) on the exterior of the abdomen. The output of the localization module is a graphic trajectory of the capsule as it is transported through the GI tract. In addition, the module calculates and presents GI transit times based on the identified and labeled images of the entrance to the stomach, the passage in the pylorus, and the passage through the ileocecal valve.
The localization capability of the system is based on the off-line processing of the level of the radio frequency (RF) signals from the sensor array and as such, does not require any additional signals or equipment. --- paper_title: Design and Implementation of 3D Positioning Algorithms Based on RF Signal Radiation Patterns for In Vivo Micro-robot paper_content: Recent popularity of capsule endoscopes as a medical means to investigate lesions along digestive tract has prompted complaints by doctors about the in sufficient positioning system for the capsules. The possible major reason would be that current positioning system is based on time difference of arrival whose precision is hard to improve for short flight distances. To enable better control of active capsules, we propose to exploit the radiation patterns of radio frequency (RF) signals from the capsules. The symmetric RF signals can be sensed by receivers in similar areas of external receiver arrays. The similarity, position, shape and size of those areas are considered to make back projection calculation and identify accurate 3D positions of the signal source. Our positioning algorithms are scalable to achieve better precision than the density of receivers. --- paper_title: Phase difference based RFID navigation for medical applications paper_content: RFID localization is a promising new field of work that is eagerly awaited for many different types of applications. For use in a medical context, special requirements and limitations must be taken into account, especially regarding accuracy, reliability and operating range. In this paper we present an experimental setup for a medical navigation system based on RFID. For this we applied a machine learning algorithm, namely support vector regression, to phase difference data gathered from multiple RFID receivers. The performance was tested on six datasets of different shape and placement within the volume spanned by the receivers. In addition, two grid based training sets of different size were considered for the regression. Our results show that it is possible to reach an accuracy of tag localization that is sufficient for some medical applications. Although we could not reach an overall accuracy of less than one millimeter in our experiments so far, the deviation was limited to two millimeters in most cases and the general results indicate that application of RFID localization even to highly critical applications, e. g., for brain surgery, will be possible soon. --- paper_title: A novel radio propagation radiation model for location of the capsule in GI tract paper_content: In this paper, we discuss the influence of the antenna orientation radiation pattern in localization algorithm based on Received Signal Strength Indicator (RSSI). We also improve the empirical model of signal propagation by building the path loss function of the human gastro-intestine (GI) tract. The novel model includes information of both the distance and azimuth angle variables. The numerical electromagnetic analysis with the finite-difference time-domain (FDTD) is applied to model the vivo radio propagation channels by using a dipole antenna suitable for the model related to the human body. The proposed propagation model is compared with empirical model, and the simulation results show that the compensated model is more accurate by calculating the azimuth radiation attenuation. It demonstrates that the often overlooked antenna orientation has the dominant effect on the signal strength sensitivity. 
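Several of the RF entries above, from the received-signal-strength localization module of the commercial capsule system to the log-normal path loss modelling work, reduce to the same estimation problem: predict the power each body-surface antenna should receive from a candidate capsule position using a distance-dependent path loss law, then pick the position that best explains the measured values. The Python sketch below uses purely illustrative path loss parameters and antenna coordinates; real tissue is heterogeneous, which is precisely what the FDTD-based propagation models above try to capture.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical log-normal path loss parameters (dBm transmit power, reference loss
# at d0, path loss exponent, reference distance); illustrative values only.
P_TX, PL_D0, N_EXP, D0 = 0.0, 40.0, 4.0, 0.1

def rss_model(pos, antennas):
    d = np.linalg.norm(antennas - pos, axis=1)
    return P_TX - (PL_D0 + 10.0 * N_EXP * np.log10(d / D0))   # predicted RSS in dBm

# Eight hypothetical body-surface antenna positions (meters).
antennas = np.array([[0.15, 0.0, 0.0], [-0.15, 0.0, 0.0],
                     [0.0, 0.15, 0.0], [0.0, -0.15, 0.0],
                     [0.12, 0.12, 0.05], [-0.12, 0.12, 0.05],
                     [0.12, -0.12, 0.05], [-0.12, -0.12, 0.05]])

true_pos = np.array([0.02, -0.03, 0.02])
rng = np.random.default_rng(0)
measured = rss_model(true_pos, antennas) + rng.normal(0.0, 2.0, len(antennas))  # shadow fading

fit = least_squares(lambda p: rss_model(p, antennas) - measured, x0=[0.0, 0.0, 0.0])
print(fit.x)   # estimated capsule position

Because the path loss exponent and shadowing statistics change with the traversed tissue, systems of this kind trade absolute accuracy for simplicity, which is consistent with the trajectory-level output described for the commercial localization module.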
--- paper_title: An empirically-based path loss model for wireless channels in suburban environments paper_content: We present a statistical path loss model derived from 1.9 GHz experimental data collected across the United States in 95 existing macrocells. The model is for suburban areas, and it distinguishes between different terrain categories. Moreover, it applies to distances and base antenna heights not well-covered by existing models. The characterization used is a linear curve fitting the dB path loss to the dB-distance, with a Gaussian random variation about that curve due to shadow fading. The slope of the linear curve (corresponding to the path loss exponent, /spl gamma/) is shown to be a random variate from one macrocell to another, as is the standard deviation, /spl sigma/ of the shadow fading. These two parameters are statistically modeled, with the dependencies on base antenna height and terrain category made explicit. The resulting path loss model applies to base antenna heights from 10 to 80 meters; base-to-terminal distances from 0.1 to 8 km; and three distinct terrain categories. --- paper_title: Accurate localization of RFID tags using phase difference paper_content: Due to their light weight, low power, and practically unlimited identification capacity, radio frequency identification (RFID) tags and associated devices offer distinctive advantages and are widely recognized for their promising potential in context-aware computing; by tagging objects with RFID tags, the environment can be sensed in a cost- and energy-efficient means. However, a prerequisite to fully realizing the potential is accurate localization of RFID tags, which will enable and enhance a wide range of applications. In this paper we show how to exploit the phase difference between two or more receiving antennas to compute accurate localization. Phase difference based localization has better accuracy, robustness and sensitivity when integrated with other measurements compared to the currently popular technique of localization using received signal strength. Using a software-defined radio setup, we show experimental results that support accurate localization of RFID tags and activity recognition based on phase difference. --- paper_title: Development of a Tracking Algorithm for an In-Vivo RF Capsule Prototype paper_content: This paper presents the design of a tracking algorithm for an in-vivo capsule swallowed by patients. Such devices play a very important role in the diagnosis and treatment of internal disorders, for example, in the digestive tract. Tracking the location of the capsule as it travels down the digestive tract is critical to ensure its position near the affected area. Various sensing mechanisms that can be applied for tracking the capsule include the use of radio frequency and magnetic signals. The work reported in this paper mainly focuses on the development of an intelligent tracking algorithm that could potentially be coupled with a variety of sensing technologies. In this paper we use RF-based capsule and sensor prototypes for developing and testing the tracking algorithm. Encouraging practical results have been obtained which demonstrate the validity of the tracking algorithm and the underlying look-up table based methodology. 
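Two of the entries just above suggest a complementary, model-free formulation: precompute what the measurement should look like on a grid of candidate positions (the look-up table of the in-vivo RF capsule tracker) and select the best-matching cell, here using phase differences between receiver pairs in the spirit of the phase-difference RFID work. The sketch below is a two-dimensional toy with an arbitrary wavelength and receiver layout; comparing phases on the unit circle makes the match insensitive to 2*pi wrap-around.

import numpy as np

WAVELENGTH = 0.33   # meters; purely illustrative
RECEIVERS = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])  # 2-D for brevity
PAIRS = [(0, 1), (0, 2), (0, 3)]

def phase_differences(pos):
    d = np.linalg.norm(RECEIVERS - pos, axis=1)
    return np.array([2 * np.pi * (d[i] - d[j]) / WAVELENGTH for i, j in PAIRS])

def build_table(step=0.005):
    xs = np.arange(0.0, 0.3 + step, step)
    grid = np.array([[x, y] for x in xs for y in xs])
    return grid, np.array([phase_differences(p) for p in grid])

def locate(measured, grid, table):
    # Distance between phases measured on the unit circle, so wrapping does not matter.
    err = np.abs(np.exp(1j * table) - np.exp(1j * measured)).sum(axis=1)
    return grid[np.argmin(err)]

grid, table = build_table()
true_pos = np.array([0.12, 0.21])
print(locate(phase_differences(true_pos), grid, table))   # nearest grid cell to true_pos

With only a few receiver pairs and a short wavelength the phase map is ambiguous, which is why the cited systems combine phase with signal strength or learned regressors rather than using it in isolation.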
--- paper_title: Design of 3D Positioning Algorithm Based on RFID Receiver Array for In Vivo Micro-Robot paper_content: The clinical applications of capsule endoscopes have been increasing consistently since the invention of a passive capsule endoscope was made [1]. Though the capsule endoscopes are effective in detecting large lesions along human digestive tract, they cannot meet doctors’ requirements of active control of the capsules to carefully examine small lesions. The poor precision of positioning system is one of the major hurdles blocking robots from approaching the accurate position of suspected areas. In this paper, we propose a novel algorithm to exploit the radiation pattern of a radio frequency (RF) tag inside the capsule, which forms shadows or traces on a set of receiver arrays. According to the shape of the traces and the radiation pattern, the position of the radiation source, i.e. the tag inside the capsule can be calculated with our algorithm. The details of our algorithm are presented with the simulation results in the paper. With the settings under medical constraints, the degree of positioning precision in simulation is improved to less than 1cm horizontally and 2cm vertically. --- paper_title: Data Processing Tasks in Wireless GI Endoscopy: Image-Based Capsule Localization & Navigation and Video Compression paper_content: The paper addresses data processing support that is required in capsule gastrointestinal endoscopy. First, capsule position estimation method using standard MPEG-7 image features (descriptors) is discussed. The proposed approach makes use of vector quantization, principal component analysis and neural networks. Next, new algorithms dedicated for virtual colonoscopy (VC) human body inspection are described. The VC images can be registered with endoscopic ones and help in capsule localization and navigation. Finally, an original, low- complexity, efficient image compression method, based on integer-to-integer 4x4 DCT transform, is presented and experimentally verified. --- paper_title: Localization of Endoscopic Capsule in the GI Tract Based on MPEG-7 Visual Descriptors paper_content: The paper addresses the problem of localization of video endoscopic capsule in the gastrointestinal (GI) tract on the base of appropriate classification of images received from it. In this context usefulness of MPEG-7 image descriptors as classification features has been verified. For classification purpose various state of the art tools were used including Neural Networks and Vector Quantization. The dimension of the problem was also reduced by the Principal Component Analysis. Novelty of the presented approach consists in joint application of mentioned above techniques for recognition of the GI region inspected by the capsule by means of classification of MPEG-7 features to different parts of GI tract. In this research recognition of the upper part organs of the GI tract has been performed. --- paper_title: Capsule endoscope localization based on computer vision technique paper_content: To build a new type of wireless capsule endoscope with interactive gastrointestinal tract examination, a localization and orientation system is needed for tracking 3D location and 3D orientation of the capsule movement. The magnetic localization and orientation method produces only 5 DOF, but misses the information of rotation angle along capsule’s main axis. 
In this paper, we presented a complementary orientation approach for the capsule endoscope, and the 3D rotation can be determined by applying computer vision technique on the captured endoscopic images. The experimental results show that the complementary orientation method has good accuracy and high feasibility. --- paper_title: Magnetically Controllable Gastrointestinal Steering of Video Capsules paper_content: Wireless capsule endoscopy (WCE) allows for comfortable video explorations of the gastrointestinal (GI) tract, with special indication for the small bowel. In the other segments of the GI tract also accessible to probe gastroscopy and colonscopy, WCE still exhibits poorer diagnostic efficacy. Its main drawback is the impossibility of controlling the capsule movement, which is randomly driven by peristalsis and gravity. To solve this problem, magnetic maneuvering has recently become a thrust research area. Here, we report the first demonstration of accurate robotic steering and noninvasive 3-D localization of a magnetically enabled sample of the most common video capsule (PillCam, Given Imaging Ltd, Israel) within each of the main regions of the GI tract (esophagus, stomach, small bowel, and colon) in vivo, in a domestic pig model. Moreover, we demonstrate how this is readily achievable with a robotic magnetic navigation system (Niobe, Stereotaxis, Inc, USA) already used for cardiovascular clinical procedures. The capsule was freely and safely moved with omnidirectional steering accuracy of 1°, and was tracked in real time through fluoroscopic imaging, which also allowed for 3-D localization with an error of 1 mm. The accuracy of steering and localization enabled by the Stereotaxis system and its clinical accessibility world wide may allow for immediate and broad usage in this new application. This anticipates magnetically steerable WCE as a near-term reality. The instrumentation should be used with the next generations of video capsules, intrinsically magnetic and capable of real-time optical-image visualization, which are expected to reach the market soon. --- paper_title: The role of gamma-scintigraphy in oral drug delivery. paper_content: The gastrointestinal tract is usually the preferred site of absorption for most therapeutic agents, as seen from the standpoints of convenience of administration, patient compliance and cost. In recent years there has been a tendency to employ sophisticated systems that enable controlled or timed release of a drug, thereby providing a better dosing pattern and greater convenience to the patient. Although much about the performance of a system can be learned from in vitro release studies using conventional and modified dissolution methods, evaluation in vivo is essential in product development. The non-invasive technique of gamma-scintigraphy has been used to follow the gastrointestinal transit and release characteristics of a variety of pharmaceutical dosage forms. Such studies provide an insight into the fate of the delivery system and its integrity and enable the relationship between in vivo performance and resultant pharmacokinetics to be examined (pharmacoscintigraphy). --- paper_title: Development of a new engineering-based capsule for human drug absorption studies. paper_content: Over recent years, there has been a significant growth in the number of new drugs entering development with challenging pharmaceutical (e.g. solubility) and biopharmaceutical (e.g. permeability) properties. 
As a consequence, there is an increasing number of pharmaceutical companies using human absorption studies to provide a 'route map' for development. These projects are typically undertaken early in clinical development and utilize engineering-based capsules to provide non-invasive drug delivery to the selected sites of the human gut. The Enterion capsule has recently been developed to provide for the delivery of a wide range of different drug formulations, for example, solution, powder and granulate, into any region of the gut, both easily and efficiently. --- paper_title: Ultrasound Emitter Localization in Heterogeneous Media paper_content: A novel algorithm to accurately determine the location of an ultrasound source within heterogeneous media is presented. The method obtains a small spatial error of 748 µm ± 310 µm for 100 different measurements inside a circular area with 140 mm diameter. The new algorithm can be used in targeted drug delivery for cancer therapies as well as to accurately locate any kind of ultrasound source in heterogeneous media, such as ultrasonically marked medical devices or tumors. --- paper_title: A wireless acoustic emitter for passive localization in liquids paper_content: For the localization of minimally invasive medical devices, such as capsule endoscopes in the human body, ultrasound combines good resolution, minimal adverse health effects, high speed, adequate frame rates, and low cost. In the case of capsule endoscopes, small onboard ultrasonic emitters with minimal power requirements have the potential to provide significantly enhanced localization. We demonstrate for the first time acoustic emission in the kHz range using a wireless emitter based on the actuation principle of the wireless resonant magnetic microactuator developed recently in our institute. Our experiments show good agreement with the theoretical model, and simulations show the potential for high resolution localization. --- paper_title: An MRI-Compatible Robotic System With Hybrid Tracking for MRI-Guided Prostate Intervention paper_content: This paper reports the development, evaluation, and first clinical trials of the access to the prostate tissue (APT) II system, a scanner-independent system for magnetic resonance imaging (MRI)-guided transrectal prostate interventions. The system utilizes novel manipulator mechanics employing a steerable needle channel and a novel six degree-of-freedom hybrid tracking method, comprising passive fiducial tracking for initial registration and subsequent incremental motion measurements. Targeting accuracy of the system in prostate phantom experiments and two clinical human-subject procedures is shown to compare favorably with existing systems using passive and active tracking methods. The portable design of the APT II system, using only standard MRI image sequences and minimal custom scanner interfacing, allows the system to be easily used on different MRI scanners. --- paper_title: Real-time position monitoring of invasive devices using magnetic resonance. paper_content: Techniques which can be used to follow the position of invasive devices in real time using magnetic resonance (MR) are described. Tracking of an invasive device is made possible by incorporating one or more small RF coils into the device. These coils detect MR signals from only those spins near the coil. Pulse sequences which employ nonselective RF pulses to excite all nuclear spins within the field-of-view are used.
Readout magnetic field gradient pulses, typically applied along one of the primary axes of the imaging system, are then used to frequency encode the position of the receive coil(s). Data are Fourier transformed and one or more peaks located to determine the position of each receiver coil in the direction of the applied field gradient. Subsequent data collected on orthogonal axes permit the localization of the receiver coil in three dimensions. The process can be repeated rapidly and the position of each coil can be displayed in real time. --- paper_title: Capsule tracking in the GI tract: a novel microcontroller based solution paper_content: Telemetry capsules that will provide facilities for on-site drug delivery and those that are capable of taking tissue/fluid samples within the human gastrointestinal (GI) tract are quite invaluable for diagnosing GI-related diseases where diagnosis can only be made by taking a biopsy from the intestinal walls. Such biopsies have traditionally been performed using custom endoscopes. To be effective in the above named tasks, accurate knowledge of the location of telemetry capsules within the GI tract is a necessity. This paper presents a method of tracking a capsule inside the human GI tract based on the application of a microcontroller to control the generation and transmission of ultrasonic pulses. The capsule, i.e. the object, and the central receiver are capable of both transmit and receive functions, while the others are receive-only stations. This ensures the collection of reliable round-trip time-of-flight (TOF) data. Application of triangulation to the TOF data facilitates the determination of the real-time positions of the object with less than 10% aggregate measurement error. --- paper_title: A hybrid method for 6-DOF tracking of MRI-compatible robotic interventional devices paper_content: This paper reports a novel hybrid method of tracking the position and orientation of robotic medical instruments within the imaging volume of a magnetic resonance imaging (MRI) system. The method utilizes two complementary measurement techniques: passive MRI fiducial markers and MRI compatible joint encoding. This paper reports an experimental evaluation of the tracking accuracy of this system. The accuracy of this system compares favorably to that of a previously reported active tracking system. Moreover, the hybrid system is quickly and easily deployed on different MRI scanner systems. --- paper_title: Microrobots for Minimally Invasive Medicine paper_content: Microrobots have the potential to revolutionize many aspects of medicine. These untethered, wirelessly controlled and powered devices will make existing therapeutic and diagnostic procedures less invasive and will enable new procedures never before possible. The aim of this review is threefold: first, to provide a comprehensive survey of the technological state of the art in medical microrobots; second, to explore the potential impact of medical microrobots and inspire future research in this field; and third, to provide a collection of valuable information and engineering tools for the design of medical microrobots. --- paper_title: Design of a novel MRI compatible manipulator for image guided prostate interventions paper_content: This paper reports a novel remotely actuated manipulator for access to prostate tissue under magnetic resonance imaging guidance (APT-MRI) device, designed for use in a standard high-field MRI scanner.
The device provides three-dimensional MRI guided needle placement with millimeter accuracy under physician control. Procedures enabled by this device include MRI guided needle biopsy, fiducial marker placements, and therapy delivery. Its compact size allows for use in both standard cylindrical and open configuration MRI scanners. Preliminary in vivo canine experiments and first clinical trials are reported. ---
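The ultrasound-based entries above (the microcontroller-driven capsule tracker and the heterogeneous-media emitter localization work) rest on time-of-flight multilateration: each receiver converts a measured propagation time into a range, and the emitter position is taken as the point most consistent with all ranges. The sketch below assumes a single, constant speed of sound and a hypothetical receiver layout; handling the heterogeneous tissue case is exactly the contribution of the emitter-localization paper cited above.

import numpy as np
from scipy.optimize import least_squares

C_TISSUE = 1540.0   # assumed speed of sound in soft tissue (m/s)

# Hypothetical receiver positions on the body surface (meters).
receivers = np.array([[0.15, 0.0, 0.0], [-0.15, 0.0, 0.0],
                      [0.0, 0.15, 0.0], [0.0, 0.0, 0.15], [0.0, -0.15, 0.05]])

def tof(pos):
    return np.linalg.norm(receivers - pos, axis=1) / C_TISSUE

true_pos = np.array([0.02, -0.04, 0.03])
rng = np.random.default_rng(1)
measured = tof(true_pos) + rng.normal(0.0, 2e-6, len(receivers))   # ~2 microsecond timing jitter

fit = least_squares(lambda p: tof(p) - measured, x0=[0.0, 0.0, 0.0])
print(fit.x)   # estimated emitter position

With microsecond-level timing jitter and this centimetre-scale geometry, the residual error of the toy estimate is roughly on the millimetre order, which is why TOF methods appear among the candidate localization techniques surveyed here.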
Title: A Review of Localization Systems for Robotic Endoscopic Capsules
Section 1: Introduction
Description 1: Overview of endoscopy, limitations of traditional methods, and introduction to wireless capsule endoscopy (WCE).
Section 2: Localization Methods Based on Magnetic Field Strength
Description 2: Discussion on tracking systems utilizing magnetic field strength, including magnetic localization for passive and active actuation systems.
Section 3: Localization Methods Based on Electromagnetic Waves
Description 3: Investigation of capsule tracking using different regions of the electromagnetic spectrum, focusing on radio, visible, X-ray, and gamma rays.
Section 4: Other Localization Methods
Description 4: Additional techniques for localization, including MRI and ultrasound-based methods.
Section 5: Conclusion
Description 5: Summary of the studied localization systems, their current limitations, and the future challenges for developing an effective capsule localization system.
Section 6: References
Description 6: Complete list of all the references cited in the paper.
Building Programmable Wireless Networks: An Architectural Survey
6
--- paper_title: Open Signaling for ATM, INTERNET and Mobile Networks (OPENSIG'98) paper_content: The ability to rapidly create and deploy new transport, control and management architectures in response to new service demands is a key factor driving the programmable networking community. Competition between service providers may hinge on the speed at which one provider can respond to new market demands over another. The notion of open programmable networks is having broad impact on service providers and vendors across a range of telecommunication sectors calling for major advances in open network control architecture, network programmability and distributed systems technology. In this paper we discuss the origins of the Open Signalling Working Group (OPENSIG) and present a summary of the fifth Workshop on Open Signaling for ATM, Internet and Mobile Networks (OPENSIG'98), which was held at the University of Toronto, Ontario, October, 1998. We also discuss a number of new initiatives in the area of open programmable networks that have recently emerged. --- paper_title: Towards an active network architecture paper_content: Active networks allow their users to inject customized programs into the nodes of the network. An extreme case, in which we are most interested, replaces packets with "capsules" - program fragments that are executed at each network router/switch they traverse. Active architectures permit a massive increase in the sophistication of the computation that is performed within the network. They enable new applications, especially those based on application-specific multicast, information fusion, and other services that leverage network-based computation and storage. Furthermore, they will accelerate the pace of innovation by decoupling network services from the underlying hardware and allowing new services to be loaded into the infrastructure on demand. In this paper, we describe our vision of an active network architecture, outline our approach to its design, and survey the technologies that can be brought to bear on its implementation. We propose that the research community mount a joint effort to develop and deploy a wide area ActiveNet. --- paper_title: The networking philosopher's problem paper_content: While computer networking is an exciting research field, we are far from having a clear understanding of the core concepts and questions that define our discipline. This position paper, a summary of a talk I gave at the CoNext'10 Student Workshop, captures my current frustrations and hopes about the field. --- paper_title: Frontiers of Wireless and Mobile Communications paper_content: The field of wireless and mobile communication has a remarkable history that spans over a century of technology innovations from Marconi's first transatlantic transmission in 1899 to the worldwide adoption of cellular mobile services by over four billion people today. Wireless has become one of the most pervasive core technology enablers for a diverse variety of computing and communications applications ranging from third-generation/fourth-generation (3G/4G) cellular devices, broadband access, indoor WiFi networks, vehicle-to-vehicle (V2V) systems to embedded sensor and radio-frequency identification (RFID) applications. This has led to an accelerating pace of research and development in the wireless area with the promise of significant new breakthroughs over the next decade and beyond. 
This paper provides a perspective of some of the research frontiers of wireless and mobile communications, identifying early stage key technologies of strategic importance and the new applications that they will enable. Specific new radio technologies discussed include dynamic spectrum access (DSA), white space, cognitive software-defined radio (SDR), antenna beam steering and multiple-input-multiple-output (MIMO), 60-GHz transmission, and cooperative communications. Taken together, these approaches have the potential for dramatically increasing radio link speeds from current megabit per second rates to gigabit per second, while also improving radio system capacity and spectrum efficiency significantly. The paper also introduces a number of emerging wireless/mobile networking concepts including multihoming, ad hoc and multihop mesh, delay-tolerant routing, and mobile content caching, providing a discussion of the protocol capabilities needed to support each of these usage scenarios. In conclusion, the paper briefly discusses the impact of these wireless technologies and networking techniques on the design of emerging audiovisual and multimedia applications as they migrate to mobile Internet platforms. --- paper_title: Report of the DIMACS working group on abstractions for network services, architecture, and implementation paper_content: A workshop on Abstractions for Network Services, Architecture, and Implementation brought together researchers interested in creating better abstractions for creating and analyzing networked services and network architectures. The workshop took place at DIMACS on May 21-23, 2012. This report summarizes the presentations and discussions that took place at the workshop, organized by areas of abstractions such as layers, domains, and graph properties. --- paper_title: Languages for Software-Defined Networks paper_content: Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. However, managing these networks is unnecessarily complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) with closed and proprietary configuration interfaces. Software-defined networks are poised to change this by offering a clean and open interface between networking devices and the software that controls them. In particular, many commercial switches support the OpenFlow protocol, and a number of campus, data center, and backbone networks have deployed the new technology. However, while SDNs make it possible to program the network, they do not make it easy. Today's OpenFlow controllers offer low-level APIs that mimic the underlying switch hardware. To reach SDNs' full potential, we need to identify the right higher-level abstractions for creating (and composing) applications. In the Frenetic project, we are designing simple and intuitive abstractions for programming the three main stages of network management: monitoring network traffic, specifying and composing packet forwarding policies, and updating policies in a consistent way. Overall, these abstractions make it dramatically easier for programmers to write and reason about SDN applications. --- paper_title: A survey of programmable networks paper_content: In this paper we present a programmable networking model that provides a common framework for understanding the state-of-the-art in programmable networks.
A number of projects are reviewed and discussed against a set of programmable network characteristics. We believe that a number of important innovations are creating a paradigm shift in networking leading to higher levels of network programmability. These innovations include the separation between transmission hardware and control software, availability of open programmable network interfaces, accelerated virtualization of networking infrastructure, rapid creation and deployment of new network services and environments for resource partitioning and coexistence of multiple distinct network architectures. We present a simple qualitative comparison of the surveyed work and make a number of observations about the direction of the field. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: Architecting for innovation paper_content: We argue that the biggest problem with the current Internet architecture is not a particular functional deficiency, but its inability to accommodate innovation. To address this problem we propose a minimal architectural "framework" in which comprehensive architectures can reside. The proposed Framework for Internet Innovation (FII) --- which is derived from the simple observation that network interfaces should be extensible and abstract --- allows for a diversity of architectures to coexist, communicate, and evolve. We demonstrate FII's ability to accommodate diversity and evolution with a detailed examination of how information flows through the architecture and with a skeleton implementation of the relevant interfaces. --- paper_title: B4: experience with a globally-deployed software defined wan paper_content: We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. 
B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work. --- paper_title: The power of abstraction paper_content: Abstraction is at the center of much work in Computer Science. It encompasses finding the right interface for a system as well as finding an effective design for a system implementation. Furthermore, abstraction is the basis for program construction, allowing programs to be built in a modular fashion. This talk will discuss how the abstraction mechanisms we use today came to be, how they are supported in programming languages, and some possible areas for future research. --- paper_title: Active networking: one view of the past, present, and future paper_content: All distributed computing systems face the architectural question of the location (and nature) of programmability in the telecommunications networks, computers, and other peripheral devices comprising them. The perspective of this paper is that network elements should be as programmable as possible, to enable the most flexible distributed computing systems. There has been a persistent confluence among operating systems, programming languages, networking and distributed systems. We demonstrate how these interactions led to what is called "active networking," and in the spirit of "vox audita perit, littera scripta manet" (the spoken word perishes, but the written word remains), include an account of how it was made to happen. Lessons are drawn both from the broader research agenda, and the specific goals pursued in the SwitchWare project. We speculate on likely futures for active networking. --- paper_title: Open Signaling for ATM, INTERNET and Mobile Networks (OPENSIG'98) paper_content: The ability to rapidly create and deploy new transport, control and management architectures in response to new service demands is a key factor driving the programmable networking community. Competition between service providers may hinge on the speed at which one provider can respond to new market demands over another. The notion of open programmable networks is having broad impact on service providers and vendors across a range of telecommunication sectors calling for major advances in open network control architecture, network programmability and distributed systems technology. In this paper we discuss the origins of the Open Signalling Working Group (OPENSIG) and present a summary of the fifth Workshop on Open Signaling for ATM, Internet and Mobile Networks (OPENSIG'98), which was held at the University of Toronto, Ontario, October, 1998. We also discuss a number of new initiatives in the area of open programmable networks that have recently emerged. --- paper_title: A survey of active network research paper_content: Active networks are a novel approach to network architecture in which the switches (or routers) of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. The authors discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities.
They illustrate how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them. These active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner. --- paper_title: Programming telecommunication networks paper_content: The move toward market deregulation and open competition has sparked a wave of serious introspection in the telecommunications service industry. Telecom providers and operators are now required to open up their primary revenue channels to competing industries. The competition for product differentiation increasingly depends on the level of sophistication, degree of flexibility, and speed of deployment of services that a future provider can offer. These factors in turn depend heavily on the flexibility of the software architecture in place in a provider's operational infrastructure. Within this context, we examine the service architecture of two major global communication networks, the telephone network and the Internet, and explore their weaknesses and strengths. We discuss the realization of an open programmable networking environment based on a new service architecture for advanced telecommunication services that overcomes the limitations of the existing networks. Our approach to network programmability stems from two angles: one conceptual, the other implementational. In the first, we attempt to develop a service model that is open and reflects the economic market structure of the future telecommunications service industry. Furthermore, we introduce an extended reference model for realizing the service marketplace and present it as a vehicle for creating multimedia services with QoS guarantees. In the second, we investigate the feasibility of engineering the reference model from an implementation standpoint. We describe a realization of the open programmable networking environment as a broadband kernel. Called xbind, the broadband kernel incorporates IP and CORBA technologies for signaling, management, and service creation, and ATM for transport. We also address some of the important QoS, performance, scalability, and implementation issues. --- paper_title: A survey of programmable networks paper_content: In this paper we present a programmable networking model that provides a common framework for understanding the state-of-the-art in programmable networks. A number of projects are reviewed and discussed against a set of programmable network characteristics. We believe that a number of important innovations are creating a paradigm shift in networking leading to higher levels of network programmability. These innovations include the separation between transmission hardware and control software, availability of open programmable network interfaces, accelerated virtualization of networking infrastructure, rapid creation and deployment of new network services and environments for resource partitioning and coexistence of multiple distinct network architectures. We present a simple qualitative comparison of the surveyed work and make a number of observations about the direction of the field.
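The two surveys above converge on the same structural idea: transmission hardware that only forwards, and control software that programs it through an open interface. The sketch below is a minimal, purely illustrative rendering of that split; ForwardingElement, ControlSoftware, and their methods are invented names and do not correspond to xbind or to any standardized interface.

```python
# Toy illustration only: a forwarding element that merely executes installed
# state, programmed by external control software through an open interface.
# ForwardingElement / ControlSoftware and their methods are invented names,
# not the xbind API or any standardized interface.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ForwardingElement:
    """Stands in for the transmission hardware: it only forwards per installed state."""
    name: str
    table: Dict[str, str] = field(default_factory=dict)  # destination prefix -> out port

    def install_entry(self, prefix: str, out_port: str) -> None:
        self.table[prefix] = out_port

    def remove_entry(self, prefix: str) -> None:
        self.table.pop(prefix, None)

    def forward(self, dst: str) -> str:
        # Longest matching prefix wins; kept deliberately simple.
        for prefix, port in sorted(self.table.items(), key=lambda kv: -len(kv[0])):
            if dst.startswith(prefix):
                return port
        return "drop"


class ControlSoftware:
    """Control logic lives outside the element and programs it via the open interface."""

    def __init__(self, element: ForwardingElement):
        self.element = element

    def provision_route(self, prefix: str, out_port: str) -> None:
        self.element.install_entry(prefix, out_port)


if __name__ == "__main__":
    fe = ForwardingElement("edge-switch-1")
    ctl = ControlSoftware(fe)
    ctl.provision_route("10.0.1.", "port2")
    ctl.provision_route("10.0.", "port1")
    print(fe.forward("10.0.1.7"))     # port2 (more specific entry wins)
    print(fe.forward("10.0.9.3"))     # port1
    print(fe.forward("192.168.0.1"))  # drop
```

The point of the separation is that the control logic can be replaced or extended without touching the forwarding path, which is exactly the property the surveyed projects try to provide at network scale.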
--- paper_title: The Tempest, a Framework for Safe, Resource Assured, Programmable Networks paper_content: Most research in network programmability has stressed the flexibility engendered by increasing the ability of users to configure network elements for their own purposes, without addressing the larger issues of how such advanced control systems can coexist both with each other and with more conventional ones. The Tempest framework presented here extends beyond the provision of simple network programmability to address these larger issues. In particular, we show how network programmability can be achieved without jeopardizing the integrity of the network as a whole, how network programmability fits in with existing networks, and how programmability can be offered at different levels of granularity. Our approach is based on the Tempest's ability to dynamically create virtual private networks over a switched transport architecture (e.g., an ATM network). Each VPN is assigned a set of network resources which can be controlled using either a well-known control system or a control system tailored to the specific needs of a distributed application. The first level of programmability in the Tempest is fairly coarse-grained: an entire virtual network can be programmed by a third party. At a finer level of granularity the Tempest allows user supplied code to be injected into parts of an operational virtual network, thus allowing application specific customization of network control. The article shows how the Tempest framework allows these new approaches to coexist with more conventional solutions. --- paper_title: Active network vision and reality: lessons from a capsule-based system paper_content: Although active networks have generated much debate in the research community on the whole there has been little hard evidence to inform this debate. This paper aims to redress the situation by reporting what we have learned by designing, implementing and using the ANTS active network toolkit over the past two years. At this early stage, active networks remain an open research area. However, we believe that we have made substantial progress towards providing a more flexible network layer while at the same time addressing the performance and security concerns raised by the presence of mobile code in the network. In this paper, we argue our progress towards the original vision and the difficulties that we have not yet resolved in three areas that characterize a "pure" active network: the capsule model of programmability; the accessibility of that model to all users; and the applications that can be constructed in practice. --- paper_title: Towards an active network architecture paper_content: Active networks allow their users to inject customized programs into the nodes of the network. An extreme case, in which we are most interested, replaces packets with "capsules" - program fragments that are executed at each network router/switch they traverse. Active architectures permit a massive increase in the sophistication of the computation that is performed within the network. They enable new applications, especially those based on application-specific multicast, information fusion, and other services that leverage network-based computation and storage. Furthermore, they will accelerate the pace of innovation by decoupling network services from the underlying hardware and allowing new services to be loaded into the infrastructure on demand. 
In this paper, we describe our vision of an active network architecture, outline our approach to its design, and survey the technologies that can be brought to bear on its implementation. We propose that the research community mount a joint effort to develop and deploy a wide area ActiveNet. --- paper_title: Introducing new internet services: why and how paper_content: Active networks permit applications to inject programs into the nodes of local and, more important, wide area networks. This supports faster service innovation by making it easier to deploy new network services. In this article, we discuss both the potential impact of active network services on applications and how such services can be built and deployed. We explore the impact by suggesting sample uses and arguing how such uses would improve application performance. We explore the design of active networks by presenting a novel architecture, ANTS (active network transport system), that adds extensibility at the network layer and allows for incremental deployment of active nodes within the Internet. In doing so, ANTS tackles the challenges of ensuring that the flexibility offered by active networks does not adversely impact performance or security. Finally, we demonstrate how a new network service may be expressed in ANTS. --- paper_title: A survey of active network research paper_content: Active networks are a novel approach to network architecture in which the switches (or routers) of the network perform customized computations on the messages flowing through them. This approach is motivated by both lead user applications, which perform user-driven computation at nodes within the network today, and the emergence of mobile code technologies that make dynamic network service innovation attainable. The authors discuss two approaches to the realization of active networks and provide a snapshot of the current research issues and activities. They illustrate how the routers of an IP network could be augmented to perform such customized processing on the datagrams flowing through them. These active routers could also interoperate with legacy routers, which transparently forward datagrams in the traditional manner. --- paper_title: Snow on Silk: A NodeOS in the Linux Kernel paper_content: Transferring active networking technology from the research arena to everyday deployment on desktop and edge router nodes, requires a NodeOS design that simultaneously meets three goals: (1) be embedded within a wide-spread, open source operating system; (2) allow non-active applications and regular operating system operation to proceed in a regular manner, unhindered by the active networking component; (3) offer performance competitive with that of networking stacks of general purpose operating systems. Previous NodeOS systems, Bowman, Janos, AMP and Scout, only partially addressed these goals. Our contribution lies in the design and implementation of such a system, a NodeOS within the Linux kernel, and the demonstration of competitive performance for medium and larger packet sizes. We also illustrate how such a design easily renders to the deployment of other networking architectures, such as peer-to-peer networks and extensible routers. 
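The capsule model recurring in the active-networking abstracts above is easiest to picture in miniature: every packet names the routine that each node should run on it. The sketch below is a toy, hypothetical illustration under strong simplifying assumptions; it is not the ANTS programming interface, and real capsule systems add on-demand code distribution, fingerprint-based code naming, and sandboxing, none of which are modelled here.

```python
# Deliberately simplified sketch of the capsule idea: each packet names the
# routine that nodes run on it. Real capsule systems such as ANTS add on-demand
# code distribution and sandboxing, none of which is modelled here; all names
# below are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict

ROUTINES: Dict[str, Callable] = {}  # stands in for code distribution: name -> handler


def routine(name: str):
    def register(fn):
        ROUTINES[name] = fn
        return fn
    return register


@dataclass
class Capsule:
    routine_name: str
    dst: str
    ttl: int = 8


@dataclass
class Node:
    name: str
    neighbors: Dict[str, "Node"]  # next-hop name -> node
    routes: Dict[str, str]        # destination -> next-hop name

    def receive(self, capsule: Capsule) -> None:
        # Per-capsule processing happens *inside* the node, chosen by the capsule.
        ROUTINES[capsule.routine_name](self, capsule)


@routine("unicast")
def unicast(node: Node, capsule: Capsule) -> None:
    if capsule.dst == node.name or capsule.ttl <= 0:
        print(f"{node.name}: capsule for {capsule.dst} delivered or expired")
        return
    capsule.ttl -= 1
    next_hop = node.routes.get(capsule.dst)
    if next_hop is not None:
        node.neighbors[next_hop].receive(capsule)


if __name__ == "__main__":
    c = Node("C", neighbors={}, routes={})
    b = Node("B", neighbors={"C": c}, routes={"C": "C"})
    a = Node("A", neighbors={"B": b}, routes={"C": "B"})
    a.receive(Capsule(routine_name="unicast", dst="C"))  # prints: C: capsule for C delivered or expired
```

Even in this toy form, the essential trade-off is visible: per-packet programmability buys flexibility at the cost of running injected code on the forwarding path, which is why resource control and safety dominate the NodeOS work cited above.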
--- paper_title: The Tempest, a Framework for Safe, Resource Assured, Programmable Networks paper_content: Most research in network programmability has stressed the flexibility engendered by increasing the ability of users to configure network elements for their own purposes, without addressing the larger issues of how such advanced control systems can coexist both with each other and with more conventional ones. The Tempest framework presented here extends beyond the provision of simple network programmability to address these larger issues. In particular, we show how network programmability can be achieved without jeopardizing the integrity of the network as a whole, how network programmability fits in with existing networks, and how programmability can be offered at different levels of granularity. Our approach is based on the Tempest's ability to dynamically create virtual private networks over a switched transport architecture (e.g., an ATM network). Each VPN is assigned a set of network resources which can be controlled using either a well-known control system or a control system tailored to the specific needs of a distributed application. The first level of programmability in the Tempest is fairly coarse-grained: an entire virtual network can be programmed by a third party. At a finer level of granularity the Tempest allows user supplied code to be injected into parts of an operational virtual network, thus allowing application specific customization of network control. The article shows how the Tempest framework allows these new approaches to coexist with more conventional solutions. --- paper_title: Architectures for the future networks and the next generation Internet: A survey paper_content: Networking research funding agencies in USA, Europe, Japan, and other countries are encouraging research on revolutionary networking architectures that may or may not be bound by the restrictions of the current TCP/IP based Internet. We present a comprehensive survey of such research projects and activities. The topics covered include various testbeds for experimentations for new architectures, new security mechanisms, content delivery mechanisms, management and control frameworks, service architectures, and routing mechanisms. Delay/disruption tolerant networks which allow communications even when complete end-to-end path is not available are also discussed. --- paper_title: Extending Networking into the Virtualization Layer paper_content: The move to virtualization has created a new network access layer residing on hosts that connects the various VMs. Virtualized deployment environments impose requirements on networking for which traditional models are not well suited. They also provide advantages to the networking layer (such as software flexibility and welldefined end host events) that are not present in physical networks. To date, this new virtualization network layer has been largely built around standard Ethernet switching, but this technology neither satisfies these new requirements nor leverages the available advantages. We present Open vSwitch, a network switch specifically built for virtual environments. Open vSwitch differs from traditional approaches in that it exports an external interface for fine-grained control of configuration state and forwarding behavior. We describe how Open vSwitch can be used to tackle problems such as isolation in joint-tenant environments, mobility across subnets, and distributing configuration and visibility across hosts. 
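Whether the slice is a Tempest-style virtual network or an Open vSwitch instance under a remote controller, the programmable object is ultimately a match-action table. The sketch below shows that abstraction with invented names (FlowEntry, VirtualSwitch); it is not the Open vSwitch or OpenFlow API, just a priority-ordered lookup with a table-miss path back to the control plane.

```python
# Minimal priority-ordered match-action table, the abstraction a programmable
# (virtual) switch exposes to its controller. FlowEntry / VirtualSwitch are
# invented names; this is not the Open vSwitch or OpenFlow interface itself.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FlowEntry:
    match: Dict[str, str]  # header field -> required value; absent field = wildcard
    action: str            # e.g. "output:2", "drop"
    priority: int = 0

    def matches(self, pkt: Dict[str, str]) -> bool:
        return all(pkt.get(k) == v for k, v in self.match.items())


@dataclass
class VirtualSwitch:
    flows: List[FlowEntry] = field(default_factory=list)

    def add_flow(self, entry: FlowEntry) -> None:
        self.flows.append(entry)
        self.flows.sort(key=lambda e: -e.priority)  # highest priority first

    def process(self, pkt: Dict[str, str]) -> str:
        for entry in self.flows:
            if entry.matches(pkt):
                return entry.action
        return "to_controller"  # table miss: punt to the control plane


if __name__ == "__main__":
    vs = VirtualSwitch()
    # Slice-style isolation: only this tenant's VLAN may reach 10.0.0.2 on port 2.
    vs.add_flow(FlowEntry({"vlan": "100", "dst_ip": "10.0.0.2"}, "output:2", priority=10))
    vs.add_flow(FlowEntry({"vlan": "100"}, "drop", priority=1))
    print(vs.process({"vlan": "100", "dst_ip": "10.0.0.2"}))  # output:2
    print(vs.process({"vlan": "100", "dst_ip": "10.0.0.9"}))  # drop
    print(vs.process({"vlan": "200", "dst_ip": "10.0.0.2"}))  # to_controller
```

The table-miss action is what lets an external controller see the first packet of an unknown flow and install state for it reactively.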
--- paper_title: PlanetLab: an overlay testbed for broad-coverage services paper_content: PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab. This paper describes our initial implementation of PlanetLab, including the mechanisms used to implement virtualization, and the collection of core services used to manage PlanetLab. --- paper_title: The Genesis Kernel: a virtual network operating system for spawning network architectures paper_content: The deployment of network architectures is often manual, ad hoc and time consuming. In this paper we introduce a new paradigm for automating the life cycle process for the creation, deployment and management of network architectures and envision programmable networks capable of spawning distinct "child" virtual networks with their own transport, control and management systems. A child network operates on a subset of its "parent's" network resources and in isolation from other virtual networks. Child networks support the controlled access to communities of users with specific connectivity, security and quality of service requirements. In this paper we introduce the Genesis Kernel, a virtual network operating system capable of profiling, spawning and managing virtual network architectures on-the-fly. --- paper_title: Overcoming the Internet impasse through virtualization paper_content: The Internet architecture has proven its worth by the vast array of applications it now supports and the wide variety of network technologies over which it currently runs. Most current Internet research involves either empirical measurement studies or incremental modifications that can be deployed without major architectural changes. Easy access to virtual testbeds could foster a renaissance in applied architectural research that extends beyond these incrementally deployable designs. --- paper_title: Network Virtualization: Technologies, Perspectives, and Frontiers paper_content: Network virtualization refers to a broad set of technologies. Commercial solutions have been offered by the industry for years, while more recently the academic community has emphasized virtualization as an enabler for network architecture research, deployment, and experimentation. We review the entire spectrum of relevant approaches with the goal of identifying the underlying commonalities. We offer a unifying definition of the term “network virtualization” and examine existing approaches to bring out this unifying perspective. We also discuss a set of challenges and research directions that we expect to come to the forefront as network virtualization technologies proliferate. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets.
We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: Experiences and challenges in deploying openflow over a real wireless mesh network paper_content: Wireless Mesh Networks propose a decentralized architecture for establishing multi-hop wireless communications. The decentralized architecture brings benefits such as ease of deployment and maintenance., scalability and reliability. However., wireless mesh networks lack high level services such as handoff and mobility management or admission control. OpenFlow is an interface for remotely controlling the flow table of switches., routers and access points. The OpenFlow protocol separates the control plane and the data plane of network devices, proposing a centralized architecture for controlling the forwarding of data packets. Furthermore., it offers a framework for developing high level services over the network. Combining this solution with the characteristics of wireless mesh networks allows better performance., by the use of high level services. However., it introduces challenges regarding the opposition between the centralized control of OpenFlow and the distributed architecture of wireless mesh networks. In this paper we expose our experiences deploying an OpenFlow controller over a wireless mesh network based on the 802.11s standard. First., we describe the scenarios used in our testbed. Then., we discuss the considerations for each scenario. Finally we propose some applications using OpenFlow over a Wireless Mesh Network. --- paper_title: The networking philosopher's problem paper_content: While computer networking is an exciting research field, we are far from having a clear understanding of the core concepts and questions that define our discipline. This position paper, a summary of a talk I gave at the CoNext'10 Student Workshop, captures my current frustrations and hopes about the field. --- paper_title: FlowVisor: A Network Virtualization Layer paper_content: Network virtualization has long been a goal of of the network research community. With it, multiple isolated logical networks each with potentially different addressing and forwarding mechanisms can share the same physical infrastructure. Typically this is achieved by taking advantage of the flexibility of software (e.g. [20, 23]) or by duplicating components in (often specialized) hardware[19]. In this paper we present a new approach to switch virtualization in which the same hardware forwarding plane can be shared among multiple logical networks, each with distinct forwarding logic. We use this switch-level virtualization to build a research platform which allows multiple network experiments to run side-by-side with production traffic while still providing isolation and hardware forwarding speeds. 
We also show that this approach is compatible with commodity switching chipsets and does not require the use of programmable hardware such as FPGAs or network processors. We build and deploy this virtualization platform on our own production network and demonstrate its use in practice by running five experiments simultaneously within a campus network. Further, we quantify the overhead of our approach and evaluate the completeness of the isolation between virtual slices. --- paper_title: Carving research slices out of your production networks with OpenFlow paper_content: OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies. --- paper_title: PlanetLab: an overlay testbed for broad-coverage services paper_content: PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab. This paper describes our initial implementation of PlanetLab, including the mechanisms used to implement virtualization, and the collection of core services used to manage PlanetLab. --- paper_title: Virtualizing the network forwarding plane paper_content: Modern system design often employs virtualization to decouple the system service model from its physical realization. Two common examples are the virtualization of computing resources through the use of virtual machines and the virtualization of disks by presenting logical volumes as the storage interface. The insertion of these abstraction layers allows operators great flexibility to achieve operational goals divorced from the underlying physical infrastructure. Today, workloads can be instantiated dynamically, expanded at runtime, migrated between physical servers (or geographic locations), and suspended if needed.
Both computation and data can be replicated in real time across multiple physical hosts for purposes of high-availability within a single site, or disaster recovery across multiple sites. --- paper_title: Programming telecommunication networks paper_content: The move toward market deregulation and open competition has sparked a wave of serious introspection in the telecommunications service industry. Telecom providers and operators are now required to open up their primary revenue channels to competing industries. The competition for product differentiation increasingly depends on the level of sophistication, degree of flexibility, and speed of deployment of services that a future provider can offer. These factors in turn depend heavily on the flexibility of the software architecture in place in a provider's operational infrastructure. Within this context, we examine the service architecture of two major global communication networks, the telephone network and the Internet, and explore their weaknesses and strengths. We discuss the realization of an open programmable networking environment based on a new service architecture for advanced telecommunication services that overcomes the limitations of the existing networks. Our approach to network programmability stems from two angles: one conceptual, the other implementational. In the first, we attempt to develop a service model that is open and reflects the economic market structure of the future telecommunications service industry. Furthermore, we introduce an extended reference model for realizing the service marketplace and present it as a vehicle for creating multimedia services with QoS guarantees. In the second, we investigate the feasibility of engineering the reference model from an implementation standpoint. We describe a realization of the open programmable networking environment as a broadband kernel. Called xbind, the broadband kernel incorporates IP and CORBA technologies for signaling, management, and service creation, and ATM for transport. We also address some of the important QoS, performance, scalability, and implementation issues. --- paper_title: Languages for Software-Defined Networks paper_content: Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. However, managing these networks is unnecessarily complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) with closed and proprietary configuration interfaces. Software-defined networks are poised to change this by offering a clean and open interface between networking devices and the software that controls them. In particular, many commercial switches support the OpenFlow protocol, and a number of campus, data center, and backbone networks have deployed the new technology. However, while SDNs make it possible to program the network, they do not make it easy. Today's OpenFlow controllers offer low-level APIs that mimic the underlying switch hardware. To reach SDNs' full potential, we need to identify the right higher-level abstractions for creating (and composing) applications. In the Frenetic project, we are designing simple and intuitive abstractions for programming the three main stages of network management: monitoring network traffic, specifying and composing packet forwarding policies, and updating policies in a consistent way.
Overall, these abstractions make it dramatically easier for programmers to write and reason about SDN applications. --- paper_title: A clean slate 4D approach to network control and management paper_content: Today's data networks are surprisingly fragile and difficult to manage. We argue that the root of these problems lies in the complexity of the control and management planes--the software and protocols coordinating network elements--and particularly the way the decision logic and the distributed-systems issues are inexorably intertwined. We advocate a complete refactoring of the functionality and propose three key principles--network-level objectives, network-wide views, and direct control--that we believe should underlie a new architecture. Following these principles, we identify an extreme design point that we call "4D," after the architecture's four planes: decision, dissemination, discovery, and data. The 4D architecture completely separates an AS's decision logic from protocols that govern the interaction among network elements. The AS-level objectives are specified in the decision plane, and enforced through direct configuration of the state that drives how the data plane forwards packets. In the 4D architecture, the routers and switches simply forward packets at the behest of the decision plane, and collect measurement data to aid the decision plane in controlling the network. Although 4D would involve substantial changes to today's control and management planes, the format of data packets does not need to change; this eases the deployment path for the 4D architecture, while still enabling substantial innovation in network control and management. We hope that exploring an extreme design point will help focus the attention of the research and industrial communities on this crucially important and intellectually challenging area. --- paper_title: Network Innovation using OpenFlow: A Survey paper_content: OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology. SDN consists of decoupling the control and data planes of a network. A software-based controller is responsible for managing the forwarding information of one or more switches; the hardware only handles the forwarding of traffic according to the rules set by the controller. OpenFlow is an SDN technology proposed to standardize the way that a controller communicates with network devices in an SDN architecture. It was proposed to enable researchers to test new ideas in a production environment. OpenFlow provides a specification to migrate the control logic from a switch into the controller. It also defines a protocol for the communication between the controller and the switches. As discussed in this survey paper, OpenFlow-based architectures have specific capabilities that can be exploited by researchers to experiment with new ideas and test novel applications. These capabilities include software-based traffic analysis, centralized control, dynamic updating of forwarding rules and flow abstraction. OpenFlow-based applications have been proposed to ease the configuration of a network, to simplify network management and to add security features, to virtualize networks and data centers and to deploy mobile systems. These applications run on top of networking operating systems such as Nox, Beacon, Maestro, Floodlight, Trema or Node.Flow.
Larger scale OpenFlow infrastructures have been deployed to allow the research community to run experiments and test their applications in more realistic scenarios. Also, studies have measured the performance of OpenFlow networks through modelling and experimentation. We describe the challenges facing the large scale deployment of OpenFlow-based networks and we discuss future research directions of this technology. --- paper_title: The case for separating routing from routers paper_content: Over the past decade, the complexity of the Internet's routing infrastructure has increased dramatically. This complexity and the problems it causes stem not just from various new demands made of the routing infrastructure, but also from fundamental limitations in the ability of today's distributed infrastructure to scalably cope with new requirements. The limitations in today's routing system arise in large part from the fully distributed path-selection computation that the IP routers in an autonomous system (AS) must perform. To overcome this weakness, interdomain routing should be separated from today's IP routers, which should simply forward packets (for the most part). Instead, a separate Routing Control Platform (RCP) should select routes on behalf of the IP routers in each AS and exchange reachability information with other domains. Our position is that an approach like RCP is a good way of coping with complexity while being responsive to new demands and can lead to a routing system that is substantially easier to manage than today. We present a design overview of RCP based on three architectural principles (path computation based on a consistent view of network state, controlled interactions between routing protocol layers, and expressive specification of routing policies) and discuss the architectural strengths and weaknesses of our proposal. --- paper_title: OpenFlow: enabling innovation in campus networks paper_content: This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too --- paper_title: Large-scale virtualization in the Emulab network testbed paper_content: Network emulation is valuable largely because of its ability to study applications running on real hosts and "somewhat real" networks. However, conservatively allocating a physical host or network link for each corresponding virtual entity is costly and limits scale.
We present a system that can faithfully emulate, on low-end PCs, virtual topologies over an order of magnitude larger than the physical hardware, when running typical classes of distributed applications that have modest resource requirements. This version of Emulab virtualizes hosts, routers, and networks, while retaining near-total application transparency, good performance fidelity, responsiveness suitable for interactive use, high system throughput, and efficient use of resources. Our key design techniques are to use the minimum degree of virtualization that provides transparency to applications, to exploit the hierarchy found in real computer networks, to perform optimistic automated resource allocation, and to use feed-back to adaptively allocate resources. The entire system is highly automated, making it easy to use even when scaling to more than a thousand virtual nodes. This paper identifies the many problems posed in building a practical system, and describes the system's motivation, design, and preliminary evaluation. --- paper_title: In VINI veritas: realistic and controlled network experimentation paper_content: This paper describes VINI, a virtual network infrastructure that allows network researchers to evaluate their protocols and services in a realistic environment that also provides a high degree of control over network conditions. VINI allows researchers to deploy and evaluate their ideas with real routing software, traffic loads, and network events. To provide researchers flexibility in designing their experiments, VINI supports simultaneous experiments with arbitrary network topologies on a shared physical infrastructure. This paper tackles the following important design question: What set of concepts and techniques facilitate flexible, realistic, and controlled experimentation (e.g., multiple topologies and the ability to tweak routing algorithms) on a fixed physical infrastructure? We first present VINI's high-level design and the challenges of virtualizing a single network. We then present PL-VINI, an implementation of VINI on PlanetLab, running the "Internet In a Slice". Our evaluation of PL-VINI shows that it provides a realistic and controlled environment for evaluating new protocols and services. --- paper_title: Onix: A Distributed Control Platform for Large-scale Production Networks paper_content: Computer networks lack a general control paradigm, as traditional networks do not provide any network-wide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. ::: ::: To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability. --- paper_title: OpenQoS: An OpenFlow controller design for multimedia delivery with end-to-end Quality of Service over Software-Defined Networks paper_content: OpenFlow is a Software Defined Networking (SDN) paradigm that decouples control and data forwarding layers of routing. 
In this paper, we propose OpenQoS, which is a novel OpenFlow controller design for multimedia delivery with end-to-end Quality of Service (QoS) support. Our approach is based on QoS routing where the routes of multimedia traffic are optimized dynamically to fulfill the required QoS. We measure performance of OpenQoS over a real test network and compare it with the performance of the current state-of-the-art, HTTP-based multi-bitrate adaptive streaming. Our experimental results show that OpenQoS can guarantee seamless video delivery with little or no video artifacts experienced by the end-users. Moreover, unlike current QoS architectures, in OpenQoS the guaranteed service is handled without having adverse effects on other types of traffic in the network. --- paper_title: Rethinking end-to-end congestion control in software-defined networks paper_content: TCP is designed to operate in a wide range of networks. Without any knowledge of the underlying network and traffic characteristics, TCP is doomed to continuously increase and decrease its congestion window size to embrace changes in network or traffic. In light of emerging popularity of centrally controlled Software-Defined Networks (SDNs), one might wonder whether we can take advantage of the global network view available at the controller to make faster and more accurate congestion control decisions. In this paper, we identify the need and the underlying requirements for a congestion control adaptation mechanism. To this end, we propose OpenTCP as a TCP adaptation framework that works in SDNs. OpenTCP allows network operators to define rules for tuning TCP as a function of network and traffic conditions. We also present a preliminary implementation of OpenTCP in a ~4000 node data center. --- paper_title: Virtual routers as a service: the RouteFlow approach leveraging software-defined networks paper_content: The networking equipment market is being transformed by the need for greater openness and flexibility, not only for research purposes but also for in-house innovation by the equipment owners. In contrast to networking gear following the model of computer mainframes, where closed software runs on proprietary hardware, the software-defined networking approach effectively decouples the data from the control plane via an open API (i.e., OpenFlow protocol) that allows the (remote) control of packet forwarding engines. Motivated by this scenario, we propose RouteFlow, a commodity routing architecture that combines the line-rate performance of commercial hardware with the flexibility of open-source routing stacks (remotely) running on general purpose computers. The outcome is a novel point in the design space of commodity routing solutions with far-reaching implications towards virtual routers and IP networks as a service. This paper documents the progress achieved in the design and prototype implementation of our work and outlines our research agenda that calls for a community-driven approach. --- paper_title: QuagFlow: partnering Quagga with OpenFlow paper_content: Computing history has shown that open, multi-layer hardware and software stacks encourage innovation and bring costs down. Only recently this trend is meeting the networking world with the availability of entire open source networking stacks being closer than ever. Towards this goal, we are working on QuagFlow, a transparent interplay between the popular Quagga open source routing suite and the low level vendor-independent OpenFlow interface. 
QuagFlow is a distributed system implemented as a NOX controller application and a series of slave daemons running along the virtual machines hosting the Quagga routing instances. --- paper_title: B4: experience with a globally-deployed software defined wan paper_content: We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work. --- paper_title: The power of abstraction paper_content: Abstraction is at the center of much work in Computer Science. It encompasses finding the right interface for a system as well as finding an effective design for a system implementation. Furthermore, abstraction is the basis for program construction, allowing programs to be built in a modular fashion. This talk will discuss how the abstraction mechanisms we use today came to be, how they are supported in programming languages, and some possible areas for future research. --- paper_title: The Tempest, a Framework for Safe, Resource Assured, Programmable Networks paper_content: Most research in network programmability has stressed the flexibility engendered by increasing the ability of users to configure network elements for their own purposes, without addressing the larger issues of how such advanced control systems can coexist both with each other and with more conventional ones. The Tempest framework presented here extends beyond the provision of simple network programmability to address these larger issues. In particular, we show how network programmability can be achieved without jeopardizing the integrity of the network as a whole, how network programmability fits in with existing networks, and how programmability can be offered at different levels of granularity. Our approach is based on the Tempest's ability to dynamically create virtual private networks over a switched transport architecture (e.g., an ATM network). Each VPN is assigned a set of network resources which can be controlled using either a well-known control system or a control system tailored to the specific needs of a distributed application. The first level of programmability in the Tempest is fairly coarse-grained: an entire virtual network can be programmed by a third party. At a finer level of granularity the Tempest allows user supplied code to be injected into parts of an operational virtual network, thus allowing application specific customization of network control. The article shows how the Tempest framework allows these new approaches to coexist with more conventional solutions. --- paper_title: NOX: towards an operating system for networks paper_content: As anyone who has operated a large network can attest, enterprise networks are difficult to manage. 
That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest. In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over highlevel names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale? 
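The argument above is that management logic should be written against a global network view rather than as yet another distributed protocol. The sketch below illustrates that style under the assumption of an invented NetworkView class (it is not the NOX API): shortest_path is plain Dijkstra over the global topology, and install_rule stands in for pushing flow state to individual switches.

```python
# Sketch of writing management logic against a global network view: plain
# Dijkstra over the whole topology, then per-switch state installed through a
# single programmatic interface. NetworkView / install_rule are invented names
# standing in for a controller platform's API; this is not the NOX API.
import heapq
from typing import Dict, List, Tuple


class NetworkView:
    def __init__(self) -> None:
        self.adj: Dict[str, List[Tuple[str, int]]] = {}
        self.installed: Dict[str, Dict[str, str]] = {}  # switch -> {dst: out_port}

    def add_link(self, a: str, b: str, cost: int = 1) -> None:
        self.adj.setdefault(a, []).append((b, cost))
        self.adj.setdefault(b, []).append((a, cost))

    def shortest_path(self, src: str, dst: str) -> List[str]:
        # Standard Dijkstra; assumes dst is reachable from src.
        dist = {src: 0}
        prev: Dict[str, str] = {}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in self.adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], dst
        while node != src:
            path.append(node)
            node = prev[node]
        return [src] + path[::-1]

    def install_rule(self, switch: str, dst: str, out_port: str) -> None:
        self.installed.setdefault(switch, {})[dst] = out_port


def provision(view: NetworkView, src: str, dst: str) -> None:
    path = view.shortest_path(src, dst)
    for here, nxt in zip(path, path[1:]):
        view.install_rule(here, dst, out_port=f"to_{nxt}")


if __name__ == "__main__":
    net = NetworkView()
    for a, b in [("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")]:
        net.add_link(a, b)
    provision(net, "s1", "s3")
    print(net.installed)  # {'s1': {'s3': 'to_s2'}, 's2': {'s3': 'to_s3'}}
```

The bindings between high-level names and low-level addresses that the text emphasizes would live behind the same view; the application never touches per-switch configuration directly.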
--- paper_title: Software radio: a modern approach to radio engineering paper_content: Software-based approaches enable engineers to build wireless system radios that are easier to manufacture, more flexible, and more cost-effective. Software Radio: A Modern Approach to Radio Engineering systematically reviews the techniques, challenges, and tradeoffs of DSP software radio design. Coverage includes constructing RF front-ends; using digital processing to overcome RF design problems; direct digital synthesis of modulated waveforms; A/D and D/A conversions; smart antennas; object-oriented software design; and choosing among DSP microprocessors, FPGAs, and ASICs. This is an excellent book for all RF and signal processing engineers building advanced wireless systems. --- paper_title: Realizing the future of wireless data communications paper_content: Technologies are available to unlock radio spectrum as consumers need it. --- paper_title: Sora: high-performance software radio using general-purpose multi-core processors paper_content: This paper presents Sora, a fully programmable software radio platform on commodity PC architectures. Sora combines the performance and fidelity of hardware software-defined radio (SDR) platforms with the programmability and flexibility of general-purpose processor (GPP) SDR platforms. Sora uses both hardware and software techniques to address the challenges of using PC architectures for high-speed SDR. The Sora hardware components consist of a radio front-end for reception and transmission, and a radio control board for high-throughput, low-latency data transfer between radio and host memories. Sora makes extensive use of features of contemporary processor architectures to accelerate wireless protocol processing and satisfy protocol timing requirements, including using dedicated CPU cores, large low-latency caches to store lookup tables, and SIMD processor extensions for highly efficient physical layer processing on GPPs. Using the Sora platform, we have developed a few demonstration wireless systems, including SoftWiFi, an 802.11a/b/g implementation that seamlessly interoperates with commercial 802.11 NICs at all modulation rates, and SoftLTE, a 3GPP LTE uplink PHY implementation that supports up to 43.8Mbps data rate. --- paper_title: WARP: a flexible platform for clean-slate wireless medium access protocol design paper_content: The flexible interface between the medium access layer and the custom physical layer of the Rice University Wireless Open-Access Research Platform (WARP) provides a high performance research tool for clean-slate cross layer designs. As we target a community platform, we have implemented various basic PHY and MAC technologies over WARP. Moreover, we are implementing cross-layer schemes such as rate adaptation and crosslayer MIMO MAC protocols. In this demo, we demonstrate the flexibility of the interaction between the the WARP PHY and MAC layers by showing the capability to instantaneously change the modulation scheme, disabling/enabling MAC features such as carrier sensing or RTS/CTS 4-way handshake, and different multi-rate schemes. --- paper_title: Software Defined Radio: Challenges and Opportunities paper_content: Software Defined Radio (SDR) may provide flexible, upgradeable and longer lifetime radio equipment for the military and for civilian wireless communications infrastructure. SDR may also provide more flexible and possibly cheaper multi-standard-terminals for end users. 
It is also important as a convenient base technology for the future context-sensitive, adaptive and learning radio units referred to as cognitive radios. SDR also poses many challenges, however, some of which cause SDR to evolve more slowly than otherwise anticipated. Transceiver development challenges include size, weight and power issues such as the required computing capacity, but also SW architectural challenges such as waveform application portability. SDR has demanding implications for regulators, security organizations and business developers. --- paper_title: Airblue: a system for cross-layer wireless protocol development paper_content: Over the past few years, researchers have developed many cross-layer wireless protocols to improve the performance of wireless networks. Experimental evaluations of these protocols have been carried out mostly using software-defined radios, which are typically two to three orders of magnitude slower than commodity hardware. FPGA-based platforms provide much better speeds but are quite difficult to modify because of the way high-speed designs are typically implemented. Experimenting with cross-layer protocols requires a flexible way to convey information beyond the data itself from lower to higher layers, and a way for higher layers to configure lower layers dynamically and within some latency bounds. One also needs to be able to modify a layer's processing pipeline without triggering a cascade of changes. We have developed Airblue, an FPGA-based software radio platform, that has all these properties and runs at speeds comparable to commodity hardware. We discuss the design philosophy underlying Airblue that makes it relatively easy to modify it, and present early experimental results. --- paper_title: Cognitive radio: brain-empowered wireless communications paper_content: Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: (i) highly reliable communication whenever and wherever needed; and (ii) efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio. --- paper_title: Artificial Intelligence Based Cognitive Routing for Cognitive Radio Networks paper_content: Cognitive radio networks (CRNs) are networks of nodes equipped with cognitive radios that can optimize performance by adapting to network conditions. While cognitive radio networks (CRN) are envisioned as intelligent networks, relatively little research has focused on the network level functionality of CRNs. Although various routing protocols, incorporating varying degrees of adaptiveness, have been proposed for CRNs, it is imperative for the long term success of CRNs that the design of cognitive routing protocols be pursued by the research community.
Cognitive routing protocols are envisioned as routing protocols that fully and seamlessly incorporate AI-based techniques into their design. In this paper, we provide a self-contained tutorial on various AI and machine-learning techniques that have been, or can be, used for developing cognitive routing protocols. We also survey the application of various classes of AI techniques to CRNs in general, and to the problem of routing in particular. We discuss various decision making techniques and learning techniques from AI and document their current and potential applications to the problem of routing in CRNs. We also highlight the various inference, reasoning, modeling, and learning subtasks that a cognitive routing protocol must solve. Finally, open research issues and future directions of work are identified. --- paper_title: Security Aspects in Software Defined Radio and Cognitive Radio Networks: A Survey and A Way Ahead paper_content: Software Defined Radio (SDR) and Cognitive Radio (CR) are promising technologies, which can be used to alleviate the spectrum shortage problem or the barriers to communication interoperability in various application domains. The successful deployment of SDR and CR technologies will depend on the design and implementation of essential security mechanisms to ensure the robustness of networks and terminals against security attacks. SDR and CR may introduce entirely new classes of security threats and challenges including download of malicious software, licensed user emulation and selfish misbehaviors. An attacker could disrupt the basic functions of a CR network, cause harmful interference to licensed users or deny communication to other CR nodes. The research activity in this area has started only recently and many challenges are still to be resolved. This paper presents a survey of security aspects in SDR and CR. We identify the requirements for the deployment of SDR and CR, the main security threats and challenges and the related protection techniques. This paper provides an overview of the SDR and CR certification process and how it is related to the security aspects. Finally, this paper summarizes the most critical challenges in the context of the future evolution of SDR/CR technologies. --- paper_title: A Survey of Artificial Intelligence for Cognitive Radios paper_content: Cognitive radio (CR) is an enabling technology for numerous new capabilities such as dynamic spectrum access, spectrum markets, and self-organizing networks. To realize this diverse set of applications, CR researchers leverage a variety of artificial intelligence (AI) techniques. To help researchers better understand the practical implications of AI to their CR designs, this paper reviews several CR implementations that used the following AI techniques: artificial neural networks (ANNs), metaheuristic algorithms, hidden Markov models (HMMs), rule-based systems, ontology-based systems (OBSs), and case-based systems (CBSs). Factors that influence the choice of AI techniques, such as responsiveness, complexity, security, robustness, and stability, are discussed. To provide readers with a more concrete understanding, these factors are illustrated in an extended discussion of two CR designs. --- paper_title: NeXt generation/dynamic spectrum access/cognitive Radio Wireless Networks: A Survey paper_content: Today's wireless networks are characterized by a fixed spectrum assignment policy.
However, a large portion of the assigned spectrum is used sporadically and geographical variations in the utilization of assigned spectrum ranges from 15% to 85% with a high variance in time. The limited available spectrum and the inefficiency in the spectrum usage necessitate a new communication paradigm to exploit the existing wireless spectrum opportunistically. This new networking paradigm is referred to as NeXt Generation (xG) Networks as well as Dynamic Spectrum Access (DSA) and cognitive radio networks. The term xG networks is used throughout the paper. The novel functionalities and current research challenges of the xG networks are explained in detail. More specifically, a brief overview of the cognitive radio technology is provided and the xG network architecture is introduced. Moreover, the xG network functions such as spectrum management, spectrum mobility and spectrum sharing are explained in detail. The influence of these functions on the performance of the upper layer protocols such as routing and transport are investigated and open research issues in these areas are also outlined. Finally, the cross-layer design challenges in xG networks are discussed. --- paper_title: CogNet: an architectural foundation for experimental cognitive radio networks within the future internet paper_content: This paper describes a framework for research on architectural tradeoffs and protocol designs for cognitive radio networks at both the local network and the global internetwork levels. Several key architectural issues for cognitive radio networks are discussed, including control and management protocols, support for collaborative PHY, dynamic spectrum coordination, flexible MAC layer protocols, ad hoc group formation and cross-layer adaptation. The overall goal of this work is the design and validation of the control/management and data interfaces between cognitive radio nodes in a local network, and also between cognitive radio networks and the global Internet. Protocol design and implementation based on this framework will result in the CogNet architecture, a prototype open-source cognitive radio protocol stack. Experimental evaluations on emerging cognitive radio platforms are planned for future work, first in a wireless local-area radio network scenario using wireless testbeds such as ORBIT, and later as part of several end-to-end experiments using a wide-area network testbed such as PlanetLab (and GENI in the future). --- paper_title: Sora: high-performance software radio using general-purpose multi-core processors paper_content: This paper presents Sora, a fully programmable software radio platform on commodity PC architectures. Sora combines the performance and fidelity of hardware software-defined radio (SDR) platforms with the programmability and flexibility of general-purpose processor (GPP) SDR platforms. Sora uses both hardware and software techniques to address the challenges of using PC architectures for high-speed SDR. The Sora hardware components consist of a radio front-end for reception and transmission, and a radio control board for high-throughput, low-latency data transfer between radio and host memories. Sora makes extensive use of features of contemporary processor architectures to accelerate wireless protocol processing and satisfy protocol timing requirements, including using dedicated CPU cores, large low-latency caches to store lookup tables, and SIMD processor extensions for highly efficient physical layer processing on GPPs. 
Using the Sora platform, we have developed a few demonstration wireless systems, including SoftWiFi, an 802.11a/b/g implementation that seamlessly interoperates with commercial 802.11 NICs at all modulation rates, and SoftLTE, a 3GPP LTE uplink PHY implementation that supports up to 43.8Mbps data rate. --- paper_title: Breaking layer 2: A new architecture for programmable wireless interfaces paper_content: This paper introduces a new architecture for programmable wireless interfaces, aiming at responding to the emerging demand for wireless access flexibility and adaptability. Instead of implementing a specific MAC protocol stack, the proposed architecture supports a set of programmable services, devised to customize the wireless access operations according to specific network and application scenarios. The services are composed by means of simpler functions, which in turn work on system primitives (i.e. elementary non-programmable functionalities, natively provided by the system) dealing with the physical transmission and reception of the frames. Our approach significantly differs from software-radio solutions, since we argue that most practical needs for promptly adapting and customizing network features and performance may be accomplished by means of advanced and programmable interfaces exposed at a layer higher than the physical one (PHY). This choice does not rule out the possibility of using (i.e. dynamically selecting) advanced PHY mechanisms provided, as elementary primitives, by the interface manufacturers. --- paper_title: The WINLAB Network Centric Cognitive Radio Hardware Platform - WiNC2R paper_content: This paper presents the design goals and architecture of WiNC2R--the WINLAB Network Centric Cognitive radio hardware platform. The platform has been designed for flexible processing at both the radio physical layer and MAC/network layers with sustained bit-rates of ∼10 Mbps and higher. The hardware prototype supports multi-band operation with fast spectrum scanning, the ability to dynamically switch between a number of OFDM and DSSS modems and multiple MAC protocols. The radio modems, MAC, and network-layer protocols are implemented in a flexible manner using general-purpose processing engines and a set of dynamically configurable hardware accelerators. An FPGA based platform implementation currently in progress is described in terms of key hardware components including the software-defined modem, the flexible MAC engine and network-level processor. Preliminary prototyping results are reported, and a roadmap for further evolution of the WiNC2R board is provided. --- paper_title: OpenRadio: a programmable wireless dataplane paper_content: We present OpenRadio, a novel design for a programmable wireless dataplane that provides modular and declarative programming interfaces across the entire wireless stack. Our key conceptual contribution is a principled refactoring of wireless protocols into processing and decision planes. The processing plane includes directed graphs of algorithmic actions (eg. 54Mbps OFDM WiFi or special encoding for video). The decision plane contains the logic which dictates which directed graph is used for a particular packet (eg. picking between data and video graphs). The decoupling provides a declarative interface to program the platform while hiding all underlying complexity of execution. An operator only expresses decision plane rules and corresponding processing plane action graphs to assemble a protocol.
The scoped interface allows us to build a dataplane that arguably provides the right tradeoff between performance and flexibility. Our current system is capable of realizing modern wireless protocols (WiFi, LTE) on off-the-shelf DSP chips while providing flexibility to modify the PHY and MAC layers to implement protocol optimizations. --- paper_title: Wireless MAC processors: Programming MAC protocols on commodity Hardware paper_content: Programmable wireless platforms aim at responding to the quest for wireless access flexibility and adaptability. This paper introduces the notion of wireless MAC processors. Instead of implementing a specific MAC protocol stack, Wireless MAC processors do support a set of Medium Access Control “commands” which can be run-time composed (programmed) through software-defined state machines, thus providing the desired MAC protocol operation. We clearly distinguish from related work in this area as, unlike other works which rely on dedicated DSPs or programmable hardware platforms, we experimentally prove the feasibility of the wireless MAC processor concept over ultra-cheap commodity WLAN hardware cards. Specifically, we reflash the firmware of the commercial Broadcom AirForce54G off-the-shelf chipset, replacing its 802.11 WLAN MAC protocol implementation with our proposed extended state machine execution engine. We prove the flexibility of the proposed approach through three use-case implementation examples. --- paper_title: WARP: a flexible platform for clean-slate wireless medium access protocol design paper_content: The flexible interface between the medium access layer and the custom physical layer of the Rice University Wireless Open-Access Research Platform (WARP) provides a high performance research tool for clean-slate cross layer designs. As we target a community platform, we have implemented various basic PHY and MAC technologies over WARP. Moreover, we are implementing cross-layer schemes such as rate adaptation and crosslayer MIMO MAC protocols. In this demo, we demonstrate the flexibility of the interaction between the the WARP PHY and MAC layers by showing the capability to instantaneously change the modulation scheme, disabling/enabling MAC features such as carrier sensing or RTS/CTS 4-way handshake, and different multi-rate schemes. --- paper_title: MAClets: active MAC protocols over hard-coded devices paper_content: We introduce MAClets, software programs uploaded and executed on-demand over wireless cards, and devised to change the card's real-time medium access control operation. MAClets permit seamless reconfiguration of the MAC stack, so as to adapt it to mutated context and spectrum conditions and perform tailored performance optimizations hardly accountable by an once-for-all protocol stack design. Following traditional active networking principles, MAClets can be directly conveyed within data packets and executed on hard-coded devices acting as virtual MAC machines. Indeed, rather than executing a pre-defined protocol, we envision a new architecture for wireless cards based on a protocol interpreter (enabling code portability) and a powerful API. Experiments involving the distribution of MAClets within data packets, and their execution over commodity WLAN cards, show the flexibility and viability of the proposed concept. --- paper_title: Enabling MAC Protocol Implementations on Software-Defined Radios paper_content: Over the past few years a range of new Media Access Control (MAC) protocols have been proposed for wireless networks. 
This research has been driven by the observation that a single one-size-fits-all MAC protocol cannot meet the needs of diverse wireless deployments and applications. Unfortunately, most MAC functionality has traditionally been implemented on the wireless card for performance reasons, thus limiting the opportunities for MAC customization. Software-defined radios (SDRs) promise unprecedented flexibility, but their architecture has proven to be a challenge for MAC protocols. In this paper, we identify a minimum set of core MAC functions that must be implemented close to the radio in a high-latency SDR architecture to enable high performance and efficient MAC implementations. These functions include: precise scheduling in time, carrier sense, backoff, dependent packets, packet recognition, fine-grained radio control, and access to physical layer information. While we focus on an architecture where the bus latency exceeds common MAC interaction times (tens to hundreds of microseconds), other SDR architectures with lower latencies can also benefit from implementing a subset of these functions closer to the radio. We also define an API applicable to all SDR architectures that allows the host to control these functions, providing the necessary flexibility to implement a diverse range of MAC protocols. We show the effectiveness of our split-functionality approach through an implementation on the GNU Radio and USRP platforms. Our evaluation, based on microbenchmarks and end-to-end network measurements, shows that our design can simultaneously achieve high flexibility and high performance. --- paper_title: KNOWS: Cognitive Radio Networks Over White Spaces paper_content: FCC has proposed to allow unlicensed operations in the TV broadcast bands. This portion of the spectrum has several desirable properties for more robust data communication as compared to the ISM bands. However, there are a number of challenges in efficiently using the TV bands. For example, the available spectrum is fragmented, and its availability may vary over time. In this paper, we present a cognitive radio system, called KNOWS, to address these challenges. We present the design of our prototype, which includes a new hardware platform and a spectrum-aware Medium Access Control (MAC) protocol. We have implemented the MAC protocol in QualNet, and our results show that in most scenarios KNOWS increases the throughput by more than 200% when compared to an IEEE 802.11 based system. --- paper_title: A real time cognitive radio testbed for physical and link layer experiments paper_content: Cognitive radios have been advanced as a technology for the opportunistic use of under-utilized spectrum. Cognitive radio is able to sense the spectrum and detect the presence of primary users. However, primary users of the spectrum are skeptical about the robustness of this sensing process and have raised concerns with regards to interference from cognitive radios. Furthermore, while a number of techniques have been advanced to aid the sensing process, none of these techniques have been verified in a practical system. To alleviate these concerns, a real time testbed is required, which can aid the comparison of these techniques and enable the measurement and evaluation of key interference and performance metrics. In this paper, we present such a testbed, which is based on the BEE2, a multi-FPGA emulation engine. The BEE2 can connect to 18 radio front-ends, which can be configured as primary or secondary users.
Inherent parallelism of the FPGAs allows the simultaneous operation of multiple radios, which can communicate and exchange information via high speed low latency links --- paper_title: Decomposable MAC Framework for Highly Flexible and Adaptable MAC Realizations paper_content: Cognitive radios are slowly becoming a reality. Besides the need for hardware reconfigurability and the capability to sense spectrum opportunities, adaptability in the MAC designs are required so that the wireless communication systems can support cognitive radio functionalities. In this demo paper, we propose a MAC design framework which enables fast composition of MAC protocols which are best fitted to the application requirements, communication capabilities of the radio and current regulations and policies. Our design is based on decomposition principle and allows on-the-fly realization of the required MAC protocol from a set of basic functional components. By exposing extended meta-data and hardware functionalities for the MAC implementation through our granular components together with the support for run-time re-configuration, spectrum agile and cognitive MAC solutions can easily be realized. We validate our approach through realization of a few MAC solutions on the WARP board from Rice University, USA. We also demonstrate the ease of MAC realization, fast on-the-fly adaptation based on the spectral characteristics and high degree of code reuse. --- paper_title: The Mobiware Toolkit: Programmable Support for Adaptive Mobile Networking paper_content: Existing mobile systems (e.g., mobile IP, mobile ATM, and third-generation cellular systems) lack the intrinsic architectural flexibility to deal with the complexity of supporting adaptive mobile applications in wireless and mobile environments. We believe that there is a need to develop alternative network architectures from the existing ones to deal with the tremendous demands placed on underlying mobile signaling, adaptation management, and wireless transport systems in support of new mobile services (e.g., interactive multimedia and Web access). We present the design, implementation, and evaluation of mobiware, a mobile middleware toolkit that enables adaptive mobile services to dynamically exploit the intrinsic scalable properties of mobile multimedia applications in response to time-varying mobile network conditions. The mobiware toolkit is software-intensive and is built on CORBA and Java distributed object technology. Based on an open programmable paradigm developed by the COMET Group, mobiware runs on mobile devices, wireless access points, and mobile-capable switch/routers providing a set of open programmable interfaces and algorithms for adaptive mobile networking. --- paper_title: NetFPGA--An Open Platform for Gigabit-Rate Network Switching and Routing paper_content: The NetFPGA platform enables students and researchers to build high-performance networking systems in hardware. A new version of the NetFPGA platform has been developed and is available for use by the academic community. The NetFPGA 2.1 platform now has interfaces that can be parameterized, therefore enabling development of modular hardware designs with varied word sizes. It also includes more logic and faster memory than the previous platform. Field Programmable Gate Array (FPGA) logic is used to implement the core data processing functions while software running on embedded cores within the FPGA and/or programs running on an attached host computer implement only control functions. 
Reference designs and component libraries have been developed for the CS344 course at Stanford University. Open-source Verilog code is available for download from the project website. --- paper_title: NetFPGA: reusable router architecture for experimental research paper_content: Our goal is to enable fast prototyping of networking hardware (e.g. modified Ethernet switches and IP routers) for teaching and research. To this end, we built and made available the NetFPGA platform. Starting from open-source reference designs, students and researchers create their designs in Verilog, and then download them to the NetFPGA board where they can process packets at line-rate for 4-ports of 1GE. The board is becoming widely used for teaching and research, and so it has become important to make it easy to re-use modules and designs. We have created a standard interface between modules, making it easier to plug modules together in pipelines, and to create new re-usable designs. In this paper we describe our modular design, and how we have used it to build several systems, including our IP router reference design and some extensions to it. --- paper_title: TRUMP: Supporting efficient realization of protocols for cognitive radio networks paper_content: Cognitive radios require fast reconfiguration of the protocol stack for dynamic spectrum access and run-time performance optimization. In order to provide rapid on-the-fly adaptability of PHY/MAC protocols, we have designed and implemented TRUMP: a Toolchain for RUn-tiMe Protocol realization. It includes a meta-language compiler, logic controller and an optimizer. TRUMP allows run-time realization and optimization of cognitive network protocols for the requirements of a particular application, communication capabilities of the radio, the current spectrum regulation and policies. TRUMP supports efficient multi-threading for multi-core platforms in order to meet variable computational requirements and to allow parallelization of PHY/MAC processing for cognitive radio systems. We have carried out the performance evaluation for different metrics on WARP SDR platform and embedded Linux based PCs. Our results indicate that TRUMP allows reconfiguration of protocols in the order of a few microseconds through run-time linking of different components, thus meeting the strict timeliness requirements imposed by PHY/MAC processing. --- paper_title: The Genesis Kernel: a virtual network operating system for spawning network architectures paper_content: The deployment of network architectures is often manual, ad hoc and time consuming. In this paper we introduce a new paradigm for automating the life cycle process for the creation, deployment and management of network architectures and envision programmable networks capable of spawning distinct "child" virtual networks with their own transport, control and management systems. A child network operates on a subset of its "parent's" network resources and in isolation from other virtual networks. Child networks support the controlled access to communities of users with specific connectivity, security and quality of service requirements. In this paper we introduce the Genesis Kernel, a virtual network operating system capable of profiling, spawning and managing virtual network architectures on-the-fly. 
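As a rough illustration of the spawning idea described in the Genesis Kernel abstract above, the sketch below models a parent virtual network that carves out a subset of its resources for child networks and keeps the children isolated from one another. The classes and method names are hypothetical and only convey the life-cycle concept; they are not the actual Genesis Kernel interfaces.

```python
# Hypothetical model of "spawning" child virtual networks from a parent's resources.
# Each child runs in isolation on the capacity it was granted at spawn time.

class VirtualNetwork:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity_mbps = capacity_mbps   # resources owned by this network
        self.control_system = "default"      # control plane used by this network
        self.children = []

    def spawn(self, name, capacity_mbps, control_system="default"):
        """Create a child network on a subset of this network's resources."""
        allocated = sum(c.capacity_mbps for c in self.children)
        if allocated + capacity_mbps > self.capacity_mbps:
            raise ValueError("not enough spare capacity to spawn child")
        child = VirtualNetwork(name, capacity_mbps)
        child.control_system = control_system   # e.g., a tailored control plane
        self.children.append(child)
        return child

    def tear_down(self, child):
        """Return the child's resources to the parent."""
        self.children.remove(child)


if __name__ == "__main__":
    root = VirtualNetwork("parent", capacity_mbps=1000)
    vpn_a = root.spawn("tenant-a", 300, control_system="qos-aware")
    vpn_b = root.spawn("tenant-b", 500)
    print([(c.name, c.capacity_mbps) for c in root.children])
```

The sketch captures only the coarse-grained resource accounting; a real spawning kernel would also instantiate the child's transport, control and management systems as the abstract describes.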
--- paper_title: RouteBricks: exploiting parallelism to scale software routers paper_content: We revisit the problem of scaling software routers, motivated by recent advances in server technology that enable high-speed parallel processing--a feature router workloads appear ideally suited to exploit. We propose a software router architecture that parallelizes router functionality both across multiple servers and across multiple cores within a single server. By carefully exploiting parallelism at every opportunity, we demonstrate a 35Gbps parallel router prototype; this router capacity can be linearly scaled through the use of additional servers. Our prototype router is fully programmable using the familiar Click/Linux environment and is built entirely from off-the-shelf, general-purpose server hardware. --- paper_title: Wireless MAC processors: Programming MAC protocols on commodity Hardware paper_content: Programmable wireless platforms aim at responding to the quest for wireless access flexibility and adaptability. This paper introduces the notion of wireless MAC processors. Instead of implementing a specific MAC protocol stack, Wireless MAC processors do support a set of Medium Access Control “commands” which can be run-time composed (programmed) through software-defined state machines, thus providing the desired MAC protocol operation. We clearly distinguish from related work in this area as, unlike other works which rely on dedicated DSPs or programmable hardware platforms, we experimentally prove the feasibility of the wireless MAC processor concept over ultra-cheap commodity WLAN hardware cards. Specifically, we reflash the firmware of the commercial Broadcom AirForce54G off-the-shelf chipset, replacing its 802.11 WLAN MAC protocol implementation with our proposed extended state machine execution engine. We prove the flexibility of the proposed approach through three use-case implementation examples. --- paper_title: A survey of dynamically adaptable protocol stacks paper_content: The continuous development of new networking standards over the last decade has resulted in an unprecedented proliferation of interfacing technologies and their associated protocol stacks. Never before was such a wide gamut of network architectures, protocol configurations and deployment options available to network designers. Alas, this significant increase in flexibility has come at the cost of an increased complexity in network management tasks, particularly with regard to the accommodation of performance requirements. Especially in mobile settings, this is due to the greater probability of unforeseen communication contexts that renders the efficient provisioning of multiple dissimilar protocol stacks a challenging task. To address this unpredictability, several approaches based on the dynamic adaptation of protocol stacks during runtime have been proposed and investigated over the years. This article surveys major research efforts dealing with the introduction of a dynamic adaptation capacity into protocol stack subsystems. To this end, we present the respective architectures with a focus on their functional entities and their particular mode of operation. Most importantly, we elaborate on the various design approaches to adaptability and the entailed degree of coupling between protocol stack-and layer-entities and their impact on resource allocation models. 
Furthermore, we classify these research efforts according to a taxonomy for non-monolithic protocol stacks and discuss design trade-offs inherent in each class. We conclude the article with a summary of the key design principles for adaptable protocol stack architectures. --- paper_title: MAClets: active MAC protocols over hard-coded devices paper_content: We introduce MAClets, software programs uploaded and executed on-demand over wireless cards, and devised to change the card's real-time medium access control operation. MAClets permit seamless reconfiguration of the MAC stack, so as to adapt it to mutated context and spectrum conditions and perform tailored performance optimizations hardly accountable by an once-for-all protocol stack design. Following traditional active networking principles, MAClets can be directly conveyed within data packets and executed on hard-coded devices acting as virtual MAC machines. Indeed, rather than executing a pre-defined protocol, we envision a new architecture for wireless cards based on a protocol interpreter (enabling code portability) and a powerful API. Experiments involving the distribution of MAClets within data packets, and their execution over commodity WLAN cards, show the flexibility and viability of the proposed concept. --- paper_title: The Tempest, a Framework for Safe, Resource Assured, Programmable Networks paper_content: Most research in network programmability has stressed the flexibility engendered by increasing the ability of users to configure network elements for their own purposes, without addressing the larger issues of how such advanced control systems can coexist both with each other and with more conventional ones. The Tempest framework presented here extends beyond the provision of simple network programmability to address these larger issues. In particular, we show how network programmability can be achieved without jeopardizing the integrity of the network as a whole, how network programmability fits in with existing networks, and how programmability can be offered at different levels of granularity. Our approach is based on the Tempest's ability to dynamically create virtual private networks over a switched transport architecture (e.g., an ATM network). Each VPN is assigned a set of network resources which can be controlled using either a well-known control system or a control system tailored to the specific needs of a distributed application. The first level of programmability in the Tempest is fairly coarse-grained: an entire virtual network can be programmed by a third party. At a finer level of granularity the Tempest allows user supplied code to be injected into parts of an operational virtual network, thus allowing application specific customization of network control. The article shows how the Tempest framework allows these new approaches to coexist with more conventional solutions. --- paper_title: SwitchBlade: a platform for rapid deployment of network protocols on programmable hardware paper_content: We present SwitchBlade, a platform for rapidly deploying custom protocols on programmable hardware. SwitchBlade uses a pipeline-based design that allows individual hardware modules to be enabled or disabled on the fly, integrates software exception handling, and provides support for forwarding based on custom header fields. SwitchBlade's ease of programmability and wire-speed performance enables rapid prototyping of custom data-plane functions that can be directly deployed in a production network. 
SwitchBlade integrates common packet-processing functions as hardware modules, enabling different protocols to use these functions without having to resynthesize hardware. SwitchBlade's customizable forwarding engine supports both longest-prefix matching in the packet header and exact matching on a hash value. SwitchBlade's software exceptions can be invoked based on either packet or flow-based rules and updated quickly at runtime, thus making it easy to integrate more flexible forwarding function into the pipeline. SwitchBlade also allows multiple custom data planes to operate in parallel on the same physical hardware, while providing complete isolation for protocols running in parallel. We implemented SwitchBlade using NetFPGA board, but SwitchBlade can be implemented with any FPGA. To demonstrate SwitchBlade's flexibility, we use SwitchBlade to implement and evaluate a variety of custom network protocols: we present instances of IPv4, IPv6, Path Splicing, and an OpenFlow switch, all running in parallel while forwarding packets at line rate. --- paper_title: XORP: an open platform for network research paper_content: Network researchers face a significant problem when deploying software in routers, either for experimentation or for pilot deployment. Router platforms are generally not open systems, in either the open-source or the open-API sense. In this paper we discuss the problems this poses, and present an eXtensible Open Router Platform (XORP) that we are developing to address these issues. Key goals are extensibility, performance and robustness. We show that different parts of a router need to prioritize these differently, and examine techniques by which we can satisfy these often conflicting goals. We aim for XORP to be both a research tool and a stable deployment platform, thus easing the transition of new ideas from the lab to the real world. --- paper_title: NetFPGA--An Open Platform for Gigabit-Rate Network Switching and Routing paper_content: The NetFPGA platform enables students and researchers to build high-performance networking systems in hardware. A new version of the NetFPGA platform has been developed and is available for use by the academic community. The NetFPGA 2.1 platform now has interfaces that can be parameterized, therefore enabling development of modular hardware designs with varied word sizes. It also includes more logic and faster memory than the previous platform. Field Programmable Gate Array (FPGA) logic is used to implement the core data processing functions while software running on embedded cores within the FPGA and/or programs running on an attached host computer implement only control functions. Reference designs and component libraries have been developed for the CS344 course at Stanford University. Open-source Verilog code is available for download from the project website. --- paper_title: RouteBricks: exploiting parallelism to scale software routers paper_content: We revisit the problem of scaling software routers, motivated by recent advances in server technology that enable high-speed parallel processing--a feature router workloads appear ideally suited to exploit. We propose a software router architecture that parallelizes router functionality both across multiple servers and across multiple cores within a single server. By carefully exploiting parallelism at every opportunity, we demonstrate a 35Gbps parallel router prototype; this router capacity can be linearly scaled through the use of additional servers. 
Our prototype router is fully programmable using the familiar Click/Linux environment and is built entirely from off-the-shelf, general-purpose server hardware. --- paper_title: SwitchBlade: a platform for rapid deployment of network protocols on programmable hardware paper_content: We present SwitchBlade, a platform for rapidly deploying custom protocols on programmable hardware. SwitchBlade uses a pipeline-based design that allows individual hardware modules to be enabled or disabled on the fly, integrates software exception handling, and provides support for forwarding based on custom header fields. SwitchBlade's ease of programmability and wire-speed performance enables rapid prototyping of custom data-plane functions that can be directly deployed in a production network. SwitchBlade integrates common packet-processing functions as hardware modules, enabling different protocols to use these functions without having to resynthesize hardware. SwitchBlade's customizable forwarding engine supports both longest-prefix matching in the packet header and exact matching on a hash value. SwitchBlade's software exceptions can be invoked based on either packet or flow-based rules and updated quickly at runtime, thus making it easy to integrate more flexible forwarding function into the pipeline. SwitchBlade also allows multiple custom data planes to operate in parallel on the same physical hardware, while providing complete isolation for protocols running in parallel. We implemented SwitchBlade using NetFPGA board, but SwitchBlade can be implemented with any FPGA. To demonstrate SwitchBlade's flexibility, we use SwitchBlade to implement and evaluate a variety of custom network protocols: we present instances of IPv4, IPv6, Path Splicing, and an OpenFlow switch, all running in parallel while forwarding packets at line rate. --- paper_title: XORP: an open platform for network research paper_content: Network researchers face a significant problem when deploying software in routers, either for experimentation or for pilot deployment. Router platforms are generally not open systems, in either the open-source or the open-API sense. In this paper we discuss the problems this poses, and present an eXtensible Open Router Platform (XORP) that we are developing to address these issues. Key goals are extensibility, performance and robustness. We show that different parts of a router need to prioritize these differently, and examine techniques by which we can satisfy these often conflicting goals. We aim for XORP to be both a research tool and a stable deployment platform, thus easing the transition of new ideas from the lab to the real world. --- paper_title: Towards programmable enterprise WLANS with Odin paper_content: We present Odin, an SDN framework to introduce programmability in enterprise wireless local area networks (WLANs). Enterprise WLANs need to support a wide range of services and functionalities. This includes authentication, authorization and accounting, policy, mobility and interference management, and load balancing. WLANs also exhibit unique challenges. In particular, access point (AP) association decisions are not made by the infrastructure, but by clients. In addition, the association state machine combined with the broadcast nature of the wireless medium requires keeping track of a large amount of state changes. To this end, Odin builds on a light virtual AP abstraction that greatly simplifies client management. 
Odin does not require any client side modifications and its design supports WPA2 Enterprise. With Odin, a network operator can implement enterprise WLAN services as network applications. A prototype implementation demonstrates Odin's feasibility. --- paper_title: SoftRAN: software defined radio access network paper_content: An important piece of the cellular network infrastructure is the radio access network (RAN) that provides wide-area wireless connectivity to mobile devices. The fundamental problem the RAN solves is figuring out how best to use and manage limited spectrum to achieve this connectivity. In a dense wireless deployment with mobile nodes and limited spectrum, it becomes a difficult task to allocate radio resources, implement handovers, manage interference, balance load between cells, etc. We argue that LTE's current distributed control plane is suboptimal in achieving the above objective. We propose SoftRAN, a fundamental rethink of the radio access layer. SoftRAN is a software defined centralized control plane for radio access networks that abstracts all base stations in a local geographical area as a virtual big-base station comprised of a central controller and radio elements (individual physical base stations). In defining such an architecture, we create a framework through which a local geographical network can effectively perform load balancing and interference management, as well as maximize throughput, global utility, or any other objective. --- paper_title: Sensor OpenFlow: Enabling Software-Defined Wireless Sensor Networks paper_content: While it has been a belief for over a decade that wireless sensor networks (WSN) are application-specific, we argue that it can lead to resource underutilization and counter-productivity. We also identify two other main problems with WSN: rigidity to policy changes and difficulty to manage. In this paper, we take a radical, yet backward and peer compatible, approach to tackle these problems inherent to WSN. We propose a Software-Defined WSN architecture and address key technical challenges for its core component, Sensor OpenFlow. This work represents the first effort that synergizes software-defined networking and WSN. --- paper_title: SoftCell: scalable and flexible cellular core network architecture paper_content: Cellular core networks suffer from inflexible and expensive equipment, as well as from complex control-plane protocols. To address these challenges, we present SoftCell, a scalable architecture that supports fine-grained policies for mobile devices in cellular core networks, using commodity switches and servers. SoftCell enables operators to realize high-level service policies that direct traffic through sequences of middleboxes based on subscriber attributes and applications. To minimize the size of the forwarding tables, SoftCell aggregates traffic along multiple dimensions---the service policy, the base station, and the mobile device---at different switches in the network. Since most traffic originates from mobile devices, SoftCell performs fine-grained packet classification at the access switches, next to the base stations, where software switches can easily handle the state and bandwidth requirements. SoftCell guarantees that packets belonging to the same connection traverse the same sequence of middleboxes in both directions, even in the presence of mobility. 
We demonstrate that SoftCell improves the scalability and flexibility of cellular core networks by analyzing real LTE workloads, performing micro-benchmarks on our prototype controller as well as large-scale simulations. --- paper_title: Software Defined Wireless Networks: Unbridling SDNs paper_content: The software defined networking (SDN) paradigm promises to dramatically simplify network configuration and resource management. Such features are extremely valuable to network operators and therefore, the industrial (besides the academic) research and development community is paying increasing attention to SDN. Although wireless equipment manufacturers are increasing their involvement in SDN-related activities, to date there is not a clear and comprehensive understanding of the opportunities offered by SDN in most common networking scenarios involving wireless infrastructure-less communications and how SDN concepts should be adapted to suit the characteristics of wireless and mobile communications. This paper is a first attempt to fill this gap as it aims at analyzing how SDN can be beneficial in wireless infrastructure-less networking environments with special emphasis on wireless personal area networks (WPAN). Furthermore, a possible approach (called SDWN) for such environments is presented and some design guidelines are provided. --- paper_title: A survey of mobile cloud computing: architecture, applications, and approaches paper_content: Together with the explosive growth of mobile applications and the emergence of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers gain an overview of MCC, including its definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. --- paper_title: MPAP: virtualization architecture for heterogenous wireless APs paper_content: This demonstration shows a novel virtualization architecture, called Multi-Purpose Access Point (MPAP), which can virtualize multiple heterogeneous wireless standards based on software radio. The basic idea is to deploy a wide-band radio front-end to receive wireless signals from all wireless standards sharing the same spectrum band, and use separate software base-bands to demodulate the information stream for each wireless standard. Based on software radio, MPAP consolidates multiple wireless devices into a single hardware platform, and allows them to share the same general-purpose computing resource. Different software base-bands can easily communicate and coordinate with one another. Thus, it also provides better coexistence among heterogeneous wireless standards. As an example, we demonstrate the use of non-contiguous OFDM in the 802.11g PHY to avoid mutual interference with narrow-band ZigBee communication. --- paper_title: Cooperative spectrum sensing in TV White Spaces: When Cognitive Radio meets Cloud paper_content: A Cognitive Radio Cloud Network (CRCN) in TV White Spaces (TVWS) is proposed in this paper.
Under the infrastructure of CRCN, cooperative spectrum sensing (SS) and resource scheduling in TVWS can be efficiently implemented, making use of the scalability and the vast storage and computing capacity of the Cloud. Based on the sensing reports collected on the Cognitive Radio Cloud (CRC) from distributed secondary users (SUs), we study and implement a sparse Bayesian learning (SBL) algorithm for cooperative SS in TVWS using Microsoft's Windows Azure Cloud platform. A database for the estimated locations and spectrum power profiles of the primary users is established on CRC with Microsoft's SQL Azure. Moreover, to enhance the performance of the SBL-based SS on CRC, a hierarchical parallelization method is also implemented with Microsoft's dotNet 4.0 in a MapReduce-like programming model. Based on our simulation studies, a proper programming model and partitioning of the sensing data play crucial roles in the performance of the SBL-based SS on the Cloud. --- paper_title: CloudMAC: towards software defined WLANs paper_content: Traditional enterprise WLAN management systems are hard to extend and require powerful access points (APs). In this paper we introduce and evaluate CloudMAC, an architecture for enterprise WLANs in which MAC frames are generated and processed on virtual APs hosted in a datacenter. The APs only need to forward MAC frames. The APs and the servers are connected via an OpenFlow-enabled network, which allows controlling where and how MAC frames are transmitted. --- paper_title: Virtual basestation: architecture for an open shared WiMAX framework paper_content: This paper presents the architecture and performance evaluation of a virtualized wide-area "4G" cellular wireless network. Specifically, it addresses the challenges of virtualization of resources in a cellular base station to enable shared use by multiple independent slice users (experimenters or mobile virtual network operators), each with possibly distinct flow types and network layer protocols. The proposed virtual basestation architecture is based on an external substrate which uses a layer-2 switched datapath, and an arbitrated control path to the WiMAX basestation. The framework implements virtualization of the base station's radio resources to achieve isolation between multiple virtual networks. An algorithm for weighted fair sharing among multiple slices based on an airtime fairness metric has been implemented for the first release. Preliminary experimental results from the virtual basestation prototype are given, demonstrating mobile network performance, isolation across slices with different flow types, and custom flow scheduling capabilities. --- paper_title: LTE wireless virtualization and spectrum management paper_content: Many research initiatives have started looking into Future Internet solutions in order to satisfy the ever-increasing requirements on the Internet and also to cope with the challenges existing in the current one. Some are proposing further enhancements while others are proposing completely new approaches. Network Virtualization is one solution that is able to combine these approaches and therefore could play a central role in the Future Internet. It will enable the existence of multiple virtual networks on a common infrastructure even with different network architectures. Network Virtualization means setting up a network composed of individual virtualized network components, such as nodes, links, and routers.
Mobility will remain a major requirement, which means that also wireless resources need to be virtualized. In this paper the Long Term Evolution (LTE) was chosen as a case study to extend Network Virtualization into the wireless area. --- paper_title: Wireless Going in the Cloud: A Promising Concept or Just Marketing Hype? paper_content: Cloud computing is a computing service paradigm which attracts increasing attention from academia and industry. In this paper we investigate the use of the terms cloud and cloud computing in the wireless communications and networking domain. We identify four distinct usages of the term cloud in this context and analyze their meaning and relevance to the cloud computing concept. For one of these meanings (which we term Wireless Networking Functionality as a Service) we provide more extensive descriptions of commercial services and proposed architectural designs that illustrate this concept and discuss their advantages and technical challenges. We conclude that the introduction of certain cloud computing ideas in the field of wireless telecommunications and networking not only has the potential to bring along some of the generic advantages of cloud computing but, in some cases, can offer added benefits that are wireless domain specific. --- paper_title: Wireless network cloud: architecture and system requirements paper_content: With the growth of the mobile communication network, from second-generation to third-generation or fourth-generation networks, technologists in the mobile industry continue to consider advanced wireless network architectures that have the potential to reduce networking costs and provide increased flexibility with respect to network features. In this paper, we propose the wireless network cloud (WNC), a wireless system architecture for a wireless access network. This system makes use of emerging cloud-computing technology and various technologies involved with wireless infrastructure, such as software radio technology and remote radio head technology. Based on open information technology architecture, the WNC provides all the necessary transmission and processing resources for a wireless access network operating in a cloud mode. Note that it is useful to separate the hardware and software for different wireless standards and various services and business models, as well as to meet the new system requirements for emerging wireless technologies, such as collaborative processing at different scales of network use. We analyze several important system challenges involving computational requirements of virtual base stations, I/O throughput, and timing networks for synchronization. Based on current information technologies, we make several suggestions with respect to future system design. --- paper_title: Radio access network virtualization for future mobile carrier networks paper_content: This article presents a survey of cellular network sharing, which is a key building block for virtualizing future mobile carrier networks in order to address the explosive capacity demand of mobile traffic, and reduce the CAPEX and OPEX burden faced by operators to handle this demand. We start by reviewing the 3GPP network sharing standardized functionality followed by a discussion on emerging business models calling for additional features. Then an overview of the RAN sharing enhancements currently being considered by the 3GPP RSE Study Item is presented. 
Based on the developing network sharing needs, a summary of the state of the art of mobile carrier network virtualization is provided, encompassing RAN sharing as well as a higher level of base station programmability and customization for the sharing entities. As an example of RAN virtualization techniques feasibility, a solution based on spectrum sharing is presented: the network virtualization substrate (NVS), which can be natively implemented in base stations. NVS performance is evaluated in an LTE network by means of simulation, showing that it can meet the needs of future virtualized mobile carrier networks in terms of isolation, utilization, and customization. --- paper_title: Enable flexible spectrum access with spectrum virtualization paper_content: Enabling flexible spectrum access (FSA) in existing wireless networks is challenging due to the limited spectrum programmability - the ability to change spectrum properties of a signal to match an arbitrary frequency allocation. This paper argues that spectrum programmability can be separated from general wireless physical layer (PHY) modulation. Therefore, we can support flexible spectrum programmability by inserting a new spectrum virtualization layer (SVL) directly below traditional wireless PHY, and enable FSA for wireless networks without changing their PHY designs. SVL provides a virtual baseband abstraction to wireless PHY, which is static, contiguous, with a desirable width defined by the PHY. At the sender side, SVL reshapes the modulated baseband signals into waveform that matches the dynamically allocated physical frequency bands - which can be of different width, or non-contiguous - while keeping the modulated information unchanged. At the receiver side, SVL performs the inverse reshaping operation that collects the waveform from each physical band, and reconstructs the original modulated signals for PHY. All these reshaping operations are performed at the signal level and therefore SVL is agnostic and transparent to upper PHY. We have implemented a prototype of SVL on a software radio platform, and tested it with various wireless PHYs. Our experiments show SVL is flexible and effective to support FSA in existing wireless networks. --- paper_title: Software Defined Wireless Networks: Unbridling SDNs paper_content: The {\it software defined networking} (SDN) paradigm promises to dramatically simplify network configuration and resource management. Such features are extremely valuable to network operators and therefore, the industrial (besides the academic) research and development community is paying increasing attention to SDN. Although wireless equipment manufacturers are increasing their involvement in SDN-related activities, to date there is not a clear and comprehensive understanding of what are the opportunities offered by SDN in most common networking scenarios involving wireless infrastructure less communications and how SDN concepts should be adapted to suit the characteristics of wireless and mobile communications. This paper is a first attempt to fill this gap as it aims at analyzing how SDN can be beneficial in wireless infrastructure less networking environments with special emphasis on wireless personal area networks (WPAN). Furthermore, a possible approach (called \emph{SDWN}) for such environments is presented and some design guidelines are provided. 
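The "Enable flexible spectrum access with spectrum virtualization" abstract above describes a spectrum virtualization layer (SVL) that reshapes a contiguous virtual baseband onto dynamically allocated, possibly non-contiguous physical bands and inverts the operation at the receiver. The following is only a rough numpy sketch of that reshaping idea using a simple FFT-bin remapping; the sizes, bin map, and function names are assumptions for illustration and do not reflect the actual SVL implementation.

    import numpy as np

    # Toy illustration: a 64-bin virtual baseband is spread over two
    # non-contiguous chunks of a 128-bin physical spectrum and rebuilt
    # at the receiver. All sizes and the bin map are assumed values.
    N_VIRT, N_PHYS = 64, 128

    def reshape_tx(baseband, bin_map):
        # Sender side: place each virtual-baseband bin on its allocated physical bin.
        spectrum = np.fft.fft(baseband, N_VIRT)
        physical = np.zeros(N_PHYS, dtype=complex)
        physical[bin_map] = spectrum
        return np.fft.ifft(physical)

    def reshape_rx(samples, bin_map):
        # Receiver side: collect the allocated bins and rebuild the virtual baseband.
        spectrum = np.fft.fft(samples, N_PHYS)
        return np.fft.ifft(spectrum[bin_map])

    bin_map = np.r_[8:40, 80:112]                               # two allocated chunks, 64 bins total
    tx = np.exp(2j * np.pi * 5 * np.arange(N_VIRT) / N_VIRT)    # any modulated baseband signal
    rx = reshape_rx(reshape_tx(tx, bin_map), bin_map)
    print(np.allclose(rx, tx))                                  # True: the PHY signal is unchanged

The point of the sketch is only that the reshaping is invertible and happens purely at the signal level, which is why the PHY above such a layer does not need to change.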
--- paper_title: OpenRadio: a programmable wireless dataplane paper_content: We present OpenRadio, a novel design for a programmable wireless dataplane that provides modular and declarative programming interfaces across the entire wireless stack. Our key conceptual contribution is a principled refactoring of wireless protocols into processing and decision planes. The processing plane includes directed graphs of algorithmic actions (eg. 54Mbps OFDM WiFi or special encoding for video). The decision plane contains the logic which dictates which directed graph is used for a particular packet (eg. picking between data and video graphs). The decoupling provides a declarative interface to program the platform while hiding all underlying complexity of execution. An operator only expresses decision plane rules and corresponding processing plane action graphs to assemble a protocol. The scoped interface allows us to build a dataplane that arguably provides the right tradeoff between performance and flexibility. Our current system is capable of realizing modern wireless protocols (WiFi, LTE) on off-the-shelf DSP chips while providing flexibility to modify the PHY and MAC layers to implement protocol optimizations. --- paper_title: OpenRoads: empowering research in mobile networks paper_content: We present OpenRoads, an open-source platform for innovation in mobile networks. OpenRoads enable researchers to innovate using their own production networks, through providing an wireless extension OpenFlow. Therefore, you can think of OpenRoads as "OpenFlow Wireless". The OpenRoads' architecture consists of three layers: flow, slicing and controller. These layers provide flexible control, virtualization and high-level abstraction. This allows researchers to implement wildly different algorithms and run them concurrently in one network. OpenRoads also incorporates multiple wireless technologies, specifically WiFi and WiMAX. We have deployed OpenRoads, and used it as our production network. Our goal here is for those to deploy OpenRoads and build their own experiments on it. --- paper_title: Towards programmable enterprise WLANS with Odin paper_content: We present Odin, an SDN framework to introduce programmability in enterprise wireless local area networks (WLANs). Enterprise WLANs need to support a wide range of services and functionalities. This includes authentication, authorization and accounting, policy, mobility and interference management, and load balancing. WLANs also exhibit unique challenges. In particular, access point (AP) association decisions are not made by the infrastructure, but by clients. In addition, the association state machine combined with the broadcast nature of the wireless medium requires keeping track of a large amount of state changes. To this end, Odin builds on a light virtual AP abstraction that greatly simplifies client management. Odin does not require any client side modifications and its design supports WPA2 Enterprise. With Odin, a network operator can implement enterprise WLAN services as network applications. A prototype implementation demonstrates Odin's feasibility. --- paper_title: Blueprint for introducing innovation into wireless mobile networks paper_content: In the past couple of years we've seen quite a change in the wireless industry: Handsets have become mobile computers running user-contributed applications on (potentially) open operating systems. It seems we are on a path towards a more open ecosystem; one that has been previously closed and proprietary. 
The biggest winners are the users, who will have more choice among competing, innovative ideas. The same cannot be said for the wireless network infrastructure, which remains closed and (mostly) proprietary, and where innovation is bogged down by a glacial standards process. Yet as users, we are surrounded by abundant wireless capacity and multiple wireless networks (WiFi and cellular), with most of the capacity off-limits to us. It seems industry has little incentive to change, preferring to hold onto control as long as possible, keeping an inefficient and closed system in place. This paper is a "call to arms" to the research community to help move the network forward on a path to greater openness. We envision a world in which users can move freely between any wireless infrastructure, while providing payment to infrastructure owners, encouraging continued investment. We think the best path to get there is to separate the network service from the underlying physical infrastructure, and allow rapid innovation of network services, contributed by researchers, network operators, equipment vendors and third party developers. We propose to build and deploy an open - but backward compatible - wireless network infrastructure that can be easily deployed on college campuses worldwide. Through virtualization, we allow researchers to experiment with new network services directly in their production network. --- paper_title: SoftRAN: software defined radio access network paper_content: An important piece of the cellular network infrastructure is the radio access network (RAN) that provides wide-area wireless connectivity to mobile devices. The fundamental problem the RAN solves is figuring out how best to use and manage limited spectrum to achieve this connectivity. In a dense wireless deployment with mobile nodes and limited spectrum, it becomes a difficult task to allocate radio resources, implement handovers, manage interference, balance load between cells, etc. We argue that LTE's current distributed control plane is suboptimal in achieving the above objective. We propose SoftRAN, a fundamental rethink of the radio access layer. SoftRAN is a software defined centralized control plane for radio access networks that abstracts all base stations in a local geographical area as a virtual big-base station comprised of a central controller and radio elements (individual physical base stations). In defining such an architecture, we create a framework through which a local geographical network can effectively perform load balancing and interference management, as well as maximize throughput, global utility, or any other objective. --- paper_title: SoftCell: scalable and flexible cellular core network architecture paper_content: Cellular core networks suffer from inflexible and expensive equipment, as well as from complex control-plane protocols. To address these challenges, we present SoftCell, a scalable architecture that supports fine-grained policies for mobile devices in cellular core networks, using commodity switches and servers. SoftCell enables operators to realize high-level service policies that direct traffic through sequences of middleboxes based on subscriber attributes and applications. To minimize the size of the forwarding tables, SoftCell aggregates traffic along multiple dimensions---the service policy, the base station, and the mobile device---at different switches in the network. 
Since most traffic originates from mobile devices, SoftCell performs fine-grained packet classification at the access switches, next to the base stations, where software switches can easily handle the state and bandwidth requirements. SoftCell guarantees that packets belonging to the same connection traverse the same sequence of middleboxes in both directions, even in the presence of mobility. We demonstrate that SoftCell improves the scalability and flexibility of cellular core networks by analyzing real LTE workloads, performing micro-benchmarks on our prototype controller as well as large-scale simulations. --- paper_title: Toward Software-Defined Cellular Networks paper_content: Existing cellular networks suffer from inflexible and expensive equipment, complex control-plane protocols, and vendor-specific configuration interfaces. In this position paper, we argue that software defined networking (SDN) can simplify the design and management of cellular data networks, while enabling new services. However, supporting many subscribers, frequent mobility, fine-grained measurement and control, and real-time adaptation introduces new scalability challenges that future SDN architectures should address. As a first step, we propose extensions to controller platforms, switches, and base stations to enable controller applications to (i) express high-level policies based on subscriber attributes, rather than addresses and locations, (ii) apply real-time, fine-grained control through local agents on the switches, (iii)perform deep packet inspection and header compression on packets, and (iv)remotely manage shares of base-station resources. --- paper_title: Sensor OpenFlow: Enabling Software-Defined Wireless Sensor Networks paper_content: While it has been a belief for over a decade that wireless sensor networks (WSN) are application-specific, we argue that it can lead to resource underutilization and counter-productivity. We also identify two other main problems with WSN: rigidity to policy changes and difficulty to manage. In this paper, we take a radical, yet backward and peer compatible, approach to tackle these problems inherent to WSN. We propose a Software-Defined WSN architecture and address key technical challenges for its core component, Sensor OpenFlow. This work represents the first effort that synergizes software-defined networking and WSN. --- paper_title: Software Defined Wireless Networks: Unbridling SDNs paper_content: The {\it software defined networking} (SDN) paradigm promises to dramatically simplify network configuration and resource management. Such features are extremely valuable to network operators and therefore, the industrial (besides the academic) research and development community is paying increasing attention to SDN. Although wireless equipment manufacturers are increasing their involvement in SDN-related activities, to date there is not a clear and comprehensive understanding of what are the opportunities offered by SDN in most common networking scenarios involving wireless infrastructure less communications and how SDN concepts should be adapted to suit the characteristics of wireless and mobile communications. This paper is a first attempt to fill this gap as it aims at analyzing how SDN can be beneficial in wireless infrastructure less networking environments with special emphasis on wireless personal area networks (WPAN). Furthermore, a possible approach (called \emph{SDWN}) for such environments is presented and some design guidelines are provided. 
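The SoftCell abstract above centers on classifying traffic once at the access switch, next to the base station, and mapping subscriber attributes and applications to middlebox sequences identified by coarse policy tags. A minimal sketch of that classification step is given below; the attribute names, policy table, and middlebox labels are invented for illustration and are not SoftCell's actual data structures.

    # Hypothetical policy table: (subscriber attributes, application) -> middlebox chain.
    POLICIES = [
        ({"plan": "child", "app": "web"},   ["firewall", "parental_filter", "web_cache"]),
        ({"plan": "any",   "app": "video"}, ["firewall", "transcoder"]),
        ({"plan": "any",   "app": "any"},   ["firewall"]),
    ]

    def classify(subscriber, app):
        # Runs once per new flow at the access switch; returns a coarse policy tag
        # plus the middlebox sequence the flow must traverse in both directions.
        for tag, (match, chain) in enumerate(POLICIES):
            plan_ok = match["plan"] in ("any", subscriber["plan"])
            app_ok = match["app"] in ("any", app)
            if plan_ok and app_ok:
                return tag, chain
        raise LookupError("no matching policy")

    tag, chain = classify({"id": "imsi-001", "plan": "child"}, "web")
    print(tag, chain)   # 0 ['firewall', 'parental_filter', 'web_cache']
    # Core switches can then forward on the small policy tag rather than on
    # per-flow state, which is one way to read the aggregation idea described above.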
--- paper_title: A knowledge plane for the internet paper_content: We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so.We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective. --- paper_title: Cognitive radio: brain-empowered wireless communications paper_content: Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: /spl middot/ highly reliable communication whenever and wherever needed; /spl middot/ efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio. --- paper_title: Artificial Intelligence Based Cognitive Routing for Cognitive Radio Networks paper_content: Cognitive radio networks (CRNs) are networks of nodes equipped with cognitive radios that can optimize performance by adapting to network conditions. While cognitive radio networks (CRN) are envisioned as intelligent networks, relatively little research has focused on the network level functionality of CRNs. Although various routing protocols, incorporating varying degrees of adaptiveness, have been proposed for CRNs, it is imperative for the long term success of CRNs that the design of cognitive routing protocols be pursued by the research community. Cognitive routing protocols are envisioned as routing protocols that fully and seamless incorporate AI-based techniques into their design. In this paper, we provide a self-contained tutorial on various AI and machine-learning techniques that have been, or can be, used for developing cognitive routing protocols. We also survey the application of various classes of AI techniques to CRNs in general, and to the problem of routing in particular. We discuss various decision making techniques and learning techniques from AI and document their current and potential applications to the problem of routing in CRNs. We also highlight the various inference, reasoning, modeling, and learning sub tasks that a cognitive routing protocol must solve. 
Finally, open research issues and future directions of work are identified. --- paper_title: A Survey of Artificial Intelligence for Cognitive Radios paper_content: Cognitive radio (CR) is an enabling technology for numerous new capabilities such as dynamic spectrum access, spectrum markets, and self-organizing networks. To realize this diverse set of applications, CR researchers leverage a variety of artificial intelligence (AI) techniques. To help researchers better understand the practical implications of AI to their CR designs, this paper reviews several CR implementations that used the following AI techniques: artificial neural networks (ANNs), metaheuristic algorithms, hidden Markov models (HMMs), rule-based systems, ontology-based systems (OBSs), and case-based systems (CBSs). Factors that influence the choice of AI techniques, such as responsiveness, complexity, security, robustness, and stability, are discussed. To provide readers with a more concrete understanding, these factors are illustrated in an extended discussion of two CR designs. --- paper_title: Trends in the development of communication networks: Cognitive networks paper_content: One of the main challenges already faced by communication networks is the efficient management of increasing complexity. The recently proposed concept of cognitive network appears as a candidate that can address this issue. In this paper, we survey the existing research work on cognitive networks, as well as related and enabling techniques and technologies. We start with identifying the most recent research trends in communication networks and classifying them according to the approach taken towards the traditional layered architecture. In the analysis we focus on two related trends: cross-layer design and cognitive networks. We classify the cognitive networks related work in that mainly concerned with knowledge representation and that predominantly dealing with the cognition loop. We discuss the existing definitions of cognitive networks and, with respect to those, position our understanding of the concept. Next, we provide a summary of artificial intelligence techniques that are potentially suitable for the development of cognitive networks, and map them to the corresponding states of the cognition loop. We summarize and compare seven architectural proposals that comply with the requirements for a cognitive network. We discuss their relative merits and identify some future research challenges before we conclude with an overview of standardization efforts. --- paper_title: Cognitive Networks: Towards Self-Aware Networks paper_content: Contributors. Foreword 1. Foreword 2. Preface. Acknowledgements. Introduction. Chapter 1: Biologically Inspired Networking. Chapter 2: The Role of Autonomic Networking in Cognitive Networks. Chapter 3: Adaptive Networks. Chapter 4: Self-Managing Networks. Chapter 5: Machine Learning for Cognitive Networks: Technology Assessment and Research Challenges. Chapter 6: Cross-Layer Design and Optimization in Wireless Networks. Chapter 7: Cognitive Radio Architecture. Chapter 8: The Wisdom of Crowds: Cognitive Ad hoc Networks. Chapter 9: Distributed Learning and Reasoning in Cognitive Networks: Methods and Design Decisions. Chapter 10: The Semantic Side of Cognitive Radio. Chapter 11: Security Issues in Cognitive Radio Networks. Chapter 12: Intrusion Detection in Cognitive Networks. Chapter 13: Erasure Tolerant Coding for Cognitive Radios. Index. 
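Several abstracts above survey AI and learning techniques that let a cognitive radio adapt its spectrum decisions from experience. As one purely illustrative example of the kind of technique surveyed, and not any specific system from these papers, the sketch below uses epsilon-greedy learning to pick the channel with the highest observed idle probability over simulated, made-up channel statistics.

    import random

    TRUE_IDLE_PROB = [0.2, 0.8, 0.5]      # hidden environment, one entry per channel (assumed)
    counts = [0, 0, 0]                    # times each channel was sensed
    rewards = [0.0, 0.0, 0.0]             # times each channel was found idle
    EPS = 0.1

    def value(k):
        # Estimated idle probability; unvisited channels are tried first.
        return rewards[k] / counts[k] if counts[k] else float("inf")

    def pick_channel():
        if random.random() < EPS:
            return random.randrange(len(counts))      # explore
        return max(range(len(counts)), key=value)     # exploit

    for _ in range(2000):
        ch = pick_channel()
        idle = random.random() < TRUE_IDLE_PROB[ch]   # simulated sensing outcome
        counts[ch] += 1
        rewards[ch] += 1.0 if idle else 0.0

    print("learned best channel:", max(range(len(counts)), key=value))   # usually channel 1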
--- paper_title: CogNet: an architectural foundation for experimental cognitive radio networks within the future internet paper_content: This paper describes a framework for research on architectural tradeoffs and protocol designs for cognitive radio networks at both the local network and the global internetwork levels. Several key architectural issues for cognitive radio networks are discussed, including control and management protocols, support for collaborative PHY, dynamic spectrum coordination, flexible MAC layer protocols, ad hoc group formation and cross-layer adaptation. The overall goal of this work is the design and validation of the control/management and data interfaces between cognitive radio nodes in a local network, and also between cognitive radio networks and the global Internet. Protocol design and implementation based on this framework will result in the CogNet architecture, a prototype open-source cognitive radio protocol stack. Experimental evaluations on emerging cognitive radio platforms are planned for future work, first in a wireless local-area radio network scenario using wireless testbeds such as ORBIT, and later as part of several end-to-end experiments using a wide-area network testbed such as PlanetLab (and GENI in the future). --- paper_title: Cognitive Radio Rides on the Cloud paper_content: Cognitive Radio (CR) is capable of adaptive learning and reconfiguration, promising consistent communications performance for C4ISR1 systems even in dynamic and hostile battlefield environments. As such, the vision of Network-Centric Operations becomes feasible. However, enabling adaptation and learning in CRs may require both storing a vast volume of data and processing it fast. Because a CR usually has limited computing and storage capacity determined by its size and battery, it may not be able to achieve its full capability. The cloud2 can provide its computing and storage utility for CRs to overcome such challenges. On the other hand, the cloud can also store and process enormous amounts of data needed by C4ISR systems. However, today's wireless technologies have difficulty moving various types of data reliably and promptly in the battlefields. CR networks promise reliable and timely data communications for accessing the cloud. Overall, connecting CRs and the cloud overcomes the performance bottlenecks of each. This paper explores opportunities of this confluence and describes our prototype system. --- paper_title: Cooperative spectrum sensing in TV White Spaces: When Cognitive Radio meets Cloud paper_content: A Cognitive Radio Cloud Network (CRCN) in TV White Spaces (TVWS) is proposed in this paper. Under the infrastructure of CRCN, cooperative spectrum sensing (SS) and resource scheduling in TVWS can be efficiently implemented making use of the scalability and the vast storage and computing capacity of the Cloud. Based on the sensing reports collected on the Cognitive Radio Cloud (CRC) from distributed secondary users (SUs), we study and implement a sparse Bayesian learning (SBL) algorithm for cooperative SS in TVWS using Microsoft's Windows Azure Cloud platform. A database for the estimated locations and spectrum power profiles of the primary users are established on CRC with Microsoft's SQL Azure. Moreover to enhance the performance of the SBL-based SS on CRC, a hierarchical parallelization method is also implemented with Microsoft's dotNet 4.0 in a MapReduce-like programming model. 
Based on our simulation studies, a proper programming model and partitioning of the sensing data play crucial roles to the performance of the SBL-based SS on the Cloud. --- paper_title: Virtual radio: a framework for configurable radio networks paper_content: Network virtualization has recently been proposed for the development of large scale experimental networks, but also as design principle for a Future Internet. In this paper we describe the background to network virtualization and extend this concept into the wireless domain, which we denote as radio virtualization. With radio virtualization different virtual radio networks can operate on top of a common shared infrastructure and share the same radio resources. We present how this radio resource sharing can be performed efficiently without interference between the different virtual radio networks. Further we discuss how radio transmission functionality can be configured. Radio virtualization provides flexibility in the design and deployment of new wireless networking concepts. It allows customization of radio networks for dedicated networking services at reduced deployment costs. --- paper_title: A survey of mobile cloud computing: architecture, applications, and approaches paper_content: Together with an explosive growth of the mobile applications and emerging of cloud computing concept, mobile cloud computing (MCC) has been introduced to be a potential technology for mobile services. MCC integrates the cloud computing into the mobile environment and overcomes obstacles related to the performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of the MCC including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. Copyright © 2011 John Wiley & Sons, Ltd. --- paper_title: Cognitive Radio Rides on the Cloud paper_content: Cognitive Radio (CR) is capable of adaptive learning and reconfiguration, promising consistent communications performance for C4ISR1 systems even in dynamic and hostile battlefield environments. As such, the vision of Network-Centric Operations becomes feasible. However, enabling adaptation and learning in CRs may require both storing a vast volume of data and processing it fast. Because a CR usually has limited computing and storage capacity determined by its size and battery, it may not be able to achieve its full capability. The cloud2 can provide its computing and storage utility for CRs to overcome such challenges. On the other hand, the cloud can also store and process enormous amounts of data needed by C4ISR systems. However, today's wireless technologies have difficulty moving various types of data reliably and promptly in the battlefields. CR networks promise reliable and timely data communications for accessing the cloud. Overall, connecting CRs and the cloud overcomes the performance bottlenecks of each. This paper explores opportunities of this confluence and describes our prototype system. 
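The two abstracts above describe uploading sensing reports from distributed secondary users to a cloud and processing them in a MapReduce-like programming model, where the partitioning of the sensing data matters for performance. The sketch below shows only that partition/map/reduce pattern with a simple energy-averaging fusion rule; it is not the sparse Bayesian learning algorithm the paper implements, and the report fields, threshold, and values are assumptions.

    from collections import defaultdict

    # Each secondary user reports (channel, measured energy); made-up sample data.
    reports = [
        {"su": "su1", "channel": 21, "energy": 3.1},
        {"su": "su2", "channel": 21, "energy": 2.7},
        {"su": "su3", "channel": 36, "energy": 0.4},
        {"su": "su4", "channel": 36, "energy": 0.6},
    ]
    THRESHOLD = 1.0   # assumed decision threshold

    def partition(data, n):
        return [data[i::n] for i in range(n)]          # one partition per worker

    def map_partition(part):
        sums = defaultdict(lambda: [0.0, 0])           # channel -> [energy sum, report count]
        for r in part:
            sums[r["channel"]][0] += r["energy"]
            sums[r["channel"]][1] += 1
        return sums

    def reduce_partials(partials):
        total = defaultdict(lambda: [0.0, 0])
        for p in partials:
            for ch, (s, c) in p.items():
                total[ch][0] += s
                total[ch][1] += c
        return {ch: (s / c) > THRESHOLD for ch, (s, c) in total.items()}

    occupied = reduce_partials([map_partition(p) for p in partition(reports, 2)])
    print(occupied)   # {21: True, 36: False}: channel 36 looks free for secondary use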
--- paper_title: MPAP: virtualization architecture for heterogenous wireless APs paper_content: This demonstration shows a novel virtualization architecture, called Multi-Purpose Access Point (MPAP), which can virtualize multiple heterogenous wireless standards based on software radio. The basic idea is to deploy a wide-band radio front-end to receive wireless signals from all wireless standards sharing the same spectrum band, and use separate software base-bands to demodulate information stream for each wireless standard. Based on software radio, MPAP consolidates multiple wireless devices into single hardware platform, and allows them to share the same general-purpose computing resource. Different software base-bands can easily communicate and coordinate with one another. Thus, it also provides better coexistence among heterogenous wireless standards. As an example, we demonstrate to use non-contiguous OFDM in 802.11g PHY to avoid the mutual interference with narrow-band ZigBee communication. --- paper_title: CloudMAC: towards software defined WLANs paper_content: Traditional enterprise WLAN management systems are hard to extend and require powerful access points (APs). In this paper we introduce and evaluate CloudMAC, an architecture for enterprise WLANs in which MAC frames are generated and processed on virtual APs hosted in a datacenter. The APs only need to forward MAC frames. The APs and the servers are connected via an OpenFlow-enabled network, which allows to control where and how MAC frames are transmitted. --- paper_title: Virtual WiFi: bring virtualization from wired to wireless paper_content: As virtualization trend is moving towards "client virtualization", wireless virtualization remains to be one of the technology gaps that haven't been addressed satisfactorily. Today's approaches are mainly developed for wired network, and are not suitable for virtualizing wireless network interface due to the fundamental differences between wireless and wired LAN devices that we will elaborate in this paper. We propose a wireless LAN virtualization approach named virtual WiFi that addresses the technology gap. With our proposed solution, the full wireless LAN functionalities are supported inside virtual machines; each virtual machine can establish its own connection with self-supplied credentials; and multiple separate wireless LAN connections are supported through one physical wireless LAN network interface. We designed and implemented a prototype for our proposed virtual WiFi approach, and conducted detailed performance study. Our results show that with conventional virtualization overhead mitigation mechanisms, our proposed approach can support fully functional wireless functions inside VM, and achieve close to native performance of Wireless LAN with moderately increased CPU utilization. --- paper_title: A Survey of Network Virtualization paper_content: Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. 
In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area. --- paper_title: Wireless Going in the Cloud: A Promising Concept or Just Marketing Hype? paper_content: Cloud computing is a computing service paradigm which attracts increasing attention from academia and industry. In this paper we investigate the use of the terms cloud and cloud computing in the wireless communications and networking domain. We identify four distinct usages of the term cloud in this context and analyze their meaning and relevance to the cloud computing concept. For one of these meanings (which we term Wireless Networking Functionality as a Service) we provide more extensive descriptions of commercial services and proposed architectural designs that illustrate this concept and discuss their advantages and technical challenges. We conclude that the introduction of certain cloud computing ideas in the field of wireless telecommunications and networking not only has the potential to bring along some of the generic advantages of cloud computing but, in some cases, can offer added benefits that are wireless domain specific. --- paper_title: Wireless network cloud: architecture and system requirements paper_content: With the growth of the mobile communication network, from second-generation to third-generation or fourth-generation networks, technologists in the mobile industry continue to consider advanced wireless network architectures that have the potential to reduce networking costs and provide increased flexibility with respect to network features. In this paper, we propose the wireless network cloud (WNC), a wireless system architecture for a wireless access network. This system makes use of emerging cloud-computing technology and various technologies involved with wireless infrastructure, such as software radio technology and remote radio head technology. Based on open information technology architecture, the WNC provides all the necessary transmission and processing resources for a wireless access network operating in a cloud mode. Note that it is useful to separate the hardware and software for different wireless standards and various services and business models, as well as to meet the new system requirements for emerging wireless technologies, such as collaborative processing at different scales of network use. We analyze several important system challenges involving computational requirements of virtual base stations, I/O throughput, and timing networks for synchronization. Based on current information technologies, we make several suggestions with respect to future system design. --- paper_title: Wireless virtualization on commodity 802.11 hardware paper_content: In this paper we describe specific challenges in virtualizing a wireless network and multiple strategies to address them. Among different possible wireless virtualization strategies, our current work in this domain is focussed on a Time-Division Multiplexing (TDM) approach. Hence, we we present our experiences in the design and implementation of such TDM-based wireless virtualization. Our wireless virtualization system is specifically targeted for multiplexing experiments on a large-scale 802.11 wireless testbed facility. --- paper_title: A Framework of Better Deployment for WLAN Access Point Using Virtualization Technique paper_content: Wireless network is common in the indoor and outdoor. 
One of the important problems is determining a good deployment of access points when designing and constructing an efficient wireless network. Several studies have therefore tried to determine better access point deployments. However, these studies do not fully consider what is needed to realize such deployments in practice. To keep a wireless network efficient, a framework for better access point deployment is necessary. Thus, we propose the Virtual Access Point (VAP) as a means of improving access point deployment. A VAP is a logical access point constructed with virtualization techniques that keeps its wireless network configuration independently of the physical access point hosting it. A VAP can move to another physical access point by executing live migration. --- paper_title: Enable flexible spectrum access with spectrum virtualization paper_content: Enabling flexible spectrum access (FSA) in existing wireless networks is challenging due to the limited spectrum programmability - the ability to change spectrum properties of a signal to match an arbitrary frequency allocation. This paper argues that spectrum programmability can be separated from general wireless physical layer (PHY) modulation. Therefore, we can support flexible spectrum programmability by inserting a new spectrum virtualization layer (SVL) directly below traditional wireless PHY, and enable FSA for wireless networks without changing their PHY designs. SVL provides a virtual baseband abstraction to wireless PHY, which is static, contiguous, with a desirable width defined by the PHY. At the sender side, SVL reshapes the modulated baseband signals into a waveform that matches the dynamically allocated physical frequency bands - which can be of different width, or non-contiguous - while keeping the modulated information unchanged. At the receiver side, SVL performs the inverse reshaping operation that collects the waveform from each physical band, and reconstructs the original modulated signals for PHY. All these reshaping operations are performed at the signal level and therefore SVL is agnostic and transparent to upper PHY. We have implemented a prototype of SVL on a software radio platform, and tested it with various wireless PHYs. Our experiments show SVL is flexible and effective to support FSA in existing wireless networks. --- paper_title: A knowledge plane for the internet paper_content: We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so. We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.
--- paper_title: Languages for Software-Defined Networks paper_content: Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. However, managing these networks is unnecessarily complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) with closed and proprietary configuration interfaces. Software-defined networks are poised to change this by offering a clean and open interface between networking devices and the software that controls them. In particular, many commercial switches support the OpenFlow protocol, and a number of campus, data center, and backbone networks have deployed the new technology. However, while SDNs make it possible to program the network, they do not make it easy. Today's OpenFlow controllers offer low-level APIs that mimic the underlying switch hardware. To reach SDN's full potential, we need to identify the right higher-level abstractions for creating (and composing) applications. In the Frenetic project, we are designing simple and intuitive abstractions for programming the three main stages of network management: monitoring network traffic, specifying and composing packet forwarding policies, and updating policies in a consistent way. Overall, these abstractions make it dramatically easier for programmers to write and reason about SDN applications. --- paper_title: The Internet of Things: A survey paper_content: This paper addresses the Internet of Things. The main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that major issues still have to be faced by the research community. The most relevant among them are addressed in detail. ---
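The "Languages for Software-Defined Networks" abstract above argues for higher-level abstractions for monitoring traffic and composing forwarding policies. The toy sketch below illustrates parallel composition of a monitoring policy and a routing policy over an abstract packet; the packet fields and policy functions are invented for illustration and are not the Frenetic API.

    # A "policy" maps a packet to a set of actions; parallel composition unions them.
    def monitor(pkt):
        return {("count", pkt["srcip"])} if pkt["dstport"] == 80 else set()

    def route(pkt):
        return {("forward", 2)} if pkt["dstip"].startswith("10.0.") else {("forward", 1)}

    def parallel(p1, p2):
        return lambda pkt: p1(pkt) | p2(pkt)

    app = parallel(monitor, route)
    pkt = {"srcip": "10.0.0.7", "dstip": "10.0.1.9", "dstport": 80}
    print(app(pkt))   # both actions apply: the flow is counted and forwarded on port 2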
Title: Building Programmable Wireless Networks: An Architectural Survey Section 1: INTRODUCTION Description 1: In this section, introduce the importance and widespread deployment of wireless networks and discuss the need for flexible architectural support. Define key concepts like data planes and control planes, and describe the motivations behind programmable wireless networks. Section 2: PROGRAMMABLE NETWORKING ARCHITECTURES Description 2: This section should cover the historical context and development of programmable networking architectures. Discuss major approaches like the OpenSig Approach and the Active Networking Approach, as well as virtualization and SDN technologies. Section 3: BUILDING BLOCKS FOR PROGRAMMABLE WIRELESS NETWORKING Description 3: Detail the various techniques and hardware architectures used in programmable wireless networks, such as Software Defined Radios (SDR), Cognitive Radios (CR), MAC Programmable Wireless Devices, and Programmable Routers. Section 4: THREE DOMINANT TRENDS IN PROGRAMMABLE WIRELESS NETWORKING Description 4: Provide an in-depth overview of the three key trends shaping programmable wireless networking: Software Defined Wireless Networks (SWNs), Cognitive Wireless Networks (CWNs), and Virtualizable Wireless Networks (VWNs). Discuss various projects and applications under each trend. Section 5: OPEN RESEARCH ISSUES AND CHALLENGES Description 5: Highlight the current research challenges and issues faced in the development and deployment of programmable wireless networks. Discuss areas such as Software Defined Cognitive Wireless Networks, development of wireless-specific network APIs, integration with cloud technologies, the wireless Internet of Things, and balancing centralized and distributed paradigms. Section 6: CONCLUSIONS Description 6: Summarize the insights provided in the paper, emphasizing the potential of programmable wireless networks. Discuss the synergy between different programmable networking techniques and propose future directions for research and implementation.
Wireless solutions developed for the Australian healthcare: a review
8
--- paper_title: International Standards for HCI and Usability paper_content: There has been a significant increase in interest and activity in user interface standards over the last decade, many standards organizations have become heavily involved where, previously, there was only minor user interface activity. User interface professionals need to keep abreast of this activity, for it may have an important impact. When these standards and guidelines are in place, customers may demand that products conform. Many information technology customers are demanding easeof-use and interoperability of systems for users, and will see standards conformance as a route to this goal. Governments may require conformance, ostensibly for health and safety reasons. In some cases, programmer development tools may be designed to implement standards. And finally, standards present, in theory, the opportunity to promulgate good user interface design to a wide community of system developers. --- paper_title: Understanding and facilitating the browsing of electronic text paper_content: Browsing tends to be used in two distinctive ways, alternatively associated with the goal of the activity and with the method by which the goal is achieved. In this study, the definition of browsing combines aspects of both concepts to define browsing as an activity in which one gathers information while scanning an information space without an explicit purpose. The objective of this research was to examine how browsers interact with their browsing environment while manipulating two types of interface tools constructed from the content.Forty-seven adults (24 males) performed the two types of tasks (one with no purpose and the second, a control, purposeful) in four sessions over a period of four weeks. Participants scanned and/or searched the textual content of current issue plus three months of back issues of the Halifax Chronicle Herald/Mail Star using a system designed specifically for this research. At any one time only one of each type of tool was available.Those with no assigned goal examined significantly more articles and explored more menu options. They made quick decisions about which articles to examine, spending twice as much time reading the content. They tended not to explore the newspaper to a great extent, examining only 24% of the articles in a single issue. About three-quarters of what they examined was new information on topics that they had not known about before being exposed to the paper. The type of menu had no impact on performance, but differences were discovered between the two items-to-browse tools. Those with no goal selected more articles from the Suggestions and found more interesting articles when the Suggestions were available. --- paper_title: Wireless Communications: A New Frontier in Technology paper_content: Within the last 5 to 10 years, health care has seen a plethora of "breakthrough" technologies--client/server computing, multitasking, multi-user operating systems, E-mail, CD-ROM data storage, the Internet, etc. Now, a newer technology is coming of age. This point-of-care technology uses radio-based systems to transmit signals through the air without physical connections. --- paper_title: Case management and technology: a necessary fit for the future. paper_content: We now reside in a data-driven health care environment and methods for gathering, presenting, and evaluating relevant data about health care systems are paramount. 
This article expands on the importance of evaluating the outcomes of case management and how collecting relevant clinical and cost data can provide an infrastructure on which to base future decisions. Data-based decision making in case management is crucial for ensuring quality of care and the appropriate management of patient outcomes, and it underpins the viability of this delivery model of care. --- paper_title: Patients will be reminded of appointments by text messages. paper_content: Mobile phone text messages will be sent to patients in England to remind them of upcoming appointments with their doctor in a trial to begin next month. Organisers of the scheme hope to reduce the burden of missed appointments, which cost the NHS an estimated £400m ($660m; €560m) a year. Rather than paying for the messages themselves, NHS trusts are hoping for sponsorship from companies that will place advertisements in the message after … ---
Title: Wireless solutions developed for the Australian healthcare: a review Section 1: Introduction Description 1: Provide an overview of the motivation behind adopting wireless technology in Australian healthcare and outline the structure and purpose of the paper. Section 2: Wireless technology in healthcare Description 2: Discuss the different types of wireless technology used in healthcare, their benefits, and potential applications within healthcare environments. Section 3: The development of wireless solutions in healthcare environment Description 3: Describe the methodologies and development practices used in creating wireless solutions in healthcare settings, along with associated challenges. Section 4: Practices and Experiences in Fielding Wireless Applications Description 4: Share real-world examples and experiences from healthcare organizations that have implemented wireless solutions, including successes and limitations. Section 5: Project Management Issues Description 5: Highlight the management challenges encountered during the deployment of wireless solutions in healthcare, and propose ways to address these issues. Section 6: Failure to provide expected benefits Description 6: Analyze specific cases where wireless solutions in Australian healthcare did not deliver the expected benefits, and discuss the reasons for their shortcomings. Section 7: Managing wireless solutions: A techno-management Perspective Description 7: Offer recommendations and a framework for the effective management and deployment of wireless solutions in healthcare, focusing on both technical and management aspects. Section 8: Conclusions Description 8: Summarize the key findings of the paper, discuss the implications for healthcare management, and suggest future directions for research and development.
Survey of Conference Management Systems
12
--- paper_title: The design and implementation of a virtual conference system paper_content: With the enormous use of networks, many real-world activities are realized on the Internet. We propose a complete virtual conference system (VCS) to handle all activities of real-world conferences. The VCS includes a virtual conference management system and a mobile virtual conference system. Video conferencing is a trend of future communications. With the improvement of broadband network technologies, video conferencing becomes possible in the global society. It is feasible to use video conferencing technologies to organize future international conferences. This research proposes a total solution toward virtual conferencing. We use a mobile server/storage pre-broadcasting technique, as well as a communication network optimization algorithm, which is based on a graph computation mechanism. With the assistance of a conference management system, the system is able to support virtual conferencing in the future academic society. ---
Title: Survey of Conference Management Systems Section 1: INTRODUCTION Description 1: This section should introduce the topic of traditional and modern conference management systems, highlighting the transition from manual to web-based methods and outlining the paper's structure. Section 2: FRAMEWORK FOR COMPARISONS Description 2: This section should present the basic concepts and systems used for comparison, detailing functions, behaviors, and communications critical to conference management systems. Section 3: System Function Comparison Description 3: This section should compare various conference management systems based on functions such as user registration, profile management, help support, and database management. Section 4: Conference Function Comparison Description 4: This section should discuss and compare the features related to conference creation, setting dates, managing tracks, and administrative roles across different systems. Section 5: Technical Program Committee Description 5: This section should compare how different systems manage Technical Program Committees, including the functionalities for creating TPCs, forming TPC groups, and assigning reviews. Section 6: Papers Description 6: This section should examine the features for paper submission, editing, downloading, and format checking across different conference management systems. Section 7: Reviewers Description 7: This section should compare the review processes and reviewer management features provided by various systems. Section 8: Notifications Description 8: This section should detail the notification features available in different systems, such as sending emails and reminders. Section 9: Reports Description 9: This section should describe the various types of reports generated by the systems under comparison. Section 10: Assigning Reviewers and TPC Members Description 10: This section should compare the methods (automatic and manual) for assigning reviewers and TPC members across different conference management systems. Section 11: COMPARISON BASED ON SYSTEM FEATURES Description 11: This section should summarize and tabulate the features of the surveyed conference management systems, highlighting the strengths and weaknesses. Section 12: CONCLUSION Description 12: This section should discuss the overall findings, emphasizing the need for multi-server conference management systems and suggesting potential improvements for future systems.
Mashups: A Literature Review and Classification Framework
13
--- paper_title: Predicting Service Mashup Candidates Using Enhanced Syntactical Message Management paper_content: The descriptiveness of capabilities advertised on service-oriented architectures provides a promising platform for crafting new knowledge. Service mashup has been introduced as an approach for integrating the information provided from multiple Web services into one common operational picture. In the future, scale will be a barrier to these types of approaches. With the entry and exit of large numbers of services on the Internet, it will be difficult to find and suggest the most relevant service candidates for new mashups. In this work, we present an efficient syntactical approach for actively discovering Web service candidates for service mashups. This approach leverages the message naming characteristics of the developers and of the target service repository to inform search algorithms. Favorable precision results are described based on experimentation executed on an open repository of Web service from the Internet. --- paper_title: Using Really Simple Syndication (RSS) to enhance student research paper_content: Really Simple Syndication (RSS) is a tool used in business and by users for gleaning relevant information from the Internet. RSS technology can be used in the education environment to enhance research methods for students. Students can use RSS to glean current information from on-line journals, publications, web logs and other sources without visiting the sites daily. A combination of RSS and personal web files can be used for student group projects to connect the students and share research over the Internet. --- paper_title: Market Overview of Enterprise Mashup Tools paper_content: A new paradigm, known as Enterprise Mashups, has been gain momentum during the last years. By empowering actual business end-users to create and adapt individual enterprise applications, Enterprise Mashups implicate a shift concerning a collaborative software development and consumption process. Upcoming Mashup tools prove the growing relevance of this paradigm in the industry, both in the consumer and enterprise-oriented market. However, a market overview of the different tools is still missing. In this paper, we introduce a classification of Mashup tools and evaluate selected tools of the different clusters according to the perspectives general information, functionality and usability. Finally, we classify more than 30 tools in the designed classification model and present the observed market trends in context of Enterprise Mashups. --- paper_title: Mashlight: a Lightweight Mashup Framework for Everyone paper_content: Recently, Web 2.0 has brought a storm on web application development. In particular, mashups have greatly enhanced user creativity across the web, allowing end-users to rapidly combine information from diverse sources, and integrate them into “new” goal-oriented applications. In the meantime, widgets (also built with Web 2.0 technology) have also gained a lot of momentum. Indeed, they have become ever more present in our every day lives, even appearing as first class players in our operating systems (e.g., Mac OS X and Windows 7). In this paper we present Mashlight: a lightweight framework for creating and executing mashups that combines these two worlds. Indeed, it provides users with a simple means to create “process-like” mashups using “widget-like” Web 2.0 applications. 
Even users that have no technical knowhow can string together building blocks –taken from an extensible library– to define the application they need. The framework is implemented using common Web technology, meaning our mashups can be run from different kinds of devices, in a lightweight fashion without the overhead of a complex application server. The paper presents the main concepts behind Mashlight blocks and Mashlight processes, and demonstrate our prototype on a concrete example. --- paper_title: Potluck: data mash-up tool for casual users paper_content: As more and more reusable structured data appears on the Web, casual users will want to take into their own hands the task of mashing up data rather than wait for mash-up sites to be built that address exactly their individually unique needs. In this paper, we present Potluck, a Web user interface that lets casual users--those without programming skills and data modeling expertise--mash up data themselves. ::: ::: Potluck is novel in its use of drag and drop for merging fields, its integration and extension of the faceted browsing paradigm for focusing on subsets of data to align, and its application of simultaneous editing for cleaning up data syntactically. Potluck also lets the user construct rich visualizations of data in-place as the user aligns and cleans up the data. This iterative process of integrating the data while constructing useful visualizations is desirable when the user is unfamiliar with the data at the beginning--a common case--and wishes to get immediate value out of the data without having to spend the overhead of completely and perfectly integrating the data first. ::: ::: A user study on Potluck indicated that it was usable and learnable, and elicited excitement from programmers who, even with their programming skills, previously had great difficulties performing data integration. --- paper_title: Integrating legacy software into a service oriented architecture paper_content: Legacy programs, i. e. programs which have been developed with an outdated technology make-up for the vast majority of programs in many user application environments. It is these programs which actually run the information systems of the business world. Moving to a new technology such as service oriented architecture is impossible without taking these programs along. This contribution presents a tool supported method for achieving that goal. Legacy code is wrapped behind an XML shell which allows individual functions within the programs, to be offered as Web services to any external user. By means of this wrapping technology, a significant part of the company software assets can be preserved within the framework of a service oriented architecture. --- paper_title: OpenID 2.0: a platform for user-centric identity management paper_content: With the advancement in user-centric and URI-based identity systems over the past two years, it has become clear that a single specification will not be the solution to all problems. Rather, like the other layers of the Internet, developing small, interoperable specifications that are independently implementable and useful will ultimately lead to market adoption of these technologies. This is the intent of the OpenID framework. OpenID Authentication 1.0 began as a lightweight HTTP-based URL authentication protocol. OpenID Authentication 2.0 it is now turning into an open community-driven platform that allows and encourages innovation. 
It supports both URLs and XRIs as user identifiers, uses Yadis XRDS documents for identity service discovery, adds stronger security, and supports both public and private identifiers. With continuing convergence under this broad umbrella, the OpenID framework is emerging as a viable solution for Internet-scale user-centric identity infrastructure.
--- paper_title: Application framework with demand-driven mashup for selective browsing paper_content: This paper proposes a mashup framework for creating flexible mashup applications in which the user can selectively browse through mashup items. Our framework provides a data management engine for on-demand data generation, and GUI components called widgets that can be used to browse through mashed-up data selectively. The application developer only has to prepare a mashup relation specifying the web service combinations and widget configurations specifying how to display the mashed-up data. On the basis of these configurations, widgets monitor user interactions and request data from the data management engine that processes the demand-driven creation of mashed-up data. To enable selective browsing, a table widget, for instance, allows selection of columns to be displayed, provides a limited view with scroll bars, and filtering facilities. Our framework also offers a mechanism for widget coordination where a widget can change the display target according to states or events of other widgets. We introduce a sample application for tour planning using five cooperative widgets, and discuss the usability and performance advantages of our framework.
--- paper_title: Data Integration Support for Mashups paper_content: Mashups are a new type of interactive web application, combining content from multiple services or sources at runtime. While many such mashups are being developed, most of them support rather simple data integration tasks. We therefore propose a framework for the development of more complex dynamic data integration mashups. The framework consists of components for query generation and online matching as well as for additional data transformation. Our architecture supports interactive and sequential result refinement to improve the quality of the presented result step by step by executing more elaborate queries when necessary. A script-based definition of mashups facilitates the development as well as the dynamic execution of mashups. We illustrate our approach by a powerful mashup implementation combining bibliographic data to dynamically calculate citation counts for venues and authors.
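The query-then-merge pattern described in the two entries above (an engine fetches data from several Web services on demand and joins it for presentation) can be sketched in a few lines. The following Python sketch is illustrative only: the endpoints, the JSON field names, and the join on a shared title key are assumptions made for the example, not the APIs of the cited systems.

```python
import json
import urllib.request

def fetch_json(url):
    """Fetch a URL and decode its JSON payload (assumes the service returns JSON)."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def citation_mashup(publications_url, citations_url):
    """Join two hypothetical sources on a shared 'title' key, mimicking the
    query-then-merge step that data-integration mashup frameworks automate."""
    publications = fetch_json(publications_url)  # e.g. [{"title": ..., "venue": ...}, ...]
    citations = fetch_json(citations_url)        # e.g. [{"title": ..., "count": ...}, ...]
    counts = {c["title"]: c.get("count", 0) for c in citations}
    # Enrich each publication record with the citation count found in the second source.
    return [dict(p, citations=counts.get(p["title"], 0)) for p in publications]

if __name__ == "__main__":
    for row in citation_mashup("https://example.org/api/publications",
                               "https://example.org/api/citations"):
        print(row["title"], row["citations"])
```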
--- paper_title: MU: an hybrid language for Web Mashups paper_content: Web Mashup, Web 2.0, recombinant or remixable Web are all terms with the same meaning: informally, they express the possibility of building applications that are able to manage data contained in different repositories exposed as Web services. The availability of a broad range of Web services for different purposes and domains, from online bids to the weather forecasts, motivates the development of several different mashup applications throwing up a wide class of problems related, mainly, to the data format interoperability. The high number of the data formats, together with a huge technological heterogeneity, creates the need for a framework that easily allows the development of such mashup applications. The main objective of this paper, therefore, is to present a novel approach based on a hybrid functional-logic high-level language called MU that allows the description of data source aggregation and the manipulation and presentation phases over multiple external sources with an extremely compact and flexible syntax. We also present an effective use case implemented using "em-up", our reference implementation of the MU language, where an example of mashup application is shown in detail.
--- paper_title: Towards an Advertising Business Model for Web Service Mashups paper_content: On the Internet, the advertising business model is a cornerstone of many service businesses. In this paper, we propose an advertising business model for machine-oriented Web Services and describe the guiding principles for its mechanisms and implementation. It is not straightforward to make the advertising business method feasible for machine-oriented Web Services, since advertising only makes sense if the human users of a service see the ads. In the proposed business model, a non-human service consumer incurs an obligation to the providers of services it uses to display the advertising they have specified. In addition, this obligation can be delegated to yet another service consumer if the later consumer is using a service provided by the earlier non-human consumer. Since each service provider-consumer chain must finally reach human consumers, the advertising from the earlier service providers will ultimately reach human service consumers, thus satisfying the conditions of the proposed business model. We also show how we can leverage the existing Web Service infrastructure to implement this business model.
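The obligation-delegation idea in the advertising entry above (a machine consumer inherits the duty to display its providers' ads and passes it along the chain until a human-facing front end renders them) can be made concrete with a small data-structure sketch. The envelope format, field names, and example services below are assumptions made for illustration; the cited paper does not prescribe this representation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AdObligation:
    provider: str  # service that specified the advertising
    ad_url: str    # creative that must eventually be shown to a human user

@dataclass
class ServiceResponse:
    payload: Dict[str, object]
    obligations: List[AdObligation] = field(default_factory=list)

def compose(upstream: ServiceResponse, own_payload: Dict[str, object],
            own_ads: List[AdObligation]) -> ServiceResponse:
    """A non-human consumer re-exposes upstream data as its own service: it
    inherits (delegates) the upstream ad obligations and may add its own."""
    return ServiceResponse(payload={**upstream.payload, **own_payload},
                           obligations=upstream.obligations + own_ads)

# Hypothetical chain: a weather service is consumed by a portal; the portal's
# human-facing page must finally render every accumulated advertisement.
weather = ServiceResponse({"temp_c": 21},
                          [AdObligation("weather.example", "https://ads.example/w1")])
portal = compose(weather, {"city": "Zurich"},
                 [AdObligation("portal.example", "https://ads.example/p1")])
print([o.ad_url for o in portal.obligations])  # both obligations reach the front end
```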
--- paper_title: An Intelligent Ontology and Bayesian Network Based Semantic Mashup for Tourism paper_content: A common perception is that there are two competing visions for the future evolution of the Web: the Semantic Web and Web 2.0. In fact, Semantic Web technologies must integrate with Web 2.0 services for both to leverage each other's strengths. This paper illustrates how Semantic Web technologies can support information integration and make it easy to create semantic mashups. An intelligent recommendation system for tourism is presented to show the efficiency of our method. Through the ontology of tourism, the system allows the integration of heterogeneous online travel information. Based on a Bayesian network technique, the system recommends tourist attractions to a user by taking into account the travel behaviour both of the user and of other users.
--- paper_title: Implementation of Ubiquitous Personal Study Using Web 2.0 Mash-up and OSS Technologies paper_content: The information resources on the Web are diversified, and the amount is increasing rapidly. Demands for selecting useful information from the Internet, managing personal contents, and sharing contents under control have risen. In this study, we propose the ubiquitous personal study (UPS), a framework of personalized virtual study to support accessing, managing, organizing, sharing and recommending information. In this paper, we focus on discussing the issues of how to implement it with Web 2.0 mash-up technology and open source software.
--- paper_title: Answering queries using views: A survey paper_content: The problem of answering queries using views is to find efficient methods of answering a query using a set of previously defined materialized views over the database, rather than accessing the database relations. The problem has recently received significant attention because of its relevance to a wide variety of data management problems. In query optimization, finding a rewriting of a query using a set of materialized views can yield a more efficient query execution plan. To support the separation of the logical and physical views of data, a storage schema can be described using views over the logical schema. As a result, finding a query execution plan that accesses the storage amounts to solving the problem of answering queries using views. Finally, the problem arises in data integration systems, where data sources can be described as precomputed views over a mediated schema. This article surveys the state of the art on the problem of answering queries using views, and synthesizes the disparate works into a coherent framework. We describe the different applications of the problem, the algorithms proposed to solve it and the relevant theoretical results.
--- paper_title: Mash-o-matic paper_content: Web applications called mash-ups combine information of varying granularity from different, possibly disparate, sources. We describe Mash-o-matic, a utility that can extract, clean, and combine disparate information fragments, and automatically generate data for mash-ups and the mash-ups themselves. As an illustration, we generate a mash-up that displays a map of a university campus, and outline the potential benefits of using Mash-o-matic. Mash-o-matic exploits superimposed information (SI), which is new information and structure created in reference to fragments of existing information.
Mash-o-matic is implemented using middleware called the Superimposed Pluggable Architecture for Contexts and Excerpts (SPARCE), and a query processor for SI and referenced information, both parts of our infrastructure to support SI management. We present a high-level description of the mash-up production process and discuss in detail how Mash-o-matic accelerates that process.
--- paper_title: A methodology for quality-based mashup of data sources paper_content: The concept of mashup is gaining tremendous popularity and its application can be seen in a large number of domains. Enterprises using and relying upon mashup have improved their mass collaboration and personalization. In order for mashup technology to be widely accepted and widely used, we need a methodology by which we can make use of the quality of the input to the mashup process as a governing principle to carry out mashup. This paper reviews the concept of mashup in different domains and proposes a conceptual solution framework for providing a quality-based mashup process.
--- paper_title: Rapid prototyping of semantic mash-ups through semantic web pipes paper_content: The use of RDF data published on the Web for applications is still a cumbersome and resource-intensive task due to the limited software support and the lack of standard programming paradigms to deal with everyday problems such as combination of RDF data from different sources, object identifier consolidation, ontology alignment and mediation, or plain querying and filtering tasks. In this paper we present a framework, Semantic Web Pipes, that supports fast implementation of Semantic data mash-ups while preserving desirable properties such as abstraction, encapsulation, component-orientation, code re-usability and maintainability which are common and well supported in other application areas.
--- paper_title: Information integration using logical views paper_content: A number of ideas concerning information-integration tools can be thought of as constructing answers to queries using views that represent the capabilities of information sources. We review the formal basis of these techniques, which are closely related to containment algorithms for conjunctive queries and/or Datalog programs. Then we compare the approaches taken by AT&T Labs' “Information Manifold” and the Stanford “Tsimmis” project in these terms.
--- paper_title: A Web Mashup for Social Libraries paper_content: Content on the Social Web is often locked within information silos. Inadequate APIs or, worse, the lack of APIs obstruct reuse and prevent the opportunity to integrate similar content from different communities. In this paper we present a Web mashup which combines information from different social libraries. Aggregated information, including both classic book metadata and user-generated content, is represented as linked data in RDF to allow machine computation and foster reuse among different applications.
--- paper_title: Data integration: a theoretical perspective paper_content: Data integration is the problem of combining data residing at different sources, and providing the user with a unified view of these data. The problem of designing data integration systems is important in current real world applications, and is characterized by a number of issues that are interesting from a theoretical point of view. This document presents an overview of the material to be presented in a tutorial on data integration.
The tutorial is focused on some of the theoretical issues that are relevant for data integration. Special attention will be devoted to the following aspects: modeling a data integration application, processing queries in data integration, dealing with inconsistent data sources, and reasoning on queries.
--- paper_title: Towards a mashup-driven end-user programming of SOA-based applications paper_content: Recent technologies and standards in the field of Service-Oriented Architectures (SOA) have focused on Service-to-Service interaction and do not consider Service-to-User scenarios. This results in a lack of a service-consumer-orientation in order to empower the user to get easy access to the functionalities of services. The paper argues for the need of new concepts that extend existing mashup approaches to enable a more end-user driven application development. It presents an insight into existing mashup technologies and identifies shortcomings concerning the creation of more complex applications. The paper offers ways to extend the existing concepts and shows their potential as a key technology for end-user programming in the field of SOA. The empowerment of the actual service consumer can bridge the gap between the user and the service infrastructure.
--- paper_title: Mashing up visual languages and web mash-ups paper_content: Research on Web mashups and visual languages shares an interest in human-centered computing. Both research communities are concerned with supporting programming by everyday, technically inexpert users. Visual programming environments have been a focus for both communities, and we believe that there is much to be gained by further discussion between these research communities. In this paper we explore some connections between web mashups and visual languages, and try to identify what each might be able to learn from the other. Our goal is to establish a framework for a dialog between the communities, and to promote the exchange of ideas and our respective understandings of human-centered computing.
--- paper_title: Cloud-based Enterprise Mashup Integration Services for B2B Scenarios paper_content: We observe a huge demand for situational and ad-hoc applications desired by the mass of business end-users that cannot be fully implemented by IT departments. This is especially the case with regard to solutions that support infrequent, situational, and ad-hoc B2B scenarios. End users are not able to implement such solutions without the help of developers.
Enterprise Mashup and Lightweight Composition approaches and tools are promising solutions to unleash the huge potential of integrating the mass of end users into development and to overcome this "long-tail" dilemma. In this work, we summarize different patterns on how to realize
--- paper_title: Rapid development of spreadsheet-based web mashups paper_content: The rapid growth of social networking sites and web communities has motivated web sites to expose their APIs to external developers who create mashups by assembling existing functionalities. Current APIs, however, aim toward developers with programming expertise; they are not directly usable by a wider class of users who do not have a programming background, but would nevertheless like to build their own mashups. To address this need, we propose a spreadsheet-based Web mashups development framework, which enables users to develop mashups in the popular spreadsheet environment. First, we provide a mechanism that makes structured data first class values of spreadsheet cells. Second, we propose a new component model that can be used to develop fairly sophisticated mashups, involving joining data sources and keeping spreadsheet data up to date. Third, to simplify mashup development, we provide a collection of spreadsheet-based mashup patterns that captures common Web data access and spreadsheet presentation functionalities. Users can reuse and customize these patterns to build spreadsheet-based Web mashups instead of developing them from scratch. Fourth, we enable users to manipulate structured data presented on a spreadsheet in a drag-and-drop fashion. Finally, we have developed and tested a proof-of-concept prototype to demonstrate the utility of the proposed framework.
--- paper_title: End-user programming of mashups with vegemite paper_content: Mashups are an increasingly popular way to integrate data from multiple web sites to fit a particular need, but it often requires substantial technical expertise to create them. To lower the barrier for creating mashups, we have extended the CoScripter web automation tool with a spreadsheet-like environment called Vegemite. Our system uses direct-manipulation and programming-by-demonstration techniques to automatically populate tables with information collected from various web sites. A particular strength of our approach is its ability to augment a data set with new values computed by a web site, such as determining the driving distance from a particular location to each of the addresses in a data set. An informal user study suggests that Vegemite may enable a wider class of users to address their information needs.
--- paper_title: User-friendly functional programming for web mashups paper_content: MashMaker is a web-based tool that makes it easy for a normal user to create web mashups by browsing around, without needing to type, or plan in advance what they want to do. Like a web browser, MashMaker allows users to create mashups by browsing, rather than writing code, and allows users to bookmark interesting things they find, forming new widgets - reusable mashup fragments. Like a spreadsheet, MashMaker mixes program and data and allows ad-hoc unstructured editing of programs. MashMaker is also a modern functional programming language with non-side-effecting expressions, higher order functions, and lazy evaluation. MashMaker programs can be manipulated either textually, or through an interactive tree representation, in which a program is presented together with the values it produces.
In order to cope with this unusual domain, MashMaker contains a number of deviations from normal functional languages. The most notable of these is that, in order to allow the programmer to write programs directly on their data, all data is stored in a single tree, and evaluation of an expression always takes place at a specific point in this tree, which also functions as its scope.
--- paper_title: Towards physical mashups in the Web of Things paper_content: Wireless Sensor Networks (WSNs) have promising industrial applications, since they reduce the gap between traditional enterprise systems and the real world. However, every particular application requires complex integration work, and therefore technical expertise, effort and time, which prevents users from creating small tactical, ad-hoc applications using sensor networks. Following the success of Web 2.0 “mashups”, we propose a similar lightweight approach for combining enterprise services (e.g. ERPs) with WSNs. Specifically, we discuss the traditional integration solutions and propose and implement an alternative architecture where sensor nodes are accessible according to the REST principles. With this approach, the nodes become part of a “Web of Things” and interacting with them, as well as composing their services with existing ones, becomes almost as easy as browsing the web.
--- paper_title: Please Permit Me: Stateless Delegated Authorization in Mashups paper_content: Mashups have emerged as a Web 2.0 phenomenon, connecting disjoint applications together to provide unified services. However, scalable access control for mashups is difficult. To enable a mashup to gather data from legacy applications and services, users must give the mashup their login names and passwords for those services. This all-or-nothing approach violates the principle of least privilege and leaves users vulnerable to misuse of their credentials by malicious mashups. In this paper, we introduce delegation permits - a stateless approach to access rights delegation in mashups - and describe our complete implementation of a permit-based authorization delegation service. Our protocol and implementation enable fine-grained, flexible, and stateless access control and authorization for distributed delegated authorization in mashups, while minimizing attackers' ability to capture and exploit users' authentication credentials.
--- paper_title: Towards Accountable Enterprise Mashup Services paper_content: As a result of the proliferation of Web 2.0 style Web sites, the practice of mashup services has become increasingly popular in the Web development community. While mashup services bring flexibility and speed in delivering new valuable services to consumers, the issue of accountability associated with the mashup practice remains largely ignored by the industry. Furthermore, realizing the great benefits of mashup services, industry leaders are eagerly pushing these services into the enterprise arena. Although enterprise mashup services hold great promise in delivering a flexible SOA solution in a business context, the lack of accountability in current mashup solutions may render this ineffective in the enterprise environment. This paper defines accountability for mashup services, analyses the underlying issues in practice, and finally proposes a framework and ontology to model accountability. This model may then be used to develop effective accountability solutions for mashup environments.
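The RESTful "Web of Things" entry above argues that once sensor nodes are exposed as plain web resources, combining their readings with other services becomes ordinary web programming. Below is a minimal sketch of that idea, assuming a hypothetical gateway URL and a JSON response carrying a "value" field; neither is specified by the cited paper.

```python
import json
import urllib.request

GATEWAY = "http://gateway.example/sensors"  # hypothetical RESTful sensor gateway

def read_sensor(node_id: str, resource: str = "temperature") -> dict:
    """Read a sensor resource with a plain HTTP GET, treating the node as a web resource."""
    with urllib.request.urlopen(f"{GATEWAY}/{node_id}/{resource}") as resp:
        return json.loads(resp.read().decode("utf-8"))

def average_temperature(nodes: list) -> float:
    """Combine several physical readings into one value, a tiny 'physical mashup'."""
    readings = [read_sensor(n)["value"] for n in nodes]
    return sum(readings) / len(readings)

if __name__ == "__main__":
    print(average_temperature(["node-1", "node-2", "node-3"]))
```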
--- paper_title: SNAP: a web-based tool for identification and annotation of proxy SNPs using HapMap paper_content: Summary: The interpretation of genome-wide association results is confounded by linkage disequilibrium between nearby alleles. We have developed a flexible bioinformatics query tool for single-nucleotide polymorphisms (SNPs) to identify and to annotate nearby SNPs in linkage disequilibrium (proxies) based on HapMap. By offering functionality to generate graphical plots for these data, the SNAP server will facilitate interpretation and comparison of genome-wide association study results, and the design of fine-mapping experiments (by delineating genomic regions harboring associated variants and their proxies). Availability: SNAP server is available at http://www.broad.mit.edu/
--- paper_title: Increasing the visibility of web-based information systems via client-side mash-ups paper_content: A self-aligning dome nut having a base member for connection into a hole or aperture of a mounting plate by a pressing operation. The base member has a neck portion on one side for connection into the hole and a cavity on its other side into which a nut member is loosely positioned. A protective dome encloses the cavity and covers the nut member. The dome is connected to the base member to leave an exposed portion on its other side for direct application of a clamping force during attachment of the dome nut to the support member. An insulting washer is disposed around the neck portion of the base member between it and the mounting plate, the insulating washer is sized to electrically isolate the nut from the support member and preclude electrical arcing therebetween.
--- paper_title: sMash: semantic-based mashup navigation for data API network paper_content: With the proliferation of data APIs, it is not uncommon that users who have no clear ideas about data APIs will encounter difficulties in building Mashups to satisfy their requirements. In this paper, we present a semantic-based mashup navigation system, sMash, that makes mashup building easy by constructing and visualizing a real-life data API network. We build a sample network by gathering more than 300 popular APIs and find that the relationships between them are so complex that our system will play an important role in navigating users and give them inspiration to build interesting mashups easily. The system is accessible at: http://www.dart.zju.edu.cn/mashup.
--- paper_title: Current solutions for Web service composition paper_content: Web service composition lets developers create applications on top of service-oriented computing's native description, discovery, and communication capabilities. Such applications are rapidly deployable and offer developers reuse possibilities and users seamless access to a variety of complex services. There are many existing approaches to service composition, ranging from abstract methods to those aiming to be industry standards. The authors describe four key issues for Web service composition.
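The composition surveyed in the entry above is, at its simplest, sequential: the output of one service becomes the input of the next. The sketch below illustrates that pattern with two hypothetical RESTful services; the URLs and response fields are invented for the example and are not taken from the cited work.

```python
import json
import urllib.request
from urllib.parse import quote

def call(url: str) -> dict:
    """Invoke a RESTful service and decode its JSON result."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def plan_trip(city: str) -> dict:
    """Sequential composition: the geocoder's output feeds the forecast service."""
    geo = call(f"https://geo.example/api?city={quote(city)}")  # -> {"lat": ..., "lon": ...}
    forecast = call(f"https://weather.example/api?lat={geo['lat']}&lon={geo['lon']}")
    return {"city": city, "forecast": forecast}

if __name__ == "__main__":
    print(plan_trip("Lisbon"))
```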
--- paper_title: TOWARDS A REFERENCE MODEL FOR GRASSROOTS ENTERPRISE MASHUP ENVIRONMENTS paper_content: A new kind of Web-based application, known as Enterprise Mashups, has gained momentum in the last years: Business users with no or limited programming skills are empowered to leverage in a collaborative manner user-friendly building blocks and to combine and reuse existing Web-based resources within minutes into new value-added applications in order to solve an individual and ad-hoc business problem. Current discussions of the Mashup paradigm in the scientific community are limited to technical aspects. The collaboration and the peer production management aspects of Mashup development have received less attention so far. In this paper, we propose a reference model for Enterprise Mashups which provides a foundation to develop and to analyse grassroots Enterprise Mashup environments from a managerial and collaborative perspective. By following the design science research approach, we investigate existing reference models and leverage the St. Gallen Media Reference Model (MRM). The development of Enterprise Mashups is structured by market transaction phases similar to electronic markets. The user roles, the necessary processes and the resulting services are modelled according to the views of the MRM. By means of the SAP Research RoofTop Marketplace prototype we demonstrate the application of the designed reference model for grassroots Enterprise Mashup environments.
Moving to a new technology such as service oriented architecture is impossible without taking these programs along. This contribution presents a tool supported method for achieving that goal. Legacy code is wrapped behind an XML shell which allows individual functions within the programs, to be offered as Web services to any external user. By means of this wrapping technology, a significant part of the company software assets can be preserved within the framework of a service oriented architecture. --- paper_title: Understanding Mashup Development paper_content: Web mashups are Web applications developed using contents and services available online. Despite rapidly increasing interest in mashups, comprehensive development tools and frameworks are lacking, and in most cases mashing up a new application implies a significant manual programming effort. This article overviews current tools, frameworks, and trends that aim to facilitate mashup development. The authors use a set of characteristic dimensions to highlight the strengths and weaknesses of some representative approaches. --- paper_title: Data Integration Support for Mashups paper_content: Mashups are a new type of interactive web applications, combining content from multiple services or sources at runtime. While many such mashups are being developed most of them support rather simple data integration tasks. We therefore propose a framework for the development of more complex dynamic data integration mashups. The framework consists of components for query generation and online matching as well as for additional data transformation. Our architecture supports interactive and sequential result refinement to improve the quality of the presented result stepby-step by executing more elaborate queries when necessary. A script-based definition of mashups facilitates the development as well as the dynamic execution of mashups. We illustrate our approach by a powerful mashup implementation combining bibliographic data to dynamically calculate citation counts for venues and authors. --- paper_title: Towards a mashup-driven end-user programming of SOA-based applications paper_content: Recent technologies and standards in the field of Service-Orientated Architectures (SOA) have focused on Service-to-Service interaction and do not consider Service-to-User scenarios. This results in a lack of a service-consumer-orientation in order to empower the user to get easy access to the functionalities of services. The paper argues for the need of new concepts that extend existing mashup approaches to enable a more end-user driven application development. It presents an insight into existing mashup technologies and identifies shortcomings concerning the creation of more complex applications. The paper offers ways how to extend the existing concepts and shows their potential as a key technology for an end-user programming in the field of SOA. The empowerment of the actual service consumer can bridge the gap between the user and the service infrastructure. --- paper_title: Mashing up visual languages and web mash-ups paper_content: Research on Web mashups and visual languages share an interest in human-centered computing. Both research communities are concerned with supporting programming by everyday, technically inexpert users. Visual programming environments have been a focus for both communities, and we believe that there is much to be gained by further discussion between these research communities. 
In this paper we explore some connections between web mashups and visual languages, and try to identify what each might be able to learn from the other. Our goal is to establish a framework for a dialog between the communities, and to promote the exchange of ideas and our respective understandings of human-centered computing. --- paper_title: Rapid development of spreadsheet-based web mashups paper_content: The rapid growth of social networking sites and web communities have motivated web sites to expose their APIs to external developers who create mashups by assembling existing functionalities. Current APIs, however, aim toward developers with programming expertise; they are not directly usable by wider class of users who do not have programming background, but would nevertheless like to build their own mashups. To address this need, we propose a spreadsheet-based Web mashups development framework, which enables users to develop mashups in the popular spreadsheet environment. First, we provide a mechanism that makes structured data first class values of spreadsheet cells. Second, we propose a new component model that can be used to develop fairly sophisticated mashups, involving joining data sources and keeping spreadsheet data up to date. Third, to simplify mashup development, we provide a collection of spreadsheet-based mashup patterns that captures common Web data access and spreadsheet presentation functionalities. Users can reuse and customize these patterns to build spreadsheet-based Web mashups instead of developing them from scratch. Fourth, we enable users to manipulate structured data presented on spreadsheet in a drag-and-drop fashion. Finally, we have developed and tested a proof-of-concept prototype to demonstrate the utility of the proposed framework. --- paper_title: End-user programming of mashups with vegemite paper_content: Mashups are an increasingly popular way to integrate data from multiple web sites to fit a particular need, but it often requires substantial technical expertise to create them. To lower the barrier for creating mashups, we have extended the CoScripter web automation tool with a spreadsheet-like environment called Vegemite. Our system uses direct-manipulation and programming-by-demonstration tech-niques to automatically populate tables with information collected from various web sites. A particular strength of our approach is its ability to augment a data set with new values computed by a web site, such as determining the driving distance from a particular location to each of the addresses in a data set. An informal user study suggests that Vegemite may enable a wider class of users to address their information needs. --- paper_title: User-friendly functional programming for web mashups paper_content: MashMaker is a web-based tool that makes it easy for a normal user to create web mashups by browsing around, without needing to type, or plan in advance what they want to do. Like a web browser, Mashmaker allows users to create mashups by browsing, rather than writing code, and allows users to bookmark interesting things they find, forming new widgets - reusable mashup fragments. Like a spreadsheet, MashMaker mixes program and data and allows ad-hoc unstructured editing of programs. MashMaker is also a modern functional programming language with non-side effecting expressions, higher order functions, and lazy evaluation. 
MashMaker programs can be manipulated either textually, or through an interactive tree representation, in which a program is presented together with the values it produces. In order to cope with this unusual domain, MashMaker contains a number of deviations from normal function languages. The most notable of these is that, in order to allow the programmer to write programs directly on their data, all data is stored in a single tree, and evaluation of an expression always takes place at a specific point in this tree, which also functions as its scope. --- paper_title: Towards Accountable Enterprise Mashup Services paper_content: As a result of the proliferation of Web 2.0 style Web sites, the practice of mashup services has become increasingly popular in the Web development community. While mashup services bring flexibility and speed in delivering new valuable services to consumers, the issue of accountability associated with the mashup practice remains largely ignored by the industry. Furthermore, realizing the great benefits of mashup services, industry leaders are eagerly pushing these services into the enterprise arena. Although enterprise mashup services hold great promise in delivering a flexible SOA solution in a business context, the lack of accountability in current mashup solutions may render this ineffective in the enterprise environment. This paper defines accountability for mashup services, analyses the underlying issues in practice, and finally proposes a framework and ontology to model accountability. This model may then be used to develop effective accountability solutions for mashup environments. --- paper_title: Mashlight: a Lightweight Mashup Framework for Everyone paper_content: Recently, Web 2.0 has brought a storm on web application development. In particular, mashups have greatly enhanced user creativity across the web, allowing end-users to rapidly combine information from diverse sources, and integrate them into “new” goal-oriented applications. In the meantime, widgets (also built with Web 2.0 technology) have also gained a lot of momentum. Indeed, they have become ever more present in our every day lives, even appearing as first class players in our operating systems (e.g., Mac OS X and Windows 7). In this paper we present Mashlight: a lightweight framework for creating and executing mashups that combines these two worlds. Indeed, it provides users with a simple means to create “process-like” mashups using “widget-like” Web 2.0 applications. Even users that have no technical knowhow can string together building blocks –taken from an extensible library– to define the application they need. The framework is implemented using common Web technology, meaning our mashups can be run from different kinds of devices, in a lightweight fashion without the overhead of a complex application server. The paper presents the main concepts behind Mashlight blocks and Mashlight processes, and demonstrate our prototype on a concrete example. --- paper_title: Potluck: data mash-up tool for casual users paper_content: As more and more reusable structured data appears on the Web, casual users will want to take into their own hands the task of mashing up data rather than wait for mash-up sites to be built that address exactly their individually unique needs. In this paper, we present Potluck, a Web user interface that lets casual users--those without programming skills and data modeling expertise--mash up data themselves. 
Potluck is novel in its use of drag and drop for merging fields, its integration and extension of the faceted browsing paradigm for focusing on subsets of data to align, and its application of simultaneous editing for cleaning up data syntactically. Potluck also lets the user construct rich visualizations of data in-place as the user aligns and cleans up the data. This iterative process of integrating the data while constructing useful visualizations is desirable when the user is unfamiliar with the data at the beginning--a common case--and wishes to get immediate value out of the data without having to spend the overhead of completely and perfectly integrating the data first. A user study on Potluck indicated that it was usable and learnable, and elicited excitement from programmers who, even with their programming skills, previously had great difficulties performing data integration. --- paper_title: A methodology for quality-based mashup of data sources paper_content: The concept of mashup is gaining tremendous popularity and its application can be seen in a large number of domains. Enterprises using and relying upon mashup have improved their mass collaboration and personalization. In order for mashup technology to be widely accepted and widely used, we need a methodology by which can make use of the quality of the input to the mashup process as a governing principle to carry out mashup. This paper reviews the concept of mashup in different domains and proposes a conceptual solution framework for providing quality based mashup process. --- paper_title: Increasing the visibility of web-based information systems via client-side mash-ups paper_content: A self-aligning dome nut having a base member for connection into a hole or aperture of a mounting plate by a pressing operation. The base member has a neck portion on one side for connection into the hole and a cavity on its other side into which a nut member is loosely positioned. A protective dome encloses the cavity and covers the nut member. The dome is connected to the base member to leave an exposed portion on its other side for direct application of a clamping force during attachment of the dome nut to the support member. An insulting washer is disposed around the neck portion of the base member between it and the mounting plate, the insulating washer is sized to electrically isolate the nut from the support member and preclude electrical arcing therebetween.
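Several of the mashup references above (for instance the data-integration and spreadsheet-based approaches) reduce, at their core, to fetching records from independent sources and merging them on shared attributes. The sketch below is a minimal, hypothetical Python illustration of that join step only; the feed names and fields are invented for the example and stand in for responses a real mashup would retrieve from web APIs.

```python
# Minimal sketch of the data-integration style of mashup discussed above:
# two independent sources are normalized and joined on a shared key.
# The feeds below are inlined sample data; in a real mashup they would be
# retrieved from web APIs (names and fields here are illustrative only).

venues_feed = [
    {"venue": "VLDB", "citations": 1200},
    {"venue": "SIGMOD", "citations": 1100},
]
locations_feed = [
    {"venue": "VLDB", "city": "Seattle"},
    {"venue": "SIGMOD", "city": "Athens"},
]

def join_on(key, left, right):
    """Join two lists of records on a shared key, merging their fields."""
    index = {rec[key]: rec for rec in right}
    return [{**rec, **index[rec[key]]} for rec in left if rec[key] in index]

if __name__ == "__main__":
    for row in join_on("venue", venues_feed, locations_feed):
        print(row)
```

A full mashup tool would add incremental fetching, cleaning, and quality checks on top of this basic merge, as several of the referenced approaches emphasize.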
---
Title: Mashups: A Literature Review and Classification Framework Section 1: Introduction Description 1: Discuss the background and context of Web 2.0, the emergence of mashup applications, and their significance in current web development. Section 2: Review Methodology Description 2: Describe the methodology used to conduct the literature review, including the process of gathering publications and identifying subtopics in mashup research. Section 3: Literature Review Description 3: Present the result of the literature review, including categorization of mashup research into six main areas and description of the technological challenges and advancements within each category. Section 4: Access Control and Cross Communication Description 4: Discuss issues surrounding access control to mashup data, security risks, and methods to provide secure cross communication between backend resources. Section 5: Mashup Integration Description 5: Address the challenges and solutions related to the integration of various data sources in mashups, examining different data integration models and methodologies. Section 6: Mashup Agents Description 6: Examine tools designed to semantically determine relevant information sources and autonomously include them in the mashup, including AI techniques that support this process. Section 7: Mashup Frameworks Description 7: Review frameworks that support the development of mashups, discussing the need for structured approaches to handle heterogeneous data sources and ad-hoc application development. Section 8: End User Programming Description 8: Explore the development of languages and tools aimed at enabling nontechnical users to create mashups, discussing both passive and proactive approaches to end-user programming. Section 9: Enterprise Mashups Description 9: Investigate the strategic advantages of mashups in enterprise settings, focusing on issues such as accountability, design principles, and integration with legacy systems. Section 10: Mashup Classification Framework Description 10: Aggregate the characterizing attributes from each research category into an overarching classification framework to aid researchers and developers in understanding and designing mashup environments. Section 11: Future Research Description 11: Highlight areas for future research, identifying open questions and challenges remaining in the field of mashup development and application. Section 12: Conclusions Description 12: Summarize the key findings of the literature review, emphasizing the identified themes and their implications for future research and practical application in mashup development. Section 13: Appendix A. Literature Classification Description 13: Provide a comprehensive summary of the classified literature, including the category, title, synopsis, and classification of each reviewed publication.
Visual analytics of movement: An overview of methods, tools and procedures
11
--- paper_title: Readings in Information Visualization-Using Visualization to Think paper_content: A programmable DC power supply including a proportionally controlled transistor connected in series with the load for varying the load current in accordance with the magnitude of a control signal applied to the transistor. A detector coupled across the transistor functions to sense high and low voltage thresholds and thereupon select an incrementally higher or lower voltage for application to the load to assure that the transistor operates continuously in its proportional range, thereby enabling the input signal to maintain control of the load current. --- paper_title: Visualization of Vessel Traffic paper_content: We discuss methods to visualize large amounts of object movements described with so called multivariate trajectories, which are lists of records with multiple attribute values about the state of the object. In this chapter we focus on vessel traffic as one of the examples of this kind of data. The purpose of our visualizations is to reveal what has happened over a period of time. For vessel traffic, this is beneficial for surveillance operators and analysts, since current visualizations do not give an overview of normal behavior, which is needed to find abnormally behaving ships that can be a potential threat. Our approach is inspired by the technique of kernel density estimation and smooths trajectories to obtain an overview picture with a distribution of trajectories: a density map. Using knowledge about the attributes in the data, the user can adapt these pictures by setting parameters, filters, and expressions as means for rapid prototyping, required for quickly finding other types of behavior with our visualization approach. Furthermore, density maps are computationally expensive, which we address by implementing our tools on graphics hardware. We describe different variations of our techniques and illustrate them with real-world vessel traffic data. --- paper_title: WHAT ABOUT PEOPLE IN REGIONAL SCIENCE? paper_content: The described locking mechanism permits two or more rods to be quickly fitted together and adjusted to a fixed, predetermined length. The locking mechanism is of a type such that the rotation of the rods by approximately 45 DEG -90 DEG automatically fixes the rods in place. --- paper_title: Geovisualization of Dynamics, Movement and Change: Key Issues and Developing Approaches in Visualization Research paper_content: The work presented here represents a selection of the contributions made to a workshop coordinated by the International Cartographic Association (ICA) Commission on Geovisualization and the Association of Geographic Information Laboratories in Europe (AGILE) on the Geovisualization of Dynamics, Movement and Change. Theoretical and methodological approaches for exploring and analyzing large datasets with spatial and temporal components were presented, discussed and developed at the meeting in Girona, Catalunya which was held on 5 May 2008 one day before AGILE’s 11th International Conference on Geographic Information Science. --- paper_title: GeoTime information visualization paper_content: Analyzing observations over time and geography is a common task but typically requires multiple, separate tools. The objective of our research has been to develop a method to visualize, and work with, the spatial interconnectedness of information over time and geography within a single, highly interactive 3D view. 
A novel visualization technique for displaying and tracking events, objects and activities within a combined temporal and geospatial display has been developed. This technique has been implemented as a demonstratable prototype called GeoTime in order to determine potential utility. Initial evaluations have been with military users. However, we believe the concept is applicable to a variety of government and business analysis tasks --- paper_title: INTERACTIVE GEOVISUALIZATION OF ACTIVITY-TRAVEL PATTERNS USING THREE-DIMENSIONAL GEOGRAPHICAL INFORMATION SYSTEMS: A METHODOLOGICAL EXPLORATION WITH A LARGE DATA SET paper_content: A major difficulty in the analysis of disaggregate activity-travel behavior in the past arises from the many interacting dimensions involved (e.g. location, timing, duration and sequencing of trips and activities). Often, the researcher is forced to decompose activity-travel patterns into their component dimensions and focus only on one or two dimensions at a time, or to treat them as a multidimensional whole using multivariate methods to derive generalized activity-travel patterns. This paper describes several GIS-based three-dimensional (3D) geovisualization methods for dealing with the spatial and temporal dimensions of human activity-travel patterns at the same time while avoiding the interpretative complexity of multivariate pattern generalization or recognition methods. These methods are operationalized using interactive 3D GIS techniques and a travel diary data set collected in the Portland (Oregon) metropolitan region. The study demonstrates several advantages in using these methods. First, significance of the temporal dimension and its interaction with the spatial dimension in structuring the daily space-time trajectories of individuals can be clearly revealed. Second, they are effective tools for the exploratory analysis of activity diary data that can lead to more focused analysis in later stages of a study. They can also help the formulation of more realistic computational or behavioral travel models. --- paper_title: Animation: Can it facilitate paper_content: Graphics have been used since ancient times to portray things that are inherently spatiovisual, like maps and building plans. More recently, graphics have been used to portray things that are metaphorically spatiovisual, like graphs and organizational charts. The assumption is that graphics can facilitate comprehension, learning, memory, communication and inference. Assumptions aside, research on static graphics has shown that only carefully designed and appropriate graphics prove to be beneficial for conveying complex systems. Effective graphics conform to the Congruence Principle according to which the content and format of the graphic should correspond to the content and format of the concepts to be conveyed. From this, it follows that animated graphics should be effective in portraying change over time. Yet the research on the efficacy of animated over static graphics is not encouraging. In cases where animated graphics seem superior to static ones, scrutiny reveals lack of equivalence between animated and static graphics in content or procedures; the animated graphics convey more information or involve interactivity. Animations of events may be ineffective because animations violate the second principle of good graphics, the Apprehension Principle, according to which graphics should be accurately perceived and appropriately conceived. 
Animations are often too complex or too fast to be accurately perceived. Moreover, many continuous events are conceived of as sequences of discrete steps. Judicious use of interactivity may overcome both these disadvantages. Animations may be more effective than comparable static graphics in situations other than conveying complex systems, for example, for real time reorientations in time and space. --- paper_title: FromDaDy: Spreading Aircraft Trajectories Across Views to Support Iterative Queries paper_content: When displaying thousands of aircraft trajectories on a screen, the visualization is spoiled by a tangle of trails. The visual analysis is therefore difficult, especially if a specific class of trajectories in an erroneous dataset has to be studied. We designed FromDaDy, a trajectory visualization tool that tackles the difficulties of exploring the visualization of multiple trails. This multidimensional data exploration is based on scatterplots, brushing, pick and drop, juxtaposed views and rapid visual design.
Users can organize the workspace composed of multiple juxtaposed views. They can define the visual configuration of the views by connecting data dimensions from the dataset to Bertin's visual variables. They can then brush trajectories, and with a pick and drop operation they can spread the brushed information across views. They can then repeat these interactions, until they extract a set of relevant data, thus formulating complex queries. Through two real-world scenarios, we show how FromDaDy supports iterative queries and the extraction of trajectories in a dataset that contains up to 5 million data. --- paper_title: Supporting visual exploration of object movement paper_content: The focus of the presented work is visualization of routes of objects that change their spatial location in time. The challenge is to facilitate investigation of important characteristics of the movement: positions of the objects at any selected moment, directions, speeds and their changes with the time, overall trajectories and those for any specified interval etc. We propose a dynamic map display controlled through a set of interactive devices called time controls to be used as a support to visual exploration of spatial movement. --- paper_title: TripVista: Triple Perspective Visual Trajectory Analytics and its application on microscopic traffic data at a road intersection paper_content: In this paper, we present an interactive visual analytics system, Triple Perspective Visual Trajectory Analytics (TripVista), for exploring and analyzing complex traffic trajectory data. The users are equipped with a carefully designed interface to inspect data interactively from three perspectives (spatial, temporal and multi-dimensional views). While most previous works, in both visualization and transportation research, focused on the macro aspects of traffic flows, we develop visualization methods to investigate and analyze microscopic traffic patterns and abnormal behaviors. In the spatial view of our system, traffic trajectories with various presentation styles are directly interactive with user brushing, together with convenient pattern exploration and selection through ring-style sliders. Improved ThemeRiver, embedded with glyphs indicating directional information, and multiple scatterplots with time as horizontal axes illustrate temporal information of the traffic flows. Our system also harnesses the power of parallel coordinates to visualize the multi-dimensional aspects of the traffic trajectory data. The above three view components are linked closely and interactively to provide access to multiple perspectives for users. Experiments show that our system is capable of effectively finding both regular and abnormal traffic flow patterns. --- paper_title: Interactive visual clustering of large collections of trajectories paper_content: One of the most common operations in exploration and analysis of various kinds of data is clustering, i.e. discovery and interpretation of groups of objects having similar properties and/or behaviors. In clustering, objects are often treated as points in multi-dimensional space of properties. However, structurally complex objects, such as trajectories of moving entities and other kinds of spatio-temporal data, cannot be adequately represented in this manner. Such data require sophisticated and computationally intensive clustering algorithms, which are very hard to scale effectively to large datasets not fitting in the computer main memory. 
We propose an approach to extracting meaningful clusters from large databases by combining clustering and classification, which are driven by a human analyst through an interactive visual interface. --- paper_title: Visually driven analysis of movement data by progressive clustering paper_content: The paper investigates the possibilities of using clustering techniques in visual exploration and analysis of large numbers of trajectories, that is, sequences of time-stamped locations of some moving entities. Trajectories are complex spatio-temporal constructs characterized by diverse non-trivial properties. To assess the degree of (dis)similarity between traiectories, specific methods (distance functions) are required. A single distance function accounting for all properties of trajectories, (1) is difficult to build, (2) would require much time to compute, and (3) might be difficult to understand and to use. We suggest the procedure of progressive clustering where a simple distance function with a clear meaning is applied on each step, which leads to easily interpretable outcomes. Successive application of several different functions enables sophisticated analyses through gradual refinement of earlier obtained results. Besides the advantages from the sense-making perspective, progressive clustering enables a rational work organization where time-consuming computations are applied to relatively small potentially interesting subsets obtained by means of 'cheap' distance functions producing quick results. We introduce the concept of progressive clustering by an example of analyzing a large real data set. We also review the existing clustering methods, describe the method OPTICS suitable for progressive clustering of trajectories, and briefly present several distance functions for trajectories. --- paper_title: Visual analytics tools for analysis of movement data paper_content: With widespread availability of low cost GPS devices, it is becoming possible to record data about the movement of people and objects at a large scale. While these data hide important knowledge for the optimization of location and mobility oriented infrastructures and services, by themselves they lack the necessary semantic embedding which would make fully automatic algorithmic analysis possible. At the same time, making the semantic link is easy for humans who however cannot deal well with massive amounts of data. In this paper, we argue that by using the right visual analytics tools for the analysis of massive collections of movement data, it is possible to effectively support human analysts in understanding movement behaviors and mobility patterns. We suggest a framework for analysis combining interactive visual displays, which are essential for supporting human perception, cognition, and reasoning, with database operations and computational methods, which are necessary for handling large amounts of data. We demonstrate the synergistic use of these techniques in case studies of two real datasets. --- paper_title: Poster: Dynamic time transformation for interpreting clusters of trajectories with space-time cube paper_content: We propose a set of techniques that support visual interpretation of trajectory clusters by transforming absolute time references into relative positions within temporal cycles or with respect to the starting and/or ending times of the trajectories. We demonstrate the work of the approach on a real data set about individual movement over one year. 
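The clustering-oriented references above evaluate similarity between trajectories with dedicated distance functions and density-based algorithms such as OPTICS. The sketch below is not that method; it is a deliberately simplified stand-in that makes the role of a trajectory distance function concrete: each trajectory is resampled to a fixed number of points, point-wise distances are averaged, and trajectories are grouped by a naive threshold rule. The resampling size, threshold, and sample trajectories are arbitrary illustrative choices.

```python
# Illustrative sketch (not the OPTICS-based method of the references above):
# a simple "route similarity" distance between trajectories, obtained by
# resampling each trajectory to a fixed number of points and averaging the
# point-wise Euclidean distances, followed by naive threshold-based grouping.
import math

def resample(traj, n=20):
    """Pick n points spread evenly along the list of (x, y) positions."""
    if len(traj) == 1:
        return traj * n
    return [traj[round(i * (len(traj) - 1) / (n - 1))] for i in range(n)]

def route_distance(a, b, n=20):
    """Mean point-wise distance between two resampled trajectories."""
    ra, rb = resample(a, n), resample(b, n)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / n

def group_by_similarity(trajectories, threshold):
    """Assign each trajectory to the first existing group whose seed is close."""
    groups = []  # each group is a list of trajectory indices
    for i, t in enumerate(trajectories):
        for g in groups:
            if route_distance(trajectories[g[0]], t) <= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

trajs = [
    [(0, 0), (1, 1), (2, 2)],
    [(0, 0.1), (1, 1.2), (2, 2.1)],
    [(5, 0), (5, 1), (5, 2)],
]
print(group_by_similarity(trajs, threshold=0.5))  # e.g. [[0, 1], [2]]
```

Swapping route_distance for a cheaper function (for example, one comparing only start and end points) and re-grouping only the interesting subsets mirrors the idea of progressive clustering described above.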
--- paper_title: From movement tracks through events to places: Extracting and characterizing significant places from mobility data paper_content: We propose a visual analytics procedure for analyzing movement data, i.e., recorded tracks of moving objects. It is oriented to a class of problems where it is required to determine significant places on the basis of certain types of events occurring repeatedly in movement data. The procedure consists of four major steps: (1) event extraction from trajectories; (2) event clustering and extraction of relevant places; (3) spatio-temporal aggregation of events or trajectories; (4) analysis of the aggregated data. All steps are scalable with respect to the amount of the data under analysis. We demonstrate the use of the procedure by example of two real-world problems requiring analysis at different spatial scales. --- paper_title: Exploration through enrichment: a visual analytics approach for animal movement paper_content: The analysis of trajectories has become an important field in geographic visualization, as cheap GPS sensors have become commonplace and, in many cases, valuable information can be derived either from the data themselves or their metadata if processed and visualized in the right way. However, showing the "right" information to highlight dependencies or correlations between different measurements remains a challenge, because the technical intricacies of applying a combination of automatic and visual analysis methods prevents the majority of domain experts from analyzing and exploring the full wealth of their movement data. This paper presents an exploration through enrichment approach, which enables iterative generation of metadata based on exploratory findings and is aimed at enabling domain experts to explore their data beyond traditional means. --- paper_title: A General Framework for Using Aggregation in Visual Exploration of Movement Data paper_content: Abstract To be able to explore visually large amounts of movement data, it is necessary to apply methods for aggregation and summarisation of the data. The goal of our research has been to systemize the possible approaches to aggregation of movement data into a framework clearly defining what kinds of exploratory tasks each approach is suitable for. On the basis of a formal model of movement of multiple entities, we consider two possible views of movement data, situation-oriented and trajectory-oriented. For each view, we discuss the appropriate methods of data aggregation and the visualisation techniques representing the results of aggregation and supporting data exploration. Special attention is given to dynamic aggregation working in combination with interactive filtering and classification of movement data (CR categories and subject descriptors: H.1·2 [user/machine systems]: human information processing - visual analytics; I.6·9 [visualisation]: information visualisation). --- paper_title: The space - time cube revisited from a geovisualization perspective paper_content: At the end of the sixties Hagerstrand introduced a space-time model which included features such as a Space-TimePath, and a Space-Time-Prism. His model is often seen as the start of the time-geography studies. Throughout the years his model has been applied and improved to understand our movements through space. Problems studied can be found in different fields of geography, and range from those on an individual movement to whole theories to optimize transportation. 
From a visualization perspective the Space-Time-Cube was the most prominent element in Hagerstrand’s approach. In its basic appearance these images consist of a cube with on its base a representation of geography (along the x- and y-axis), while the cube’s height represents time (z-axis). A typical Space-Time-Cube could contain the space time-paths of for instance individuals or bus routes. However, when the concept was introduced the options to create the graphics were limited to manual methods and the user could only experience the single view created by the draftsperson. An alternative view on the cube would mean to go through a laborious drawing exercise. Today’s software has options to automatically create the cube and its contents from a database. Data acquisition of space-time paths for both individuals and groups is also made easier using GPS. Today, the user’s viewing environment is, by default, interactive and allows one to view the cube from any direction. In this paper an extended interactive and dynamic visualization environment is proposed, and demonstrated, in which the user has full flexibility to view, manipulate and query the data in a Space-Time-Cube. Included are options to move slider planes along each of the axes to for instance select, or highlight a period in time or location in space. Examples will be discussed in which the time axis is manipulated by for instance changing world time for event time (time cartograms). Creativity should not stop here since it has been shown that especially an alternative perspective on the data will sparkle the mind with new ideas. The user should be allowed to for instance let the x- and/or y-axis be represented by other variables of the theme studied. Since the cube is seen as an integral part of a geovisualization environment the option to link other views with other graphic representation does exist. --- paper_title: Exploring spatiotemporal patterns by integrating visual analytics with a moving objects database system paper_content: In previous work, we have proposed a tool for Spatiotemporal Pattern Query. It matches individual moving object trajectories against a given movement pattern. For example, it can be used to find the situations of Missed Approach in ATC data (Air Traffic Control systems, used for tracking the movement of aircrafts), where the landing of the aircraft was interrupted for some reason. This tool expresses the pattern as a set of predicates that must be fulfilled in a certain temporal order. It is implemented as a Plugin to the Secondo DBMS system. Although the tool is generic and flexible, domain expertise is required to formulate and tune queries. The user has to decide the set of predicates, their arguments, and the temporal constraints that best describe the pattern. This paper demonstrates a novel solution where a Visual Analytics system, V-Analytics, is used in integration with this query tool to help a human analyst explore such patterns. The demonstration is based on a real ATC data set. --- paper_title: A conceptual framework and taxonomy of techniques for analyzing movement paper_content: Movement data link together space, time, and objects positioned in space and time. They hold valuable and multifaceted information about moving objects, properties of space and time as well as events and processes occurring in space and time. 
We present a conceptual framework that describes in a systematic and comprehensive way the possible types of information that can be extracted from movement data and on this basis defines the respective types of analytical tasks. Tasks are distinguished according to the type of information they target and according to the level of analysis, which may be elementary (i.e. addressing specific elements of a set) or synoptic (i.e. addressing a set or subsets). We also present a taxonomy of generic analytic techniques, in which the types of tasks are linked to the corresponding classes of techniques that can support fulfilling them. We include techniques from several research fields: visualization and visual analytics, geographic information science, database technology, and data mining. We expect the taxonomy to be valuable for analysts and researchers. Analysts will receive guidance in choosing suitable analytic techniques for their data and tasks. Researchers will learn what approaches exist in different fields and compare or relate them to the approaches they are going to undertake. --- paper_title: TripVista: Triple Perspective Visual Trajectory Analytics and its application on microscopic traffic data at a road intersection paper_content: In this paper, we present an interactive visual analytics system, Triple Perspective Visual Trajectory Analytics (TripVista), for exploring and analyzing complex traffic trajectory data. The users are equipped with a carefully designed interface to inspect data interactively from three perspectives (spatial, temporal and multi-dimensional views). While most previous works, in both visualization and transportation research, focused on the macro aspects of traffic flows, we develop visualization methods to investigate and analyze microscopic traffic patterns and abnormal behaviors. In the spatial view of our system, traffic trajectories with various presentation styles are directly interactive with user brushing, together with convenient pattern exploration and selection through ring-style sliders. Improved ThemeRiver, embedded with glyphs indicating directional information, and multiple scatterplots with time as horizontal axes illustrate temporal information of the traffic flows. Our system also harnesses the power of parallel coordinates to visualize the multi-dimensional aspects of the traffic trajectory data. The above three view components are linked closely and interactively to provide access to multiple perspectives for users. Experiments show that our system is capable of effectively finding both regular and abnormal traffic flow patterns. --- paper_title: An event-based conceptual model for context-aware movement analysis paper_content: Current tracking technologies enable collection of data, describing movements of various kinds of objects, including people, animals, icebergs, vehicles, containers with goods and so on. Analysis of movement data is now a hot research topic. However, most of the suggested analysis methods deal with movement data alone. Little has been done to support the analysis of movement in its spatio-temporal context, which includes various spatial and temporal objects as well as diverse properties associated with spatial locations and time moments. Comprehensive analysis of movement requires detection and analysis of relations that occur between moving objects and elements of the context in the process of the movement. 
We suggest a conceptual model in which movement is considered as a combination of spatial events of diverse types and extents in space and time. Spatial and temporal relations occur between movement events and elements of the spatial and temporal contexts. The model gives a ground to a generic approach based on extraction of interesting events from trajectories and treating the events as independent objects. By means of a prototype implementation, we tested the approach on complex real data about movement of wild animals. The testing showed the validity of the approach. --- paper_title: Stacking-Based Visualization of Trajectory Attribute Data paper_content: Visualizing trajectory attribute data is challenging because it involves showing the trajectories in their spatio-temporal context as well as the attribute values associated with the individual points of trajectories. Previous work on trajectory visualization addresses selected aspects of this problem, but not all of them. We present a novel approach to visualizing trajectory attribute data. Our solution covers space, time, and attribute values. Based on an analysis of relevant visualization tasks, we designed the visualization solution around the principle of stacking trajectory bands. The core of our approach is a hybrid 2D/3D display. A 2D map serves as a reference for the spatial context, and the trajectories are visualized as stacked 3D trajectory bands along which attribute values are encoded by color. Time is integrated through appropriate ordering of bands and through a dynamic query mechanism that feeds temporally aggregated information to a circular time display. An additional 2D time graph shows temporal information in full detail by stacking 2D trajectory bands. Our solution is equipped with analytical and interactive mechanisms for selecting and ordering of trajectories, and adjusting the color mapping, as well as coordinated highlighting and dedicated 3D navigation. We demonstrate the usefulness of our novel visualization by three examples related to radiation surveillance, traffic analysis, and maritime navigation. User feedback obtained in a small experiment indicates that our hybrid 2D/3D solution can be operated quite well. --- paper_title: Using treemaps for variable selection in spatio-temporal visualisation paper_content: We demonstrate and reflect upon the use of enhanced treemaps that incorporate spatial and temporal ordering for exploring a large multivariate spatio-temporal data set. The resulting data-dense views summarise and simultaneously present hundreds of space-, time-, and variable-constrained subsets of a large multivariate data set in a structure that facilitates their meaningful comparison and supports visual analysis. Interactive techniques allow localised patterns to be explored and subsets of interest selected and compared with the spatial aggregate. Spatial variation is considered through interactive raster maps and high-resolution local road maps. The techniques are developed in the context of 42.2 million records of vehicular activity in a 98 km2 area of central London and informally evaluated through a design used in the exploratory visualisation of this data set. The main advantages of our technique are the means to simultaneously display hundreds of summaries of the data and to interactively browse hundreds of variable combinations with ordering and symbolism that are consistent and appropriate for space- and time-based variables. 
These capabilities are difficult to achieve in the case of spatio-temporal data with categorical attributes using existing geovisualisation methods. We acknowledge limitations in the treemap representation but enhance the cognitive plausibility of this popular layout through our two-dimensional ordering algorithm and interactions. Patterns that are expected (e.g. more traffic in central London), interesting (e.g. the spatial and temporal distribution of particular vehicle types) and anomalous (e.g. low speeds on particular road sections) are detected at various scales and locations using the approach. In many cases, anomalies identify biases that may have implications for future use of the data set for analyses and applications. Ordered treemaps appear to have potential as interactive interfaces for variable selection in spatio-temporal visualisation. --- paper_title: Visualization of vessel movements paper_content: We propose a geographical visualization to support operators of coastal surveillance systems and decision making analysts to get insights in vessel movements. For a possibly unknown area, they want to know where significant maritime areas, like highways and anchoring zones, are located. We show these features as an overlay on a map. As source data we use AIS data: Many vessels are currently equipped with advanced GPS devices that frequently sample the state of the vessels and broadcast them. Our visualization is based on density fields that are derived from convolution of the dynamic vessel positions with a kernel. The density fields are shown as illuminated height maps. Combination of two fields, with a large and small kernel provides overview and detail. A large kernel provides an overview of area usage revealing vessel highways. Details of speed variations of individual vessels are shown with a small kernel, highlighting anchoring zones where multiple vessels stop. Besides for maritime applications we expect that this approach is useful for the visualization of moving object data in general. --- paper_title: Seeking structure in records of spatio-temporal behaviour: visualization issues, efforts and applications paper_content: Information that contains a geographic component is becoming increasingly prevalent and can be used both to analyse relatively complex behaviours in time and space and to combat the potential for information overload by assessing the geographic relevance of information. Such analysis can be combined with mobile communications technology to fuel location-based services that offer information pertinent in terms of geography, time, experience and preference. This paper aims to raise some issues relating to these advances and describes novel representations designed for interactive graphical exploratory data analysis (EDA). A number of graphical techniques and representation methods are introduced to establish the nature of the kinds of data that are being collected and the suitability of visualization for EDA of spatio-temporal data. These include the interactive views provided by the Location Trends Extractor, 'spotlights'--continuous density surfaces of recorded spatio-temporal activity, networks of morphometric features derived from continuous surfaces representing density of activity and geocentric parallel plots presented in a spatial multimedia environment for data exploration. 
Some of the benefits and limitations of the techniques are outlined along with suggestions as to how the visualization tools might be utilized and developed to improve our understanding of behaviour in time and space and evaluate and model geographic relevance. --- paper_title: A General Framework for Using Aggregation in Visual Exploration of Movement Data paper_content: Abstract To be able to explore visually large amounts of movement data, it is necessary to apply methods for aggregation and summarisation of the data. The goal of our research has been to systemize the possible approaches to aggregation of movement data into a framework clearly defining what kinds of exploratory tasks each approach is suitable for. On the basis of a formal model of movement of multiple entities, we consider two possible views of movement data, situation-oriented and trajectory-oriented. For each view, we discuss the appropriate methods of data aggregation and the visualisation techniques representing the results of aggregation and supporting data exploration. Special attention is given to dynamic aggregation working in combination with interactive filtering and classification of movement data (CR categories and subject descriptors: H.1·2 [user/machine systems]: human information processing - visual analytics; I.6·9 [visualisation]: information visualisation). --- paper_title: Space–time density of trajectories: exploring spatio-temporal patterns in movement data paper_content: Modern positioning and identification technologies enable tracking of almost any type of moving object. A remarkable amount of new trajectory data is thus available for the analysis of various phenomena. In cartography, a typical way to visualise and explore such data is to use a space-time cube, where trajectories are shown as 3D polylines through space and time. With increasingly large movement datasets becoming available, this type of display quickly becomes cluttered and unclear. In this article, we introduce the concept of 3D space-time density of trajectories to solve the problem of cluttering in the space-time cube. The space-time density is a generalisation of standard 2D kernel density around 2D point data into 3D density around 3D polyline data (i.e. trajectories). We present the algorithm for space-time density, test it on simulated data, show some basic visualisations of the resulting density volume and observe particular types of spatio-temporal patterns in the density that are specific to trajectory data. We also present an application to real-time movement data, that is, vessel movement trajectories acquired using the Automatic Identification System (AIS) equipment on ships in the Gulf of Finland. Finally, we consider the wider ramifications to spatial analysis of using this novel type of spatio-temporal visualisation. --- paper_title: Visual Mobility Analysis using T-Warehouse paper_content: Technological advances in sensing technologies and wireless telecommunication devices enable research fields related to the management of trajectory data. The challenge after storing the data is the implementation of appropriate analytics for extracting useful knowledge. However, traditional data warehousing systems and techniques were not designed for analyzing trajectory data. In this paper, the authors demonstrate a framework that transforms the traditional data cube model into a trajectory warehouse. 
As a proof-of-concept, the authors implement T-Warehouse, a system that incorporates all the required steps for Visual Trajectory Data Warehousing, from trajectory reconstruction and ETL processing to Visual OLAP analysis on mobility data. --- paper_title: Spatially Ordered Treemaps paper_content: Existing treemap layout algorithms suffer to some extent from poor or inconsistent mappings between data order and visual ordering in their representation, reducing their cognitive plausibility. While attempts have been made to quantify this mismatch, and algorithms proposed to minimize inconsistency, solutions provided tend to concentrate on one-dimensional ordering. We propose extensions to the existing squarified layout algorithm that exploit the two-dimensional arrangement of treemap nodes more effectively. Our proposed spatial squarified layout algorithm provides a more consistent arrangement of nodes while maintaining low aspect ratios. It is suitable for the arrangement of data with a geographic component and can be used to create tessellated cartograms for geovisualization. Locational consistency is measured and visualized and a number of layout algorithms are compared. CIELab color space and displacement vector overlays are used to assess and emphasize the spatial layout of treemap nodes. A case study involving locations of tagged photographs in the Flickr database is described. --- paper_title: Spatiotemporal Analysis of Sensor Logs using Growth Ring Maps paper_content: Spatiotemporal analysis of sensor logs is a challenging research field due to three facts: a) traditional two-dimensional maps do not support multiple events to occur at the same spatial location, b) three-dimensional solutions introduce ambiguity and are hard to navigate, and c) map distortions to solve the overlap problem are unfamiliar to most users. This paper introduces a novel approach to represent spatial data changing over time by plotting a number of non-overlapping pixels, close to the sensor positions in a map. Thereby, we encode the amount of time that a subject spent at a particular sensor to the number of plotted pixels. Color is used in a twofold manner; while distinct colors distinguish between sensor nodes in different regions, the colors' intensity is used as an indicator to the temporal property of the subjects' activity. The resulting visualization technique, called growth ring maps, enables users to find similarities and extract patterns of interest in spatiotemporal data by using humans' perceptual abilities. We demonstrate the newly introduced technique on a dataset that shows the behavior of healthy and Alzheimer transgenic, male and female mice. We motivate the new technique by showing that the temporal analysis based on hierarchical clustering and the spatial analysis based on transition matrices only reveal limited results. Results and findings are cross-validated using multidimensional scaling. While the focus of this paper is to apply our visualization for monitoring animal behavior, the technique is also applicable for analyzing data, such as packet tracing, geographic monitoring of sales development, or mobile phone capacity planning. --- paper_title: Visual Analytics for Understanding Spatial Situations from Episodic Movement Data paper_content: Continuing advances in modern data acquisition techniques result in rapidly growing amounts of geo-referenced data about moving objects and in emergence of new data types. 
We define episodic movement data as a new complex data type to be considered in the research fields relevant to data analysis. In episodic movement data, position measurements may be separated by large time gaps, in which the positions of the moving objects are unknown and cannot be reliably reconstructed. Many of the existing methods for movement analysis are designed for data with fine temporal resolution and cannot be applied to discontinuous trajectories. We present an approach utilising Visual Analytics methods to explore and understand the temporal variation of spatial situations derived from episodic movement data by means of spatio-temporal aggregation. The situations are defined in terms of the presence of moving objects in different places and in terms of flows (collective movements) between the places. The approach, which combines interactive visual displays with clustering of the spatial situations, is presented by example of a real dataset collected by Bluetooth sensors. --- paper_title: Spatial Generalization and Aggregation of Massive Movement Data paper_content: Movement data (trajectories of moving agents) are hard to visualize: numerous intersections and overlapping between trajectories make the display heavily cluttered and illegible. It is necessary to use appropriate data abstraction methods. We suggest a method for spatial generalization and aggregation of movement data, which transforms trajectories into aggregate flows between areas. It is assumed that no predefined areas are given. We have devised a special method for partitioning the underlying territory into appropriate areas. The method is based on extracting significant points from the trajectories. The resulting abstraction conveys essential characteristics of the movement. The degree of abstraction can be controlled through the parameters of the method. We introduce local and global numeric measures of the quality of the generalization, and suggest an approach to improve the quality in selected parts of the territory where this is deemed necessary. The suggested method can be used in interactive visual exploration of movement data and for creating legible flow maps for presentation purposes. --- paper_title: An exploratory data analysis (EDA) of the paths of moving animals paper_content: Abstract This work presents an exploratory data analysis of the trajectories of deer and elk moving about in the Starkey Experimental Forest and Range in eastern Oregon. The animals’ movements may be affected by habitat variables and the behavior of the other animals. In the work of this paper a stochastic differential equation-based model is developed in successive stages. Equations of motion are set down motivated by corresponding equations of physics. Functional parameters appearing in the equations are estimated nonparametrically and plots of vector fields of animal movements are prepared. Residuals are used to look for interactions amongst the movements of the animals. There are exploratory analyses of various sorts. Statistical inferences are based on Fourier transforms of the data, which are unequally spaced. The sections of the paper start with motivating quotes and aphorisms from the writings of John W. Tukey. 
--- paper_title: Activities, ringmaps and geovisualization of large human movement fields paper_content: The timeline or track of any individual, mobile, sentient organism, whether animal or human being, represents a fundamental building block in understanding the interactions of such entities with their environment and with each other. New technologies have emerged to capture the (x, y, t) dimension of such timelines in large volumes and at relatively low cost, with various degrees of precision and with different sampling properties. This has proved a catalyst to research on data mining and visualizing such movement fields. However, a good proportion of this research can only infer, implicitly or explicitly, the activity of the individual at any point in time. This paper in contrast focuses on a data set in which activity is known. It uses this to explore ways to visualize large movement fields of individuals, using activity as the prime referential dimension for investigating space-time patterns. Visually central to the paper is the ringmap, a representation of cyclic time and activity, that is itself quasi spatial and is directly linked to a variety of visualizations of other dimensions and representations of spatio-temporal activity. Conceptually central is the ability to explore different levels of generalization in each of the space, time and activity dimensions, and to do this in any combination of the (s, t, a) phenomena. The fundamental tenet for this approach is that activity drives movement, and logically it is the key to comprehending pattern. The paper discusses these issues, illustrates the approach with specific example visualizations and invites critiques of the progress to date. --- paper_title: Space-in-Time and Time-in-Space Self-Organizing Maps for Exploring Spatiotemporal Patterns paper_content: Spatiotemporal data pose serious challenges to analysts in geographic and other domains. Owing to the complexity of the geospatial and temporal components, this kind of data cannot be analyzed by fully automatic methods but require the involvement of the human analyst's expertise. For a comprehensive analysis, the data need to be considered from two complementary perspectives: (1) as spatial distributions (situations) changing over time and (2) as profiles of local temporal variation distributed over space. In order to support the visual analysis of spatiotemporal data, we suggest a framework based on the "Self-Organizing Map" (SOM) method combined with a set of interactive visual tools supporting both analytic perspectives. SOM can be considered as a combination of clustering and dimensionality reduction. In the first perspective, SOM is applied to the spatial situations at different time moments or intervals. In the other perspective, SOM is applied to the local temporal evolution profiles. The integrated visual analytics environment includes interactive coordinated displays enabling various transformations of spatiotemporal data and post-processing of SOM results. The SOM matrix display offers an overview of the groupings of data objects and their two-dimensional arrangement by similarity. This view is linked to a cartographic map display, a time series graph, and a periodic pattern view. The linkage of these views supports the analysis of SOM results in both the spatial and temporal contexts. The variable SOM grid coloring serves as an instrument for linking the SOM with the corresponding items in the other displays.
The framework has been validated on a large dataset with real city traffic data, where expected spatiotemporal patterns have been successfully uncovered. We also describe the use of the framework for discovery of previously unknown patterns in 41-years time series of 7 crime rate attributes in the states of the USA. --- paper_title: Composite Density Maps for Multivariate Trajectories paper_content: We consider moving objects as multivariate time-series. By visually analyzing the attributes, patterns may appear that explain why certain movements have occurred. Density maps as proposed by Scheepens et al. [25] are a way to reveal these patterns by means of aggregations of filtered subsets of trajectories. Since filtering is often not sufficient for analysts to express their domain knowledge, we propose to use expressions instead. We present a flexible architecture for density maps to enable custom, versatile exploration using multiple density fields. The flexibility comes from a script, depicted in this paper as a block diagram, which defines an advanced computation of a density field. We define six different types of blocks to create, compose, and enhance trajectories or density fields. Blocks are customized by means of expressions that allow the analyst to model domain knowledge. The versatility of our architecture is demonstrated with several maritime use cases developed with domain experts. Our approach is expected to be useful for the analysis of objects in other domains. --- paper_title: A Visualization System for Space-Time and Multivariate Patterns (VIS-STAMP) paper_content: The research reported here integrates computational, visual and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial and temporal dimensions via clustering, sorting and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 contest data set, which contains time-varying, geographically referenced and multivariate data for technology companies in the US --- paper_title: Visual analytics of spatial interaction patterns for pandemic decision support paper_content: Population mobility, i.e. the movement and contact of individuals across geographic space, is one of the essential factors that determine the course of a pandemic disease spread. This research views both individual-based daily activities and a pandemic spread as spatial interaction problems, where locations interact with each other via the visitors that they share or the virus that is transmitted from one place to another. The research proposes a general visual analytic approach to synthesize very large spatial interaction data and discover interesting (and unknown) patterns. 
The proposed approach involves a suite of visual and computational techniques, including (1) a new graph partitioning method to segment a very large interaction graph into a moderate number of spatially contiguous subgraphs (regions); (2) a reorderable matrix, with regions 'optimally' ordered on the diagonal, to effectively present a holistic view of major spatial interaction patterns; and (3) a modified flow map, interactively linked to the reorderable matrix, to enable pattern interpretation in a geographical context. The implemented system is able to visualize both people's daily movements and a disease spread over space in a similar way. The discovered spatial interaction patterns provide valuable insight for designing effective pandemic mitigation strategies and supporting decision-making in time-critical situations. --- paper_title: Flowstrates: An Approach for Visual Exploration of Temporal Origin-Destination Data paper_content: Many origin-destination datasets have become available in the recent years, e.g. flows of people, animals, money, material, or network traffic between pairs of locations, but appropriate techniques for their exploration still have to be developed. Especially, supporting the analysis of datasets with a temporal dimension remains a significant challenge. Many techniques for the exploration of spatio-temporal data have been developed, but they prove to be only of limited use when applied to temporal origin-destination datasets. We present Flowstrates, a new interactive visualization approach in which the origins and the destinations of the flows are displayed in two separate maps, and the changes over time of the flow magnitudes are represented in a separate heatmap view in the middle. This allows the users to perform spatial visual queries, focusing on different regions of interest for the origins and destinations, and to analyze the changes over time provided with the means of flow ordering, filtering and aggregation in the heatmap. In this paper, we discuss the challenges associated with the visualization of temporal origin-destination data, introduce our solution, and present several usage scenarios showing how the tool we have developed supports them. --- paper_title: Skeleton-Based Edge Bundling for Graph Visualization paper_content: In this paper, we present a novel approach for constructing bundled layouts of general graphs. As layout cues for bundles, we use medial axes, or skeletons, of edges which are similar in terms of position information. We combine edge clustering, distance fields, and 2D skeletonization to construct progressively bundled layouts for general graphs by iteratively attracting edges towards the centerlines of level sets of their distance fields. Apart from clustering, our entire pipeline is image-based with an efficient implementation in graphics hardware. Besides speed and implementation simplicity, our method allows explicit control of the emphasis on structure of the bundled layout, i.e. the creation of strongly branching (organic-like) or smooth bundles. We demonstrate our method on several large real-world graphs. --- paper_title: Experiments in migration mapping by computer paper_content: Migration maps represent patterns of geographical movement by arrows or bands between places, using information arriving in “from-to” tables. In the most interesting cases the tables are of large size, suggesting that computer assistance would be useful in the preparation of the maps. 
A computer program prepared for this purpose shows that graphical representation is feasible for tables as large as fifty by fifty, and possibly larger. The program contains options for alternate forms of movement depiction, and rules are suggested for the parsing of migration tables prior to the cartographic display, without loss of spatial resolution. --- paper_title: Visual Analytics for Understanding Spatial Situations from Episodic Movement Data paper_content: Continuing advances in modern data acquisition techniques result in rapidly growing amounts of geo-referenced data about moving objects and in emergence of new data types. We define episodic movement data as a new complex data type to be considered in the research fields relevant to data analysis. In episodic movement data, position measurements may be separated by large time gaps, in which the positions of the moving objects are unknown and cannot be reliably reconstructed. Many of the existing methods for movement analysis are designed for data with fine temporal resolution and cannot be applied to discontinuous trajectories. We present an approach utilising Visual Analytics methods to explore and understand the temporal variation of spatial situations derived from episodic movement data by means of spatio-temporal aggregation. The situations are defined in terms of the presence of moving objects in different places and in terms of flows (collective movements) between the places. The approach, which combines interactive visual displays with clustering of the spatial situations, is presented by example of a real dataset collected by Bluetooth sensors. --- paper_title: Spatial Generalization and Aggregation of Massive Movement Data paper_content: Movement data (trajectories of moving agents) are hard to visualize: numerous intersections and overlapping between trajectories make the display heavily cluttered and illegible. It is necessary to use appropriate data abstraction methods. We suggest a method for spatial generalization and aggregation of movement data, which transforms trajectories into aggregate flows between areas. It is assumed that no predefined areas are given. We have devised a special method for partitioning the underlying territory into appropriate areas. The method is based on extracting significant points from the trajectories. The resulting abstraction conveys essential characteristics of the movement. The degree of abstraction can be controlled through the parameters of the method. We introduce local and global numeric measures of the quality of the generalization, and suggest an approach to improve the quality in selected parts of the territory where this is deemed necessary. The suggested method can be used in interactive visual exploration of movement data and for creating legible flow maps for presentation purposes. --- paper_title: Flow map layout paper_content: Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. 
Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data. --- paper_title: Exploring city structure from georeferenced photos using graph centrality measures paper_content: We explore the potential of applying graph theory measures of centrality to the network of movements extracted from sequences of georeferenced photo captures in order to identify interesting places and explore city structure. We adopt a systematic procedure composed of a series of stages involving the combination of computational methods and interactive visual analytics techniques. The approach is demonstrated using a collection of Flickr photos from the Seattle metropolitan area. --- paper_title: Interactive Analysis of Object Group Changes over Time paper_content: The analysis of time-dependent data is an important task in various application domains. Often, the analyzed data objects belong to groups. The group memberships may stem from natural arrangements (e.g., animal herds), or may be constructed during analysis (e.g., by clustering). Group membership may change over time. Therefore, one important analytical aspect is to examine these changes (e.g., which herds change members and when). In this paper, we present a technique for visual analysis of group changes over time. We combine their visualization and automatic analysis. Interactive functions allow for tracking the data changes over time on group, set and individual level. We also consider added and removed objects (e.g., newly born or died animals). For large time series, automatic data analysis selects interesting time points and group changes for detailed examination. We apply our approach on the VAST 2008 challenge data set revealing new insights. --- paper_title: Flow Map Layout via Spiral Trees paper_content: Flow maps are thematic maps that visualize the movement of objects, such as people or goods, between geographic regions. One or more sources are connected to several targets by lines whose thickness corresponds to the amount of flow between a source and a target. Good flow maps reduce visual clutter by merging (bundling) lines smoothly and by avoiding self-intersections. Most flow maps are still drawn by hand and only few automated methods exist. Some of the known algorithms do not support edge-bundling and those that do, cannot guarantee crossing-free flows. We present a new algorithmic method that uses edge-bundling and computes crossing-free flows of high visual quality. Our method is based on so-called spiral trees, a novel type of Steiner tree which uses logarithmic spirals. Spiral trees naturally induce a clustering on the targets and smoothly bundle lines. Our flows can also avoid obstacles, such as map features, region outlines, or even the targets. We demonstrate our approach with extensive experiments. --- paper_title: From movement tracks through events to places: Extracting and characterizing significant places from mobility data paper_content: We propose a visual analytics procedure for analyzing movement data, i.e., recorded tracks of moving objects. It is oriented to a class of problems where it is required to determine significant places on the basis of certain types of events occurring repeatedly in movement data. 
The procedure consists of four major steps: (1) event extraction from trajectories; (2) event clustering and extraction of relevant places; (3) spatio-temporal aggregation of events or trajectories; (4) analysis of the aggregated data. All steps are scalable with respect to the amount of the data under analysis. We demonstrate the use of the procedure by example of two real-world problems requiring analysis at different spatial scales. --- paper_title: Geo-historical context support for information foraging and sensemaking: Conceptual model, implementation, and assessment paper_content: Information foraging and sensemaking with heterogeneous information are context-dependent activities. Thus visual analytics tools to support these activities must incorporate context. But, context is a difficult concept to define, model, and represent. Creating and representing context in support of visually-enabled reasoning about complex problems with complex information is a complementary but different challenge than that addressed in context-aware computing. In the latter, the goal is automated adaptation of the system to meet user needs for applications such as mobile location-based services where information about the location, the user, and the user goals filters what gets presented on a small mobile device. In contrast, for visual analytics-enabled information foraging and sensemaking, the user is likely to take an active role in foraging for the contextual information needed to support sensemaking in relation to some multifaceted problem. In this paper, we address the challenges of constructing and representing context within visual interfaces that support analytical reasoning in crisis management and humanitarian relief. The challenges stem from the diverse forms of information that can provide context and difficulty in defining and operationalizing context itself. Here, we pay particular attention to document foraging to support construction of the geographic and historical context within which monitoring and sensemaking can be carried out. Specifically, we present the concept of geo-historical context (GHC) and outline an empirical assessment of both the concept and its implementation in the Context Discovery Application, a web-based tool that supports document foraging and sensemaking. --- paper_title: Uncovering Interaction Patterns in Mobile Outdoor Gaming paper_content: Significant advances in recreation planning have been achieved thanks to the mobile technology and the ubiquitous computation. Today it is possible to know the real time position of a group of individuals participating in an outdoor game, and to obtain a large amount of data about their movements. However, most analyses have focused on individual movements based on trajectories. In this paper we present a novel form of conceptualising and analysing human movement, based on a metaphor we have called "movement as interaction" which conceives movement as a result of interaction between individuals, collectives and the environment. The implementation of the proposed approach has enabled us to uncover interaction patterns in a group of children participating in an outdoor game based on mobile technology. The first results demonstrate the feasibility of our approach to detect game interactions.
--- paper_title: An event-based conceptual model for context-aware movement analysis paper_content: Current tracking technologies enable collection of data, describing movements of various kinds of objects, including people, animals, icebergs, vehicles, containers with goods and so on. Analysis of movement data is now a hot research topic. However, most of the suggested analysis methods deal with movement data alone. Little has been done to support the analysis of movement in its spatio-temporal context, which includes various spatial and temporal objects as well as diverse properties associated with spatial locations and time moments. Comprehensive analysis of movement requires detection and analysis of relations that occur between moving objects and elements of the context in the process of the movement. We suggest a conceptual model in which movement is considered as a combination of spatial events of diverse types and extents in space and time. Spatial and temporal relations occur between movement events and elements of the spatial and temporal contexts. The model gives a ground to a generic approach based on extraction of interesting events from trajectories and treating the events as independent objects. By means of a prototype implementation, we tested the approach on complex real data about movement of wild animals. The testing showed the validity of the approach. --- paper_title: Interactive Visualization of Weather and Ship Data paper_content: This paper focuses on the development of a tool for Ship and Weather Information Monitoring (SWIM) visualizing weather data combined with data from ship voyages. The project was done in close collaboration with the Swedish Meteorological and Hydrological Institute (SMHI) who also evaluated the result. The goal was to implement a tool which will help shipping companies to monitor their fleet and the weather development along planned routes and provide support for decisions regarding route choice and to evade hazard. A qualitative usability study was performed to gather insight about usability issues and to aid future development. Overall the result of the study was positive and the users felt that the tool would aid them in the daily work. ---
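Several of the abstracts above (episodic movement data, spatial generalization and aggregation, space-time density) build on spatio-temporal aggregation of position records as a first processing step. The Python sketch below illustrates only the basic idea of counting records in regular space-time bins; the function name, bin sizes, and toy data are assumptions made for illustration and are not taken from any of the cited papers.

```python
import numpy as np

def spatiotemporal_presence(points, cell_size, time_bin):
    """Count moving-object position records in regular space-time bins.

    points: array of shape (n, 3) with columns (x, y, t).
    cell_size: spatial bin width in the units of x and y (assumed value below).
    time_bin: temporal bin width in the units of t (assumed value below).
    Returns a dict mapping (ix, iy, it) bin indices to record counts.
    """
    idx = np.floor(points / np.array([cell_size, cell_size, time_bin])).astype(int)
    bins, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(b): int(c) for b, c in zip(bins, counts)}

# Toy example: three position fixes aggregated into 100 x 100 x 60 bins.
pts = np.array([[120.0, 40.0, 5.0], [130.0, 45.0, 10.0], [310.0, 80.0, 65.0]])
print(spatiotemporal_presence(pts, cell_size=100.0, time_bin=60.0))
```

The per-cell counts could then feed a density display or be differenced across time bins to study temporal variation, in the spirit of the aggregation frameworks summarized above.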
Title: Visual analytics of movement: An overview of methods, tools and procedures Section 1: Introduction Description 1: This section introduces the main idea of visual analytics, its importance in geospatial data analysis, and provides an overview of the paper's scope and structure. Section 2: Looking at trajectories Description 2: This section discusses techniques for visual representation of entire trajectories, including clustering methods and time transformations. Section 3: Visualizing trajectories Description 3: This section explains common display types for visualizing movements, along with interaction techniques and their challenges. Section 4: Clustering trajectories Description 4: This section covers the use of clustering techniques in visual analytics for handling and refining results for large trajectory datasets. Section 5: Transforming times in trajectories Description 5: This section introduces time transformations to facilitate the comparison of dynamic properties of trajectories. Section 6: Looking inside trajectories: attributes, events and patterns Description 6: This section explores methods for analyzing and visualizing variations of movement characteristics at the level of trajectory points and segments. Section 7: Bird's-eye view on movement: generalization and aggregation Description 7: This section describes methods for obtaining an overall view of multiple movements through generalization and aggregation. Section 8: Analysing presence and density Description 8: This section discusses techniques for characterizing the presence of moving objects in specific locations and their temporal variations. Section 9: Tracing flows Description 9: This section covers methods for spatial aggregation by transitions between locations, resulting in the analysis of flows. Section 10: Investigation of movement in context Description 10: This section examines the relationships between movement data and their spatio-temporal context using visual and computational techniques. Section 11: Conclusion Description 11: This section summarizes the development of visual analytics methods and tools for movement data analysis and emphasizes the potential for future research and cross-disciplinary collaboration.
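As a companion to the "Tracing flows" section in the outline above, the sketch below shows the simplest form of aggregating trajectories into origin-destination flows between regions; the square-grid partition and the toy trip are assumptions for illustration, not a method taken from the surveyed papers.

```python
from collections import Counter

def region_of(x, y, cell=10.0):
    """Assumed region partition: a plain square grid with the given cell size."""
    return (int(x // cell), int(y // cell))

def od_flows(trajectories):
    """Aggregate trajectories into origin-destination transition counts.

    trajectories: iterable of point sequences; each point is (x, y, t).
    Returns a Counter keyed by (origin_region, destination_region).
    """
    flows = Counter()
    for traj in trajectories:
        regions = [region_of(x, y) for x, y, _ in traj]
        for a, b in zip(regions, regions[1:]):
            if a != b:  # count only moves that cross a region boundary
                flows[(a, b)] += 1
    return flows

# Toy trip crossing two region boundaries.
trips = [[(1.0, 1.0, 0), (12.0, 1.0, 60), (25.0, 3.0, 120)]]
print(od_flows(trips))  # Counter({((0, 0), (1, 0)): 1, ((1, 0), (2, 0)): 1})
```

Flow counts of this kind are what flow maps, Flowstrates-style heatmaps, and edge-bundled layouts take as input; the choice of region partition is the main modelling decision.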
An Overview of Rotating Stall and Surge Control for Axial Flow Compressors
6
--- paper_title: Active Control of Rotating Stall in a Low Speed Axial Compressor paper_content: The onset of rotating stall has been delayed in a low speed, single-stage, axial research compressor using active feedback control. Control was implemented using a circumferential array of hot wires to sense rotating waves of axial velocity upstream of the compressor. Circumferentially travelling waves were then generated with appropriate phase and amplitude by "wiggling" inlet guide vanes driven by individual actuators. The control scheme considered the wave pattern in terms of the individual spatial Fourier components. A simple proportional control law was implemented for each harmonic. Control of the first spatial harmonic yielded an 11% decrease in the stalling mass flow, while control of the first and second harmonics together reduced the stalling mass flow by 20%. The control system was also used to measure the sine wave response of the compressor, which behaved as would be expected for a second order system. --- paper_title: Modeling for Control of Rotating Stall in High Speed Multi-Stage Axial Compressors paper_content: Using a two dimensional compressible flow representation of axial compressor dynamics, a control-theoretic input-output model is derived which is of general utility in rotating stall/surge active control studies. The derivation presented here begins with a review of the fluid dynamic model, which is a 2D stage stacking technique that accounts for blade row pressure rise, loss and deviation as well as blade row and inter-blade row compressible flow. This model is extended to include the effects of the upstream and downstream geometry and boundary conditions, and then manipulated into a transfer function form that dynamically relates actuator motion to sensor measurements. Key relationships in this input-output form are then approximated using rational polynomials. Further manipulation yields an approximate model which is in standard form for studying active control of rotating stall and surge. As an example of high current relevance, the transfer function from an array of jet actuators to an array of static pressure sensors is derived. Numerical examples are also presented, including a demonstration of the importance of proper choice of sensor and actuator locations, as well as a comparison between sensor types. Under a variety of conditions, it was found that sensor locations near the front of the compressor or in the downstream gap are consistently the best choices, based on a quadratic optimization criterion and a specific 3-stage compressor model. The modeling and evaluation procedures presented here are a first step toward a rigorous approach to the design of active control systems for high speed axial compressors. --- paper_title: Active suppression of aerodynamic instabilities in turbomachines paper_content: In this paper, we advocate a strategy for controlling a class of turbomachine instabilities, whose primitive phases can be modeled by linear theory, but that eventually grow into a performance-limiting modification of the basic flow. The phenomena of rotating stall and surge are two very different practical examples in which small disturbances grow to magnitudes such that they limit machine performance.
We develop a theory that shows how an additional disturbance, driven from real-time data measured within the turbomachine, can be generated so as to realize a device with characteristics fundamentally different than those of the machine without control. For the particular compressor analyzed, the control increases the stable operating range by 20% of the mean flow. We show that active control can also be used to destabilize a compressor in an undesirable state such as nonrecoverable stall. Examination of the energetics of the controlled system shows the required control power scales with the square of the ambient disturbance level, which can be several orders of magnitude below the power of the machine. Brief mention is also made of the use of structural dynamics, rather than active control, to enhance stability. --- paper_title: Bifurcation Analysis of Surge and Rotating Stall in Axial Flow Compressors paper_content: The surge and rotating stall post-instability behaviors of axial flow compressors are investigated from a bifurcation-theoretic perspective, using a model and system data presented by Greitzer (1976a). For this model, a sequence of local and global bifurcations of the nonlinear system dynamics is uncovered. This includes a global bifurcation of a pair of large-amplitude periodic solutions. Resulting from this bifurcation are a stable oscillation (surge) and an unstable oscillation (antisurge). The latter oscillation is found to have a deciding significance regarding the particular post-instability behavior experienced by the compressor. These results are used to reconstruct Greitzer's (1976b) findings regarding the manner in which post-instability behavior depends on system parameters. Although the model does not directly reflect non axisymmetric dynamics, use of a steady-state compressor characteristic approximating the measured characteristic of Greitzer (1976a) is found to result in conclusions that compare well with observation. Thus, the paper gives a convenient and simple explanation of the boundary between surge and rotating stall behaviors, without the use of more intricate models and analyses including non axisymmetric flow dynamics. --- paper_title: Bifurcation analysis and control for surge model via the projection method paper_content: A bifurcation approach is adopted to analyze and control the surge model for axial flow compressors. An explicit expression is obtained for the first nonzero coefficient of the characteristic exponents of the periodic solutions born from the Hopf bifurcation associated with surge. The sign of this coefficient determines stability of the surge model at the criticality. Local nonlinear feedback control laws are then developed to stabilize the Hopf bifurcation associated with surge. Both quadratic and cubic state feedback control laws are investigated. Feedback stabilization using output measurement such as pressure rise is also studied where stabilizing gains are characterized that can be used for synthesis of surge control laws. --- paper_title: Stall analysis of high-frequency data for three swept-blade compressor rotors paper_content: High frequency spectra of three single-stage, high-speed compressor rotors were analyzed for behavior indicative of rotating stall. The compressor rotors were straight, backward swept, and forward swept. Power integrated over time showed promise as a prestall warning parameter. For the straight and swept back rotors, stall warning times varied from several seconds to 0.5 seconds.
The forward swept rotor, however, was difficult to characterize since surge played an important role in the stalling characteristic and stall appeared to originate at the hub of the rotor, away from pressure transducers located on the casing. --- paper_title: Bifurcation based nonlinear feedback control for rotating stall in axial flow compressors paper_content: Classical bifurcation analysis for nonlinear dynamics is used to derive a nonlinear feedback control law that eliminates the hysteresis loop associated with rotating stall, and extends the stable operating range in axial flow compressors. The proposed control system employs pressure rise as output measurement and throttle position as the actuating signal for which both sensor and actuator exist in the current configuration of axial flow compressors. Thus, our results provide a practical solution to rotating stall control for axial flow compressors. --- paper_title: The Unstable Behavior of Low and High Speed Compressors paper_content: By far the greater part of our understanding about stall and surge in axial compressors comes from work on low-speed laboratory machines. As a general rule, these machines do not model the compressibility effects present in high-speed compressors and therefore doubt has always existed about the application of low-speed results to high-speed machines. In recent years interest in active control has led to a number of studies of compressor stability in engine type compressors. The instrumentation used in these experiments has been sufficiently detailed that, for the first time, adequate data is available to make direct comparisons between high-speed and low-speed compressors. This paper presents new data from an eight-stage fixed geometry engine compressor and compares this with low-speed laboratory data. The results show remarkable similarities in both the stalling and surging behaviour of the two machines, particularly when the engine compressor is run at intermediate speeds. The engine results also show that, as in the laboratory tests, surge is precipitated by the onset of rotating stall. This is true even at very high speeds where it had previously been thought that surge might be the result of a blast wave moving through the compressor. This paper therefore contains new information about high-speed compressors and confirms that low speed testing is an effective means of obtaining insight into the behaviour of high-speed machines. --- paper_title: Integrated control of rotating stall and surge in aeroengines paper_content: Aeroengines operate in regimes for which both rotating stall and surge impose low flow operability limits. Thus, active control strategies designed to enhance operability of aeroengines must address both rotating stall and surge as well as their interaction. In this paper, a nonlinear control strategy is designed based on an analytical model to achieve simultaneous active control of rotating stall and surge in an axial flow compression system with relevant dynamics representative of modern aeroengines. The controller is experimentally validated on a 3-stage low-speed axial flow compression system. This rig is dynamically scaled to replicate the interaction between rotating stall and surge typical of modern aeroengines, and several experimental results are presented for this rig.
For actuation, the control strategy utilizes a single plenum bleed valve with bandwidth on the order of the rotor frequency. For sensing, measurements of the circumferential asymmetry and annulus-averaged unsteadiness of the flow through the compressor are used. Experimental validation of simultaneous control of rotating stall and surge with minimal sensing and actuation requirements is viewed as an important step towards applying active control to enhance operability of compression systems in modern aeroengines. --- paper_title: Rotating Stall Control via Bifurcation Stabilization paper_content: Rotating stall is a fundamental aerodynamic instability in axial flow compressors, induced by nonlinear bifurcation. It effectively reduces the performance of aeroengines. In this paper classical bifurcation theory is used to derive output feedback control laws in which throttle position is employed as actuator and pressure rise as output measurement. The challenge to the proposed control system is that the critical mode of the linearized system corresponding to rotating stall is neither controllable nor observable. Using the projection method from Iooss and Joseph (1980), and Abed and Fu (1986), it is shown that linear output feedback controllers are adequate for bifurcation stabilization. Both linear and nonlinear feedback control laws are proposed and are shown to be effective in elimination of hysteresis loop associated with rotating stall, and in extending the stable operating range of axial flow compressors. --- paper_title: Modeling for control of rotating stall paper_content: Abstract An analytical model for control of rotating stall has been obtained from the basic fluid equations describing the process at inception. The model describes rotating stall as a traveling wave packet, sensed—in spatial components—via the Fourier decomposition of measurements obtained from a circumferential array of evenly distributed sensors (hot wires) upstream of the compressor. A set of "wiggly" inlet guide vanes (IGVs) equally spaced around the compressor annulus constitute the "forced" part of the model. Control is effected by launching waves at appropriate magnitude and phase, synthesized by spatial Fourier synthesis from individual IGV deflections. The effect of the IGV motion on the unsteady fluid process was quantified via identification experiments carried out on a low speed, single-stage axial research compressor. These experiments served to validate the theoretical model and refine key parameters in it. Further validation of the model was provided by the successful implementation of a complex-valued proportional control law, using a combination of first and second harmonic feedback; this resulted in an 18% reduction of stalling mass flow, at essentially the same pressure rise. --- paper_title: Active Control of Rotating Stall in a Low Speed Axial Compressor paper_content: The onset of rotating stall has been delayed in a low speed, single-stage, axial research compressor using active feedback control. Control was implemented using a circumferential array of hot wires to sense rotating waves of axial velocity upstream of the compressor. Circumferentially travelling waves were then generated with appropriate phase and amplitude by "wiggling" inlet guide vanes driven by individual actuators. The control scheme considered the wave pattern in terms of the individual spatial Fourier components. A simple proportional control law was implemented for each harmonic.
Control of the first spatial harmonic yielded an 11% decrease in the stalling mass flow, while control of the first and second harmonics together reduced the stalling mass flow by 20%. The control system was also used to measure the sine wave response of the compressor, which behaved as would be expected for a second order system. --- paper_title: Active suppression of aerodynamic instabilities in turbomachines paper_content: In this paper, we advocate a strategy for controlling a class of turbomachine instabilities, whose primitive phases can be modeled by linear theory, but that eventually grow into a performance-limiting modification of the basic flow. The phenomena of rotating stall and surge are two very different practical examples in which small disturbances grow to magnitudes such that they limit machine performance. We develop a theory that shows how an additional disturbance, driven from real-time data measured within the turbomachine, can be generated so as to realize a device with characteristics fundamentally different than those of the machine without control. For the particular compressor analyzed, the control increases the stable operating range by 20% of the mean flow. We show that active control can also be used to destabilize a compressor in an undesirable state such as nonrecoverable stall. Examination of the energetics of the controlled system shows the required control power scales with the square of the ambient disturbance level, which can be several orders of magnitude below the power of the machine. Brief mention is also made of the use of structural dynamics, rather than active control, to enhance stability. --- paper_title: Active Stabilization of Rotating Stall in a Three-Stage Axial Compressor paper_content: A three-stage, low speed axial research compressor has been actively stabilized by damping low amplitude circumferentially travelling waves which can grow into rotating stall. Using a circumferential array of hot wire sensors, and an array of high speed individually positioned control vanes as the actuator, the first and second spatial harmonics of the compressor were stabilized down to a characteristic slope of 0.9, yielding an 8% increase in operating flow range. Stabilization of the third spatial harmonic did not alter the stalling flow coefficient. The actuators were also used open loop to determine the forced response behavior of the compressor. A system identification procedure applied to the forced response data then yielded the compressor transfer function. The Moore-Greitzer, 2-D, stability model was modified as suggested by the measurements to include the effect of blade row time lags on the compressor dynamics. This modified Moore-Greitzer model was then used to predict both the open and closed loop dynamic response of the compressor. The model predictions agreed closely with the experimental results. In particular, the model predicted both the mass flow at stall without control and the design parameters needed by, and the range extension realized from, active control. --- paper_title: Modeling for control of rotating stall paper_content: Abstract An analytical model for control of rotating stall has been obtained from the basic fluid equations describing the process at inception.
The model describes rotating stall as a traveling wave packet, sensed—in spatial components—via the Fourier decomposition of measurements obtained from a circumferential array of evenly distributed sensors (hot wires) upstream of the compressor. A set of "wiggly" inlet guide vanes (IGVs) equally spaced around the compressor annulus constitute the "forced" part of the model. Control is effected by launching waves at appropriate magnitude and phase, synthesized by spatial Fourier synthesis from individual IGV deflections. The effect of the IGV motion on the unsteady fluid process was quantified via identification experiments carried out on a low speed, single-stage axial research compressor. These experiments served to validate the theoretical model and refine key parameters in it. Further validation of the model was provided by the successful implementation of a complex-valued proportional control law, using a combination of first and second harmonic feedback; this resulted in an 18% reduction of stalling mass flow, at essentially the same pressure rise. --- paper_title: Active Control of Rotating Stall in a Low Speed Axial Compressor paper_content: The onset of rotating stall has been delayed in a low speed, single-stage, axial research compressor using active feedback control. Control was implemented using a circumferential array of hot wires to sense rotating waves of axial velocity upstream of the compressor. Circumferentially travelling waves were then generated with appropriate phase and amplitude by "wiggling" inlet guide vanes driven by individual actuators. The control scheme considered the wave pattern in terms of the individual spatial Fourier components. A simple proportional control law was implemented for each harmonic. Control of the first spatial harmonic yielded an 11% decrease in the stalling mass flow, while control of the first and second harmonics together reduced the stalling mass flow by 20%. The control system was also used to measure the sine wave response of the compressor, which behaved as would be expected for a second order system. --- paper_title: Modeling for Control of Rotating Stall in High Speed Multi-Stage Axial Compressors paper_content: Using a two dimensional compressible flow representation of axial compressor dynamics, a control-theoretic input-output model is derived which is of general utility in rotating stall/surge active control studies. The derivation presented here begins with a review of the fluid dynamic model, which is a 2D stage stacking technique that accounts for blade row pressure rise, loss and deviation as well as blade row and inter-blade row compressible flow. This model is extended to include the effects of the upstream and downstream geometry and boundary conditions, and then manipulated into a transfer function form that dynamically relates actuator motion to sensor measurements. Key relationships in this input-output form are then approximated using rational polynomials. Further manipulation yields an approximate model which is in standard form for studying active control of rotating stall and surge. As an example of high current relevance, the transfer function from an array of jet actuators to an array of static pressure sensors is derived. Numerical examples are also presented, including a demonstration of the importance of proper choice of sensor and actuator locations, as well as a comparison between sensor types.
Under a variety of conditions, it was found that sensor locations near the front of the compressor or in the downstream gap are consistently the best choices, based on a quadratic optimization criterion and a specific 3-stage compressor model. The modeling and evaluation procedures presented here are a first step toward a rigorous approach to the design of active control systems for high speed axial compressors. --- paper_title: Active Suppression of Rotating Stall and Surge in Axial Compressors paper_content: This paper reports on an experimental program in which active control was successfully applied to both rotating stall and surge in a multi-stage compressor. Two distinctly different methods were used to delay the onset of rotating stall in a four stage compressor using fast acting air injection valves. The amount of air injected was small compared to the machine mass flow, the maximum being less than 1.0%. In some compressor configurations modal perturbations were observed prior to stall. By using the air injection valves to damp out these perturbations an improvement of about 4.0% in stall margin was achieved. The second method of stall suppression was to remove emerging stall cells by injecting air in their immediate vicinity. Doing this repeatedly delayed the onset of stall, giving a stall margin improvement of about 6.0%. Further studies were conducted using a large plenum downstream of the compressor to induce the system to surge rather than stall. The resulting surge cycles were all found to be initiated by rotating stall and therefore the stall suppression systems mentioned above could also be used to suppress surge. In addition, it was possible to arrest the cyclical pulsing of a compressor already in surge. --- paper_title: Evaluation of Approaches to Active Compressor Surge Stabilization paper_content: Recent work has shown that compression systems can be actively stabilized against the instability known as surge, thereby realizing a significant gain in system mass flow range. Ideally, this surge stabilization requires only a single sensor and a single actuator connected by a suitable control law. Almost all research to date has been aimed at proof of concept studies of this technique, using various actuators and sensor combinations. In contrast, the work reported here can be regarded as a step toward developing active control into a practical technique. In this context, the paper presents the first systematic definition of the influence of sensor and actuator selection on increasing the range of stabilized compressor performance. --- paper_title: Theoretical study of sensor-actuator schemes for rotating stall control paper_content: A theoretical study has been conducted to determine the influence of actuator and sensor choice on active control of rotating stall in axial-flow compressors. The sensors are used to detect small amplitude traveling waves that have been observed at the inception of rotating stall in several different compressors. Control is achieved by feeding the sensed quantity back to the actuator with a suitable gain and spatial phase shift relative to the measured wave. Actuators using circumferential arrays of jets, intake ports, and movable inlet guide vanes upstream of the compressor, and valves downstream of the compressor were considered. The effect of axial velocity, static pressure, or total pressure measurement on control effectiveness was investigated.
In addition, the influence of the actuator bandwidth on the performance of the controlled system was determined. The results of the study indicate that the potential for active control of rotating stall is greater than that achieved thus far with movable inlet guide vanes. Furthermore, axial velocity sensing was most effective. Actuator bandwidth affected the performance of the controlled compressors significantly, but certain actuators were affected less severely than others. --- paper_title: Experimental techniques for actuation, sensing, and measurement of rotating stall dynamics in high-speed compressors paper_content: This report describes the experimental design and validation of sensing and actuation hardware to be incorporated into a NASA Lewis Research Center high-speed compressor test rig. The purpose of the control-augmented rig will be to investigate the dynamics of rotating stall in high-speed compressors, and to demonstrate stabilization of the perturbations which lead to rotating stall and surge. The overall experimental design is first described. Then the design of jet injection actuation is presented, including mechanical/fluid mechanical design rules, bandwidth limitations imposed by the electromagnetic valve, and by the fluid mechanics, and experimental validation of the actuation system. Specialized probes for 3D high-response flow measurements are then discussed, along with experimental validation of their performance. Finally, procedures for modeling and measurement of aerodynamic oscillation modes will be described. ---
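Several of the experiments summarized above sense the pre-stall wave with a circumferential array of probes, decompose it into spatial Fourier harmonics, and feed each harmonic back with a gain and spatial phase shift. The Python sketch below shows only that decomposition and a schematic proportional counter-wave command; the sensor count, readings, gain, and phase are invented for illustration, and the identified compressor dynamics and actuator hardware described in these papers are not modeled.

```python
import numpy as np

def spatial_harmonic(readings, n):
    """Complex amplitude of the n-th spatial harmonic seen by an evenly
    spaced circumferential sensor array."""
    k = len(readings)
    theta = 2.0 * np.pi * np.arange(k) / k  # sensor angular positions
    return np.sum(readings * np.exp(-1j * n * theta)) / k

def counter_wave(readings, n, gain, phase):
    """Per-actuator commands for a proportional counter-wave of harmonic n,
    applied with a chosen gain and spatial phase shift (both assumed here)."""
    v = np.asarray(readings, dtype=float)
    k = len(v)
    theta = 2.0 * np.pi * np.arange(k) / k  # actuator angular positions
    a_n = spatial_harmonic(v, n)
    return np.real(gain * a_n * np.exp(1j * (n * theta + phase)))

# Eight illustrative hot-wire readings fed back to eight vanes.
v = [0.02, 0.05, 0.04, 0.00, -0.03, -0.05, -0.02, 0.01]
print(counter_wave(v, n=1, gain=2.0, phase=np.pi / 4))
```

In the cited experiments the gain and phase for each harmonic are chosen from an identified compressor model rather than fixed by hand as in this toy example.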
Title: An Overview of Rotating Stall and Surge Control for Axial Flow Compressors Section 1: Introduction Description 1: Introduce the topic of rotating stall and surge in axial flow compressors and the significance of controlling these phenomena to improve engine performance. Section 2: Rotating Stall and Surge in Axial Flow Compressors Description 2: Describe the aerodynamic instabilities known as rotating stall and surge, their characteristics, how they affect compressor performance, and the necessity for control measures. Section 3: Moore-Greitzer Model Description 3: Provide a review of the Moore-Greitzer model, its assumptions, derivation, and its importance in the study of nonlinear behavior of rotating stall and surge in axial flow compressors. Section 4: Linear Perturbation Model and Feedback Control Description 4: Discuss the development of linear control methods based on the Moore-Greitzer model, including the use of inlet guide vanes as actuators for damping rotating stall. Section 5: Further Developments on Modeling and Control Description 5: Summarize recent advancements in modeling and control techniques, including multi-mode models, Lyapunov stability procedures, and developments in control methods for high-speed compressors. Section 6: Conclusion Description 6: Summarize the findings of the survey, emphasizing the progress made and the challenges that remain in the control of rotating stall and surge in axial flow compressors.
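The outline above lists a section on the Moore-Greitzer model, and several of the cited bifurcation studies work with Greitzer's lumped-parameter surge model. The sketch below integrates a Greitzer-type two-state model with an illustrative cubic compressor characteristic; every parameter value (B, gamma, psi0, H, W) is an assumption chosen only to produce a qualitative surge oscillation and is not taken from any of the cited papers.

```python
import numpy as np

def psi_c(phi, psi0=0.3, H=0.18, W=0.25):
    """Cubic axisymmetric compressor characteristic (illustrative shape only)."""
    x = phi / W - 1.0
    return psi0 + H * (1.0 + 1.5 * x - 0.5 * x ** 3)

def greitzer_rhs(state, B=1.8, gamma=0.5):
    """Greitzer-type two-state surge model: flow coefficient phi and
    pressure-rise coefficient psi, with a square-root throttle law."""
    phi, psi = state
    dphi = B * (psi_c(phi) - psi)
    dpsi = (phi - gamma * np.sqrt(max(psi, 0.0))) / B
    return np.array([dphi, dpsi])

# Forward-Euler integration of the resulting surge oscillation (sketch only;
# a proper ODE solver would be preferable for quantitative work).
state = np.array([0.5, 0.66])
dt, trajectory = 0.01, []
for _ in range(5000):
    state = state + dt * greitzer_rhs(state)
    trajectory.append(state.copy())
print(trajectory[-1])
```

With a large B parameter and a throttle setting that places the equilibrium on the positively sloped part of the characteristic, the two states settle into a limit cycle, which is the surge behaviour the bifurcation analyses above examine.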
Visual Question Answering: A Survey of Methods and Datasets
7
--- paper_title: CIDEr: Consensus-based image description evaluation paper_content: Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking. --- paper_title: Visual7W: Grounded Question Answering in Images paper_content: We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks. --- paper_title: What Value Do Explicit High Level Concepts Have in Vision to Language Problems? paper_content: Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems. --- paper_title: Deep Fragment Embeddings for Bidirectional Image Sentence Mapping paper_content: We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. 
Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: Conversational Robots: Building Blocks For Grounding Word Meaning paper_content: How can we build robots that engage in fluid spoken conversations with people, moving beyond canned responses to words and towards actually understanding? As a step towards addressing this question, we introduce a robotic architecture that provides a basis for grounding word meanings. The architecture provides perceptual, procedural, and affordance representations for grounding words. A perceptually-coupled on-line simulator enables sensory-motor representations that can shift points of view. Held together, we show that this architecture provides a rich set of data structures and procedures that provide the foundations for grounding the meaning of certain classes of words. --- paper_title: Composing Simple Image Descriptions using Web-scale N-grams paper_content: Studying natural language, and especially how people describe the world around them can help us better understand the visual world. In turn, it can also help us in the quest to generate natural language that describes this world in a human manner. We present a simple yet effective approach to automatically compose image descriptions given computer vision based inputs and using web-scale n-grams. Unlike most previous work that summarizes or retrieves pre-existing text relevant to an image, our method composes sentences entirely from scratch. Experimental results indicate that it is viable to generate simple textual descriptions that are pertinent to the specific content of an image, while permitting creativity in the description -- making for more human-like annotations than previous approaches. 
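The "Ask Your Neurons" entry above describes the now-standard recipe of encoding the question with a recurrent network, encoding the image with a pretrained CNN, and predicting an answer from the combined representation. The following PyTorch-style sketch illustrates that generic CNN+LSTM pipeline; the concatenation fusion, layer sizes, class names, and answer-classification head are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a generic CNN+LSTM VQA classifier (illustrative, not the
# published implementation): an LSTM question encoding and a pooled CNN image
# feature are fused by concatenation and mapped to a fixed answer vocabulary.
import torch
import torch.nn as nn

class CnnLstmVqa(nn.Module):
    def __init__(self, vocab_size, num_answers, img_feat_dim=2048,
                 word_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)       # question words -> vectors
        self.lstm = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(img_feat_dim, hidden_dim)   # project pooled CNN feature
        self.classifier = nn.Linear(2 * hidden_dim, num_answers)

    def forward(self, img_feat, question_tokens):
        # img_feat: (B, img_feat_dim) pooled feature from a pretrained CNN
        # question_tokens: (B, T) integer word indices
        _, (h, _) = self.lstm(self.embed(question_tokens))
        q = h[-1]                                   # (B, hidden_dim) question encoding
        v = torch.tanh(self.img_proj(img_feat))     # (B, hidden_dim) image encoding
        fused = torch.cat([q, v], dim=1)            # simple concatenation fusion
        return self.classifier(fused)               # logits over candidate answers

# usage sketch with random inputs
model = CnnLstmVqa(vocab_size=10000, num_answers=1000)
logits = model(torch.randn(4, 2048), torch.randint(0, 10000, (4, 12)))
```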
--- paper_title: Robust spoken instruction understanding for HRI paper_content: Natural human-robot interaction requires different and more robust models of language understanding (NLU) than non-embodied NLU systems. In particular, architectures are required that (1) process language incrementally in order to be able to provide early backchannel feedback to human speakers; (2) use pragmatic contexts throughout the understanding process to infer missing information; and (3) handle the underspecified, fragmentary, or otherwise ungrammatical utterances that are common in spontaneous speech. In this paper, we describe our attempts at developing an integrated natural language understanding architecture for HRI, and demonstrate its novel capabilities using challenging data collected in human-human interaction experiments. --- paper_title: Compositional Memory for Visual Question Answering paper_content: Visual Question Answering (VQA) emerges as one of the most fascinating topics in computer vision recently. Many state of the art methods naively use holistic visual features with language features into a Long Short-Term Memory (LSTM) module, neglecting the sophisticated interaction between them. This coarse modeling also blocks the possibilities of exploring finer-grained local features that contribute to the question answering dynamically over time. This paper addresses this fundamental problem by directly modeling the temporal dynamics between language and all possible local image patches. When traversing the question words sequentially, our end-to-end approach explicitly fuses the features associated to the words and the ones available at multiple local patches in an attention mechanism, and further combines the fused information to generate dynamic messages, which we call episode. We then feed the episodes to a standard question answering module together with the contextual visual information and linguistic information. Motivated by recent practices in deep learning, we use auxiliary loss functions during training to improve the performance. Our experiments on two latest public datasets suggest that our method has a superior performance. Notably, on the DAQUAR dataset we advanced the state of the art by 6%, and we also evaluated our approach on the most recent MSCOCO-VQA dataset. --- paper_title: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations paper_content: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that "the person is riding a horse-drawn carriage." In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models.
Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs. --- paper_title: Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics paper_content: In [Hodosh et al., 2013], we establish a ranking-based framework for sentence-based image description and retrieval. We introduce a new dataset of images paired with multiple descriptive captions that was specifically designed for these tasks. We also present strong KCCA-based baseline systems for description and search, and perform an in-depth study of evaluation metrics for these two tasks. Our results indicate that automatic evaluation metrics for our ranking-based tasks are more accurate and robust than those proposed for generation-based image description. --- paper_title: Visual Madlibs: Fill in the blank Image Generation and Question Answering paper_content: In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. --- paper_title: Yin and Yang: Balancing and Answering Binary Visual Questions paper_content: The complex compositional structure of language makes problems at the intersection of vision and language challenging. But language also provides a strong prior that can result in good superficial performance, without the underlying models truly understanding the visual content. This can hinder progress in pushing state of art in the computer vision aspects of multi-modal AI. In this paper, we address binary Visual Question Answering (VQA) on abstract scenes. We formulate this problem as visual verification of concepts inquired in the questions. Specifically, we convert the question to a tuple that concisely summarizes the visual concept to be detected in the image. If the concept can be found in the image, the answer to the question is "yes", and otherwise "no". Abstract scenes play two roles (1) They allow us to focus on the high-level semantics of the VQA task as opposed to the low-level recognition problems, and perhaps more importantly, (2) They provide us the modality to balance the dataset such that language priors are controlled, and the role of vision is essential. In particular, we collect fine-grained pairs of scenes for every question, such that the answer to the question is "yes" for one scene, and "no" for the other for the exact same question. Indeed, language priors alone do not perform better than chance on our balanced dataset.
Moreover, our proposed approach matches the performance of a state-of-the-art VQA approach on the unbalanced dataset, and outperforms it on the balanced dataset. --- paper_title: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) paper_content: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html . --- paper_title: ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering paper_content: We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions. --- paper_title: ImageNet: A large-scale hierarchical image database paper_content: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. 
We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. --- paper_title: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention paper_content: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO. --- paper_title: A Joint Model of Language and Perception for Grounded Attribute Learning paper_content: As robots become more ubiquitous and capable, it becomes ever more important for untrained users to easily interact with them. Recently, this has led to study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to the physical world. We present an approach for joint learning of language and perception models for grounded attribute induction. The perception model includes classifiers for physical characteristics and a language model based on a probabilistic categorial grammar that enables the construction of compositional meaning representations. We evaluate on the task of interpreting sentences that describe sets of objects in a physical workspace, and demonstrate accurate task performance and effective latent-variable concept induction in physical grounded scenes. --- paper_title: Stacked Attention Networks for Image Question Answering paper_content: This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer. --- paper_title: Show and tell: A neural image caption generator paper_content: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image.
Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. --- paper_title: Long-term recurrent convolutional networks for visual recognition and description paper_content: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized. --- paper_title: Ask Me Anything: Free-Form Visual Question Answering Based on Knowledge from External Sources paper_content: We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. 
We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases. --- paper_title: Microsoft COCO: Common Objects in Context paper_content: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. --- paper_title: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering paper_content: In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: \url{http://idl.baidu.com/FM-IQA.html}. --- paper_title: FVQA: Fact-Based Visual Question Answering paper_content: Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. 
Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet. We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Understanding Natural Language paper_content: This paper describes a computer system for understanding English. The system answers questions, executes commands, and accepts information in an interactive English dialog. It is based on the belief that in modeling language understanding, we must deal in an integrated way with all of the aspects of language—syntax, semantics, and inference. The system contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem solving system. We assume that a computer cannot deal reasonably with language unless it can understand the subject it is discussing. Therefore, the program is given a detailed model of a particular domain. In addition, the system has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carrying them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, asking for clarification when its heuristic programs cannot understand a sentence through the use of syntactic, semantic, contextual, and physical knowledge. Knowledge in the system is represented in the form of procedures, rather than tables of rules or lists of patterns. By developing special procedural representations for syntax, semantics, and inference, we gain flexibility and power. Since each piece of knowledge can be a procedure, it can call directly on any other piece of knowledge in the system. --- paper_title: Explicit Knowledge-based Reasoning for Visual Question Answering paper_content: We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base.
The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering. --- paper_title: Describing Videos by Exploiting Temporal Structure paper_content: Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. --- paper_title: Learning to Answer Questions From Image using Convolutional Neural Network paper_content: In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA). Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for the image QA, with the performances significantly outperforming the state-of-the-art. --- paper_title: Neural Module Networks paper_content: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.).
The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. --- paper_title: A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input paper_content: We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test. --- paper_title: Visual Turing test for computer vision systems paper_content: Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. --- paper_title: Joint Video and Text Parsing for Understanding Events and Answering Queries paper_content: This article proposes a multimedia analysis framework to process video and text jointly for understanding events and answering user queries. The framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events), and causal information (causalities between events and fluents) in the video and text. The knowledge representation of the framework is based on a spatial-temporal-causal AND-OR graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes, and events as well as their interactions and mutual contexts, and specifies the prior probabilistic distribution of the parse graphs. 
The authors present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs, and the joint parse graph. Based on the probabilistic model, the authors propose a joint parsing system consisting of three modules: video parsing, text parsing, and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text, respectively. The joint inference module produces a joint parse graph by performing matching, deduction, and revision on the video and text parse graphs. The proposed framework has the following objectives: to provide deep semantic parsing of video and text that goes beyond the traditional bag-of-words approaches; to perform parsing and reasoning across the spatial, temporal, and causal dimensions based on the joint S/T/C-AOG representation; and to show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where, and why. The authors empirically evaluated the system based on comparison against ground-truth as well as accuracy of query answering and obtained satisfactory results. --- paper_title: Glove: Global Vectors for Word Representation paper_content: Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition. --- paper_title: What Value Do Explicit High Level Concepts Have in Vision to Language Problems? paper_content: Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art performance in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. In doing so we provide an analysis of the value of high level semantic information in V2L problems. --- paper_title: Deep Fragment Embeddings for Bidirectional Image Sentence Mapping paper_content: We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. 
Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) paper_content: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html . --- paper_title: Show and tell: A neural image caption generator paper_content: Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. 
Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art. --- paper_title: Long-term recurrent convolutional networks for visual recognition and description paper_content: Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized. --- paper_title: Efficient Estimation of Word Representations in Vector Space paper_content: We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities. --- paper_title: Describing Videos by Exploiting Temporal Structure paper_content: Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. 
In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper_content: Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge. --- paper_title: Multimodal Residual Learning for Visual QA paper_content: Deep neural networks continue to advance the state-of-the-art of image recognition tasks with various methods. However, applications of these methods to multimodality remain limited. 
We present Multimodal Residual Networks (MRN) for the multimodal residual learning of visual question-answering, which extends the idea of the deep residual learning. Unlike the deep residual learning, MRN effectively learns the joint representation from vision and language information. The main idea is to use element-wise multiplication for the joint residual mappings exploiting the residual learning of the attentional models in recent studies. Various alternative models introduced by multimodality are explored based on our study. We achieve the state-of-the-art results on the Visual QA dataset for both Open-Ended and Multiple-Choice tasks. Moreover, we introduce a novel method to visualize the attention effect of the joint representations for each learning block using back-propagation algorithm, even though the visual features are collapsed without spatial information. --- paper_title: Simple Baseline for Visual Question Answering paper_content: We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo and open-source code. . --- paper_title: Answer-Type Prediction for Visual Question Answering paper_content: Recently, algorithms for object recognition and related tasks have become sufficiently proficient that new vision tasks can now be pursued. In this paper, we build a system capable of answering open-ended text-based questions about images, which is known as Visual Question Answering (VQA). Our approach's key insight is that we can predict the form of the answer from the question. We formulate our solution in a Bayesian framework. When our approach is combined with a discriminative model, the combined model achieves state-of-the-art results on four benchmark datasets for open-ended VQA: DAQUAR, COCO-QA, The VQA Dataset, and Visual7W. --- paper_title: DualNet: Domain-invariant network for visual question answering paper_content: Visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific contents within the image are understood as indicated by linguistic context of the question, in order to generate the accurate answers. Thus, it is critical to build an efficient embedding of images and texts. We implement DualNet, which fully takes advantage of discriminative power of both image and textual features by separately performing two operations. Building an ensemble of DualNet further boosts the performance. Contrary to common belief, our method proved effective in both real images and abstract scenes, in spite of significantly different properties of respective domain. Our method was able to outperform previous state-of-the-art methods in real images category even without explicitly employing attention mechanism, and also outperformed our own state-of-the-art method in abstract scenes category, which recently won the first place in VQA Challenge 2016. --- paper_title: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering paper_content: In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. 
Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: \url{http://idl.baidu.com/FM-IQA.html}. --- paper_title: Image Question Answering Using Convolutional Neural Network with Dynamic Parameter Prediction paper_content: We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network---joint network with the CNN for ImageQA and the parameter prediction network---is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. 
We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Learning to Answer Questions From Image using Convolutional Neural Network paper_content: In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA). Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for the image QA, with the performances significantly outperforming the state-of-the-art. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper_content: Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge. 
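The Multimodal Compact Bilinear pooling entry above fuses the image and question vectors by approximating their outer product with Count Sketch projections combined through FFT-domain multiplication. Below is a minimal Python sketch of that pooling step; the projection dimension d, the fixed random seed, and the feature sizes are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of compact bilinear (Tensor Sketch) pooling of two feature
# vectors, the core operation behind MCB; sizes and seed are illustrative.
import torch

def count_sketch(x, h, s, d):
    # x: (B, n) features; h: (n,) random bucket indices in [0, d);
    # s: (n,) random signs in {-1, +1}. Scatter-adds signed features into d buckets.
    sketch = x.new_zeros(x.shape[0], d)
    sketch.index_add_(1, h, x * s)
    return sketch

def compact_bilinear_pool(v, q, d=8000, seed=0):
    # Approximates the outer product of v and q: count-sketch each vector, then
    # circularly convolve the sketches via elementwise multiplication in the
    # FFT domain (convolution theorem).
    g = torch.Generator().manual_seed(seed)
    n_v, n_q = v.shape[1], q.shape[1]
    h_v = torch.randint(0, d, (n_v,), generator=g)
    h_q = torch.randint(0, d, (n_q,), generator=g)
    s_v = torch.randint(0, 2, (n_v,), generator=g).float() * 2 - 1
    s_q = torch.randint(0, 2, (n_q,), generator=g).float() * 2 - 1
    sk_v = torch.fft.rfft(count_sketch(v, h_v, s_v, d))
    sk_q = torch.fft.rfft(count_sketch(q, h_q, s_q, d))
    return torch.fft.irfft(sk_v * sk_q, n=d)   # (B, d) fused multimodal feature

# usage sketch: fuse a 2048-d image feature with a 300-d question embedding
fused = compact_bilinear_pool(torch.randn(4, 2048), torch.randn(4, 300))
```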
--- paper_title: Simple Baseline for Visual Question Answering paper_content: We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo and open-source code. . --- paper_title: Image Question Answering Using Convolutional Neural Network with Dynamic Parameter Prediction paper_content: We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network---joint network with the CNN for ImageQA and the parameter prediction network---is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Visual7W: Grounded Question Answering in Images paper_content: We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. 
We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks. --- paper_title: Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper_content: Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge. --- paper_title: Compositional Memory for Visual Question Answering paper_content: Visual Question Answering (VQA) has recently emerged as one of the most fascinating topics in computer vision. Many state-of-the-art methods naively feed holistic visual features together with language features into a Long Short-Term Memory (LSTM) module, neglecting the sophisticated interaction between them. This coarse modeling also blocks the possibilities of exploring finer-grained local features that contribute to the question answering dynamically over time. This paper addresses this fundamental problem by directly modeling the temporal dynamics between language and all possible local image patches. When traversing the question words sequentially, our end-to-end approach explicitly fuses the features associated with the words and the ones available at multiple local patches in an attention mechanism, and further combines the fused information to generate dynamic messages, which we call episodes. We then feed the episodes to a standard question answering module together with the contextual visual information and linguistic information. Motivated by recent practices in deep learning, we use auxiliary loss functions during training to improve the performance. Our experiments on two of the latest public datasets suggest that our method has superior performance. Notably, on the DAQUAR dataset we advanced the state of the art by 6%, and we also evaluated our approach on the most recent MSCOCO-VQA dataset. --- paper_title: Where to Look: Focus Regions for Visual Question Answering paper_content: We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query.
Our method exhibits significant improvements in answering questions such as "what color," where it is necessary to evaluate a specific location, and "what room," where it selectively identifies informative image regions. Our model is tested on the VQA dataset which is the largest human-annotated visual question answering dataset to our knowledge. --- paper_title: ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering paper_content: We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions. --- paper_title: Show, Attend and Tell: Neural Image Caption Generation with Visual Attention paper_content: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO. --- paper_title: Stacked Attention Networks for Image Question Answering paper_content: This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer. --- paper_title: Hierarchical Question-Image Co-Attention for Visual Question Answering paper_content: A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. 
In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA. --- paper_title: A Focused Dynamic Attention Model for Visual Question Answering paper_content: Visual Question and Answering (VQA) problems are attracting increasing interest from multiple research disciplines. Solving VQA problems requires techniques from both computer vision for understanding the visual contents of a presented image or video, as well as the ones from natural language processing for understanding semantics of the question and generating the answers. Regarding visual content modeling, most of existing VQA methods adopt the strategy of extracting global features from the image or video, which inevitably fails in capturing fine-grained information such as spatial configuration of multiple objects. Extracting features from auto-generated regions -- as some region-based image recognition methods do -- cannot essentially address this problem and may introduce some overwhelming irrelevant features with the question. In this work, we propose a novel Focused Dynamic Attention (FDA) model to provide better aligned image content representation with proposed questions. Being aware of the key words in the question, FDA employs off-the-shelf object detector to identify important regions and fuse the information from the regions and global features via an LSTM unit. Such question-driven representations are then combined with question representation and fed into a reasoning unit for generating the answers. Extensive evaluation on a large-scale benchmark dataset, VQA, clearly demonstrate the superior performance of FDA over well-established baselines. --- paper_title: Neural Module Networks paper_content: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. --- paper_title: Neural Module Networks paper_content: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" 
shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. --- paper_title: The Stanford Typed Dependencies Representation paper_content: This paper examines the Stanford typed dependencies representation, which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding. For such purposes, we argue that dependency schemes must follow a simple design and provide semantically contentful information, as well as offer an automatic procedure to extract the relations. We consider the underlying design principles of the Stanford scheme from this perspective, and compare it to the GR and PARC representations. Finally, we address the question of the suitability of the Stanford scheme for parser evaluation. --- paper_title: Learning to Compose Neural Networks for Question Answering paper_content: We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains. --- paper_title: Learning Dependency-Based Compositional Semantics paper_content: Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to learn a semantic parser from question-answer pairs instead, where the logical form is modeled as a latent variable. Motivated by this challenging learning problem, we develop a new semantic formalism, dependency-based compositional semantics (DCS), which has favorable linguistic, statistical, and computational properties. We define a log-linear distribution over DCS logical forms and estimate the parameters using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, our system outperforms all existing state-of-the-art systems, despite using no annotated logical forms. 
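To make the module-composition idea in the Neural Module Networks entries above more tangible, the sketch below wires a hypothetical attention module and a describe module into the layout that a dependency parse of "What color is the dog?" would suggest. This is a minimal PyTorch-style illustration: the module classes, feature sizes, and answer-vocabulary size are assumptions for exposition, not the published implementation.

```python
import torch
import torch.nn as nn

class Attend(nn.Module):
    """Predicts a spatial attention map over CNN features for one concept (e.g. 'dog')."""
    def __init__(self, feat_dim):
        super().__init__()
        self.conv = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, feats):                       # feats: (B, C, H, W)
        return torch.sigmoid(self.conv(feats))      # attention: (B, 1, H, W)

class Describe(nn.Module):
    """Classifies an answer (e.g. a colour) from attention-weighted features."""
    def __init__(self, feat_dim, num_answers):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_answers)

    def forward(self, feats, attn):
        weighted = (feats * attn).sum(dim=(2, 3))                    # (B, C)
        pooled = weighted / attn.sum(dim=(2, 3)).clamp(min=1e-6)     # normalise by attention mass
        return self.fc(pooled)                                       # answer logits

# Layout suggested by a parse of "What color is the dog?":
#   describe[color]( attend[dog]( image_features ) )
attend_dog = Attend(feat_dim=512)
describe_color = Describe(feat_dim=512, num_answers=1000)
feats = torch.randn(1, 512, 14, 14)    # stand-in for convolutional image features
logits = describe_color(feats, attend_dog(feats))
```

A different question would instantiate a different composition of (shared, jointly trained) modules, which is the core of the approach described in those entries.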
--- paper_title: Neural Module Networks paper_content: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. --- paper_title: Towards Neural Network-based Reasoning paper_content: We propose Neural Reasoner, a framework for neural network-based reasoning over natural language sentences. Given a question, Neural Reasoner can infer over multiple supporting facts and find an answer to the question in specific forms. Neural Reasoner has 1) a specific interaction-pooling mechanism, allowing it to examine multiple facts, and 2) a deep architecture, allowing it to model the complicated logical relations in reasoning tasks. Assuming no particular structure exists in the question and facts, Neural Reasoner is able to accommodate different types of reasoning and different forms of language expressions. Despite the model complexity, Neural Reasoner can still be trained effectively in an end-to-end manner. Our empirical studies show that Neural Reasoner can outperform existing neural reasoning systems with remarkable margins on two difficult artificial tasks (Positional Reasoning and Path Finding) proposed in [8]. For example, it improves the accuracy on Path Finding(10K) from 33.4% [6] to over 98%. --- paper_title: Large-scale Simple Question Answering with Memory Networks paper_content: Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance. --- paper_title: Training Recurrent Answering Units with Joint Loss Minimization for VQA paper_content: We propose a novel algorithm for visual question answering based on a recurrent deep neural network, where every module in the network corresponds to a complete answering unit with attention mechanism by itself. 
The network is optimized by minimizing loss aggregated from all the units, which share model parameters while receiving different information to compute attention probability. For training, our model attends to a region within image feature map, updates its memory based on the question and attended image feature, and answers the question based on its memory state. This procedure is performed to compute loss in each step. The motivation of this approach is our observation that multi-step inferences are often required to answer questions while each problem may have a unique desirable number of steps, which is difficult to identify in practice. Hence, we always make the first unit in the network solve problems, but allow it to learn the knowledge from the rest of units by backpropagation unless it degrades the model. To implement this idea, we early-stop training each unit as soon as it starts to overfit. Note that, since more complex models tend to overfit on easier questions quickly, the last answering unit in the unfolded recurrent neural network is typically killed first while the first one remains last. We make a single-step prediction for a new question using the shared model. This strategy works better than the other options within our framework since the selected model is trained effectively from all units without overfitting. The proposed algorithm outperforms other multi-step attention based approaches using a single step prediction in VQA dataset. --- paper_title: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks paper_content: One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks. --- paper_title: Ask Me Anything: Dynamic Memory Networks for Natural Language Processing paper_content: Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). 
The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets. --- paper_title: Weakly Supervised Memory Networks paper_content: In this paper we introduce a variant of Memory Networks that needs significantly less supervision to perform question answering tasks. The original model requires that the sentences supporting the answer be explicitly indicated during training. In contrast, our approach only requires the answer to the question during training. We apply the model to the synthetic bAbI tasks, showing that our approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data. Furthermore, it decisively beats other weakly supervised approaches based on LSTMs. The approach is quite general and can potentially be applied to many other tasks that require capturing long-term dependencies. --- paper_title: Dynamic Memory Networks for Visual and Textual Question Answering paper_content: Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision. --- paper_title: Toward an architecture for never-ending language learning paper_content: We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent. --- paper_title: Acquiring comparative commonsense knowledge from the web paper_content: Applications are increasingly expected to make smart decisions based on what humans consider basic commonsense. An often overlooked but essential form of commonsense involves comparisons, e.g. the fact that bears are typically more dangerous than dogs, that tables are heavier than chairs, or that ice is colder than water. In this paper, we first rely on open information extraction methods to obtain large amounts of comparisons from the Web. We then develop a joint optimization model for cleaning and disambiguating this knowledge with respect to WordNet. This model relies on integer linear programming and semantic coherence scores.
Experiments show that our model outperforms strong baselines and allows us to obtain a large knowledge base of disambiguated commonsense assertions. --- paper_title: Identifying Relations for Open Information Extraction paper_content: Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-of-the-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the ReVerb Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TextRunner and WOE^pos. More than 30% of ReVerb's extractions are at precision 0.8 or higher---compared to virtually none for earlier systems. The paper concludes with a detailed analysis of ReVerb's errors, suggesting directions for future work. --- paper_title: DBpedia: A Nucleus for a Web of Open Data paper_content: DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. --- paper_title: ConceptNet: A Practical Commonsense Reasoning Toolkit paper_content: ConceptNet is a freely available commonsense knowledge base and natural-language-processing toolkit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting, analogy-making, and other context-oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700,000 sentences of the Open Mind Common Sense Project — a World Wide Web based collaboration with over 14,000 authors. --- paper_title: Freebase: a collaboratively created graph database for structuring human knowledge paper_content: Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.
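The knowledge bases listed above, and the knowledge-augmented VQA systems described next, rely on retrieving structured facts at answer time. As a hedged illustration of such a lookup, the sketch below queries the public DBpedia SPARQL endpoint for the English comment attached to a detected visual concept. SPARQLWrapper and the endpoint are real, but the particular query, the helper name, and the idea of keying retrieval on a detector label are assumptions for exposition rather than any single paper's pipeline.

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

def dbpedia_comment(concept, limit=3):
    """Fetch short English descriptions of a visual concept label, e.g. 'Umbrella' (illustrative helper)."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX dbr: <http://dbpedia.org/resource/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?comment WHERE {{
            dbr:{concept} rdfs:comment ?comment .
            FILTER (lang(?comment) = "en")
        }} LIMIT {limit}
    """)
    results = sparql.query().convert()
    return [b["comment"]["value"] for b in results["results"]["bindings"]]

# A knowledge-augmented VQA model could append dbpedia_comment("Umbrella") to its question
# encoding before answering a question such as "Why is the person holding this object?".
```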
--- paper_title: WebChild: harvesting and organizing commonsense knowledge from the web paper_content: This paper presents a method for automatically constructing a large commonsense knowledge base, called WebChild, from Web contents. WebChild contains triples that connect nouns with adjectives via fine-grained relations like hasShape, hasTaste, evokesEmotion, etc. The arguments of these assertions, nouns and adjectives, are disambiguated by mapping them onto their proper WordNet senses. Our method is based on semi-supervised Label Propagation over graphs of noisy candidate assertions. We automatically derive seeds from WordNet and by pattern matching from Web text collections. The Label Propagation algorithm provides us with domain sets and range sets for 19 different relations, and with confidence-ranked assertions between WordNet senses. Large-scale experiments demonstrate the high accuracy (more than 80 percent) and coverage (more than four million fine-grained disambiguated assertions) of WebChild. --- paper_title: Ask Me Anything: Free-Form Visual Question Answering Based on Knowledge from External Sources paper_content: We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA, and show that it produces the best reported results in both cases. --- paper_title: FVQA: Fact-Based Visual Question Answering paper_content: Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet.
We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts. --- paper_title: YAGO3: A Knowledge Base from Multilingual Wikipedias paper_content: We present YAGO3, an extension of the YAGO knowledge base that combines the information from the Wikipedias in multiple languages. Our technique fuses the multilingual information with the English WordNet to build one coherent knowledge base. We make use of the categories, the infoboxes, and Wikidata, and learn the meaning of infobox attributes across languages. We run our method on 10 different languages, and achieve a precision of 95%-100% in the attribute mapping. Our technique enlarges YAGO by 1m new entities and 7m new facts. --- paper_title: Explicit Knowledge-based Reasoning for Visual Question Answering paper_content: We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering. --- paper_title: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions paper_content: We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions. --- paper_title: A Survey of Current Datasets for Vision and Language Research paper_content: Integrating vision and language has long been a dream in work on artificial intelligence (AI). In the past two years, we have witnessed an explosion of work that brings together vision and language from images to videos and beyond. The available corpora have played a crucial role in advancing this area of research. In this paper, we propose a set of quality metrics for evaluating and analyzing the vision & language datasets and categorize them accordingly. Our analyses show that the most recent datasets have been using more complex language and more abstract concepts; however, there are different strengths and weaknesses in each. --- paper_title: Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics paper_content: In [Hodosh et al., 2013], we establish a ranking-based framework for sentence-based image description and retrieval. We introduce a new dataset of images paired with multiple descriptive captions that was specifically designed for these tasks. We also present strong KCCA-based baseline systems for description and search, and perform an in-depth study of evaluation metrics for these two tasks.
Our results indicate that automatic evaluation metrics for our ranking-based tasks are more accurate and robust than those proposed for generation-based image description. --- paper_title: Natural Language Object Retrieval paper_content: In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer. --- paper_title: Generation and Comprehension of Unambiguous Object Descriptions paper_content: We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/ mjhucla/Google_Refexp_toolbox. --- paper_title: ReferItGame: Referring to Objects in Photographs of Natural Scenes paper_content: In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets. --- paper_title: Microsoft COCO: Common Objects in Context paper_content: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. 
Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. --- paper_title: Microsoft COCO Captions: Data Collection and Evaluation Server paper_content: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided. --- paper_title: Ask Your Neurons: A Neural-Based Approach to Answering Questions about Images paper_content: We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus. --- paper_title: A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input paper_content: We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test. --- paper_title: Visual Turing test for computer vision systems paper_content: Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. 
As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. --- paper_title: Microsoft COCO: Common Objects in Context paper_content: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. --- paper_title: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering paper_content: In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 
0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: \url{http://idl.baidu.com/FM-IQA.html}. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Joint Video and Text Parsing for Understanding Events and Answering Queries paper_content: This article proposes a multimedia analysis framework to process video and text jointly for understanding events and answering user queries. The framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events), and causal information (causalities between events and fluents) in the video and text. The knowledge representation of the framework is based on a spatial-temporal-causal AND-OR graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes, and events as well as their interactions and mutual contexts, and specifies the prior probabilistic distribution of the parse graphs. The authors present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs, and the joint parse graph. Based on the probabilistic model, the authors propose a joint parsing system consisting of three modules: video parsing, text parsing, and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text, respectively. The joint inference module produces a joint parse graph by performing matching, deduction, and revision on the video and text parse graphs. The proposed framework has the following objectives: to provide deep semantic parsing of video and text that goes beyond the traditional bag-of-words approaches; to perform parsing and reasoning across the spatial, temporal, and causal dimensions based on the joint S/T/C-AOG representation; and to show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where, and why. 
The authors empirically evaluated the system based on comparison against ground-truth as well as accuracy of query answering and obtained satisfactory results. --- paper_title: Learning to Answer Questions From Image using Convolutional Neural Network paper_content: In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA). Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for the image QA, with the performances significantly outperforming the state-of-the-art. --- paper_title: Visual7W: Grounded Question Answering in Images paper_content: We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. --- paper_title: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations paper_content: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that "the person is riding a horse-drawn carriage." In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects.
We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Visual Madlibs: Fill in the blank Image Generation and Question Answering paper_content: In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. --- paper_title: A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input paper_content: We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. 
As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Visual7W: Grounded Question Answering in Images paper_content: We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks. --- paper_title: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations paper_content: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that "the person is riding a horse-drawn carriage." In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs. --- paper_title: Visual Madlibs: Fill in the blank Image Generation and Question Answering paper_content: In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images.
This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks. --- paper_title: A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input paper_content: We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test. --- paper_title: Microsoft COCO: Common Objects in Context paper_content: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. --- paper_title: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering paper_content: In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. 
The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: \url{http://idl.baidu.com/FM-IQA.html}. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Adopting Abstract Images for Semantic Scene Understanding paper_content: Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages over real images. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of real images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of real images that are semantically similar would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract images with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity. Finally, we study the relation between the saliency and memorability of objects and their semantic importance. --- paper_title: Don't just listen, use your imagination: Leveraging visual common sense for non-visual tasks paper_content: Artificial agents today can answer factual questions. But they fall short on questions that require common sense reasoning. Perhaps this is because most existing common sense databases rely on text to learn and represent knowledge. 
But much of common sense knowledge is unwritten - partly because it tends not to be interesting enough to talk about, and partly because some common sense is unnatural to articulate in text. While unwritten, it is not unseen. In this paper we leverage semantic common sense knowledge learned from images - i.e. visual common sense - in two textual tasks: fill-in-the-blank and visual paraphrasing. We propose to “imagine” the scene behind the text, and leverage visual cues from the “imagined” scenes in addition to textual cues while answering these questions. We imagine the scenes as a visual abstraction. Our approach outperforms a strong text-only baseline on these tasks. Our proposed tasks can serve as benchmarks to quantitatively evaluate progress in solving tasks that go “beyond recognition”. Our code and datasets are publicly available. --- paper_title: Bringing Semantics into Focus Using Visual Abstraction paper_content: Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of semantically similar real images would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract scenes with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity. --- paper_title: Predicting Object Dynamics in Scenes paper_content: Given a static scene, a human can trivially enumerate the myriad of things that can happen next and characterize the relative likelihood of each. In the process, we make use of enormous amounts of commonsense knowledge about how the world works. In this paper, we investigate learning this commonsense knowledge from data. To overcome a lack of densely annotated spatiotemporal data, we learn from sequences of abstract images gathered using crowdsourcing. The abstract scenes provide both object location and attribute information. We demonstrate qualitatively and quantitatively that our models produce plausible scene predictions on both the abstract images, as well as natural images taken from the Internet. --- paper_title: Yin and Yang: Balancing and Answering Binary Visual Questions paper_content: The complex compositional structure of language makes problems at the intersection of vision and language challenging. But language also provides a strong prior that can result in good superficial performance, without the underlying models truly understanding the visual content. This can hinder progress in pushing state of art in the computer vision aspects of multi-modal AI. In this paper, we address binary Visual Question Answering (VQA) on abstract scenes.
We formulate this problem as visual verification of concepts inquired in the questions. Specifically, we convert the question to a tuple that concisely summarizes the visual concept to be detected in the image. If the concept can be found in the image, the answer to the question is "yes", and otherwise "no". Abstract scenes play two roles (1) They allow us to focus on the high-level semantics of the VQA task as opposed to the low-level recognition problems, and perhaps more importantly, (2) They provide us the modality to balance the dataset such that language priors are controlled, and the role of vision is essential. In particular, we collect fine-grained pairs of scenes for every question, such that the answer to the question is "yes" for one scene, and "no" for the other for the exact same question. Indeed, language priors alone do not perform better than chance on our balanced dataset. Moreover, our proposed approach matches the performance of a state-of-the-art VQA approach on the unbalanced dataset, and outperforms it on the balanced dataset. --- paper_title: Zero-Shot Learning via Visual Abstraction paper_content: One of the main challenges in learning fine-grained visual categories is gathering training images. Recent work in Zero-Shot Learning (ZSL) circumvents this challenge by describing categories via attributes or text. However, not all visual concepts, e.g., two people dancing, are easily amenable to such descriptions. In this paper, we propose a new modality for ZSL using visual abstraction to learn difficult-to-describe concepts. Specifically, we explore concepts related to people and their interactions with others. Our proposed modality allows one to provide training data by manipulating abstract visualizations, e.g., one can illustrate interactions between two clipart people by manipulating each person’s pose, expression, gaze, and gender. The feasibility of our approach is shown on a human pose dataset and a new dataset containing complex interactions between two people, where we outperform several baselines. To better match across the two domains, we learn an explicit mapping between the abstract and real worlds. --- paper_title: Learning Common Sense through Visual Abstraction paper_content: Common sense is essential for building intelligent machines. While some commonsense knowledge is explicitly stated in human-generated text and can be learnt by mining the web, much of it is unwritten. It is often unnecessary and even unnatural to write about commonsense facts. While unwritten, this commonsense knowledge is not unseen! The visual world around us is full of structure modeled by commonsense knowledge. Can machines learn common sense simply by observing our visual world? Unfortunately, this requires automatic and accurate detection of objects, their attributes, poses, and interactions between objects, which remain challenging problems. Our key insight is that while visual common sense is depicted in visual content, it is the semantic features that are relevant and not low-level pixel information. In other words, photorealism is not necessary to learn common sense. We explore the use of human-generated abstract scenes made from clipart for learning common sense. In particular, we reason about the plausibility of an interaction or relation between a pair of nouns by measuring the similarity of the relation and nouns with other relations and nouns we have seen in abstract scenes. 
We show that the commonsense knowledge we learn is complementary to what can be learnt from sources of text. --- paper_title: Learning the Visual Interpretation of Sentences paper_content: Sentences that describe visual scenes contain a wide variety of information pertaining to the presence of objects, their attributes and their spatial relations. In this paper we learn the visual features that correspond to semantic phrases derived from sentences. Specifically, we extract predicate tuples that contain two nouns and a relation. The relation may take several forms, such as a verb, preposition, adjective or their combination. We model a scene using a Conditional Random Field (CRF) formulation where each node corresponds to an object, and the edges to their relations. We determine the potentials of the CRF using the tuples extracted from the sentences. We generate novel scenes depicting the sentences' visual meaning by sampling from the CRF. The CRF is also used to score a set of scenes for a text-based image retrieval task. Our results show we can generate (retrieve) scenes that convey the desired semantic meaning, even when scenes (queries) are described by multiple sentences. Significant improvement is found over several baseline approaches. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Visual7W: Grounded Question Answering in Images paper_content: We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks. 
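To make the joint question-image modeling described in the Visual7W abstract above more concrete, the following is a minimal, hypothetical sketch (in PyTorch) of an LSTM question encoder combined with soft spatial attention over image region features. All layer sizes, parameter names and the answer-classification head are illustrative assumptions, not the authors' released model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionVQA(nn.Module):
        """Question LSTM + soft spatial attention over image regions (illustrative sketch)."""
        def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=512,
                     region_dim=2048, num_answers=1000):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(region_dim, hidden_dim)   # project CNN region features
            self.att = nn.Linear(hidden_dim, 1)             # scalar attention score per region
            self.classify = nn.Linear(hidden_dim * 2, num_answers)

        def forward(self, question, regions):
            # question: (B, T) word ids; regions: (B, R, region_dim) precomputed CNN features
            _, (h, _) = self.lstm(self.embed(question))
            q = h[-1]                                        # (B, hidden_dim) question encoding
            v = torch.tanh(self.proj(regions))               # (B, R, hidden_dim)
            alpha = F.softmax(self.att(v * q.unsqueeze(1)), dim=1)  # spatial attention weights
            v_att = (alpha * v).sum(dim=1)                   # attended image summary
            return self.classify(torch.cat([q, v_att], dim=1))     # logits over candidate answers

    # Example call with random inputs:
    # logits = AttentionVQA()(torch.randint(0, 10000, (2, 12)), torch.randn(2, 49, 2048))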
--- paper_title: Acquiring comparative commonsense knowledge from the web paper_content: Applications are increasingly expected to make smart decisions based on what humans consider basic commonsense. An often overlooked but essential form of commonsense involves comparisons, e.g. the fact that bears are typically more dangerous than dogs, that tables are heavier than chairs, or that ice is colder than water. In this paper, we first rely on open information extraction methods to obtain large amounts of comparisons from the Web. We then develop a joint optimization model for cleaning and disambiguating this knowledge with respect to WordNet. This model relies on integer linear programming and semantic coherence scores. Experiments show that our model outperforms strong baselines and allows us to obtain a large knowledge base of disambiguated commonsense assertions. --- paper_title: A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input paper_content: We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. We establish a first benchmark for this task that can be seen as a modern attempt at a visual turing test. --- paper_title: DBpedia: A Nucleus for a Web of Open Data paper_content: DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. --- paper_title: ConceptNet: A Practical Commonsense Reasoning Toolkit paper_content: ConceptNet is a freely available commonsense knowledge base and natural-language-processing tool-kit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting, analogy-making, and other context oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700 000 sentences of the Open Mind Common Sense Project — a World Wide Web based collaboration with over 14 000 authors. --- paper_title: WebChild: harvesting and organizing commonsense knowledge from the web paper_content: This paper presents a method for automatically constructing a large commonsense knowledge base, called WebChild, from Web contents.
WebChild contains triples that connect nouns with adjectives via fine-grained relations like hasShape, hasTaste, evokesEmotion, etc. The arguments of these assertions, nouns and adjectives, are disambiguated by mapping them onto their proper WordNet senses. Our method is based on semi-supervised Label Propagation over graphs of noisy candidate assertions. We automatically derive seeds from WordNet and by pattern matching from Web text collections. The Label Propagation algorithm provides us with domain sets and range sets for 19 different relations, and with confidence-ranked assertions between WordNet senses. Large-scale experiments demonstrate the high accuracy (more than 80 percent) and coverage (more than four million fine grained disambiguated assertions) of WebChild. --- paper_title: Microsoft COCO: Common Objects in Context paper_content: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. --- paper_title: Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering paper_content: In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: \url{http://idl.baidu.com/FM-IQA.html}. 
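The mQA abstract above describes fusing a question LSTM, an image CNN and an answer LSTM to generate the answer word by word. Below is a minimal, hypothetical sketch of one decoding step of such a fusion (PyTorch); the component sizes and the simple concatenation-plus-tanh fusing layer are assumptions for illustration, not the released mQA implementation.

    import torch
    import torch.nn as nn

    class FusionDecoderStep(nn.Module):
        """One answer-generation step fusing question, image and answer-LSTM state (sketch)."""
        def __init__(self, vocab_size=5000, embed_dim=256, hidden_dim=512, img_dim=2048):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.answer_cell = nn.LSTMCell(embed_dim, hidden_dim)
            self.fuse = nn.Linear(hidden_dim + hidden_dim + img_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, prev_word, state, question_vec, image_vec):
            # prev_word: (B,) previous answer word id; state: (h, c) of the answer LSTM
            # question_vec: (B, hidden_dim) from a question encoder; image_vec: (B, img_dim) from a CNN
            h, c = self.answer_cell(self.embed(prev_word), state)
            fused = torch.tanh(self.fuse(torch.cat([question_vec, h, image_vec], dim=1)))
            return self.out(fused), (h, c)   # logits for the next answer word, updated LSTM state

At generation time, such a step would be applied repeatedly, feeding back the sampled word and the updated state until an end-of-answer token is produced.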
--- paper_title: FVQA: Fact-Based Visual Question Answering paper_content: Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts. --- paper_title: VQA: Visual Question Answering paper_content: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance. --- paper_title: Explicit Knowledge-based Reasoning for Visual Question Answering paper_content: We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering. --- paper_title: A Diagram Is Worth A Dozen Images paper_content: Diagrams are common tools for representing complex concepts, relationships and events, often when it would be difficult to portray the same information with natural images. Understanding natural images has been extensively studied in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation, the challenging task of identifying the structure of a diagram and the semantics of its constituents and their relationships. We introduce Diagram Parse Graphs (DPG) as our representation to model the structure of diagrams. We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering. We devise an LSTM-based method for syntactic parsing of diagrams and introduce a DPG-based attention model for diagram question answering. We compile a new dataset of diagrams with exhaustive annotations of constituents and relationships for about 5,000 diagrams and 15,000 questions and answers. Our results show the significance of our models for syntactic parsing and question answering in diagrams using DPGs. --- paper_title: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks paper_content: One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks. --- paper_title: Neural Module Networks paper_content: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes. --- paper_title: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations paper_content: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. 
To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that "the person is riding a horse-drawn carriage." In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs. --- paper_title: Acquiring comparative commonsense knowledge from the web paper_content: Applications are increasingly expected to make smart decisions based on what humans consider basic commonsense. An often overlooked but essential form of commonsense involves comparisons, e.g. the fact that bears are typically more dangerous than dogs, that tables are heavier than chairs, or that ice is colder than water. In this paper, we first rely on open information extraction methods to obtain large amounts of comparisons from the Web. We then develop a joint optimization model for cleaning and disambiguating this knowledge with respect to WordNet. This model relies on integer linear programming and semantic coherence scores. Experiments show that our model outperforms strong baselines and allows us to obtain a large knowledge base of disambiguated commonsense assertions. --- paper_title: DBpedia: A Nucleus for a Web of Open Data paper_content: DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. --- paper_title: ConceptNet: A Practical Commonsense Reasoning Toolkit paper_content: ConceptNet is a freely available commonsense knowledge base and natural-language-processing tool-kit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting, analogy-making, and other context oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700 000 sentences of the Open Mind Common Sense Project — a World Wide Web based collaboration with over 14 000 authors.
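ConceptNet, described above, stores commonsense knowledge as (concept, relation, concept) assertions in a semantic network. As a toy illustration of how an application might hold and query such triples, a minimal in-memory sketch follows; the example assertions and the class interface are invented for illustration and are not drawn from ConceptNet itself.

    from collections import defaultdict

    class TripleStore:
        """A toy in-memory store of (subject, relation, object) commonsense assertions."""
        def __init__(self):
            self._facts = defaultdict(list)   # subject -> list of (relation, object)

        def add(self, subject, relation, obj):
            self._facts[subject].append((relation, obj))

        def query(self, subject, relation=None):
            # Return all objects related to `subject`, optionally filtered by relation.
            return [o for r, o in self._facts.get(subject, []) if relation is None or r == relation]

    kb = TripleStore()
    kb.add("umbrella", "UsedFor", "staying dry")   # illustrative assertion
    kb.add("umbrella", "AtLocation", "closet")     # illustrative assertion
    print(kb.query("umbrella", "UsedFor"))         # -> ['staying dry']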
--- paper_title: WebChild: harvesting and organizing commonsense knowledge from the web paper_content: This paper presents a method for automatically constructing a large commonsense knowledge base, called WebChild, from Web contents. WebChild contains triples that connect nouns with adjectives via fine-grained relations like hasShape, hasTaste, evokesEmotion, etc. The arguments of these assertions, nouns and adjectives, are disambiguated by mapping them onto their proper WordNet senses. Our method is based on semi-supervised Label Propagation over graphs of noisy candidate assertions. We automatically derive seeds from WordNet and by pattern matching from Web text collections. The Label Propagation algorithm provides us with domain sets and range sets for 19 different relations, and with confidence-ranked assertions between WordNet senses. Large-scale experiments demonstrate the high accuracy (more than 80 percent) and coverage (more than four million fine grained disambiguated assertions) of WebChild. --- paper_title: Ask Me Anything: Free-Form Visual Question Answering Based on Knowledge from External Sources paper_content: We propose a method for visual question answering which combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. This allows more complex questions to be answered using the predominant neural network-based approach than has previously been possible. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain the whole answer. The method constructs a textual representation of the semantic content of an image, and merges it with textual information sourced from a knowledge base, to develop a deeper understanding of the scene viewed. Priming a recurrent neural network with this combined information, and the submitted question, leads to a very flexible visual question answering approach. We are specifically able to answer questions posed in natural language, that refer to information not contained in the image. We demonstrate the effectiveness of our model on two publicly available datasets, Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported results in both cases. --- paper_title: FVQA: Fact-Based Visual Question Answering paper_content: Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet, such as .
We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts. --- paper_title: Explicit Knowledge-based Reasoning for Visual Question Answering paper_content: We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering. --- paper_title: Answer-Type Prediction for Visual Question Answering paper_content: Recently, algorithms for object recognition and related tasks have become sufficiently proficient that new vision tasks can now be pursued. In this paper, we build a system capable of answering open-ended text-based questions about images, which is known as Visual Question Answering (VQA). Our approach's key insight is that we can predict the form of the answer from the question. We formulate our solution in a Bayesian framework. When our approach is combined with a discriminative model, the combined model achieves state-of-the-art results on four benchmark datasets for open-ended VQA: DAQUAR, COCO-QA, The VQA Dataset, and Visual7W. --- paper_title: Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base paper_content: We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5% on the WEBQUESTIONS dataset. --- paper_title: Question Answering: A Survey of Research, Techniques and Issues paper_content: With the huge amount of data available on web, it has turned out to be a fertile area for Question Answering (QA) research. Question answering, an instance of information retrieval research is at the cross road from several research communities such as, machine learning, statistical learning, natural language processing and pattern learning. In this paper, the authors survey the research in area of question answering with respect to different prospects of NLP, machine learning, statistical learning and pattern learning. Then they situate some of the prominent QA systems concerning these prospects and present a comparative study on the basis of question types. --- paper_title: Recurrent Neural Networks with External Memory for Language Understanding paper_content: Recurrent Neural Networks (RNNs) have become increasingly popular for the task of language understanding.
In this task, a semantic tagger is deployed to associate a semantic label to each word in an input sequence. The success of RNN may be attributed to its ability to memorize long-term dependence that relates the current-time semantic label prediction to the observations many time instances away. However, the memory capacity of simple RNNs is limited because of the gradient vanishing and exploding problem. We propose to use an external memory to improve memorization capability of RNNs. We conducted experiments on the ATIS dataset, and observed that the proposed model was able to achieve the state-of-the-art results. We compare our proposed model with alternative models and report analysis results that may provide insights for future research. --- paper_title: Language to Logical Form with Neural Attention paper_content: Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations. --- paper_title: A Neural Network for Factoid Question Answering over Paragraphs paper_content: Text classification methods for tasks like factoid question answering typically use manually defined string matching rules or bag of words representations. These methods are ineffective when question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (rnn) model that can reason over such input by modeling textual compositionality. We apply our model, qanta, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous rnn models, qanta learns word and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information retrieval methods, rivals the best human players. --- paper_title: Learning to Compose Neural Networks for Question Answering paper_content: We describe a question answering model that applies to both images and structured knowledge bases. The model uses natural language strings to automatically assemble neural networks from a collection of composable modules. Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning, with only (world, question, answer) triples as supervision. Our approach, which we term a dynamic neural model network, achieves state-of-the-art results on benchmark datasets in both visual and structured domains. --- paper_title: FVQA: Fact-Based Visual Question Answering paper_content: Visual Question Answering (VQA) has attracted a lot of attention in both Computer Vision and Natural Language Processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone.
The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA, a VQA dataset which requires, and supports, much deeper reasoning. FVQA only contains questions which require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. The supporting fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting facts. --- paper_title: Dynamic Memory Networks for Visual and Textual Question Answering paper_content: Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision. --- paper_title: A survey on question answering technology from an information retrieval perspective paper_content: This article provides a comprehensive and comparative overview of question answering technology. It presents the question answering task from an information retrieval perspective and emphasises the importance of retrieval models, i.e., representations of queries and information documents, and retrieval functions which are used for estimating the relevance between a query and an answer candidate. The survey suggests a general question answering architecture that steadily increases the complexity of the representation level of questions and information objects. On the one hand, natural language queries are reduced to keyword-based searches, on the other hand, knowledge bases are queried with structured or logical queries obtained from the natural language questions, and answers are obtained through reasoning. We discuss different levels of processing yielding bag-of-words-based and more complex representations integrating part-of-speech tags, classification of the expected answer type, semantic roles, discourse analysis, translation into a SQL-like language and logical representations. --- paper_title: Explicit Knowledge-based Reasoning for Visual Question Answering paper_content: We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer.
The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering. --- paper_title: Online Learning of Relaxed CCG Grammars for Parsing to Logical Form paper_content: We consider the problem of learning to parse sentences to lambda-calculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (CCG). A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar—for example allowing flexible word order, or insertion of lexical items—with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Results for the approach on ATIS data show 86% F-measure in recovering fully correct semantic analyses and 95.9% F-measure by a partial-match criterion, a more than 5% improvement over the 90.3% partial-match figure reported by He and Young (2006). ---
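Several of the approaches above (for example the explicit knowledge-based reasoning method and FVQA) answer a visual question by mapping detected visual concepts to a structured query over an external knowledge base such as DBpedia. The sketch below illustrates that step with a hand-written SPARQL query sent to DBpedia's public endpoint; the query shape, the helper name and the endpoint parameters are assumptions for illustration rather than any paper's actual pipeline.

    import requests

    def describe_concept(resource="Umbrella"):
        """Fetch the English rdfs:comment for a DBpedia resource (illustrative helper)."""
        query = f"""
        PREFIX dbr: <http://dbpedia.org/resource/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?comment WHERE {{
          dbr:{resource} rdfs:comment ?comment .
          FILTER (lang(?comment) = "en")
        }}"""
        response = requests.get(
            "https://dbpedia.org/sparql",
            params={"query": query, "format": "application/sparql-results+json"},
            timeout=30,
        )
        bindings = response.json()["results"]["bindings"]
        return [b["comment"]["value"] for b in bindings]

    # Example: a detected object label such as "Umbrella" could be looked up with
    # describe_concept("Umbrella") and the returned text merged with the question context.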
Title: Visual Question Answering: A Survey of Methods and Datasets Section 1: Introduction Description 1: Introduce the concept of Visual Question Answering (VQA), its motivation, and its significance in combining computer vision and natural language processing. Section 2: Methods for VQA Description 2: Provide a comprehensive review of various methods proposed for VQA, categorized based on their main contributions such as joint embedding approaches, attention mechanisms, compositional models, and knowledge base-enhanced approaches. Section 3: Datasets and Evaluation Description 3: Discuss the various datasets available for training and evaluating VQA systems, detailing their characteristics and the types of questions they include. Also, address the evaluation metrics used for VQA. Section 4: Structured Scene Annotations for VQA Description 4: Analyze the use of structured scene annotations such as scene graphs from the Visual Genome dataset and evaluate their utility in answering visual questions. Section 5: Discussion and Future Directions Description 5: Discuss the current state of VQA research, the challenges faced, and suggest future directions, including the incorporation of external knowledge bases and the potential use of advanced NLP tools. Section 6: Textual Question Answering Description 6: Draw parallels between textual question answering and visual question answering, highlighting techniques from the NLP community that could be beneficial for VQA. Section 7: Conclusion Description 7: Summarize the key points of the survey, highlighting the progress made in the field of VQA, and reiterate the promising research directions identified.
Augmented Reality in Tourism - Research and Applications Overview
4
--- paper_title: Experiments with Multi-modal Interfaces in a Context-Aware City Guide paper_content: In recent years there has been considerable research into the development of mobile context-aware applications. The canonical example of such an application is the context-aware tour-guide that offers city visitors information tailored to their preferences and environment. The nature of the user interface for these applications is critical to their success. Moreover, the user interface and the nature and modality of information presented to the user impacts on many aspects of the system’s overall requirements, such as screen size and network provision. Current prototypes have used a range of different interfaces developed in a largely ad-hoc fashion and there has been no systematic exploration of user preferences for information modality in mobile context-aware applications. In this paper we describe a series of experiments with multi-modal interfaces for context-aware city guides. The experiments build on our earlier research into the GUIDE system and include a series of field trials involving members of the general public. We report on the results of these experiments and extract design guidelines for the developers of future mobile context-aware applications. --- paper_title: [Computer-assisted intraoperative visualization of dental implants. Augmented reality in medicine]. paper_content: In this paper, a recently developed computer-based dental implant positioning system with an image-to-tissue interface is presented. On a computer monitor or in a head-up display, planned implant positions and the implant drill are graphically superimposed on the patient's anatomy. Electromagnetic 3D sensors track all skull and jaw movements; their signal feedback to the workstation induces permanent real-time updating of the virtual graphics' position. An experimental study and a clinical case demonstrates the concept of the augmented reality environment--the physician can see the operating field and superimposed virtual structures, such as dental implants and surgical instruments, without losing visual control of the operating field. Therefore, the operation system allows visualization of CT planned implant position and the implementation of important anatomical structures. The presented method for the first time links preoperatively acquired radiologic data, planned implant location and intraoperative navigation assistance for orthotopic positioning of dental implants. --- paper_title: A Survey of Augmented Reality paper_content: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960's until widespread availability by the 2010's there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. We provide an overview of the common definitions of AR, and show how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.
--- paper_title: A head-mounted operating binocular for augmented reality visualization in medicine - design and initial evaluation paper_content: Computer-aided surgery (CAS), the intraoperative application of biomedical visualization techniques, appears to be one of the most promising fields of application for augmented reality (AR), the display of additional computer-generated graphics over a real-world scene. Typically a device such as a head-mounted display (HMD) is used for AR. However, considerable technical problems connected with AR have limited the intraoperative application of HMDs up to now. One of the difficulties in using HMDs is the requirement for a common optical focal plane for both the real-world scene and the computer-generated image, and acceptance of the HMD by the user in a surgical environment. In order to increase the clinical acceptance of AR, we have adapted the Varioscope (Life Optics, Vienna), a miniature, cost-effective head-mounted operating binocular, for AR. In this paper, we present the basic design of the modified HMD, and the method and results of an extensive laboratory study for photogrammetric calibration of the Varioscope's computer displays to a real-world scene. In a series of 16 calibrations with varying zoom factors and object distances, mean calibration error was found to be 1.24 ± 0.38 pixels or 0.12 ± 0.05 mm for a 640 × 480 display. Maximum error accounted for 3.33 ± 1.04 pixels or 0.33 ± 0.12 mm. The location of a position measurement probe of an optical tracking system was transformed to the display with an error of less than 1 mm in the real world in 56% of all cases. For the remaining cases, error was below 2 mm. We conclude that the accuracy achieved in our experiments is sufficient for a wide range of CAS applications. --- paper_title: Augmented reality: an application of heads-up display technology to manual manufacturing processes paper_content: The authors describe the design and prototyping steps they have taken toward the implementation of a heads-up, see-through, head-mounted display (HUDset). Combined with head position sensing and a real world registration system, this technology allows a computer-produced diagram to be superimposed and stabilized on a specific position on a real-world object. Successful development of the HUDset technology will enable cost reductions and efficiency improvements in many of the human-involved operations in aircraft manufacturing, by eliminating templates, formboard diagrams, and other masking devices.
--- paper_title: Telegeoinformatics: Location-based Computing and Services paper_content: THEORIES AND TECHNOLOGIES Telegeoinformatics: Current Trends and Future Direction Introduction Architecture Internet-Based GIS Spatial Databases Intelligent Query Analyzer (IQA) Predictive Computing Adaptation Final Remarks References Remote Sensing Introductory Concepts Remote Sensing Systems Imaging Characteristics of Remote Sensing Systems Active Microwave Remote Sensing Extraction of Thematic Information from Remotely Sensed Imagery Extraction of Metric Information from Remotely Sensed Imagery Remote Sensing in Telegeoinformatics References Positioning and Tracking Approaches and Technologies Introduction Global Positioning System Positioning Methods Based on Cellular Networks Other Positioning and Tracking Techniques: An Overview Hybrid Systems Summary References Wireless Communications Introduction Overview of Wireless Systems Radio Propagation and Physical Layer Issues Medium Access in Wireless Networks Network Planning, Design and Deployment Wireless Network Operations Conclusions and the Future References INTEGRATED DATA AND TECHNOLOGIES Chapter Five: Location-Based Computing Introduction LBC Infrastructure Location-Based Interoperability Location-Based Data Management Adaptive Location-Based Computing Location-Based Routing as Adaptive LBC Concluding Remarks References Location-Based Services Introduction Types of Location-Based Services What is Unique About Location-Based Services? Enabling Technologies Market for Location-Based Services Importance of Architecture and Standards Example Location-Based Services: J-Phone J-Navi (Japan) Conclusions References Wearable Tele-Informatic Systems for Personal Imaging Introduction Humanistic Intelligence as a Basis for Intelligent Image Processing Humanistic Intelligence 'WEARCOMP' as a Means of Realizing Humanistic Intelligence Where on the Body Should a Visual Tele-Informatic Device be Placed?
Telepointer: Wearable Hands-Free Completely Self Contained Visual Augmented Reality Without Headwear and Without any Infrastructural Reliance Portable Personal Pulse Doppler Radar Vision System When Both the Camera and Display are Headworn: Personal Imaging and Mediated Reality Personal Imaging for Location-Based Services Reality Window Manager (RWM) Personal Telegeoinformatics: Blocking Spam with a Photonic Filter Conclusion References Mobile Augmented Reality Introduction MARS: Promises, Applications, and Challenges Components and Requirements MARS UI Concepts Conclusions Acknowledgements References APPLICATIONS Emergency Response Systems Overview of Emergency Response Systems State-of-the-Art ERSs Examples of Developing ERSs for Earthquakes and Other Disasters Future Aspects of Emergency Response Systems Concluding Remarks References Location-Based Computing for Infrastructure Field Tasks Introduction LBC-Infra Concept Technological Components of LBC-Infra General Requirements of LBC-Infra Interaction Patterns and Framework of LBC-Infra Prototype System and Case Study Conclusions References The Role of Telegeoinformatics in ITS Introduction to Intelligent Transportation Systems Telegeoinformatics Within ITS The Role of Positioning Systems In ITS Geospatial Data for ITS Communication Systems in ITS ITS-Telegeoinformatics Applications Non-Technical Issues Impacting on ITS Concluding Remarks The Impact and Penetration of Location-Based Services The Definition of Technologies LBSs: Definitions, Software, and Usage The Market for LBSs: A Model of the Development of LBSs Penetration of Mobile Devices: Predictions of Future Markets Impacts of LBSs on Geographical Locations Conclusions References --- paper_title: Robot programming using augmented reality: An interactive method for planning collision-free paths paper_content: Current robot programming approaches lack the intuitiveness required for quick and simple applications. As new robotic applications are being identified, there is a greater need to be able to programme robots safely and quickly. This paper discusses the use of an augmented reality (AR) environment for facilitating intuitive robot programming, and presents a novel methodology for planning collision-free paths for an n-d.o.f. (degree-of-freedom) manipulator in a 3D AR environment. The methodology is interactive because the human is involved in defining the free space or collision-free volume (CFV), and selecting the start and goal configurations. The methodology uses a heuristic beam search algorithm to generate the paths. A number of possible scenarios are discussed. --- paper_title: Using Augmented Reality to Visualise Architecture Designs In An Outdoor Environment paper_content: This paper presents the use of a wearable computer system to visualise outdoor architectural features using augmented reality. The paper examines the question How does one visualise a design for a building, modification to a building, or extension to an existing building relative to its physical surroundings? The solution presented to this problem is to use a mobile augmented reality platform to visualise the design in spatial context of its final physical surroundings. The paper describes the mobile augmented reality platform TINMITH2 used in the investigation. The operation of the system is described through a detailed example of the system in operation. The system was used to visualise a simple extension to a building on one of the University of South Australia campuses.
--- paper_title: Augmented reality systems for medical applications paper_content: Augmented reality (AR) is a technology in which a computer-generated image is superimposed onto the user's vision of the real world, giving the user additional information generated from the computer model. This technology is different from virtual reality, in which the user is immersed in a virtual world generated by the computer. Rather, the AR system brings the computer into the "world" of the user by augmenting the real environment with virtual objects. Using an AR system, the user's view of the real world is enhanced. This enhancement may be in the form of labels, 3D rendered models, or shaded modifications. In this article, the authors review some of the research involving AR systems, basic system configurations, image-registration approaches, and technical problems involved with AR technology. They also touch upon the requirements for an interventive AR system, which can help guide surgeons in executing a surgical plan. --- paper_title: Haptic and audio displays for augmented reality tourism applications paper_content: Augmented Reality (AR) technology has potential for supporting applications such as tourism. However, non-visual interaction modalities are undervalued and underused in AR tourism applications. Visual displays are ineffective or inappropriate in some situations such as in strong sunlight or when walking or driving. Meanwhile, non-visual modalities are becoming increasingly important in mobile user experiences. In this paper, two non-visual interaction modalities, haptic display and audio display, and their combination are evaluated in representing tourism information to users with a mobile phone. An experimental evaluation was conducted with different tourism information presented by haptic display, audio display and both, with 3 different rhythms and 3 levels of amplitude. The results show a main effect of interaction modality, with identification rate highest for information represented in the combined Haptic-Audio display at 86.7%, while no significant effect was found for rhythm or amplitude alone. Qualitative data from the participants indicated that, across all interaction modalities, different levels of amplitude were more difficult to distinguish than different rhythms or different combinations of rhythm and amplitude. --- paper_title: Wearable tactile display of directions for pedestrian navigation: Comparative lab and field evaluations paper_content: We aim to contribute to the development of tactile-based pedestrian navigation systems that help users to navigate urban environments with minimal attention to the user-device interface. This paper describes the design and evaluation of a prototype and reports findings from (i) a lab-based study that directly compared features of two widely researched forms of tactile display: a waist belt and a back array; and (ii) a field evaluation which compared our prototype tactile-based navigation system (TactNav) with a visual mobile maps application (Nokia Maps™). Lab results indicated that the waist belt afforded significantly better performance than the back array across a wide range of metrics. Field results indicated that users' performance with the tactile-based system was equivalent to that with the visual-based system in terms of accuracy while route completion time was significantly faster with the tactile-based directional display. 
--- paper_title: Mixing Virtual and Real scenes in the site of ancient Pompeii paper_content: This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real-time revival of their fauna and flora, featuring groups of virtual animated characters with artificial-life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment. The main goal is to push the limits of current AR and virtual storytelling technologies and to explore the processes of mixed narrative design of fictional spaces (e.g. fresco paintings) where visitors can experience a high degree of realistic immersion. Based on a captured/real-time video sequence of the real scene in a video-see-through HMD set-up, these scenes are enhanced by the seamless accurate real-time registration and 3D rendering of realistic complete simulations of virtual flora and fauna (virtual humans and plants) in a real-time storytelling scenario-based environment. Thus the visitor of the ancient site is presented with an immersive and innovative multi-sensory interactive trip to the past --- paper_title: Overview of smartphone augmented reality applications for tourism. paper_content: Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance tourists’ experiences and make them exceptional. However, effective and usable design is still in its infancy. In this publication we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists. --- paper_title: Telegeoinformatics: Location-based Computing and Services paper_content: THEORIES AND TECHNOLOGIES Telegeoinformatics: Current Trends and Future Direction Introduction Architecture Internet-Based GIS Spatial Databases Intelligent Query Analyzer (IQA) Predictive Computing Adaptation Final Remarks References Remote Sensing Introductory Concepts Remote Sensing Systems Imaging Characteristics of Remote Sensing Systems Active Microwave Remote Sensing Extraction of Thematic Information from Remotely Sensed Imagery Extraction of Metric Information from Remotely Sensed Imagery Remote Sensing in Telegeoinformatics References Positioning and Tracking Approaches and Technologies Introduction Global Positioning System Positioning Methods Based on Cellular Networks Other Positioning and Tracking Techniques: An Overview Hybrid Systems Summary References Wireless Communications Introduction Overview of Wireless Systems Radio Propagation and Physical Layer Issues Medium Access in Wireless Networks Network Planning, Design and Deployment Wireless Network Operations Conclusions and the Future References INTEGRATED DATA AND TECHNOLOGIES Chapter Five: Location-Based Computing Introduction LBC Infrastructure Location-Based Interoperability Location-Based Data Management Adaptive Location-Based Computing Location-Based Routing as Adaptive LBC Concluding Remarks References Location-Based Services Introduction Types of Location-Based Services What is Unique About Location-Based Services? 
Enabling Technologies Market for Location-Based Services Importance of Architecture and Standards Example Location-Based Services: J-Phone J-Navi (Japan) Conclusions References Wearable Tele-Informatic Systems for Personal Imaging Introduction Humanistic Intelligence as a Basis for Intelligent Image Processing Humanistic Intelligence 'WEARCOMP' as a Means of Realizing Humanistic Intelligence Where on the Body Should a Visual Tele-Informatic Device be Placed? Telepointer: Wearable Hands-Free Completely Self Contained Visual Augmented Reality Without Headwear and Without any Infrastructural Reliance Portable Personal Pulse Doppler Radar Vision System When Both the Camera and Display are Headworn: Personal Imaging and Mediated Reality Personal Imaging for Location-Based Services Reality Window Manager (RWM) Personal Telegeoinformatics: Blocking Spam with a Photonic Filter Conclusion References Mobile Augmented Reality Introduction MARS: Promises, Applications, and Challenges Components and Requirements MARS UI Concepts Conclusions Acknowledgements References APPLICATIONS Emergency Response Systems Overview of Emergency Response Systems State-of-the-Art ERSs Examples of Developing ERSs for Earthquakes and Other Disasters Future Aspects of Emergency Response Systems Concluding Remarks References Location-Based Computing for Infrastructure Field Tasks Introduction LBC-Infra Concept Technological Components of LBC-Infra General Requirements of LBC-Infra Interaction Patterns and Framework of LBC-Infra Prototype System and Case Study Conclusions References The Role of Telegeoinformatics in ITS Introduction to Intelligent Transportation Systems Telegeoinformatics Within ITS The Role of Positioning Systems In ITS Geospatial Data for ITS Communication Systems in ITS ITS-Telegeoinformatics Applications Non-Technical Issues Impacting on ITS Concluding Remarks The Impact and Penetration of Location-Based Services The Definition of Technologies LBSs: Definitions, Software, and Usage The Market for LBSs: A Model of the Development of LBSs Penetration of Mobile Devices: Predictions of Future Markets Impacts of LBSs on Geographical Locations Conclusions References --- paper_title: SPETA: Social pervasive e-Tourism advisor paper_content: Tourism is one of the major sources of income for many countries. Therefore, providing efficient, real-time service for tourists is a crucial competitive asset which needs to be enhanced using major technological advances. The current research has the objective of integrating technological innovation into an information system, in order to build a better user experience for the tourist. The principal strength of the approach is the fusion of context-aware pervasive systems, GIS systems, social networks and semantics. This paper presents the SPETA system, which uses knowledge of the user's current location, preferences, as well as a history of past locations, in order to provide the type of recommender services that tourists expect from a real tour guide. --- paper_title: Mobile Augmented Reality for Tourists - MARFT paper_content: The aim of the project MARFT is to demonstrate the next generation of augmented reality targeting current mass market mobile phones. MARFT sets out to launch an interactive service for tourists visiting mountainous rural regions. During local trips they will be able to explore the surrounding landscape by pointing the lens of the smart-phone camera towards the area of interest.
As soon as the view-finder shows the area of interest, the tourist will be able to choose between two products: (i) an augmented photo superimposed with tourist information like hiking tours or lookout points or (ii) a rendered 3D virtual reality view showing the same view as the real photo also augmented with tourist objects. The outstanding step beyond current augmented reality applications is that MARFT is able to augment the reality with cartographic accuracy. In addition to the benefit of presenting reliable information, MARFT is able to consider the visibility of objects and further to work completely offline in order to avoid roaming costs especially for tourists visiting from abroad. --- paper_title: User expectations for mobile mixed reality services: an initial user study paper_content: Mixed reality, i.e. the integration and merging of physical and digital worlds, has become an integral part of the ubicomp research agenda. Often, however, in development of first technology concepts and prototypes, the expectations of potential users are not considered, and the development easily becomes technology-driven. To understand the expectations and needs of potential users of future mobile mixed reality (MMR) services, we conducted altogether five focus group sessions with varying user groups. We investigated the early impressions and expectations of MMR as a technology by evaluating various usage scenarios. Based on this initial study, we found relevance issues (what information to receive, how and when) and the reliability of MMR information to be the most salient elements that were anticipated to affect the overall user experience. In mobile and busy situations the MMR information content has to be something that is very important or useful for the user, especially if receiving the information or interacting with it draws the user's attention away from the tasks executed in the real world. --- paper_title: Evaluation of Mobile Augmented Reality Applications for Tourism Destinations paper_content: Every city contains interesting places and stories to be discovered. Mobile Augmented Reality provides the means to enrich tourists through precise and tailored information about the surroundings of the area they are visiting. MobiAR is an AR platform based on Android, which assists users who need tourist information about a city. When users observe reality through the MobiAR application via their mobile devices, they can experience events that took place at their location through multimedia content, and can access useful information to plan their routes in the city. This paper describes the MobiAR platform and presents the evaluation process that has been applied to the MobiAR application, in order to gather the opinion of real users. --- paper_title: Dublin AR: Implementing Augmented Reality in Tourism paper_content: The use of modern technology is becoming a necessity of many destinations to stay competitive and attractive to the modern tourist. A new form of technology that is being used increasingly in the public space is virtual- and Augmented Reality (AR). The aim of this paper is to investigate tourists’ requirements for the development of a mobile AR tourism application in urban heritage. In-depth interviews with 26 international and domestic tourists visiting Dublin city were conducted and thematic analysis was used to analyze the findings of the interviews. 
The findings suggest that although Augmented Reality has passed the hype stage, the technology is just on the verge of being implemented in a meaningful way in the tourism industry. Furthermore, they reveal that it needs to be designed to serve a specific purpose for the user, while multi-language functionality, ease of use and the capability to personalize the application are among the main requirements that need to be considered in order to attract tourists and encourage regular use. This paper discusses several significant implications for AR Tourism research and practice. Limitations of the study which should be addressed in future research are discussed and recommendations for further research are provided. --- paper_title: A Survey of Augmented Reality Technologies, Applications and Limitations paper_content: We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our perception and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome. --- paper_title: Revisiting the visit: understanding how technology can shape the museum visit paper_content: This paper reports findings from a study of how a guidebook was used by pairs of visitors touring a historic house. We describe how the guidebook was incorporated into their visit in four ways: shared listening, independent use, following one another, and checking in on each other. We discuss how individual and groupware features were adopted in support of different visiting experiences, and illustrate how that adoption was influenced by social relationships, the nature of the current visit, and any museum visiting strategies that the couples had. Finally, we describe how the guidebook facilitated awareness between couples, and how awareness of non-guidebook users (strangers) influenced use. --- paper_title: Design and implementation of a mobile device for outdoor augmented reality in the archeoguide project paper_content: This paper presents the design and implementation issues associated with the development of a mobile device for the ARCHEOGUIDE project. We describe general and application specific design goals as well as the technical requirements the implementation is based upon. Since speed is crucial for an interactive application we provide a survey of mobile and wearable computing equipment especially considering performance aspects. A detailed overview of available hardware components follows. We describe the decisions made during prototype development and present the final result --- a mobile unit for outdoor Augmented Reality tours in cultural-heritage sites. Finally we discuss the experiences we made using the system during a first trials phase at ancient Olympia in Greece. --- paper_title: Online user survey on current mobile augmented reality applications paper_content: Augmented reality (AR) as an emerging technology in the mobile computing domain is becoming mature enough to engender publicly available applications for end users.
Various commercial applications have recently been emerging in the mobile consumer domain at an increasing pace — Layar, Junaio, Google Goggles, and Wikitude are perhaps the most prominent ones. However, the research community lacks an understanding of how well such timely applications have been accepted, what kind of user experiences they have evoked, and what the users perceive as the weaknesses of the various applications overall. During the spring of 2011 we conducted an online survey to study the overall acceptance and user experience of the mobile AR-like consumer applications currently existing on the market. This paper reports the first analyses of the qualitative and quantitative survey data of 90 respondents. We highlight an extensive set of user-oriented issues to be considered in developing the applications further, as well as in directing future user research in AR. The results indicate that the experiences have been inconsistent: generally positive evaluations are overshadowed by mentions of applications' pragmatic uselessness in everyday life and technical unreliability, as well as excessive or limited and irrelevant content. --- paper_title: Archeoguide: first results of an augmented reality, mobile computing system in cultural heritage sites paper_content: This paper presents the ARCHEOGUIDE project (Augmented Reality-based Cultural Heritage On-site GUIDE). ARCHEOGUIDE is an IST project, funded by the EU, aiming at providing a personalized electronic guide and tour assistant to cultural site visitors. The system provides on-site help and Augmented Reality reconstructions of ancient ruins, based on user's position and orientation in the cultural site, and realtime image rendering. It incorporates a multimedia database of cultural material for on-line access to cultural data, virtual visits, and restoration information. It uses multi-modal user interfaces and personalizes the flow of information to its user's profile in order to cater for both professional and recreational users, and for applications ranging from archaeological research, to education, multimedia publishing, and cultural tourism. This paper presents the ARCHEOGUIDE system and the experiences gained from the evaluation of an initial prototype by representative user groups at the archeological site of Olympia, Greece. --- paper_title: Enhancing the Tourism Experience through Mobile Augmented Reality: Challenges and Prospects paper_content: This paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology's evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR application development, emphasizing the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR's substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR's full potential within the field. --- paper_title: Enhancing Cultural Tourism experiences with Augmented Reality Technologies paper_content: This paper describes the development of an interactive visualization system based on Augmented Reality Technologies and the integration into a tourist application. 
The basic idea is the combination of the commonly known concept of tourist binoculars with Augmented Reality. By means of Augmented Reality, the real scene is enhanced by multimedia personalized interactive information to increase the tourist experience of the user, who can retrieve this information by a user-friendly interface. --- paper_title: Augmented reality-based on-site tour guide: a study in Gyeongbokgung paper_content: This paper presents an on-site tour guide using augmented reality in which past life is virtually reproduced and visualized at cultural heritage sites. In the tour guide, animated 3-D virtual characters are superimposed on the cultural heritage sites by visually tracking simple geometric primitives of the sites such as rectangles and estimating camera poses (positions and orientations) that can be considered as a tourist's viewpoints. Contextual information, such as a tourist's locations and profiles, is used to support personalized tour guides. In particular, the tourist's locations are obtained by visually recognizing wooden tablets of the cultural heritage sites. The prototype of the augmented reality tour guide was tested at Gangnyeongjeon and Gyotaejeon in Gyeongbokgung, which is a symbolic cultural heritage site in Korea and its user evaluation is discussed. --- paper_title: PocketNavigator: vibro-tactile waypoint navigation for everyday mobile devices paper_content: Pedestrian navigation systems are becoming popular but the currently dominant audio-visual interaction can have drawbacks. Tactile feedback is studied as a solution, but currently only available as research prototypes. With the PocketNavigator we propose a demonstrator that adds tactile feedback to a simple but robust map-based navigation system that runs on any Android Smartphone. Users can leave the device in the pocket, while being guided non-visually through vibration cues. Like a compass we "point at" the next waypoint by encoding its direction and distance in vibration patterns. As an advantage over previous approaches it allows giving continuous feedback instead of isolated turning instructions and it can be realized without custom-built tactile displays. Preliminary results from a field study show that pedestrians can effectively use this Tactile Compass to reach a destination without turn-by-turn instructions. Integrated into the PocketNavigator we can now deploy it at the Android Market to evaluate the Tactile Compass with a wide range of users. --- paper_title: Survey of User-Based Experimentation in Augmented Reality paper_content: Although augmented reality (AR) was first conceptualized over 35 years ago (Sutherland, 1968), until recently the field was primarily concerned with the engineering challenges associated with developing AR hardware and software. Because AR is such a compelling medium with many potential uses, there is a need to further develop AR systems from a technology-centric medium to a user-centric medium. This transformation will not be realized without systematic user-based experimentation. This paper surveys and categorizes the user-based studies that have been conducted using AR to date.
Our survey finds that the work is progressing along three complementary lines of effort: (1) those that study low-level tasks, with the goal of understanding how human perception and cognition operate in AR contexts, (2) those that examine user task performance within specific AR applications or application domains, in order to gain an understanding of how AR technology could impact underlying tasks, and (3) those that examine user interaction and communication between collaborating users. --- paper_title: Virtual-reality heritage presentation at Ename paper_content: Virtual reality (VR) and multimedia are central components of the heritage presentation programme at Ename, Belgium. These techniques are designed to help the visitor understand and experience the past as revealed through archaeological and historical research. The programme uses different VR approaches to bring to life archaeological remains, standing monuments and elements of the historical landscape for visitors. We named the overall project "Ename 974" to commemorate the foundation date of the first mediaeval settlement. Its major aim is to communicate new insights about archaeology, history and conservation to the general public, paying great attention to scholarly accuracy and by means of multimedia technologies. Among the most important of these technologies are on-site virtual reconstructions, museum multimedia and educational projects. Since 1998, the Ename Centre for Public Archaeology and Heritage Presentation has served as an international extension of the Ename 974 project. Its goal is to develop new technologies and new standards for heritage presentation. It also coordinates heritage presentation projects and educational programmes for partner sites around the world. --- paper_title: Overview of smartphone augmented reality applications for tourism. paper_content: Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance tourists’ experiences and make them exceptional. However, effective and usable design is still in its infancy. In this publication we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists. ---
Title: Augmented Reality in Tourism - Research and Applications Overview Section 1: INTRODUCTION Description 1: This section discusses the general concept of augmented reality, distinguishing it from virtual reality, and provides an overview of related work in various fields, including tourism. Section 2: AUGMENTED REALITY IN TOURISM Description 2: This section explores the potential of augmented reality to enhance the tourist experience, providing a comprehensive literature review of existing research, technological advancements, and applications in the tourism context. Section 3: OVERVIEW OF RELEVANT FACTORS Description 3: This section identifies and elaborates on key factors impacting the success of augmented reality applications in tourism, categorizing them into general requirements, functionalities, issues, overlay types, and technologies. Section 4: CONCLUSION Description 4: This section summarizes the key findings from the research and application overviews, underlines the categories of identified factors, and highlights areas needing further development and research for augmented reality technology in tourism.
A REVIEW OF RECORDING TECHNOLOGIES FOR DIGITAL FABRICATION IN HERITAGE CONSERVATION
9
--- paper_title: A Dictionary of Construction, Surveying, and Civil Engineering paper_content: This A to Z is the most up-to-date dictionary of building, surveying, and civil engineering terms and definitions available. Written by an experienced team of experts in the respective fields, it covers in over 7,500 entries the key areas of construction technology and practice, civil and construction engineering, construction management techniques and processes, and legal aspects such as contracts and procurement. Illustrations complement entries where necessary and other extra features include entry-level web links, which are listed and regularly updated on a companion website. Its wide coverage makes it the ideal reference for students of construction and related areas, as well as for professionals in the field. --- paper_title: The Grove Encyclopedia of Materials and Techniques in Art paper_content: The Grove Encyclopedia of Materials and Techniques deals with all aspects of materials, techniques, conservation, and restoration in both traditional and nontraditional media, including ceramics, sculpture, metalwork, painting, works on paper, textiles, video, digital art, and more. Drawing upon the expansive scholarship in The Dictionary of Art and adding new entries, this work is a comprehensive reference resource for artists, art dealers, collectors, curators, conservators, students, researchers, and scholars. Similar in design to The Grove Encyclopedia of Decorative Arts, this one-volume reference work contains articles of various lengths in alphabetical order. The shorter, more factual articles are combined with larger, multi-section articles tracing the development of materials and techniques in various geographical locations. The Encyclopedia provides unparalleled scope and depth, and it offers fully updated articles and bibliography as well as over 150 illustrations and color plates. The Grove Encyclopedia of Materials and Techniques offers scholarly information on materials and techniques in art for anyone who studies, creates, collects, or deals in works of art. The entries are written to be accessible to a wide range of readers, and the work is designed as a reliable and convenient resource covering this essential area in the visual arts. --- paper_title: Close Range Photogrammetry: Principles, Techniques and Applications paper_content: This book provides a thorough presentation of the methods, mathematics, systems and applications which comprise the subject of close range photogrammetry, which uses accurate imaging techniques to analyse the three-dimensional shape of a wide range of manufactured and natural objects. Close range photogrammetry, for the most part entirely digital, has become an accepted, powerful and readily available technique for engineers and scientists who wish to utilise images to make accurate 3-D measurements of complex objects. After an introduction, the book provides fundamental mathematics, including orientation, digital imaging processing and 3-D reconstruction methods, as well as presenting a discussion of imaging technology including targeting and illumination, hardware and software systems. Finally it gives a short overview of photogrammetric solutions for typical applications in engineering, manufacturing, medical science, architecture, archaeology and other fields. --- paper_title: REPLICAS IN CULTURAL HERITAGE: 3D PRINTING AND THE MUSEUM EXPERIENCE paper_content: Abstract. 
3D printing has seen a recent massive diffusion for several applications, not least the field of Cultural Heritage. Being used for different purposes, such as study, analysis, conservation or access in museum exhibitions, 3D printed replicas need to undergo a process of validation also in terms of metrical precision and accuracy. The Laboratory of Photogrammetry of Iuav University of Venice has started several collaborations with Italian museum institutions firstly for the digital acquisition and then for the physical reproduction of objects of historical and artistic interest. The aim of the research is to analyse the metric characteristics of the printed model in relation to the original data, and to optimize the process that from the survey leads to the physical representation of an object. In fact, this could be acquired through different methodologies that have different precisions (multi-image photogrammetry, TOF laser scanner, triangulation based laser scanner), and it always involves a long processing phase. It should not be forgotten that the digital data have to undergo a series of simplifications, which, on one hand, eliminate the noise introduced by the acquisition process, but on the other one, they can lead to discrepancies between the physical copy and the original geometry. In this paper we will show the results obtained on a small archaeological find that was acquired and reproduced for a museum exhibition intended for blind and partially sighted people. ---
Title: A REVIEW OF RECORDING TECHNOLOGIES FOR DIGITAL FABRICATION IN HERITAGE CONSERVATION Section 1: ANALOG TECHNIQUES Description 1: Describe traditional survey instruments, hybrid techniques, and tools like the pointing machine and pantograph used before the advent of digital technologies. Section 2: Photogrammetry Description 2: Explain Structure from Motion (SfM) photogrammetry, its various applications, and case studies illustrating its use in heritage conservation. Section 3: Hand Laser Scanning Description 3: Discuss the method of hand laser scanning, its applications, and examples of its use in conservation projects. Section 4: Lucida Scanner Description 4: Detail the development, capabilities, and applications of the Lucida Scanner, especially in recording micro surface details. Section 5: Terrestrial Laser Scanning Description 5: Outline the use of terrestrial laser scanning, different types of scanners, and examples of their use in documenting and restoring heritage structures. Section 6: DIGITAL RECORDING FOR FABRICATION Description 6: Summarize the distinctions between subtractive and additive machining, and focus on how digital recording techniques are utilized in fabrication for heritage conservation. Section 7: Lucida Scanning Description 7: Provide specific examples of how the Lucida Scanner's results contribute to the recreation of micro surface details and the production of facsimiles. Section 8: DISCUSSION Description 8: Review the current and potential applications of digital fabrication techniques for cultural heritage, including different significant applications and the connections between various techniques. Section 9: CONCLUSIONS Description 9: Conclude the findings by summarizing how recording and fabrication technologies assist in the conservation of cultural heritage.
FORMATS FOR DIGITAL PRESERVATION: A REVIEW OF ALTERNATIVES AND ISSUES Submitted By CENDI Digital Preservation Task Group Revised
31
---
Title: FORMATS FOR DIGITAL PRESERVATION: A REVIEW OF ALTERNATIVES AND ISSUES Section 1: EXECUTIVE SUMMARY Description 1: Summarizes the review of alternative formats and the issues related to digital preservation. Section 2: BACKGROUND Description 2: Provides context on CENDI Members' interest in digital preservation formats and the factors leading to this review. Section 3: FORMAT ASSESSMENT FACTORS Description 3: Outlines the factors used to assess the appropriateness of different digital formats for preservation. Section 4: EXTERNAL DEPENDENCIES Description 4: Discusses the importance of avoiding external dependencies for digital preservation formats. Section 5: IMPACT OF PATENTS Description 5: Examines how patents can affect the sustainability of various digital preservation formats. Section 6: TECHNICAL PROTECTION MECHANISMS Description 6: Reviews technical protection mechanisms like encryption and their implications for preservation. Section 7: INTEGRITY OF STRUCTURE Description 7: Explains the importance of representing the logical structure of documents in digital formats. Section 8: INTEGRITY OF LAYOUT Description 8: Discusses the significance of preserving the layout of documents in digital formats. Section 9: INTEGRITY OF RENDERING OF EQUATIONS Description 9: Looks at how different formats handle the rendering of equations. Section 10: BEYOND NORMAL RENDERING Description 10: Explores the support for embedding media objects and other advanced features in digital formats. Section 11: CONCLUSION Description 11: Recaps the key points and considerations for selecting the most appropriate digital preservation formats. Section 12: Introduction Description 12: Describes the origins of the assessment request and the initial concerns regarding digital preservation formats. Section 13: What is a Preservation Format? Description 13: Defines preservation formats and the factors that make them suitable for long-term preservation. Section 14: The Major Formats Description 14: Provides an overview of major digital formats such as TIFF, PDF, PDF/A-1, and XML. Section 15: TIFF Description 15: Details the history, capabilities, and limitations of the TIFF format. Section 16: PDF (Portable Document Format) Description 16: Discusses the origins, features, and preservation suitability of PDF. Section 17: PDF/A-1 (Portable Document Format/Archival) Description 17: Explains the specific standards and benefits of the PDF/A-1 format. Section 18: XML (Extensible Markup Language) Description 18: Describes the structure, benefits, and use cases of XML for digital preservation. Section 19: History of the Discussion Description 19: Reviews the historical context and evolution of digital preservation format discussions. Section 20: Status in 1999 Description 20: Summarizes the state of digital preservation formats and practices in 1999. Section 21: Status in 2004 Description 21: Reviews advancements and continuing concerns in digital preservation as of 2004. Section 22: The Advent of PDF/A-1 Description 22: Discusses the development of PDF/A-1 and its impact on digital preservation strategies. Section 23: The Current Situation Description 23: Details the current practices and ongoing efforts in the field of digital preservation. Section 24: Format Assessment Description 24: Provides an overview of the format assessment framework and its application. Section 25: Technical Factors Description 25: Outlines the technical considerations in evaluating digital preservation formats. 
Section 26: Quality and Functionality Description 26: Discusses the importance of content quality and functionality in preservation formats. Section 27: Preserving Content for Re-use Description 27: Explains the need and methods to preserve content for future re-use. Section 28: Preserving Layout and Presentation Description 28: Highlights the importance of maintaining the original layout and presentation in preservation. Section 29: Striking a Balance Description 29: Emphasizes the need to balance technical, quality, and functionality factors in choosing preservation formats. Section 30: Preservation Formats as Part of the Archival Process Description 30: Notes the importance of implementing preservation formats within broader policies and procedures. Section 31: Conclusion Description 31: Reiterates the factors to consider when determining the most appropriate format for preservation and making balanced decisions.
A Survey of Temporal Knowledge Discovery Paradigms and Methods
8
--- paper_title: Advanced Database Systems paper_content: Advanced Database System by Carlo Zaniolo, Stefano Ceri, Christos Faloutsos, Richard T. Snodgrass, V.S. Subrahmanian, and Roberto Zicari Preface 1 Introduction Part I Active Databases 2 Syntax and Semantics of Active Databases 2.1 Starburst 2.1.1 Syntax of the Starburst CREATE RULE Statement 2.1.2 Semantics of Active Rules in Starburst 2.1.3 Other Active Rule Commands 2.1.4 Examples of Active Rule Executions 2.2 Oracle 2.2.1 Syntax of the Oracle CREATE TRIGGER Statement 2.2.2 Semantics of Oracles Triggers 2.2.3 Example of Trigger Executions 2.3 DB2 2.3.1 Syntax of the DB2 CREATE TRIGGER Statement 2.3.2 Semantics of DB2 Triggers 2.3.3. Examples of Trigger Executions 2.4 Chimera 2.4.1 Summary of Chimera 2.4.2 Syntax of the Chimera Define Trigger Statement 2.4.3 Semantics of Chimera Triggers 2.4.4 Examples of Trigger Executions 2.5 Taxonomy of Active Database Concepts 2.6 Bibliographic Notes 2.7 Exercises 3 Applications of Active Databases 3.1 Integrity Management 3.1.1 Rule Generation 3.1.2 Example 3.2 Derived Data Maintenance 3.2.1 Rule Generation 3.2.2 Example 3.3 Replication 3.4 Workflow Management 3.5 Business Rules 3.5.1 A Case Study: Energy Management System (EMS) 3.5.2 Database Schema for the EMS Case Study 3.5.3 Business Rules for the EMS Case Study 3.6 Bibliographic Notes 3.7 Exercises 4 Design Principles for Active Rules 4.1 Properties of Active Rule Execution 4.1.1 Termination 4.1.2 Confluence 4.1.3 Observable Determinism 4.2 Rule Modularization 4.2.1 Behavioral Stratification 4.2.2 Assertional Stratification 4.2.3 Event-Based Stratification 4.3 Rule Debugging and Monitoring 4.4 IDEA Methodology 4.4.1 Active Rule Design 4.4.2 Active Rule Prototyping 4.4.3 Active Rule Implementation 4.4.4 Design Tools Supporting the IDEA Methodology 4.5 Summary and Open Problems 4.6 Bibliographic Notes 4.7 Exercises Part II Temporal Databases 5 Overview of Temporal Databases 5.1 A Case Study 5.1.1 Temporal Projection 5.1.2 Temporal Join 5.1.3 Summary 5.2 The Time Domain 5.3 Time Data Types 5.4 Associating Facts with Time 5.4.1 Dimensionality 5.4.2 Underlying Data Model 5.4.3 Representative Data Models 5.5 Temporal Query Languages 5.6 Summary 5.7 Bibliographic Notes 5.8 Exercises 6 TSQL2 6.1 Time Ontology 6.2 Data Model 6.3 Language Constructs 6.3.1 Schema Definition 6.3.2 The SELECT Statement 6.3.3 Restructuring 6.3.4 Partitioning 6.3.5 The VALID Clause 6.3.6 The Modification Statements 6.3.7 Event Relations 6.3.8 Transaction-Time Support 6.3.9 Aggregates 6.3.10 Schema Evolution and Versioning 6.4 Other Constructs 6.5 Summary 6.6 Bibliographic Notes 6.7 Exercises 7 Implementation 7.1 System Architecture 7.2 Adding Temporal Support 7.2.1 DDL Compiler 7.2.2 Query Compiler 7.2.3 Run-Time Evaluator 7.3 Minimal support Needed for TSQL2 7.3.1 Data Dictionary and Data Files 7.3.2 DDL Compiler 7.3.3 Query Compiler 7.3.4 Run-Time Evaluator 7.3.5 Transaction and Data Manager 7.4 Summary and Open Problems 7.5 Bibliographic Notes 7.6 Exercises Part III Complex Queries and Reasoning 8 The Logic of Query Languages 8.1 Datalog 8.2 Relational Calculi 8.3 Relational Algebra 8.4 From Safe Datalog to Relational Algebra 8.4.1 Commercial Query Languages 8.5 Recursive Rules 8.6 Stratification 8.7 Expressive Power and Data Complexity 8.8 Syntax and Semantics of Datalog Languages 8.8.1 Syntax of First-Order Logic and Datalog 8.8.2 Semantics 8.8.3 Interpretations 8.9 The Models of a Program 8.10 Fixpoint-Based Semantics 8.10.1 Operational Semantics: Powers of Tp 
8.11 Bibliographic Notes 8.12 Exercises 9 Implementation of Rules and Recursion 9.1 Operational Semantics: Bottom-Up Execution 9.2 Stratified Programs and Iterated Fixpoint 9.3 Differential Fixpoint Computation 9.4 Top-Down Execution 9.4.1 Unification 9.4.2 SLD-Resolution 9.5 Rule-Rewriting Methods 9.5.1 Left-Linear and Right-Linear Recursion 9.5.2 Magic Sets Method 9.5.3 The Counting Method 9.5.4 Supplementary Magic Sets 9.6 Compilation and Optimization 9.6.1 Nonrecursive Programs 9.6.2 Recursive Predicates 9.6.3 Selecting a Method for Recursion 9.6.4 Optimization Strategies and Execution Plan 9.7 Recursive Queries in SQL 9.7.1 Implementation of Recursive SQL Queries 9.8 Bibliographic Notes 9.9 Exercises 10 Database Updates and Nonmonotonic Reasoning 10.1 Nonmonotonic Reasoning 10.2 Stratification and Well-Founded Methods 10.2.1 Locally Stratified Programs 10.2.2 Well-Founded Models 10.3 Datalog (1s) and Temporal Reasoning 10.4 XY-Stratification 10.5 Updates and Active Rules 10.6 Nondeterministic Reasoning 10.7 Research Directions 10.8 Bibliographic Notes 10.9 Exercises Part IV Spatial, Text, and Multimedia Databases 11 Traditional Indexing Methods 11.1 Secondary Keys 11.1.1 Inverted Files 11.1.2 Grid File 11.1.3 k-D Trees 11.1.4 Conclusions 11.2 Spatial Access Methods (SAMs) 11.2.1 Space-Filling Curves 11.2.2 R-Trees 11.2.3 Transforming to Higher-D Points 11.2.4 Conclusions 11.3 Text Retrieval 11.3.1 Full Text Scanning 11.3.2 Inversion 11.3.3 Signature Files 11.3.4 Vector Space Model and Clustering 11.3.5 Conclusions 11.4 Summary and Future Research 11.5 Bibliographic Notes 11.6 Exercises 12 Multimedia Indexing 12.1 Basic Idea 12.2 GEMINI for Whole Match Queries 12.3 1-D Time Series 12.3.1 Distance Function 12.3.2 Feature Extraction and Lower-Bounding 12.3.3 Introduction to DFT 12.3.4 Energy-Concentrating Properties of DFT 12.3.5 Experiments 12.4 2-D Color Images 12.4.1 Image Features and Distance Functions 12.4.2 Lower-Bounding 12.4.3 Experiments and Conclusions 12.5 Subpattern Matching 12.5.1 Sketch of the Approach-ST-Index 12.5.2 Experiments 12.6 Summary and Future Research 12.7 Bibliographic Notes 12.8 Exercises Part V Uncertainty in Databases and Knowledge Bases 13 Models of Uncertainty 13.1 Introduction 13.1.1 Uncertainty in DBs: An Image Database Example 13.1.2 Uncertainty in DBs: A Temporal Database Example 13.1.3 Uncertainty in DBs: A Null-Value Example 13.2 Models of Uncertainty 13.2.1 Fuzzy Sets 13.2.2 Lattice-Based Approaches 13.2.3 Relationship to Fuzzy Logic 13.2.4 Probability Theory 13.3 Bibliographic Notes 13.4 Exercises 14 Uncertainty in Relational Databases 14.1 Lattice-Based Relational Databases 14.1.1 An Example 14.1.2 Querying Lattice-Based Databases 14.2 Probabilistic Relational Databases 14.2.1 Probabilistic Relations 14.2.2 Annotated Relations 14.2.3 Converting Probabilistic Tuples to Annotated Tuples 14.2.4 Manipulating Annotated Relations 14.2.5 Querying Probabilisitc Databases 14.3 Bibliographic Notes 14.4 A Final Note 14.5 Exercises 15 Including Uncertainty in Deductive Databases 15.1 Generalized Annotated Programs (GAPs) 15.1.1 Lattice-Based KBs: Model Theory 15.1.2 Lattice-Based KBs: Fixpoint Theory 15.1.3 Lattice-Based KBs: Query Processing 15.2 Probabilisic Knowledge Bases 15.2.1 Probabilistic KBs: Fixpoint Theory 15.2.2 Probabilistic KBs: Model Theory 15.2.3 Probabilistic KBs: Query Processing 15.3 Bibliographic Notes 15.4 Research Directions 15.5 Summary 15.6 Exercises Part VI Schema and Database Evolution in Object Database Systems 16 Object Databases 
and Change Management 16.1 Why Changes Are Needed 16.2 The Essence of Object Databases 16.2.1 Basics of Object Databases 16.2.2 Standards 16.2.3 Change Management in Object Database Systems 16.3 Bibliographic Notes 16.4 Exercises 17 How to Change the Schema 17.1 Changing the Schema Using Primitives 17.1.1 Taxonomy of Schema Modifications 17.1.2 Schema Evolution Primitives in O2 17.2 Schema Invariants 17.3 Semantics of Schema Modifications 17.4 Bibliographic Notes 17.5 Exercises 18 How to Change the Database 18.1 Immediate vs. Deferred Transformations 18.1.1 Immediate Database Trasformation 18.1.2 Deferred Database Transformation 18.2 Preserving Structural Consistency 18.2.1 Structural Consistency Preserving Primitives 18.2.2 Structural Consistency Modifying Primitives 18.3 User-Defined and Default Transformations 18.3.1 Default Database Transformations 18.3.2 User-Defined Database Transformations 18.3.3 User-Defined Object Migration Functions 18.4 Implementing Database Updates in O2 18.4.1 Problems with Deferred Database Transformations 18.4.2 Data Structures 18.4.3 the Deferred Database Update Algorithm 18.5 Related Work 18.6 Bibliographic Notes 18.7 Exercises 19 How to Change the Database Fast 19.1 Factors Influencing the Performance of a Database Transformation 19.1.1 Immediate Database Tranformation 19.1.2 Deferred Transformation 19.1.3 Hybrid 19.2 How to Benchmark Database Updates 19.2.1 Basic Benchmark Organization 19.2.2 How to Run the Benchmark 19.3 Performance Evaluation 19.3.1 Small Databases 19.3.2 Large Databases 19.4 Open Problems 19.5 Bibliographic Notes Bibliography Author Index Subject Index --- paper_title: Maintaining knowledge about temporal intervals paper_content: An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between --- paper_title: Handling discovered structure in database systems paper_content: Most database systems research assumes that the database schema is determined by a database administrator. With the recent increase in interest in knowledge discovery from databases and the predicted increase in the volume of data expected to be stored it is appropriate to reexamine this assumption and investigate how derived or induced, rather than database administrator supplied, structure can be accommodated and used by database systems. The paper investigates some of the characteristics of inductive learning and knowledge discovery as they pertain to database systems and the constraints that would be imposed on appropriate inductive learning algorithms is discussed. A formal method of defining induced dependencies (both static and temporal) is proposed as the inductive analogue to functional dependencies. The Boswell database system exemplifying some of these characteristics is also briefly discussed. --- paper_title: Knowledge discovery from databases: The NYU project paper_content: More and more application domains, from financial market analysis to weatherprediction, from monitoring supermarket purchases to monitoring satellite images, arebecomingly increasingly data-intensive. The result is massive databases that are growingat a rapid rate - it has been estimated that the worldA¢Â¬Â"s electronic data almostdoubles every year. With this rate of data explosion, there is a pressing need for computersto play an increasing role in analyzing these huge data repositories which areimpossible to penetrate manually. 
The challenge is to ferret out the regularities in the data that will prove to be interesting to the user. A group in the Information Systems department at the NYU Business School has been working in this area for a number of years. The focus of our project is now on the discovery of patterns from time series data. In this paper we give an overview of the kinds of databases we are "mining" and the kinds of temporal patterns and rules which we are attempting to discover. In the first phase of this research, we have developed a taxonomy of patterns as a way to organize our research agenda. We wish to share the taxonomy with the research community in the "knowledge discovery in databases" area since we have found it useful in classifying the universe of regularities or patterns into distinct types, that is, patterns which differ in terms of their structure and the amount of search effort required to find them. Although the primary focus of our project is on time series data, and the examples we will present are chosen from this arena, the taxonomy is general enough to apply to any type of data. --- paper_title: A temporal logic for reasoning about processes and plans paper_content: Much previous work in artificial intelligence has neglected representing time in all its complexity. In particular, it has neglected continuous change and the indeterminacy of the future. To rectify this, I have developed a first-order temporal logic, in which it is possible to name and prove things about facts, events, plans, and world histories. In particular, the logic provides analyses of causality, continuous change in quantities, the persistence of facts (the frame problem), and the relationship between tasks and actions. It may be possible to implement a temporal-inference machine based on this logic, which keeps track of several "maps" of a time line, one per possible history. --- paper_title: Maintaining knowledge about temporal intervals paper_content: An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between --- paper_title: Learning to Predict Rare Events in Event Sequences paper_content: Learning to predict rare events from sequences of events with categorical features is an important, real-world, problem that existing statistical and machine learning methods are not well suited to solve. This paper describes timeweaver, a genetic algorithm based machine learning system that predicts rare events by identifying predictive temporal and sequential patterns. Timeweaver is applied to the task of predicting telecommunication equipment failures from 110,000 alarm messages and is shown to outperform existing learning methods. --- paper_title: Discovering frequent event patterns with multiple granularities in time sequences paper_content: An important usage of time sequences is to discover temporal patterns. The discovery process usually starts with a user specified skeleton, called an event structure, which consists of a number of variables representing events and temporal constraints among these variables; the goal of the discovery is to find temporal patterns, i.e., instantiations of the variables in the structure that appear frequently in the time sequence. The paper introduces event structures that have temporal constraints with multiple granularities, defines the pattern discovery problem with these structures, and studies effective algorithms to solve it.
The basic components of the algorithms include timed automata with granularities (TAGs) and a number of heuristics. The TAGs are for testing whether a specific temporal pattern, called a candidate complex event type, appears frequently in a time sequence. Since there are often a huge number of candidate event types for a usual event structure, heuristics are presented aiming at reducing the number of candidate event types and reducing the time spent by the TAGs testing whether a candidate type does appear frequently in the sequence. These heuristics exploit the information provided by explicit and implicit temporal constraints with granularity in the given event structure. The paper also gives the results of an experiment to show the effectiveness of the heuristics on a real data set. --- paper_title: Finding temporal patterns - A set-based approach paper_content: We created an inference engine and query language for expressing temporal patterns in data. The patterns are represented by using temporally-ordered sets of data objects. Patterns are elaborated by reference to new objects inferred from original data, and by interlocking temporal and other relationships among sets of these objects. We found the tools well-suited to define scenarios of events that are evidence of inappropriate use of prescription drugs, using Medicaid administrative data that describe medical events. The tools' usefulness in research might be considerably more general. --- paper_title: Data-Driven D iscovery of Quantitative Rules in Relational Databases paper_content: A quantitative rule is a rule associated with quantitative information which assesses the representativeness of the rule in the database. An efficient induction method is developed for learning quantitative rules in relational databases. With the assistance of knowledge about concept hierarchies, data relevance, and expected rule forms, attribute-oriented induction can be performed on the database, which integrates database operations with the learning process and provides a simple, efficient way of learning quantitative rules from large databases. The method involves the learning of both characteristic rules and classification rules. Quantitative information facilitates quantitative reasoning, incremental learning, and learning in the presence of noise. Moreover, learning qualitative rules can be treated as a special case of learning quantitative rules. It is shown that attribute-oriented induction provides an efficient and effective mechanism for learning various kinds of knowledge rules from relational databases. > --- paper_title: Database Issues in Knowledge Discovery and Data Mining paper_content: In recent years both the number and the size of organisational databases have increased rapidly. However, although available processing power has also grown, the increase in stored data has not necessarily led to a corresponding increase in useful information and knowledge. This has led to a growing interest in the development of tools capable of harnessing the increased processing power available to better utilise the potential of stored data. The terms "Knowledge Discovery in Databases" and "Data Mining" have been adopted for a field of research dealing with the automatic discovery of knowledge implicit within databases. Data mining is useful in situations where the volume of data is either too large or too complicated for manual processing or, to a lesser extent, where human experts are unavailable to provide knowledge. 
The success already attained by a wide range of data mining applications has continued to prompt further investigation into alternative data mining techniques and the extension of data mining to new domains. This paper surveys, from the standpoint of the database systems community, current issues in data mining research by examining the architectural and process models adopted by knowledge discovery systems, the different types of discovered knowledge, the way knowledge discovery systems operate on different data types, various techniques for knowledge discovery and the ways in which discovered knowledge is used. --- paper_title: Handling discovered structure in database systems paper_content: Most database systems research assumes that the database schema is determined by a database administrator. With the recent increase in interest in knowledge discovery from databases and the predicted increase in the volume of data expected to be stored it is appropriate to reexamine this assumption and investigate how derived or induced, rather than database administrator supplied, structure can be accommodated and used by database systems. The paper investigates some of the characteristics of inductive learning and knowledge discovery as they pertain to database systems and the constraints that would be imposed on appropriate inductive learning algorithms is discussed. A formal method of defining induced dependencies (both static and temporal) is proposed as the inductive analogue to functional dependencies. The Boswell database system exemplifying some of these characteristics is also briefly discussed. --- paper_title: Data-Driven D iscovery of Quantitative Rules in Relational Databases paper_content: A quantitative rule is a rule associated with quantitative information which assesses the representativeness of the rule in the database. An efficient induction method is developed for learning quantitative rules in relational databases. With the assistance of knowledge about concept hierarchies, data relevance, and expected rule forms, attribute-oriented induction can be performed on the database, which integrates database operations with the learning process and provides a simple, efficient way of learning quantitative rules from large databases. The method involves the learning of both characteristic rules and classification rules. Quantitative information facilitates quantitative reasoning, incremental learning, and learning in the presence of noise. Moreover, learning qualitative rules can be treated as a special case of learning quantitative rules. It is shown that attribute-oriented induction provides an efficient and effective mechanism for learning various kinds of knowledge rules from relational databases. > --- paper_title: An information theoretic approach to rule induction from databases paper_content: An algorithm for the induction of rules from examples is introduced. The algorithm is novel in the sense that it not only learns rules for a given concept (classification), but it simultaneously learns rules relating multiple concepts. This type of learning, known as generalized rule induction, is considerably more general than existing algorithms, which tend to be classification oriented. Initially, it is focused on the problem of determining a quantitative, well-defined rule preference measure. In particular, a quantity called the J-measure is proposed as an information-theoretic alternative to existing approaches. 
The J-measure quantifies the information content of a rule or a hypothesis. The information theoretic origins of this measure are outlined, and its plausibility as a hypothesis preference measure is examined. The ITRULE algorithm, which uses the measure to learn a set of optimal rules from a set of data samples, is defined. Experimental results on real-world data are analyzed. > --- paper_title: Learning to Predict Rare Events in Event Sequences paper_content: Learning to predict rare events from sequences of events with categorical features is an important, real-world, problem that existing statistical and machine learning methods are not well suited to solve. This paper describes timeweaver, a genetic algorithm based machine learning system that predicts rare events by identifying predictive temporal and sequential patterns. Timeweaver is applied to the task of predicting telecommunication equipment failures from 110,000 alarm messages and is shown to outperform existing learning methods. --- paper_title: What makes patterns interesting in knowledge discovery systems paper_content: One of the central problems in the field of knowledge discovery is the development of good measures of interestingness of discovered patterns. Such measures of interestingness are divided into objective measures-those that depend only on the structure of a pattern and the underlying data used in the discovery process, and the subjective measures-those that also depend on the class of users who examine the pattern. The focus of the paper is on studying subjective measures of interestingness. These measures are classified into actionable and unexpected, and the relationship between them is examined. The unexpected measure of interestingness is defined in terms of the belief system that the user has. Interestingness of a pattern is expressed in terms of how it affects the belief system. The paper also discusses how this unexpected measure of interestingness can be used in the discovery process. --- paper_title: What makes patterns interesting in knowledge discovery systems paper_content: One of the central problems in the field of knowledge discovery is the development of good measures of interestingness of discovered patterns. Such measures of interestingness are divided into objective measures-those that depend only on the structure of a pattern and the underlying data used in the discovery process, and the subjective measures-those that also depend on the class of users who examine the pattern. The focus of the paper is on studying subjective measures of interestingness. These measures are classified into actionable and unexpected, and the relationship between them is examined. The unexpected measure of interestingness is defined in terms of the belief system that the user has. Interestingness of a pattern is expressed in terms of how it affects the belief system. The paper also discusses how this unexpected measure of interestingness can be used in the discovery process. --- paper_title: An information theoretic approach to rule induction from databases paper_content: An algorithm for the induction of rules from examples is introduced. The algorithm is novel in the sense that it not only learns rules for a given concept (classification), but it simultaneously learns rules relating multiple concepts. This type of learning, known as generalized rule induction, is considerably more general than existing algorithms, which tend to be classification oriented. 
Initially, it is focused on the problem of determining a quantitative, well-defined rule preference measure. In particular, a quantity called the J-measure is proposed as an information-theoretic alternative to existing approaches. The J-measure quantifies the information content of a rule or a hypothesis. The information theoretic origins of this measure are outlined, and its plausibility as a hypothesis preference measure is examined. The ITRULE algorithm, which uses the measure to learn a set of optimal rules from a set of data samples, is defined. Experimental results on real-world data are analyzed. > --- paper_title: Finding temporal patterns - A set-based approach paper_content: We created an inference engine and query language for expressing temporal patterns in data. The patterns are represented by using temporally-ordered sets of data objects. Patterns are elaborated by reference to new objects inferred from original data, and by interlocking temporal and other relationships among sets of these objects. We found the tools well-suited to define scenarios of events that are evidence of inappropriate use of prescription drugs, using Medicaid administrative data that describe medical events. The tools' usefulness in research might be considerably more general. ---
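The J-measure referenced in the ITRULE entries above admits a compact computational reading. The sketch below is a minimal, hedged illustration for a single rule "if Y=y then X=x" over binary events; the function name and parameter names are ours, chosen for clarity, and base-2 logarithms are assumed so the score is in bits. It is not code from the cited work, only one plausible rendering of the formula it describes.

```python
import math

def j_measure(p_y: float, p_x: float, p_x_given_y: float) -> float:
    """Information content of the rule 'if Y=y then X=x' (J-measure style).

    p_y         : probability that the antecedent Y=y holds (rule coverage)
    p_x         : prior probability of the consequent X=x
    p_x_given_y : posterior probability of X=x when Y=y holds (confidence)
    Returns p(y) * j(X; Y=y), the rule's average information content in bits.
    """
    def term(post: float, prior: float) -> float:
        # One outcome's contribution to the cross-entropy between the
        # posterior and the prior; a zero-probability outcome contributes nothing.
        return post * math.log2(post / prior) if post > 0 else 0.0

    j = term(p_x_given_y, p_x) + term(1.0 - p_x_given_y, 1.0 - p_x)
    return p_y * j

# Example: a rule firing on 20% of records that lifts the consequent
# from a 30% prior to a 90% posterior.
print(round(j_measure(p_y=0.2, p_x=0.3, p_x_given_y=0.9), 4))
```

Candidate rules can then be ranked by this score, which is the spirit of the rule preference measure the ITRULE abstract describes.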
Title: A Survey of Temporal Knowledge Discovery Paradigms and Methods Section 1: INTRODUCTION Description 1: Introduce the topic of temporal data mining, its importance, and provide an overview of the paper's structure. Section 2: THE SEMANTICS OF TEMPORAL DATA AND TEMPORAL KNOWLEDGE Description 2: Discuss the conceptual framework for categorizing literature on temporal knowledge discovery based on types of temporal data, mining paradigms, and discovery goals. Section 3: APRIORI-LIKE DISCOVERY OF ASSOCIATION RULES Description 3: Examine Apriori-like mechanisms for discovering association rules in temporal data. Section 4: TEMPLATE-BASED MINING FOR SEQUENCES Description 4: Discuss methods for describing and discovering common trends in time series and sequence mining. Section 5: CLASSIFICATION OF TEMPORAL DATA Description 5: Explore ways to generalize conventional classification algorithms for temporal data and discuss classification techniques for time series and event sequences. Section 6: MEASURING INTERESTINGNESS OF TEMPORAL PATTERNS Description 6: Investigate methodologies for determining what constitutes interesting or useful mining results in the context of temporal data mining. Section 7: DATA MINING REQUIREMENTS AND ENVIRONMENTS Description 7: Examine the requirements for temporal data mining systems and provide examples of temporal mining systems, as well as discuss temporal mining within temporally aware systems. Section 8: CONCLUSIONS AND FURTHER RESEARCH Description 8: Summarize the findings, open issues, and suggest areas for future research in temporal data mining.
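Section 3 of the outline above concerns Apriori-like discovery of association rules. As a generic, non-authoritative illustration of the level-wise support counting that such methods share (not code from any surveyed system), a minimal frequent-itemset pass might look like the sketch below; the `min_support` threshold and the toy baskets are hypothetical.

```python
from itertools import combinations

def apriori_frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) search for frequent itemsets.

    transactions : list of sets of items (or event types)
    min_support  : minimum fraction of transactions an itemset must occur in
    Returns {frozenset(itemset): support}.
    """
    n = len(transactions)
    counts = {}
    for t in transactions:                      # level 1: single items
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        prev = list(frequent)
        # Join step: unions of frequent (k-1)-itemsets that have size k.
        candidates = {a | b for a, b in combinations(prev, 2) if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must itself be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: v / n for c, v in counts.items() if v / n >= min_support}
        result.update(frequent)
        k += 1
    return result

# Toy usage; in a temporal setting the "items" could be event types per time window.
baskets = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori_frequent_itemsets(baskets, min_support=0.6))
```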
Active Recognition through Next View Planning: A Survey
10
--- paper_title: Three-dimensional object recognition paper_content: A general-purpose computer vision system must be capable of recognizing three-dimensional (3-D) objects. This paper proposes a precise definition of the 3-D object recognition problem, discusses basic concepts associated with this problem, and reviews the relevant literature. Because range images (or depth maps) are often used as sensor input instead of intensity images, techniques for obtaining, processing, and characterizing range data are also surveyed. --- paper_title: Model-based recognition in robot vision paper_content: This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the "bin-picking" problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2-D, 2½-D, and 3-D object representations, which are used as the basis for the recognition algorithms. Three central issues common to each category, namely, feature extraction, modeling, and matching, are examined in detail. An evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems. --- paper_title: Shape from symmetry: detecting and exploiting symmetry in affine images paper_content: We investigate the constraints placed on the image projection of a planar object having local reflectional symmetry. Under the affine approximation to projection, we demonstrate an efficient (low-complexity) algorithm for detecting and verifying symmetries despite the distorting effects of image skewing. The symmetries are utilized for three distinct tasks: first, determining image back-projection up to a similarity transformation ambiguity; second, determining the object plane orientation (slant and tilt); and third, as a test for non-coplanarity amongst a collection of objects. These results are illustrated throughout with examples from images of real scenes. --- paper_title: Planning multiple observations for object recognition paper_content: Most computer vision systems perform object recognition on the basis of the features extracted from a single image of the object. The problem with this approach is that it implicitly assumes that the available features are sufficient to determine the identity and pose of the object uniquely. If this assumption is not met, then the feature set is insufficient, and ambiguity results. Consequently, much research in computer vision has gone toward finding sets of features that are sufficient for specific tasks, with the result that each system has its own associated set of features. A single, general feature set would be desirable. However, research in automatic generation of object recognition programs has demonstrated that predetermined, fixed feature sets are often incapable of providing enough information to unambiguously determine either object identity or pose. One approach to overcoming the inadequacy of any feature set is to utilize multiple sensor observations obtained from different viewpoints, and combine them with knowledge of the 3-D structure of the object to perform unambiguous object recognition. This article presents initial results toward performing object recognition by using multiple observations to resolve ambiguities. 
Starting from the premise that sensor motions should be planned in advance, the difficulties involved in planning with ambiguous information are discussed. A representation for planning that combines geometric information with viewpoint uncertainty is presented. A sensor planner utilizing the representation was implemented, and the results of pose-determination experiments performed with the planner are discussed. --- paper_title: 3D Object Recognition using Invariance paper_content: Abstract The systems and concepts described in this paper document the evolution of the geometric invariance approach to object recognition over the last five years. Invariance overcomes one of the fundamental difficulties in recognising objects from images: that the appearance of an object depends on viewpoint. This problem is entirely avoided if the geometric description is unaffected by the imaging transformation. Such invariant descriptions can be measured from images without any prior knowledge of the position, orientation and calibration of the camera. These invariant measurements can be used to index a library of object models for recognition and provide a principled basis for the other stages of the recognition process such as feature grouping and hypothesis verification. Object models can be acquired directly from images, allowing efficient construction of model libraries without manual intervention. A significant part of the paper is a summary of recent results on the construction of invariants for 3D objects from a single perspective view. A proposed recognition architecture is described which enables the integration of multiple general object classes and provides a means for enforcing global scene consistency. Various criticisms of the invariant approach are articulated and addressed. --- paper_title: Viewpoint-invariant representation of generalized cylinders using the symmetry set paper_content: We demonstrate that viewpoint-invariant representations can be obtained from images for a useful class of 3D smooth object. The class of surfaces are those generated as the envelope of a sphere of varying radius swept along an axis. This class includes canal surfaces and surfaces of revolution. The representations are computed, using only image information, from the symmetry set of the object's outline. They are viewpoint-invariant under weak-perspective imaging, and quasi-invariant to an excellent approximation under perspective imaging. To this approximation, the planar axis of a canal surface is recovered up to an affine ambiguity from perspective images. Examples are given of the representations obtained from real images, which demonstrate stability and object discrimination, for both canal surfaces and surfaces of revolution. Finally, the representations are used as the basis for a model-based object recognition system --- paper_title: Reflectance based object recognition paper_content: Neighboring points on a smoothly curved surface have similar surface normals and illumination conditions. Therefore, their brightness values can be used to compute the ratio of their reflectance coefficients. Based on this observation, we develop an algorithm that estimates a reflectance ratio for each region in an image with respect to its background. The algorithm is efficient as it computes ratios for all image regions in just two raster scans. The region reflectance ratio represents a physical property that is invariant to illumination and imaging parameters. 
Several experiments are conducted to demonstrate the accuracy and robustness of ratio invariant. The ratio invariant is used to recognize objects from a single brightness image of a scene. Object models are automatically acquired and represented using a hash table. Recognition and pose estimation algorithms are presented that use ratio estimates of scene regions as well as their geometric properties to index the hash table. The result is a hypothesis for the existence of an object in the image. This hypothesis is verified using the ratios and locations of other regions in the scene. This approach to recognition is effective for objects with printed characters and pictures. Recognition experiments are conducted on images with illumination variations, occlusions, and shadows. The paper is concluded with a discussion on the simultaneous use of reflectance and geometry for visual perception. --- paper_title: Reconstruction-Based Recognition of Scenes with Translationally Repeated Quadrics paper_content: This paper addresses the problem of invariant-based recognition of quadric configurations from a single image. These configurations consist of a pair of rigidly connected translationally repeated quadric surfaces. This problem is approached via a reconstruction framework. A new mathematical framework, using relative affine structure, on the lines of Luong and Vieville (1996), has been proposed. Using this mathematical framework, translationally repeated objects have been projectively reconstructed, from a single image, with four image point correspondences of the distinguished points on the object and its translate. This has been used to obtain a reconstruction of a pair of translationally repeated quadrics. We have proposed joint projective invariants of a pair of proper quadrics. For the purpose of recognition of quadric configurations, we compute these invariants for the pair of reconstructed quadrics. Experimental results on synthetic and real images, establish the discriminatory power and stability of the proposed invariant-based recognition strategy. As a specific example, we have applied this technique for discriminating images of monuments which are characterized by translationally repeated domes modeled as quadrics. --- paper_title: Three-dimensional object recognition paper_content: A general-purpose computer vision system must be capable of recognizing three-dimensional (3-D) objects. This paper proposes a precise definition of the 3-D object recognition problem, discusses basic concepts associated with this problem, and reviews the relevant literature. Because range images (or depth maps) are often used as sensor input instead of intensity images, techniques for obtaining, processing, and characterizing range data are also surveyed. --- paper_title: Model-based recognition in robot vision paper_content: This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the "bin-picking" problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2-D, 2½-D, and 3-D object representations, which are used as the basis for the recognition algorithms. Three central issues common to each category, namely, feature extraction, modeling, and matching, are examined in detail. 
An evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems. --- paper_title: Color constant color indexing paper_content: Objects can be recognized on the basis of their color alone by color indexing, a technique developed by Swain-Ballard (1991) which involves matching color-space histograms. Color indexing fails, however, when the incident illumination varies either spatially or spectrally. Although this limitation might be overcome by preprocessing with a color constancy algorithm, we instead propose histogramming color ratios. Since the ratios of color RGB triples from neighboring locations are relatively insensitive to changes in the incident illumination, this circumvents the need for color constancy preprocessing. Results of tests with the new color-constant-color-indexing algorithm on synthetic and real images show that it works very well even when the illumination varies spatially in its intensity and color. > --- paper_title: A Framework for Reconstruction based Recognition of Partially Occluded Repeated Objects paper_content: In this paper we propose a reconstruction based recognition scheme for objects with repeated components, using a single image of such a configuration, in which one of the repeated components may be partially occluded. In our strategy we reconstruct each of the components with respect to the same frame and use these to compute invariants. We propose a new mathematical framework for the projective reconstruction of affinely repeated objects. This uses the repetition explicitly and hence is able to handle substantial occlusion of one of the components. We then apply this framework to the reconstruction of a pair of repeated quadrics. The image information required for the reconstruction are the outline conic of one of the quadrics and correspondence between any four points which are images of points in general position on the quadric and its repetition. Projective invariants computed using the reconstructed quadrics have been used for recognition. The recognition strategy has been applied to images of monuments with multi-dome architecture. Experiments have established the discriminatory ability of the invariants. --- paper_title: Symmetry From Shape and Shape From Symmetry paper_content: This article discusses the detection and use of symmetry in planar shapes. The methods are especially useful for industrial workpieces, where symmetry is omnipresent. "Symmetry" is interpreted in a broad sense as repeated, coplanar shape fragments. In particular, fragments that are "similar" in the mathematical sense are considered symmetric. As a general tool for the extraction and analysis of symmetries, "Arc Length Space" is proposed. In this space symmetries take on a very simple form: they correspond to straight-line segments, assuming an appropriate choice is made for the shapes' contour parameterizations. Reasoning about the possible coexistence of symmetries also becomes easier in this space. Only a restricted number of symmetry patterns can be formed. By making appropriate choices for the contour parameters, the essential properties of Arc Length Space can be inherited for general viewpoints. Invariance to affine transformations is a key issue. Specific results include the (informal) deductio... 
--- paper_title: Active and exploratory perception paper_content: Abstract The main goal of this paper is to show that there is a natural flow from active perception through exploration to perceptual learning. We have attempted to conceptualize the perceptual process of an organism that has the top-level task of surviving in an unknown environment. During this conceptualization process, four necessary ingredients have emerged for either artificial or biological organisms. First, the sensory apparatus and processing of the organism must be active and flexible. Second, the organism must have exploratory capabilities. Third, the organism must be selective in its data acquisition process. Fourth, the organism must be able to learn. In the section on learning, we have clearly delineated the difference between what must be innate and what must be learned. In order to test our theory, we present the system's architecture that follows from the perceptual task decomposition. The predictions of this theory are that an artificial system can explore and learn about its environment modulo its sensors, manipulators, end effectors, and exploratory procedures/attribute extractors. It can describe its world with respect to the built-in alphabet, that is, the set of perceptual primitives. --- paper_title: Reactions to peripheral image motion using a head/eye platform paper_content: The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occurs across the whole image, and at fine scale, tracking of already detected motion takes place within a foveal region. Several simple coarse scale motion sensors which run concurrently at 25 Hz with latencies around 100 ms are detailed. The use of these sensors is discussed to drive the following real-time responses: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image and (4) smooth pursuit of a moving target using motion alone. > --- paper_title: Control of selective perception using bayes nets and decision theory paper_content: A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules and a pointable multiresolution sensor. 
TEA-1 demonstrates that Bayes nets and decision theoretic techniques provide a general, re-usable framework for constructing computer vision systems that are selective perception systems, and that Bayes nets provide a general framework for representing visual tasks. Control, or decision making, is the most important issue in a selective vision system. TEA-1's decisions about what to do next are based on general hand-crafted "goodness functions" constructed around core decision theoretic elements. Several goodness functions for different decisions are presented and evaluated. The TEA-1 system solves a version of the T-world problem, an abstraction of a large set of domains and tasks. Some key factors that affect the success of selective perception are analyzed by examining how each factor affects the overall performance of TEA-1 when solving ensembles of randomly produced, simulated T-world domains and tasks. TEA-1's decision making algorithms are also evaluated in this manner. Experiments in the lab for one specific T-world domain, table settings, are also presented. --- paper_title: Integration and control of reactive visual processes paper_content: This paper describes a new approach to the integration and control of continuously operating visual processes. Visual processes are expressed as transformations which map signals from virtual sensors into commands for devices. These transformations define reactive processes which tightly couple perception and action. Such transformations may be used to control robotic devices, including fixation of an active binocular head, as well as to select and control the processes which interpret visual data. --- paper_title: Uncalibrated Visual Tasks via Linear Interaction paper_content: We propose an approach for the design and control of both reflexive and purposive visual tasks with an uncalibrated camera. The approach is based on the bi-dimensional appearance of the objects in the environment, and explicitly takes into account independent object motions. The introduction of a linear model of camera-object interaction dramatically simplifies visual analysis and control by reducing the size of the visual representation. We discuss the implementation of three tasks of increasing complexity, based on active contour analysis and polynomial planning of image contour transformations. Real-time experiments with a robot wrist-mounted camera demonstrate that the approach is conveniently usable for visual navigation, active exploration and perception, and man-robot interaction. --- paper_title: A survey of sensor planning in computer vision paper_content: A survey of research in the area of vision sensor planning is presented. The problem can be summarized as follows: given information about the environment as well as information about the task that the vision system is to accomplish, develop strategies to automatically determine sensor parameter values that achieve this task with a certain degree of satisfaction. With such strategies, sensor parameters values can be selected and can be purposefully changed in order to effectively perform the task at hand. The focus here is on vision sensor planning for the task of robustly detecting object features. For this task, camera and illumination parameters such as position, orientation, and optical settings are determined so that object features are, for example, visible, in focus, within the sensor field of view, magnified as required, and imaged with sufficient contrast.
References to, and a brief description of, representative sensing strategies for the tasks of object recognition and scene reconstruction are also presented. For these tasks, sensor configurations are sought that will prove most useful when trying to identify an object or reconstruct a scene. > --- paper_title: The MVP sensor planning system for robotic vision tasks paper_content: The MVP (machine vision planner) model-based sensor planning system for robotic vision is presented. MVP automatically synthesizes desirable camera views of a scene based on geometric models of the environment, optical models of the vision sensors, and models of the task to be achieved. The generic task of feature detectability has been chosen since it is applicable to many robot-controlled vision systems. For such a task, features of interest in the environment are required to simultaneously be visible, inside the field of view, in focus, and magnified as required. In this paper, we present a technique that poses the vision sensor planning problem in an optimization setting and determines viewpoints that satisfy all previous requirements simultaneously and with a margin. In addition, we present experimental results of this technique when applied to a robotic vision system that consists of a camera mounted on a robot manipulator in a hand-eye configuration. > --- paper_title: Planning for Complete Sensor Coverage in Inspection paper_content: Abstract General purpose CAD-based inspection of manufactured objects often involves comparing a model created using intensity or range images of the actual object to a tolerance reference model of the ideal object. Before this comparison is made, a sufficiently complete geometric model of the workpiece must be synthesized from sensor data. In this paper we present planning algorithms for finding a set of sensing operations for completely measuring the exposed surface of an object to be inspected. While these planning algorithms were developed as part of a particular inspection system, the algorithms are applicable to other inspection systems and other applications than inspection. --- paper_title: Automatic Sensor Placement for Accurate Dimensional Inspection paper_content: Deriving accurate 3D object dimensions with a passive vision system demands, in general, the use of multistation sensor configurations. In such configurations, object features appear in images from multiple viewpoints, facilitating their measurement by means of optical triangulation. Previous efforts toward automatic sensor placement have been restricted to single sensor station solutions. In this paper we review photogrammetric expertise in the design of multistation configurations, including the bundle method?a general mathematical model for optical triangulation?and fundamental considerations and constraints influencing the placement of sensor stations. An overview of CONSENS, an expert system-based software tool which exploits these considerations and constraints in automatically designing multistation configurations, is given. Examples of multistation configurations designed by CONSENS demonstrate the tool's capabilities and the potential of our approach for automating sensor placement for inspection tasks. --- paper_title: A survey of sensor planning in computer vision paper_content: A survey of research in the area of vision sensor planning is presented. 
The problem can be summarized as follows: given information about the environment as well as information about the task that the vision system is to accomplish, develop strategies to automatically determine sensor parameter values that achieve this task with a certain degree of satisfaction. With such strategies, sensor parameters values can be selected and can be purposefully changed in order to effectively perform the task at hand. The focus here is on vision sensor planning for the task of robustly detecting object features. For this task, camera and illumination parameters such as position, orientation, and optical settings are determined so that object features are, for example, visible, in focus, within the sensor field of view, magnified as required, and imaged with sufficient contrast. References to, and a brief description of, representative sensing strategies for the tasks of object recognition and scene reconstruction are also presented. For these tasks, sensor configurations are sought that will prove most useful when trying to identify an object or reconstruct a scene. > --- paper_title: Viewpoint Selection for Complete Surface Coverage of Three Dimensional Objects paper_content: Many machine vision tasks, e.g. object recognition and object inspection, cannot be performed robustly from a single image. For certain tasks (e.g. 3D object recognition and automated inspection) the availability of multiple views of an object is a requirement. This paper presents a novel approach to selecting a minimised number of views that allow each object face to be adequately viewed according to specified constraints on viewpoints and other features. The planner is generic and can be employed for a wide range of multiple view acquisition systems, ranging from camera systems mounted on the end of a robot arm, i.e. an eye-in-hand camera setup, to a turntable and fixed stereo cameras to allow different views of an object to be obtained. The results (both simulated and real) given focus on planning with a fixed camera and turntable. --- paper_title: Automatic sensor placement from vision task requirements paper_content: The problem of automatically generating the possible camera locations for observing an object is defined, and an approach to its solution is presented. The approach, which uses models of the object and the camera, is based on meeting the requirements that: the spatial resolution be above a minimum value, all surface points be in focus, all surfaces lie within the sensor field of view and no surface points be occluded. The approach converts each sensing requirement into a geometric constraint on the sensor location, from which the three-dimensional region of viewpoints that satisfies that constraint is computed. The intersection of these regions is the space where a sensor may be located. The extension of this approach to laser-scanner range sensors is also described. Examples illustrate the resolution, focus, and field-of-view constraints for two vision tasks. > --- paper_title: Visual learning and recognition of 3-d objects from appearance paper_content: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. 
While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology. --- paper_title: Active Object Recognition in Parametric Eigenspace paper_content: We present an efficient method within an active vision framework for recognizing objects which are ambiguous from certain viewpoints. The system is allowed to reposition the camera to capture additional views and, therefore, to resolve the classification result obtained from a single view. The approach uses an appearance based object representation, namely the parametric eigenspace, and augments it by probability distributions. This captures possible variations in the input images due to errors in the pre-processing chain or the imaging system. Furthermore, the use of probability distributions gives us a gauge to view planning. View planning is shown to be of great use in reducing the number of images to be captured when compared to a random strategy. --- paper_title: Visual learning and recognition of 3-d objects from appearance paper_content: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. 
A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology. --- paper_title: Probabilistic object recognition using multidimensional receptive field histograms paper_content: This paper describes a probabilistic object recognition technique which does not require correspondence matching of images. This technique is an extension of our earlier work (1996) on object recognition using matching of multi-dimensional receptive field histograms. In the earlier paper we have shown that multi-dimensional receptive field histograms can be matched to provide object recognition which is robust in the face of changes in viewing position and independent of image plane rotation and scale. In this paper we extend this method to compute the probability of the presence of an object in an image. The paper begins with a review of the method and previously presented experimental results. We then extend the method for histogram matching to obtain a genuine probability of the presence of an object. We present experimental results on a database of 100 objects showing that the approach is capable recognizing all objects correctly by using only a small portion of the image. Our results show that receptive field histograms provide a technique for object recognition which is robust, has low computational cost and a computational complexity which is linear with the number of pixels. --- paper_title: Transinformation for Active Object Recognition paper_content: This article develops an analogy between object recognition and the transmission of information through a channel based on the statistical representation of the appearances of 3D objects. This analogy provides a means to quantitatively evaluate the contribution of individual receptive field vectors, and to predict the performance of the object recognition process. Transinformation also provides a quantitative measure of the discrimination provided by each viewpoint, thus permitting the determination of the most discriminant viewpoints. As an application, the article develops an active object recognition algorithm which is able to resolve ambiguities inherent in a single-view recognition algorithm. --- paper_title: Aspect graph construction with noisy feature detectors paper_content: Many three-dimensional (3D) object recognition strategies use aspect graphs to represent objects in the model base. A crucial factor in the success of these object recognition strategies is the accurate construction of the aspect graph, its ease of creation, and the extent to which it can represent all views of the object for a given setup. Factors such as noise and nonadaptive thresholds may introduce errors in the feature detection process. This paper presents a characterization of errors in aspect graphs, as well as an algorithm for estimating aspect graphs, given noisy sensor data. 
We present extensive results of our strategies applied on a reasonably complex experimental set, and demonstrate applications to a robust 3D object recognition problem. --- paper_title: Automatic generation of object recognition programs paper_content: Issues and techniques are discussed to automatically compile object and sensor models into a visual recognition strategy for recognizing and locating an object in three-dimensional space from visual data. Automatic generation of recognition programs by compilation, in an attempt to automate this process, is described. An object model describes geometric and photometric properties of an object to be recognized. A sensor model specifies the sensor characteristics in predicting object appearances and variations of feature values. It is emphasized that the sensors, as well as objects, must be explicitly modeled to achieve the goal of automatic generation of reliable and efficient recognition programs. Actual creation of interpretation trees for two objects and their execution for recognition from a bin of parts are demonstrated. > --- paper_title: Appearance-based vision and the automatic generation of object recognition programs paper_content: Abstract : The generation of recognition programs by hand is a time-consuming, labor-intensive task that typically results in a special purpose program for the recognition of a single object or a small set of objects. Recent work in automatic code generation has demonstrated the feasibility of automatically generating object recognition programs from CAD-based descriptions of objects. Many of the programs which perform automatic code generation employ a common paradigm of utilizing explicit object and sensor models to predict object appearances; we refer to the paradigm as appearance-based vision, and refer to the programs as vision algorithm compilers (VACs). A CAD-like object model augmented with sensor-specific information like color and reflectance, in conjunction with a sensor model, provides all the information needed to predict the appearance of an object under any specified set of viewing conditions. Appearances, characterized in terms of feature values, can be predicted in two ways: analytically, or synthetically. In relatively simple domains, feature values can be analytically determined from model information. However, in complex domains, the analytic prediction method is impractical. An alternative method for appearance prediction is to use an appearance simulator to generate synthetic im ages of objects which can then be processed to extract feature values. In this paper, we discuss the paradigm of appearance-based vision and present in detail two specific VACs: one that computes feature values analytically, and a second that utilizes an appearance simulator to synthesize sample images. --- paper_title: Characteristic Views As A Basis For Three-Dimensional Object Recognition paper_content: This paper describes a new technique for modeling 3D objects that is applicable to recog-nition tasks in advanced automation. Objects are represented in terms of canonic 2D models which can be used to determine the identity, location and orientation of an unknown object. The reduction in dimensionality is achieved by factoring the space of all possible perspective projections of an object into a set of characteristic views, where each such view defines a characteristic-view domain within which all projections are topologically identical and related by a linear transformation. 
The characteristic views of an object can then be hierarchically structured for efficient classification. The line-junction labelling constraints are used to match a characteristic view to a given unknown-object projection, and determination of the unknown-object projection-to-characteristic view transformation then provides information about the identity as well as the location and orientation of the object. --- paper_title: The Scale Space Aspect Graph paper_content: Currently the aspect graph is computed from the theoretical standpoint of perfect resolution in object shape, the viewpoint and the projected image. This means that the aspect graph may include details that an observer could never see in practice. Introducing the notion of scale into the aspect graph framework provides a mechanism for selecting a level of detail that is "large enough" to merit explicit representation. This effectively allows control over the number of nodes retained in the aspect graph. This paper introduces the concept of the scale space aspect graph, defines three different interpretations of the scale dimension, and presents a detailed example for a simple class of objects, with scale defined in terms of the spatial extent of features in the image. > --- paper_title: Planning multiple observations for object recognition paper_content: Most computer vision systems perform object recognition on the basis of the features extracted from a single image of the object. The problem with this approach is that it implicitly assumes that the available features are sufficient to determine the identity and pose of the object uniquely. If this assumption is not met, then the feature set is insufficient, and ambiguity results. Consequently, much research in computer vision has gone toward finding sets of features that are sufficient for specific tasks, with the result that each system has its own associated set of features. A single, general feature set would be desirable. However, research in automatic generation of object recognition programs has demonstrated that predetermined, fixed feature sets are often incapable of providing enough information to unambiguously determine either object identity or pose. One approach to overcoming the inadequacy of any feature set is to utilize multiple sensor observations obtained from different viewpoints, and combine them with knowledge of the 3-D structure of the object to perform unambiguous object recognition. This article presents initial results toward performing object recognition by using multiple observations to resolve ambiguities. Starting from the premise that sensor motions should be planned in advance, the difficulties involved in planning with ambiguous information are discussed. A representation for planning that combines geometric information with viewpoint uncertainty is presented. A sensor planner utilizing the representation was implemented, and the results of pose-determination experiments performed with the planner are discussed. --- paper_title: An Investigation into the Use of Physical Modeling for the Prediction of Various Feature Types Visible from Different Viewpoints paper_content: Given that aspect graph and viewsphere-based object recognition systems provide a valid mechanism for 3D object recognition of man-made objects, this paper provides a flexible, automated, and general purpose technique for generating the view information for each viewpoint. 
An advantage of the work is that the technique is unaffected by object complexity because each step makes no assumptions about object shape. The only limitation is that the object can be described by a boundary representation. A second advantage is that the technique can include other feature types such as specularity. The reason for this is that raytracing techniques are used to simulate the physical process of image generation. Hence it is extendible to visible features resulting from effects due to lighting, surface texture, color, transparency, etc. The work described in this paper shows how occluding and nonoccluding edge-based features can be extracted using image processing techniques and then parametrized and also how regions of specularity can be predicted and described. The use of physical modeling enables situations to be simulated and predicted that are intractable for CAD-based methods (e.g., multiscale feature prediction). An advantage of the method is that the interface between the technique and the raytracing module is a rendered image. Should better physics-based image formation algorithms become available, then they could replace the raytracing module with little modification to the rest of the method. --- paper_title: The internal representation of solid shape with respect to vision paper_content: It is argued that the internal model of any object must take the form of a function, such that for any intended action the resulting reafference is predictable. This function can be derived explicitly for the case of visual perception of rigid bodies by ambulant observers. The function depends on physical causation, not physiology; consequently, one can make a priori statements about possible internal models. A posteriori it seems likely that the orientation sensitive units described by Hubel and Wiesel constitute a physiological substrate subserving the extraction of the invariants of this function. The function is used to define a measure for the visual complexity of solid shape. Relations with Gestalt theories of perception are discussed. --- paper_title: A relational pyramid approach to view class determination paper_content: Given a CAD model of an object, the authors would like to automatically generate a vision model and matching procedure that can be used in robot guidance and inspection tasks. They are building a system that can predict features that will appear in a 2D view of a 3D object, represent each such view with a hierarchical, relational structure, group together similar views into view classes, and match an unknown view to the appropriate view class to find its pose. They describe the relational pyramid structure for describing the features in a particular view or view class of an object, the summary structure that is used to summarize the relational information in the relational pyramid, and an accumulator-based method for rapidly determining the view class(es) that best match an unknown view of an object. > --- paper_title: Planning sensing strategies in a robot work cell with multi-sensor capabilities paper_content: An approach is presented for planning sensing strategies dynamically on the basis of the system's current best information about the world. The approach is for the system to propose a sensing operation automatically and then to determine the maximum ambiguity which might remain in the world description if that sensing operation were applied. The system then applies that sensing operation which minimizes this ambiguity. 
To do this, the system formulates object hypotheses and assesses its relative belief in those hypotheses to predict what features might be observed by a proposed sensing operation. Furthermore, since the number of sensing operations available to the system can be arbitrarily large, equivalent sensing operations are grouped together using a data structure that is based on the aspect graph. In order to measure the ambiguity in a set of hypotheses, the authors apply the concept of entropy from information theory. This allows them to determine the ambiguity in a hypothesis set in terms of the number of hypotheses and the system's distribution of belief among those hypotheses. > --- paper_title: Qualitative 3-D shape reconstruction using distributed aspect graph matching paper_content: An approach is presented to 3-D primitive reconstruction that is independent of the selection of volumetric primitives used to model objects. The approach first takes an arbitrary set of 3-D volumetric primitives and generates a hierarchical aspect representation based on the projected surfaces of the primitives; conditional probabilities capture the ambiguity of mappings between levels of the hierarchy. The integration of object-centered and viewer-centered representations provides the indexing power of 3-D volumetric primitives, while supporting a 2-D matching paradigm for primitive reconstruction. Formulation of the problem based on grouping the image regions according to aspect is presented. No domain dependent heuristics are used; the authors exploit only the probabilities inherent in the aspect hierarchy. For a given selection of primitives, the success of the heuristic depends on the likelihood of the various aspects; best results are achieved when certain aspects are more likely, and fewer primitives project to a given aspect. > --- paper_title: Active Object Recognition Integrating Attention and Viewpoint Control paper_content: We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3D object in a 2D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled. --- paper_title: Recognizing large 3-D objects through next view planning using an uncalibrated camera paper_content: We present a new on-line scheme for the recognition and pose estimation of a large isolated 3-D object, which may not entirely fit in a camera's field of view. We do not assume any knowledge of the internal parameters of the camera, or their constancy. We use a probabilistic reasoning framework for recognition and next view planning. 
We show results of successful recognition and pose estimation even in cases of a high degree of interpretation ambiguity associated with the initial view. --- paper_title: Active Object Recognition Integrating Attention and Viewpoint Control paper_content: We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3D object in a 2D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled. --- paper_title: Generic object recognition: building and matching coarse descriptions from line drawings paper_content: Primal access recognition of visual objects (PARVO), a computer vision system that addresses the problem of fast and generic recognition of unexpected 3D objects from single 2D views, is considered. Recently, recognition by components (RBC), which is a new human image understanding theory, based on some psychological results, has been proposed as an explanation of how PARVO works. However, no systematic computational evaluation of its many aspects has yet been reported. The PARVO system discussed is a first step toward this goal, since its design respects and makes explicit the main assumptions of the proposed theory. It analyzes single-view 2D line drawings of 3D objects typical of the ones used in human image understanding studies. It is designed to handle partially occluded objects of different shape and dimension in various spatial orientations and locations in the image plane. The system is shown to successfully compute generic descriptions and then recognize many common man-made objects. > --- paper_title: Qualitative 3-D shape reconstruction using distributed aspect graph matching paper_content: An approach is presented to 3-D primitive reconstruction that is independent of the selection of volumetric primitives used to model objects. The approach first takes an arbitrary set of 3-D volumetric primitives and generates a hierarchical aspect representation based on the projected surfaces of the primitives; conditional probabilities capture the ambiguity of mappings between levels of the hierarchy. The integration of object-centered and viewer-centered representations provides the indexing power of 3-D volumetric primitives, while supporting a 2-D matching paradigm for primitive reconstruction. Formulation of the problem based on grouping the image regions according to aspect is presented. No domain dependent heuristics are used; the authors exploit only the probabilities inherent in the aspect hierarchy. 
For a given selection of primitives, the success of the heuristic depends on the likelihood of the various aspects; best results are achieved when certain aspects are more likely, and fewer primitives project to a given aspect. > --- paper_title: Active Object Recognition Integrating Attention and Viewpoint Control paper_content: We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3D object in a 2D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled. --- paper_title: Object recognition using appearance-based parts and relations paper_content: The recognition of general three-dimensional objects in cluttered scenes is a challenging problem. In particular, the design of a good representation suitable to model large numbers of generic objects that is also robust to occlusion has been a stumbling block in achieving success. In this paper, we propose a representation using appearance-based parts and relations to overcome these problems. Appearance-based parts and relations are defined in terms of closed regions and the union of these regions, respectively. The regions are segmented using the MDL principle, and their appearance is obtained from collection of images and compactly represented by parametric manifolds in the two eigenspaces spanned by the parts and the relations. --- paper_title: Recognizing large 3-D objects through next view planning using an uncalibrated camera paper_content: We present a new on-line scheme for the recognition and pose estimation of a large isolated 3-D object, which may not entirely fit in a camera's field of view. We do not assume any knowledge of the internal parameters of the camera, or their constancy. We use a probabilistic reasoning framework for recognition and next view planning. We show results of successful recognition and pose estimation even in cases of a high degree of interpretation ambiguity associated with the initial view. --- paper_title: Hierarchical organization of appearance-based parts and relations for object recognition paper_content: Previously a new object representation using appearance-based parts and relations to recognize 3D objects from 2D images, in the presence of occlusion and background clutter, was introduced. Appearance-based parts and relations are defined in terms of closed regions and the union of these regions, respectively. 
The regions are segmented using the MDL principle, and their appearance is obtained from collection of images and compactly represented by parametric manifolds in the eigenspaces spanned by the parts and the relations. In this paper we introduce the discriminatory power of the proposed features and describe how to use it to organize large databases of objects. --- paper_title: Fusion, Propagation, and Structuring in Belief Networks paper_content: Belief networks are directed acyclic graphs in which the nodes represent propositions (or variables), the arcs signify direct dependencies between the linked propositions, and the strengths of these dependencies are quantified by conditional probabilities. A network of this sort can be used to represent the generic knowledge of a domain expert, and it turns into a computational architecture if the links are used not merely for storing factual knowledge but also for directing and activating the data flow in the computations which manipulate this knowledge. The first part of the paper deals with the task of fusing and propagating the impacts of new information through the networks in such a way that, when equilibrium is reached, each proposition will be assigned a measure of belief consistent with the axioms of probability theory. It is shown that if the network is singly connected (e.g. tree-structured), then probabilities can be updated by local propagation in an isomorphic network of parallel and autonomous processors and that the impact of new information can be imparted to all propositions in time proportional to the longest path in the network. The second part of the paper deals with the problem of finding a tree-structured representation for a collection of probabilistically coupled propositions using auxiliary (dummy) variables, colloquially called "hidden causes." It is shown that if such a tree-structured representation exists, then it is possible to uniquely uncover the topology of the tree by observing pairwise dependencies among the available propositions (i.e., the leaves of the tree). The entire tree structure, including the strengths of all internal relationships, can be reconstructed in time proportional to n log n, where n is the number of leaves. --- paper_title: A Generalization of Bayesian Inference paper_content: Procedures of statistical inference are described which generalize Bayesian inference in specific ways. Probability is used in such a way that in general only bounds may be placed on the probabilities of given events, and probability systems of this kind are suggested both for sample information and for prior information. These systems are then combined using a specified rule. Illustrations are given for inferences about trinomial probabilities, and for inferences about a monotone sequence of binomial pi. Finally, some comments are made on the general class of models which produce upper and lower probabilities, and on the specific models which underlie the suggested inference procedures. --- paper_title: Control of selective perception using bayes nets and decision theory paper_content: A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. 
Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules and a pointable multiresolution sensor. TEA-1 demonstrates that Bayes nets and decision theoretic techniques provide a general, re-usable framework for constructing computer vision systems that are selective perception systems, and that Bayes nets provide a general framework for representing visual tasks. Control, or decision making, is the most important issue in a selective vision system. TEA-1's decisions about what to do next are based on general hand-crafted "goodness functions" constructed around core decision theoretic elements. Several goodness functions for different decisions are presented and evaluated. The TEA-1 system solves a version of the T-world problem, an abstraction of a large set of domains and tasks. Some key factors that affect the success of selective perception are analyzed by examining how each factor affects the overall performance of TEA-1 when solving ensembles of randomly produced, simulated T-world domains and tasks. TEA-1's decision making algorithms are also evaluated in this manner. Experiments in the lab for one specific T-world domain, table settings, are also presented. 
--- paper_title: Active recognition: using uncertainty to reduce ambiguity paper_content: Scene ambiguity, due to noisy measurements and uncertain object models, can be quantified and actively used by an autonomous agent to efficiently gather new data and improve its information about the environment. In this work an information-based utility measure is used to derive from a learned classification of shape models an efficient data collection strategy, specifically aimed at increasing classification confidence when recognizing uncertain shapes. --- paper_title: Transinformation for Active Object Recognition paper_content: This article develops an analogy between object recognition and the transmission of information through a channel based on the statistical representation of the appearances of 3D objects. This analogy provides a means to quantitatively evaluate the contribution of individual receptive field vectors, and to predict the performance of the object recognition process. 
Transinformation also provides a quantitative measure of the discrimination provided by each viewpoint, thus permitting the determination of the most discriminant viewpoints. As an application, the article develops an active object recognition algorithm which is able to resolve ambiguities inherent in a single-view recognition algorithm. --- paper_title: Active Object Recognition Integrating Attention and Viewpoint Control paper_content: We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3D object in a 2D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled. --- paper_title: Active Object Recognition in Parametric Eigenspace paper_content: We present an efficient method within an active vision framework for recognizing objects which are ambiguous from certain viewpoints. The system is allowed to reposition the camera to capture additional views and, therefore, to resolve the classification result obtained from a single view. The approach uses an appearance based object representation, namely the parametric eigenspace, and augments it by probability distributions. This captures possible variations in the input images due to errors in the pre-processing chain or the imaging system. Furthermore, the use of probability distributions gives us a gauge to view planning. View planning is shown to be of great use in reducing the number of images to be captured when compared to a random strategy. --- paper_title: Visual learning and recognition of 3-d objects from appearance paper_content: The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. 
The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology. --- paper_title: Aspect graph construction with noisy feature detectors paper_content: Many three-dimensional (3D) object recognition strategies use aspect graphs to represent objects in the model base. A crucial factor in the success of these object recognition strategies is the accurate construction of the aspect graph, its ease of creation, and the extent to which it can represent all views of the object for a given setup. Factors such as noise and nonadaptive thresholds may introduce errors in the feature detection process. This paper presents a characterization of errors in aspect graphs, as well as an algorithm for estimating aspect graphs, given noisy sensor data. We present extensive results of our strategies applied on a reasonably complex experimental set, and demonstrate applications to a robust 3D object recognition problem. 
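As a concrete illustration of the appearance-based (parametric eigenspace) recognition scheme summarized a few entries above, the following minimal NumPy sketch builds an eigenspace from pose-sampled training views and classifies an unknown view by nearest neighbour among the projected training samples. It is only an outline of the general technique under simplifying assumptions, not the published implementation: the function names are ours, and the original work interpolates a continuous manifold through the projected views rather than matching discrete samples.

```python
import numpy as np

def build_eigenspace(images, k=20):
    """PCA over a stack of flattened, normalized training views.

    images: (n_samples, n_pixels) array, one row per pose-sampled view.
    Returns the mean image and the top-k eigenvectors spanning the eigenspace.
    """
    images = np.asarray(images, dtype=float)
    mean = images.mean(axis=0)
    centered = images - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows of vt = principal directions
    return mean, vt[:k]                                        # basis shape: (k, n_pixels)

def project(image, mean, basis):
    """Project one flattened image into the k-dimensional eigenspace."""
    return basis @ (np.asarray(image, dtype=float) - mean)

def recognize(image, mean, basis, manifold_points, labels):
    """Nearest-neighbour match against the projected training views.

    manifold_points: (m, k) projections of the training views.
    labels:          list of (object_id, pose) tuples, one per training view.
    """
    q = project(image, mean, basis)
    d = np.linalg.norm(np.asarray(manifold_points) - q, axis=1)
    return labels[int(np.argmin(d))]                           # best (object, pose) hypothesis
```

A typical preprocessing step is to normalize each training image in scale and brightness, so that the projection depends mainly on object identity and pose rather than on imaging conditions.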
--- paper_title: Robot localization using uncalibrated camera invariants paper_content: We describe a set of image measurements which are invariant to the camera internals but are location variant. We show that using these measurements it is possible to calculate the self-localization of a robot using known landmarks and uncalibrated cameras. We also show that it is possible to compute, using uncalibrated cameras, the Euclidean structure of 3-D world points using multiple views from known positions. We are free to alter the internal parameters of the camera during these operations. Our initial experiments demonstrate the applicability of the method. 
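Several of the planning schemes cited above score a candidate sensing operation by how much it is expected to reduce the ambiguity of the current hypothesis set, with ambiguity measured as entropy. The sketch below shows that core computation in generic form; it is not the code of any cited system, and the likelihood tables passed to select_next_view are assumed to be supplied by an object and sensor model, with each column summing to one over the possible observations.

```python
import numpy as np

def entropy(beliefs):
    """Shannon entropy (bits) of a belief distribution over hypotheses."""
    p = np.asarray(beliefs, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def expected_entropy_after(beliefs, likelihoods):
    """Expected posterior entropy if a candidate sensing action is taken.

    beliefs:     prior P(h) over object/pose hypotheses, shape (H,).
    likelihoods: P(o | h) for the discrete observations the action could yield,
                 shape (O, H); each column sums to 1 over the observations o.
    """
    prior = np.asarray(beliefs, dtype=float)
    prior = prior / prior.sum()
    L = np.asarray(likelihoods, dtype=float)
    p_obs = L @ prior                               # P(o) for each possible outcome
    expected = 0.0
    for o, po in enumerate(p_obs):
        if po <= 0:
            continue
        posterior = L[o] * prior / po               # Bayes update for outcome o
        expected += po * entropy(posterior)
    return expected

def select_next_view(beliefs, actions):
    """Pick the sensing action (viewpoint) that minimizes expected ambiguity.

    actions: dict mapping a viewpoint label to its likelihood table P(o | h).
    """
    return min(actions, key=lambda v: expected_entropy_after(beliefs, actions[v]))
```

Choosing the action with the smallest expected posterior entropy is equivalent to maximizing the expected information gain about the object and pose hypotheses, since the prior entropy is the same for every candidate action.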
--- paper_title: Occlusions as a guide for planning the next view paper_content: A strategy for acquiring 3-D data of an unknown scene, using range images obtained by a light stripe range finder is addressed. The foci of attention are occluded regions, i.e., only the scene at the borders of the occlusions is modeled to compute the next move. Since the system has knowledge of the sensor geometry, it can resolve the appearance of occlusions by analyzing them. The problem of 3-D data acquisition is divided into two subproblems due to two types of occlusions. An occlusion arises either when the reflected laser light does not reach the camera or when the directed laser light does not reach the scene surface. After taking the range image of a scene, the regions of no data due to the first kind of occlusion are extracted. The missing data are acquired by rotating the sensor system in the scanning plane, which is defined by the first scan. After a complete image of the surface illuminated from the first scanning plane has been built, the regions of missing data due to the second kind of occlusions are located. Then, the directions of the next scanning planes for further 3-D data acquisition are computed. --- paper_title: A Best Next View selection algorithm incorporating a quality criterion paper_content: This paper presents a method for solving the Best Next View problem. This problem arises while gathering range data for the purpose of building 3D models of objects. The novelty of our solution is the introduction of a quality criterion in addition to the visibility criterion used by previous researchers. This quality criterion aims at obtaining views that improve the overall range data quality of the imaged surfaces. Results demonstrate that this method selects views which generate reasonable volumetric models for convex, concave and curved objects. --- paper_title: A Two-Stage Algorithm for Planning the Next View From Range Images paper_content: A new technique is presented for determining the positions where a range sensor should be located to acquire the surfaces of a complex scene. The algorithm consists of two stages. The first stage applies a voting scheme that considers occlusion edges. Most of the surfaces of the scene are recovered through views computed in that way. Then, the second stage fills up remaining holes through a scheme based on visibility analysis. By leaving the more expensive visibility computations at the end of the exploration process, efficiency is increased. --- paper_title: Recovering shape by purposive viewpoint adjustment paper_content: An approach for recovering surface shape from the occluding contour using an active (i.e., moving) observer is presented. 
It is based on a relationship between the geometries of a surface in a scene and its occluding contour: If the viewing direction of the observer is along a principal direction for a surface point whose projection is on the contour, surface shape (i.e., curvature) at the surface point can be recovered from the contour. An observer that purposefully changes viewpoint in order to achieve a well-defined geometric relationship with respect to a 3D shape prior to its recognition is used. It is shown that there is a simple and efficient viewing strategy that allows the observer to align the viewing direction with one of the two principal directions for a point on the surface. Experimental results demonstrate that the method can be easily implemented and can provide reliable shape information. --- paper_title: An Autonomous Active Vision System for Complete and Accurate 3D Scene Reconstruction paper_content: We propose in this paper an active vision approach for performing the 3D reconstruction of static scenes. The perception-action cycles are handled at various levels: from the definition of perception strategies for scene exploration down to the automatic generation of camera motions using visual servoing. To perform the reconstruction, we use a structure from controlled motion method which allows an optimal estimation of geometrical primitive parameters. As this method is based on particular camera motions, perceptual strategies able to appropriately perform a succession of such individual primitive reconstructions are proposed in order to recover the complete spatial structure of the scene. Two algorithms are proposed to ensure the exploration of the scene. The former is an incremental reconstruction algorithm based on the use of a prediction/verification scheme managed using decision theory and Bayes nets. It allows the visual system to get a high level description of the observed part of the scene. The latter, based on the computation of new viewpoints, ensures the complete reconstruction of the scene. Experiments carried out on a robotic cell have demonstrated the validity of our approach. --- paper_title: Using intermediate objects to improve the efficiency of visual search paper_content: When using a mobile camera to search for a target object, it is often important to maximize the efficiency of the search. We consider a method for increasing efficiency by searching only those subregions that are especially likely to contain the object. These subregions are identified via spatial relationships. Searches that use this method repeatedly find an “intermediate” object that commonly participates in a spatial relationship with the target object, and then look for the target in the restricted region specified by this relationship. Intuitively, such searches, called indirect searches, seem likely to provide efficiency increases when the intermediate objects can be recognized at low resolutions and hence can be found with little extra overhead, and when they significantly restrict the area that must be searched for the target. But what is the magnitude of this increase, and upon what other factors does efficiency depend? Although the idea of exploiting spatial relationships has been used in vision systems before, few have quantitatively examined these questions. 
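The indirect-search strategy described in the preceding abstract (detect a cheaply recognizable intermediate object first, then run the expensive target detector only inside the region implied by a known spatial relation) can be sketched as follows. The detector callables and the relation_window helper are hypothetical placeholders standing in for whatever recognizers and spatial-relation models a particular system provides; this is an illustration of the idea, not code from the cited work.

```python
def indirect_search(image, find_intermediate, find_target, relation_window):
    """Search for a target only inside sub-regions predicted by an intermediate object.

    image:             2-D array (e.g. a NumPy image) indexed as image[row, col].
    find_intermediate: callable(image) -> list of (x, y, w, h) boxes found at coarse
                       resolution (the cheap detector).
    find_target:       callable(patch) -> True if the target is present in the patch
                       (the expensive detector).
    relation_window:   callable(box) -> (x, y, w, h) sub-region where the spatial
                       relation (e.g. "on top of") predicts the target to lie.
    """
    for box in find_intermediate(image):
        x, y, w, h = relation_window(box)
        patch = image[y:y + h, x:x + w]           # restricted search region
        if patch.size and find_target(patch):
            return (x, y, w, h)                   # region in which the target was found
    return None                                   # caller may fall back to direct search
```

The saving comes from running the expensive detector over a few small windows instead of the whole image; when no intermediate object is found, the caller can still fall back to an exhaustive direct search.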
--- paper_title: Control of selective perception using bayes nets and decision theory paper_content: A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and models for the expected performance and cost of visual operators. The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules and a pointable multiresolution sensor. TEA-1 demonstrates that Bayes nets and decision theoretic techniques provide a general, re-usable framework for constructing computer vision systems that are selective perception systems, and that Bayes nets provide a general framework for representing visual tasks. Control, or decision making, is the most important issue in a selective vision system. TEA-1's decisions about what to do next are based on general hand-crafted "goodness functions" constructed around core decision theoretic elements. Several goodness functions for different decisions are presented and evaluated. The TEA-1 system solves a version of the T-world problem, an abstraction of a large set of domains and tasks. Some key factors that affect the success of selective perception are analyzed by examining how each factor affects the overall performance of TEA-1 when solving ensembles of randomly produced, simulated T-world domains and tasks. TEA-1's decision making algorithms are also evaluated in this manner. Experiments in the lab for one specific T-world domain, table settings, are also presented.
--- paper_title: Dynamic Relevance: Vision-Based Focus of Attention using Artificial Neural Networks paper_content: This paper presents a method for ascertaining the relevance of inputs in vision-based tasks by exploiting temporal coherence and predictability. In contrast to the tasks explored in many previous relevance experiments, the class of tasks examined in this study is one in which relevance is a time-varying function of the previous and current inputs. The method proposed in this paper dynamically allocates relevance to inputs by using expectations of their future values. As a model of the task is learned, the model is simultaneously extended to create task-specific predictions of the future values of inputs. Inputs that are not relevant, and therefore not accounted for in the model, will not be predicted accurately. These inputs can be de-emphasized, and, in turn, a new, improved, model of the task created. The techniques presented in this paper have been successfully applied to the vision-based autonomous control of a land vehicle, vision-based hand tracking in cluttered scenes, and the detection of faults in the plasma-etch step of semiconductor wafers.
--- paper_title: Map Building for a Mobile Robot from Sensory Data paper_content: A method for building a three-dimensional (3-D) world model for a mobile robot from sensory data derived from outdoor scenes is presented.
The 3-D world model consists of four kinds of maps: a physical sensor map, a virtual sensor map, a local map, and a global map. First, a range image (physical sensor map) is transformed to a height map (virtual sensor map) relative to the mobile robot. Next, the height map is segmented into unexplored, occluded, traversable and obstacle regions from the height information. Moreover, obstacle regions are classified into artificial objects or natural objects according to their geometrical properties such as slope and curvature. A drawback of the height map (recovery of planes vertical to the ground plane) is overcome by using multiple-height maps that include the maximum and minimum height for each point on the ground plane. Multiple-height maps are useful not only for finding vertical planes but also for mapping obstacle regions into video images for segmentation. Finally, the height maps are integrated into a local map by matching geometrical parameters and by updating region labels. The results obtained using landscape models and the autonomous land vehicle simulator of the University of Maryland are shown, and constructing a global map with local maps is discussed. > --- paper_title: A comparison of position estimation techniques using occupancy grids paper_content: Abstract A mobile robot requires a perception of its local environment for both sensor-based locomotion and for position estimation. Occupancy grids, based on ultrasonic range data, provide a robust description of the local environment for locomotion. Unfortunately, current techniques for position estimation based on occupancy grids are both unreliable and computationally expensive. This paper reports on experiments with four techniques for position estimation using occupancy grids. A world modelling technique based on combining global and local occupancy grids is described. Techniques are described for extracting line segments from an occupancy grid based on a Hough transform. The use of an extended Kalman filter for position estimation is then adapted to this framework. Four matching techniques are presented for obtaining the innovation vector required by the Kalman filter equations. Experimental results show that matching of segments extracted from both the local and global occupancy grids gives results which are superior to a direct matching of grids, or to a mixed matching of segments to grids. --- paper_title: Three-dimensional computer vision: a geometric viewpoint paper_content: Projective geometry modelling and calibrating cameras edge detection representing geometric primitives and their uncertainty stereo vision determining discrete motion from points and lines tracking tokens over time motion fields of curves interpolating and approximating three-dimensional data recognizing and locating objects and places answers to problems. Appendices: constrained optimization some results from algebraic geometry differential geometry. --- paper_title: A Bayesian Approach to Landmark Discovery and Active Perception in Mobile Robot Navigation paper_content: To operate successfully in indoor environments, mobile robots must be able to localize themselves. Over the past few years, localization based on landmarks has become increasingly popular. Virtually all existing approaches to landmark-based navigation, however, rely on the human designer to decide what constitutes appropriate landmarks. This paper presents an approach that enables mobile robots to select their landmarks by themselves. 
Landmarks are chosen based on their utility for localization. This is done by training neural network landmark detectors so as to minimize the a posteriori localization error that the robot is expected to make after querying its sensors. An empirical study illustrates that self-selected landmarks are superior to landmarks carefully selected by a human. The Bayesian approach is also applied to control the direction of the robot’s camera, and empirical data demonstrates the appropriateness of this approach for active perception. The author is also affiliated with the Computer Science Department III of the University of Bonn, Germany, where part of this research was carried out. This research is sponsored in part by the National Science Foundation under award IRI-9313367, and by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Defense Advanced Research Projects Agency (DARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of NSF, Wright Laboratory or the United States Government. --- paper_title: A Probabilistic Approach to Concurrent Mapping and Localization for Mobile Robots paper_content: This paper addresses the problem of building large-scale geometric maps of indoor environments with mobile robots. It poses the map building problem as a constrained, probabilistic maximum-likelihood estimation problem. It then devises a practical algorithm for generating the most likely map from data, along with the most likely path taken by the robot. Experimental results in cyclic environments of size up to 80 by 25 meter illustrate the appropriateness of the approach. --- paper_title: Indoor scene terrain modeling using multiple range images for autonomous mobile robots paper_content: The authors consider the perception subsystem of an autonomous mobile robot which must be able to navigate in 3D terrain. They describe their approach to building a rough geometric model of a 3D terrain accounting for the locomotion capabilities of the vehicle, using a laser range finder. This model may be used as direct input for the robot's path planner. The terrain model relies on two grid-based representations: the local elevation map and the local navigation map. Both are incrementally built at arbitrary resolution using new interpolation and localization methods and other 3D vision techniques. The authors validate the proposed approach by presenting some comprehensive results using real range images of an indoor structured environment. > --- paper_title: Generation of architectural CAD models using a mobile robot paper_content: This paper describes new algorithms for automatically constructing a computer aided design (CAD) model of a structured scene as imaged by a single camera on a mobile robot. The scene to be modeled is assumed to be composed mostly of linear edges with particular orientations in 3-D. This is the case for most indoor scenes as well as some outdoor urban scenes. The orientation data is used by a motion stereo algorithm to estimate the 3-D structure using a sequence of images. The algorithm assumes that the linear edges are the boundaries of opaque planar patches, such as the floor, the ceiling and the walls. The resulting 3-D description is a CAD model of the scene. 
Applications of this technique include CAD modeling for architecture and computer graphics, and robot navigation. This paper completes earlier publications and therefore concentrates on the latter parts of processing, including automatically tracking segments, and using the resulting models in different applications. > --- paper_title: Learning metric-topological maps for indoor mobile robot navigation paper_content: Abstract Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. --- paper_title: Extraction and interpretation of semantically significant line segments for a mobile robot paper_content: The authors describe algorithms for detecting and interpreting linear features of a real scene as images by a single camera on a mobile robot. The low-level processing stages were specifically designed to increase the usefulness and the quality of the extracted features for a semantic interpretation. The detection and interpretation processes provided a 3-D orientation hypothesis for each 2-D segment. This, in turn, was used to estimate the robot's orientation and relative position in the environment and to delimit the free space visible in the image. The orientation data was used by a motion stereo algorithm to fully estimate the 3-D structure when a sequence of images becomes available. From detection to 3-D estimation, an emphasis was placed on real-world applications and very fast processing with conventional hardware. > --- paper_title: Building visual maps by combining noisy stereo measurements paper_content: This paper deals with the problem of coping with noise disturbing data in Stereo, 3-D modelling and navigation. We introduce the idea of a Realistic Uncertain Description of the Environment (RUDE) which is local, i.e attached to a specific reference frame, and incorporates both, information about the geometry and about the parameters measuring this geometry. We also relate this uncertainty to the pixel uncertainty and show how the RUDE corresponding to different frames can be used to relate these frames by a rigid displacement which we describe both by a rotation and translation and a measure of their uncertainty. Finally, we use the relations between frames to update the associated RUDE and decrease their uncertainty. 
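Several of the mapping references above rely on grid-based representations updated from range data. The sketch below illustrates a generic log-odds occupancy-grid update; the sensor-model constants and the straight-line ray tracing are assumptions made for this example, not details taken from any cited system.

```python
# Minimal occupancy-grid sketch (log-odds form): cells along a range beam are
# marked more likely free, the cell at the range reading more likely occupied.
import numpy as np

L_OCC, L_FREE = 0.85, -0.4           # assumed log-odds increments for hit / traversed cells

def cells_on_ray(x0, y0, x1, y1):
    """Grid cells crossed by the ray from (x0, y0) to (x1, y1) (coarse sampling)."""
    n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    return list(dict.fromkeys(zip(xs, ys)))          # unique cells, in beam order

def integrate_scan(log_odds, robot_xy, hit_xy):
    """Bayesian (log-odds) update of the grid for one range measurement."""
    cells = cells_on_ray(*robot_xy, *hit_xy)
    for (cx, cy) in cells[:-1]:
        log_odds[cy, cx] += L_FREE                   # free space along the beam
    hx, hy = cells[-1]
    log_odds[hy, hx] += L_OCC                        # occupied cell at the range reading
    return log_odds

def to_probability(log_odds):
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

if __name__ == "__main__":
    grid = np.zeros((20, 20))                        # log-odds 0 == probability 0.5 (unknown)
    for hit in [(15, 5), (15, 6), (14, 12)]:         # three simulated range returns
        integrate_scan(grid, (2, 2), hit)
    print(np.round(to_probability(grid)[5, :], 2))
```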
--- paper_title: Position estimation for a mobile robot using vision and odometry paper_content: The authors describe a method for locating a mobile robot moving in a known environment. This technique combines position estimation from odometry with observations of the environment from a mobile camera. Fixed objects in the world provide landmarks which are listed in a database. The system calculates the angle to each landmark and then orients the camera. An extended Kalman filter is used to correct the error between the observed and estimated angle to each landmark. Results from experiments in a real environment are presented.
--- paper_title: Spatial Learning for Navigation in Dynamic Environments paper_content: This article describes techniques that have been developed for spatial learning in dynamic environments and a mobile robot system, ELDEN, that integrates these techniques for exploration and navigation. In this research, we introduce the concept of adaptive place networks, incrementally-constructed spatial representations that incorporate variable-confidence links to model uncertainty about topological adjacency. These networks guide the robot's navigation while constantly adapting to any topological changes that are encountered. ELDEN integrates these networks with a reactive controller that is robust to transient changes in the environment and a relocalization system that uses evidence grids to recalibrate dead reckoning.
--- paper_title: Localization and Homing using Combinations of Model Views paper_content: Navigation involves recognizing the environment, identifying the current position within the environment, and reaching particular positions. We present a method for localization (the act of recognizing the environment), positioning (the act of computing the exact coordinates of a robot in the environment), and homing (the act of returning to a previously visited position) from visual input. The method is based on representing the scene as a set of 2D views and predicting the appearances of novel views by linear combinations of the model views. The method accurately approximates the appearance of scenes under weak-perspective projection. Analysis of this projection as well as experimental results demonstrate that in many cases this approximation is sufficient to accurately describe the scene. When weak-perspective approximation is invalid, either a larger number of models can be acquired or an iterative solution to account for the perspective distortions can be employed. The method has several advantages over other approaches. It uses relatively rich representations; the representations are 2D rather than 3D; and localization can be done from only a single 2D view without calibration. The same principal method is applied for both the localization and positioning problems, and a simple “qualitative” algorithm for homing is derived from this method.
--- paper_title: Model-directed mobile robot navigation paper_content: The authors report on the system and methods used by UMass Mobile Robot Project. Model-based processing of the visual sensory data is the primary mechanism used for controlling movement of an autonomous land vehicle through the environment, measuring progress towards a given goal, and avoiding obstacles. Goal-oriented navigation takes place through a partially modeled, unchanging environment that contains no unmodeled obstacles; this simplified environment provides a foundation for research in more complicated domains. The navigation system integrates perception, planning, and execution of actions. Of particular importance is that the planning processes are reactive and reason about landmarks that should be perceived at various stages of task execution. Correspondence between image features and expected landmark locations are used at several abstraction levels to ensure proper plan execution. The system and some experiments that demonstrate the performance of its components is described.
---
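As an illustration of the Kalman-filter correction described in the position-estimation reference above, the following hedged sketch updates a planar pose from a single bearing measurement to a known landmark; the noise levels, landmark coordinates and all names are invented for the example.

```python
# Hedged sketch of an EKF bearing-only correction against a known landmark.
import numpy as np

def wrap(a):                                        # keep angles in (-pi, pi]
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_bearing_update(x, P, z, landmark, R=0.01):
    """x = [px, py, theta]; z = measured bearing (rad) to a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = wrap(np.arctan2(dy, dx) - x[2])         # predicted bearing
    H = np.array([[dy / q, -dx / q, -1.0]])         # Jacobian of the measurement model
    S = H @ P @ H.T + R                             # innovation covariance (1x1)
    K = P @ H.T / S                                 # Kalman gain
    innov = wrap(z - z_hat)
    x = x + K.flatten() * innov
    P = (np.eye(3) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    x = np.array([1.0, 1.0, 0.1])                   # odometry estimate (heading slightly wrong)
    P = np.diag([0.2, 0.2, 0.05])
    landmark = (5.0, 4.0)
    true_pose = np.array([1.0, 1.0, 0.0])
    z = wrap(np.arctan2(landmark[1] - true_pose[1],
                        landmark[0] - true_pose[0]) - true_pose[2])
    x, P = ekf_bearing_update(x, P, z, landmark)
    print("corrected pose:", np.round(x, 3))
```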
Title: Active Recognition through Next View Planning: A Survey
Section 1: Introduction
Description 1: Write about the basics of 3-D object recognition, including the challenges in recognizing 3-D objects from 2-D intensity images.
Section 2: The Need for Multiple Views
Description 2: Discuss the necessity of using multiple views for 3-D object recognition, and the limitations of single-view recognition systems.
Section 3: Active Vision
Description 3: Define active sensors and discuss the concepts of active vision, including the ability to control sensor parameters.
Section 4: Object Feature Detection
Description 4: Explain the strategies for object feature detection and the planning for complete sensor coverage of 3-D objects.
Section 5: Object Recognition and Localization, and Scene Reconstruction
Description 5: Describe the fundamental problems in multiple view-based recognition systems and the classification of systems for object recognition and scene analysis.
Section 6: Active Object Recognition Systems
Description 6: Detail the use of multiple views in active object recognition systems and the various representation schemes used for these systems.
Section 7: Methods for Representing Uncertainty
Description 7: Discuss the common methods for representing uncertainty in object recognition tasks, including probability theory, Dempster-Shafer theory, and fuzzy logic.
Section 8: Recognition Strategies
Description 8: Present various recognition strategies for active 3-D object recognition schemes, focusing on next view planning strategies.
Section 9: Active Scene Analysis Systems
Description 9: Review different classes of active scene analysis systems and their information representation and control schemes.
Section 10: Conclusions
Description 10: Summarize the survey and analysis of different active 3-D object recognition and scene analysis systems, highlighting key observations and conclusions.
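Section 7 of the outline above names Dempster-Shafer theory as one way of representing recognition uncertainty across views. The small sketch below shows Dempster's rule of combination for two bodies of evidence; the frame of discernment and the mass values are made up for the example.

```python
# Illustrative sketch of Dempster's rule of combination for fusing evidence
# from two views.  Mass functions are given as {frozenset: mass}.
from itertools import product

def combine(m1, m2):
    raw, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

if __name__ == "__main__":
    A, B = frozenset({"cup"}), frozenset({"bowl"})
    theta = A | B                                    # frame of discernment {cup, bowl}
    view1 = {A: 0.6, theta: 0.4}                     # evidence gathered from the first view
    view2 = {A: 0.3, B: 0.5, theta: 0.2}             # evidence gathered from the second view
    for s, m in combine(view1, view2).items():
        print(set(s), round(m, 3))
```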
A survey of partial differential equations in geometric design
15
--- paper_title: Curves and surfaces for computer aided geometric design: A practical guide paper_content: From the Publisher: ::: This book will be of interest to computer graphics enthusiasts, software developers for CAD/CAM systems, geometric modeling researchers, graphics programmers, academicians, and many others throughout the graphics community. Assuming only a background in calculus and basic linear algebra, the author's informal and reader-friendly style makes the material accessible to a wide audience. Finally, the included disk contains data sets and all of the C programs used in the book, making it easier for the user to gain first-hand experience with the concepts as they are explained. This unified treatment of curve and surface design concepts focuses on Bezier and B-spline methods for curves, rational Bezier and B-spline curves, geometric continuity, spline interpolation, and Coons methods. The fourth edition has been thoroughly updated and revised to include a new chapter on recursive subdivision, as well as new sections on triangulations and scattered data interpolants. Finally, the disk in the back of the book has been updated to include all of the programs, as well as the data sets from the text. --- paper_title: On harmonic and biharmonic Bézier surfaces ✩ paper_content: We present a new method of surface generation from prescribed boundaries based on the elliptic partial differential operators. In particular, we focus on the study of the so-called harmonic and biharmonic Bezier surfaces. The main result we report here is that any biharmonic Bezier surface is fully determined by the boundary control points. We compare the new method, by way of practical examples, with some related methods such as surfaces generation using discretisation masks and functional minimisations. --- paper_title: Curves and surfaces for computer aided geometric design: A practical guide paper_content: From the Publisher: ::: This book will be of interest to computer graphics enthusiasts, software developers for CAD/CAM systems, geometric modeling researchers, graphics programmers, academicians, and many others throughout the graphics community. Assuming only a background in calculus and basic linear algebra, the author's informal and reader-friendly style makes the material accessible to a wide audience. Finally, the included disk contains data sets and all of the C programs used in the book, making it easier for the user to gain first-hand experience with the concepts as they are explained. This unified treatment of curve and surface design concepts focuses on Bezier and B-spline methods for curves, rational Bezier and B-spline curves, geometric continuity, spline interpolation, and Coons methods. The fourth edition has been thoroughly updated and revised to include a new chapter on recursive subdivision, as well as new sections on triangulations and scattered data interpolants. Finally, the disk in the back of the book has been updated to include all of the programs, as well as the data sets from the text. --- paper_title: Subdivision surfaces in character animation paper_content: The creation of believable and endearing characters in computer graphics presents a number of technical challenges, including the modeling, animation and rendering of complex shapes such as heads, hands, and clothing. Traditionally, these shapes have been modeled with NURBS surfaces despite the severe topological restrictions that NURBS impose. 
In order to move beyond these restrictions, we have recently introduced subdivision surfaces into our production environment. Subdivision surfaces are not new, but their use in high-end CG production has been limited. Here we describe a series of developments that were required in order for subdivision surfaces to meet the demands of high-end production. First, we devised a practical technique for constructing provably smooth variable-radius fillets and blends. Second, we developed methods for using subdivision surfaces in clothing simulation including a new algorithm for efficient collision detection. Third, we developed a method for constructing smooth scalar fields on subdivision surfaces, thereby enabling the use of a wider class of programmable shaders. These developments, which were used extensively in our recently completed short film Geri’s game, have become a highly valued feature of our production environment. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.3 [Computer Graphics]: Picture/Image Generation. --- paper_title: Curves and surfaces for computer aided geometric design: A practical guide paper_content: From the Publisher: ::: This book will be of interest to computer graphics enthusiasts, software developers for CAD/CAM systems, geometric modeling researchers, graphics programmers, academicians, and many others throughout the graphics community. Assuming only a background in calculus and basic linear algebra, the author's informal and reader-friendly style makes the material accessible to a wide audience. Finally, the included disk contains data sets and all of the C programs used in the book, making it easier for the user to gain first-hand experience with the concepts as they are explained. This unified treatment of curve and surface design concepts focuses on Bezier and B-spline methods for curves, rational Bezier and B-spline curves, geometric continuity, spline interpolation, and Coons methods. The fourth edition has been thoroughly updated and revised to include a new chapter on recursive subdivision, as well as new sections on triangulations and scattered data interpolants. Finally, the disk in the back of the book has been updated to include all of the programs, as well as the data sets from the text. --- paper_title: Discrete surface modelling using partial differential equations paper_content: We use various nonlinear partial differential equations to efficiently solve several surface modelling problems, including surface blending, N-sided hole filling and free-form surface fitting. The nonlinear equations used include two second order flows, two fourth order flows and two sixth order flows. These nonlinear equations are discretized based on discrete differential geometry operators. The proposed approach is simple, efficient and gives very desirable results, for a range of surface models, possibly having sharp creases and corners. --- paper_title: Discrete surface modelling using partial differential equations paper_content: We use various nonlinear partial differential equations to efficiently solve several surface modelling problems, including surface blending, N-sided hole filling and free-form surface fitting. The nonlinear equations used include two second order flows, two fourth order flows and two sixth order flows. These nonlinear equations are discretized based on discrete differential geometry operators. 
The proposed approach is simple, efficient and gives very desirable results, for a range of surface models, possibly having sharp creases and corners. --- paper_title: A shape design system using volumetric implicit PDEs paper_content: Abstract Solid modeling based on partial differential equations (PDEs) can potentially unify both geometric constraints and functional requirements within a single design framework to model real-world objects via its explicit, direct integration with parametric geometry. In contrast, implicit functions indirectly define geometric objects as the level-set of underlying scalar fields. To maximize the modeling potential of PDE-based methodology, in this paper we tightly couple PDEs with volumetric implicit functions in order to achieve interactive, intuitive shape representation, manipulation, and deformation. In particular, the unified approach can reconstruct the PDE geometry of arbitrary topology from scattered data points or a set of sketch curves. We make use of elliptic PDEs for boundary value problems to define the volumetric implicit function. The proposed implicit PDE model has the capability to reconstruct a complete solid model from partial information and facilitates the direct manipulation of underlying volumetric datasets via sketch curves and iso-surface sculpting, deformation of arbitrary interior regions, as well as a set of CSG operations inside the working space. The prototype system that we have developed allows designers to interactively sketch the curve outlines of the object, define intensity values and gradient directions, and specify interpolatory points in the 3D working space. The governing implicit PDE treats these constraints as generalized boundary conditions to determine the unknown scalar intensity values over the entire working space. The implicit shape is reconstructed with specified intensity value accordingly and can be deformed using a set of sculpting toolkits. We use the finite-difference discretization and variational interpolating approach with the localized iterative solver for the numerical integration of our PDEs in order to accommodate the diversity of generalized boundary and additional constraints. --- paper_title: Discrete surface modelling using partial differential equations paper_content: We use various nonlinear partial differential equations to efficiently solve several surface modelling problems, including surface blending, N-sided hole filling and free-form surface fitting. The nonlinear equations used include two second order flows, two fourth order flows and two sixth order flows. These nonlinear equations are discretized based on discrete differential geometry operators. The proposed approach is simple, efficient and gives very desirable results, for a range of surface models, possibly having sharp creases and corners. --- paper_title: Image inpainting paper_content: Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills-in these regions with information surrounding them. 
The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique here introduced does not require the user to specify where the novel information comes from. This is automatically done (and in a fast way), thereby allowing to simultaneously fill-in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects. --- paper_title: Variational problems and partial differential equations on implicit surfaces paper_content: A novel framework for solving variational problems and partial differential equations for scalar and vector-valued data defined on surfaces is introduced in this paper. The key idea is to implicitly represent the surface as the level set of a higher dimensional function and to solve the surface equations in a fixed Cartesian coordinate system using this new embedding function. The equations are then both intrinsic to the surface and defined in the embedding space. This approach thereby eliminates the need for performing complicated and inaccurate computations on triangulated surfaces, as is commonly done in the literature. We describe the framework and present examples in computer graphics and image processing applications, including texture synthesis, flow field visualization, and image and vector field intrinsic regularization for data defined on 3D surfaces. --- paper_title: Interactive shape design using volumetric implicit PDEs paper_content: Solid modeling based on Partial Differential Equations (PDEs) can potentially unify both geometric constraints and functional requirements within a single design framework to model real-world objects via its explicit, direct integration with parametric geometry. In contrast, implicit functions indirectly define geometric objects as the level-set of underlying scalar fields. To maximize the modeling potential of PDE-based methodology, in this paper we tightly couple PDEs with volumetric implicit functions in order to achieve interactive, intuitive shape representation, manipulation, and deformation. In particular, the unified approach can reconstruct the PDE geometry of arbitrary topology from scattered data points or a set of sketch curves. We make use of a fourth-order elliptic PDE to define the volumetric implicit function. The proposed implicit PDE model has the capability to reconstruct a complete solid model from partial information and facilitates the direct manipulation of underlying volumetric datasets via sketch curves, iso-surface sculpting, deformation of arbitrary interior regions, as well as a set of CSG operations inside the working space.The prototype system that we have developed allows designers to interactively sketch the curve outlines of the object, define intensity values and gradient directions, and specify interpolatory points in the 3D working space. The governing implicit PDE treats these constraints as generalized boundary conditions to determine the unknown scalar intensity values over the entire working space. The implicit shape is reconstructed with specified intensity value accordingly and can be deformed using a set of sculpting toolkits. 
We use the finite-difference discretization and variational interpolating approach with the localized iterative solver for the numerical integration of our PDEs in order to accommodate the diversity of generalized boundary constraints. --- paper_title: Direct Manipulation and Interactive Sculpting of PDE Surfaces paper_content: This paper presents an integrated approach and a unified algorithm that combine the benefits of PDE surfaces and powerful physics-based modeling techniques within one single modeling framework, in order to realize the full potential of PDE surfaces. We have developed a novel system that allows direct manipulation and interactive sculpting of PDE surfaces at arbitrary location, hence supporting various interactive techniques beyond the conventional boundary control. Our prototype software affords users to interactively modify point, normal, curvature, and arbitrary region of PDE surfaces in a predictable way. We employ several simple, yet effective numerical techniques including the finite-difference discretization of the PDE surface, the multigrid-like subdivision on the PDE surface, the mass-spring approximation of the elastic PDE surface, etc. to achieve real-time performance. In addition, our dynamic PDE surfaces can also be approximated using standard bivariate B-spline finite elements, which can subsequently be sculpted and deformed directly in real-time subject to intrinsic PDE constraints. Our experiments demonstrate many attractive advantages of our dynamic PDE formulation such as intuitive control, real-time feedback, and usability to the general public. --- paper_title: Techniques for interactive design using the PDE method paper_content: Interactive design of practical surfaces using the partial differential equation (PDE) method is considered. The PDE method treats surface design as a boundary value problem (ensuring that surfaces can be defined using a small set of design parameters). Owing to the elliptic nature of the PDE operator, the boundary conditions imposed around the edges of the surface control the internal shape of the surface. Moreover, surfaces obtained in this manner tend to be smooth and fair. The PDE chosen has a closed form solution allowing the interactive manipulation of the surfaces in real time. Thus we present efficient techniques by which we show how surfaces of practical significance can be constructed interactively in real time. --- paper_title: Parametric Design and Optimisation of Thin-Walled Structures for Food Packaging paper_content: In this paper the parametric design and functional optimisation of thin-walled structures made from plastics for food packaging is considered. These objects are produced in such vast numbers each year that one important task in the design of these objects is to minimise the amount of plastic used, subject to functional constraints, to reduce the costs of production and to conserve raw materials. By means of performing an automated optimisation on the possible shapes of the food containers, where the geometry is parametrised succinctly, a strategy to create the optimal design of the containers subject to a given set of functional constraints is demonstrated. --- paper_title: Generating blend surfaces using partial differential equations paper_content: Abstract A method is proposed for representing surfaces as solutions to partial differential equations. 
It is shown, by examples from the field of blend generation, that the method can easily achieve the required degree of continuity between the blend and the surfaces to which it attaches. The surfaces also have the property of being geometrically ‘well-behaved’. --- paper_title: On harmonic and biharmonic Bézier surfaces ✩ paper_content: We present a new method of surface generation from prescribed boundaries based on the elliptic partial differential operators. In particular, we focus on the study of the so-called harmonic and biharmonic Bezier surfaces. The main result we report here is that any biharmonic Bezier surface is fully determined by the boundary control points. We compare the new method, by way of practical examples, with some related methods such as surfaces generation using discretisation masks and functional minimisations. --- paper_title: Spectral approximations to PDE surfaces paper_content: The PDE method generates surfaces from the solutions to elliptic partial differential equations (PDEs), where boundary conditions are used to control surface shape. This paper describes a method whereby PDE surfaces may be obtained in closed form, even for the case of general boundary conditions. Furthermore, the method is fast, making possible the interactive manipulations of PDE surfaces in real-time. --- paper_title: Using partial differential equations to generate free-form surfaces paper_content: Abstract A method of generating free-form surfaces using solutions to a suitably chosen partial differential equation is discussed. By varying the boundary conditions and a parameter in the partial differential equation, it is demonstrated that a wide variety of surface shapes are accessible to the method. --- paper_title: An analytic pseudo-spectral method to generate a regular 4-sided PDE surface patch paper_content: We describe a pseudo-spectral method for rapidly calculating an analytic approximation to a 4-sided PDE surface patch. The method generates an approximate solution consisting of three parts: an eigenfunction solution and a polynomial solution, both of which satisfy the generating partial differential equation exactly, and a third function, or 'remainder' term that ensures that the boundary conditions are satisfied exactly. Being analytic, the approximation allows an arbitrary degree of surface refinement thereby facilitating physical analysis. --- paper_title: Generating blend surfaces using partial differential equations paper_content: Abstract A method is proposed for representing surfaces as solutions to partial differential equations. It is shown, by examples from the field of blend generation, that the method can easily achieve the required degree of continuity between the blend and the surfaces to which it attaches. The surfaces also have the property of being geometrically ‘well-behaved’. --- paper_title: Variational geometry in computer-aided design paper_content: A system has been developed which utilizes variational geometry in the design and modification of mechanical parts. Three-dimensional constraints between characteristic points are used to define an object's geometry. Modification of geometry is accomplished by alteration of one or more constraints. A matrix method is used to determine the shape of the part by simultaneous solution of constraint equations. A method for increasing the speed and efficiency of the solution procedure is described. 
The method uses the relationships between the geometry and constraints to minimize the number of equations and variables to be solved. --- paper_title: Spectral approximations to PDE surfaces paper_content: The PDE method generates surfaces from the solutions to elliptic partial differential equations (PDEs), where boundary conditions are used to control surface shape. This paper describes a method whereby PDE surfaces may be obtained in closed form, even for the case of general boundary conditions. Furthermore, the method is fast, making possible the interactive manipulations of PDE surfaces in real-time. --- paper_title: Fast Surface Modelling Using a 6th Order PDE paper_content: Although the control-point based parametric approach is used most widely in free-form surface modelling, complementary techniques co-exist to meet various specialised requirements. The partial differential equation (PDE) based modelling approach is especially suitable for satisfying surface boundary constraints. They are also effective for the generation of families of free-form surfaces, which share a common base and differ in their secondary features. In this paper, we present a fast surface modelling method using a sixth order PDE. This PDE provides enough degrees of freedom not only to accommodate tangent, but also curvature boundary conditions and offers more shape control parameters to serve as user controls for the manipulation of surface shapes. In order to achieve real-time performance, we have constructed a surface function and developed a high-precision approximate solution to the 6th order PDE. Unlike some existing PDE-based techniques, this resolution method can satisfy the boundary conditions exactly, and is able to create free-form surfaces as fast and almost as accurately as the closed-form (analytical) solutions. Due to the fact that it has sufficient degrees of freedom to accommodate the continuity of 3-sided and 4-sided surface patches at their boundaries, this method is able to model complex surfaces consisting of multiple patches. Compared with existing PDE-based modelling methods, this method is both fast and can solve a larger class of surface modelling problems. ::: ::: Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Curves, surfaces, solid, and object representations; physically based modelling --- paper_title: Dynamic PDE-based surface design using geometric and physical constraints paper_content: PDE surfaces, which are defined as solutions of partial differential equations (PDEs), offer many modeling advantages in surface blending, free-form surface modeling, and specifying surface's aesthetic or functional requirements. Despite the earlier advances of PDE surfaces, previous PDE-based techniques exhibit certain difficulties such as lack of interactive sculpting capabilities and restrained topological structure of modeled objects. This paper presents an integrated approach that can incorporate PDE surfaces into the powerful physics-based modeling framework, to realize the full potential of PDE methodology. We have developed a prototype system that allows interactive design of flexible topological surfaces as PDE surfaces and displacements using generalized boundary conditions as well as a variety of geometric and physical constraints, hence supporting various interactive techniques beyond the conventional boundary control. 
The system offers a set of sculpting toolkits that allow users to interactively modify arbitrary points, curve spans, and/or regions of interest across the entire PDE surfaces and displacements in an intuitive and physically meaningful way. To achieve real-time performance, we employ several simple, yet efficient numerical techniques, including the finite-difference discretization, the multigrid-like subdivision, and the mass-spring approximation of elastic PDE surfaces and displacements. In addition, we present the standard bivariant B-spline finite element approximations of dynamic PDEs, which can subsequently be sculpted and deformed directly in real-time subject to the intrinsic PDE constraints. Our experiments demonstrate many attractive advantages of the physics-based PDE formulation such as intuitive control, real-time feedback, and usability to both professional and common users. --- paper_title: D-NURBS: A Physics-Based Framework for Geometric Design paper_content: Presents dynamic non-uniform rational B-splines (D-NURBS), a physics-based generalization of NURBS. NURBS have become a de facto standard in commercial modeling systems. Traditionally, however, NURBS have been viewed as purely geometric primitives, which require the designer to interactively adjust many degrees of freedom-control points and associated weights-to achieve the desired shapes. The conventional shape modification process can often be clumsy and laborious. D-NURBS are physics-based models that incorporate physical quantities into the NURBS geometric substrate. Their dynamic behavior, resulting from the numerical integration of a set of nonlinear differential equations, produces physically meaningful, and hence intuitive shape variation. Consequently, a modeler can interactively sculpt complex shapes to required specifications not only in the traditional indirect fashion, by adjusting control points and setting weights, but also through direct physical manipulation, by applying simulated forces and local and global shape constraints. We use Lagrangian mechanics to formulate the equations of motion for D-NURBS curves, tensor-product D-NURBS surfaces, swung D-NURBS surfaces and triangular D-NURBS surfaces. We apply finite element analysis to reduce these equations to efficient numerical algorithms computable at interactive rates on common graphics workstations. We implement a prototype modeling environment based on D-NURBS and demonstrate that D-NURBS can be effective tools in a wide range of computer-aided geometric design (CAGD) applications. --- paper_title: Integrating physics-based modeling with PDE solids for geometric design paper_content: PDE techniques, which use partial differential equations (PDEs) to model the shapes of various real-world objects, can unify their geometric attributes and functional constraints in geometric computing and graphics. This paper presents a unified dynamic approach that allows modelers to define the solid geometry of sculptured objects using the second-order or fourth-order elliptic PDEs subject to flexible boundary conditions. Founded upon the previous work on PDE solids by Bloor and Wilson (1989, 1990, 1993), as well as our recent research on the interactive sculpting of physics-based PDE surfaces, our new formulation and its associated dynamic principle permit designers to directly deform PDE solids whose behaviors are natural and intuitive subject to imposed constraints. 
Users can easily model and interact with solids of complicated geometry and/or arbitrary topology from locally-defined PDE primitives through trimming operations. We employ the finite-difference discretization and the multi-grid subdivision to solve the PDEs numerically. Our PDE-based modeling software offers users various sculpting toolkits for solid design, allowing them to interactively modify the physical and geometric properties of arbitrary points, curve spans, regions of interest (either in the isoparametric or nonisoparametric form) on boundary surfaces, as well as any interior parts of modeled objects. --- paper_title: Dynamic PDE surfaces with flexible and general geometric constraints paper_content: PDE surfaces, whose behavior is governed by partial differential equations (PDEs), have demonstrated many modeling advantages in surface blending, free-form surface modeling, and surface aesthetic or functional specifications. Although PDE surfaces can potentially unify geometric attributes and functional constraints for surface design, current PDE based techniques exhibit certain difficulties such as the restrained topological structure of modeled objects and the lack of interactive editing functionalities. We propose an integrated approach and develop a set of algorithms that augment conventional PDE surfaces with material properties and dynamic behavior. The authors incorporate PDE surfaces into the powerful physics based framework, aiming to realize the full potential of the PDE methodology. We have implemented a prototype software environment that can offer users a wide array of PDE surfaces with flexible topology (through trimming and joining operations) as well as generalized boundary constraints. Using our system, designers can dynamically manipulate PDE surfaces at arbitrary location with applied forces. Our sculpting toolkits allow users to interactively modify arbitrary point, curve span, and/or region of interest throughout the entire PDE surface in an intuitive and predictable way. To achieve real time sculpting, we employ several simple, yet efficient numerical techniques such as finite difference discretization, multi-grid subdivision, and FEM approximation. Our experiments demonstrate many advantages of physics based PDE formulation such as intuitive control, real time feedback, and usability to both professional and non-expert users. --- paper_title: Using partial differential equations to generate free-form surfaces paper_content: Abstract A method of generating free-form surfaces using solutions to a suitably chosen partial differential equation is discussed. By varying the boundary conditions and a parameter in the partial differential equation, it is demonstrated that a wide variety of surface shapes are accessible to the method. --- paper_title: Interactive shape design using volumetric implicit PDEs paper_content: Solid modeling based on Partial Differential Equations (PDEs) can potentially unify both geometric constraints and functional requirements within a single design framework to model real-world objects via its explicit, direct integration with parametric geometry. In contrast, implicit functions indirectly define geometric objects as the level-set of underlying scalar fields. To maximize the modeling potential of PDE-based methodology, in this paper we tightly couple PDEs with volumetric implicit functions in order to achieve interactive, intuitive shape representation, manipulation, and deformation. 
In particular, the unified approach can reconstruct the PDE geometry of arbitrary topology from scattered data points or a set of sketch curves. We make use of a fourth-order elliptic PDE to define the volumetric implicit function. The proposed implicit PDE model has the capability to reconstruct a complete solid model from partial information and facilitates the direct manipulation of underlying volumetric datasets via sketch curves, iso-surface sculpting, deformation of arbitrary interior regions, as well as a set of CSG operations inside the working space.The prototype system that we have developed allows designers to interactively sketch the curve outlines of the object, define intensity values and gradient directions, and specify interpolatory points in the 3D working space. The governing implicit PDE treats these constraints as generalized boundary conditions to determine the unknown scalar intensity values over the entire working space. The implicit shape is reconstructed with specified intensity value accordingly and can be deformed using a set of sculpting toolkits. We use the finite-difference discretization and variational interpolating approach with the localized iterative solver for the numerical integration of our PDEs in order to accommodate the diversity of generalized boundary constraints. --- paper_title: Interactive design using higher order PDEs paper_content: This paper extends the PDE method of surface generation. The governing partial differential equation is generalised to sixth order to increase its flexibility. The PDE is solved analytically, even in the case of general boundary conditions, making the method fast. The boundary conditions, which control the surface shape, are specified interactively, allowing intuitive manipulation of generic shapes. A compact user interface is presented which makes use of direct manipulation and other techniques for 3D interaction. --- paper_title: Functionality in blend design paper_content: A method is presented for obtaining surfaces that satisfy certain given design conditions. The surfaces are generated as solutions to a partial differential equation and designs, which optimize some function of the surface while satisfying some other constraints, are found. Examples of maximizing the heat lost from a surface and of minimizing shear stress are considered. --- paper_title: Discrete surface modelling using partial differential equations paper_content: We use various nonlinear partial differential equations to efficiently solve several surface modelling problems, including surface blending, N-sided hole filling and free-form surface fitting. The nonlinear equations used include two second order flows, two fourth order flows and two sixth order flows. These nonlinear equations are discretized based on discrete differential geometry operators. The proposed approach is simple, efficient and gives very desirable results, for a range of surface models, possibly having sharp creases and corners. --- paper_title: Generating blend surfaces using partial differential equations paper_content: Abstract A method is proposed for representing surfaces as solutions to partial differential equations. It is shown, by examples from the field of blend generation, that the method can easily achieve the required degree of continuity between the blend and the surfaces to which it attaches. The surfaces also have the property of being geometrically ‘well-behaved’. 
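Several of the surface-design abstracts above (blend generation, interactive design with higher-order PDEs, spectral and sixth-order solutions) refer to the same boundary-value formulation without writing it down. For orientation, a standard statement of that fourth-order elliptic PDE, in the style of the Bloor-Wilson method, is sketched below; the symbols X, P_0, P_1, d_0, d_1 and the smoothing parameter a are our own notation and are not quoted from any of the cited papers.

```latex
% Surface patch X(u,v) over the unit square, obtained as the solution of a
% fourth-order elliptic PDE (Bloor--Wilson type); notation is illustrative.
\[
  \left( \frac{\partial^{2}}{\partial u^{2}} + a^{2}\,\frac{\partial^{2}}{\partial v^{2}} \right)^{2} \mathbf{X}(u,v) = \mathbf{0},
  \qquad (u,v) \in [0,1] \times [0,1],
\]
% subject to positional and derivative (tangent) boundary conditions such as
\[
  \mathbf{X}(0,v) = \mathbf{P}_{0}(v), \quad
  \mathbf{X}(1,v) = \mathbf{P}_{1}(v), \quad
  \mathbf{X}_{u}(0,v) = \mathbf{d}_{0}(v), \quad
  \mathbf{X}_{u}(1,v) = \mathbf{d}_{1}(v).
\]
```

The boundary curves and derivative data are the designer's input, and a controls how strongly the u and v directions influence the interior of the patch; with a = 1 the operator reduces to the biharmonic operator, which is the link to the harmonic and biharmonic surface work cited above.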
--- paper_title: Image inpainting paper_content: Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills-in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique here introduced does not require the user to specify where the novel information comes from. This is automatically done (and in a fast way), thereby allowing to simultaneously fill-in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects. --- paper_title: Anisotropic diffusion of surfaces and functions on surfaces paper_content: We present a unified anisotropic geometric diffusion PDE model for smoothing (fairing) out noise both in triangulated two-manifold surface meshes in IR3 and functions defined on these surface meshes, while enhancing curve features on both by careful choice of an anisotropic diffusion tensor. We combine the C1 limit representation of Loop's subdivision for triangular surface meshes and vector functions on the surface mesh with the established diffusion model to arrive at a discretized version of the diffusion problem in the spatial direction. The time direction discretization then leads to a sparse linear system of equations. Iteratively solving the sparse linear system yields a sequence of faired (smoothed) meshes as well as faired functions. --- paper_title: Solving Variational Problems and Partial Differential Equations Mapping into General Target Manifolds paper_content: A framework for solving variational problems and partial differential equations that define maps onto a given generic manifold is introduced in this paper. We discuss the framework for arbitrary target manifolds, while the domain manifold problem was addressed in [J. Comput. Phys. 174(2) (2001) 759]. The key idea is to implicitly represent the target manifold as the level-set of a higher dimensional function, and then implement the equations in the Cartesian coordinate system where this embedding function is defined. In the case of variational problems, we restrict the search of the minimizing map to the class of maps whose target is the level-set of interest. In the case of partial differential equations, we re-write all the equation's geometric characteristics with respect to the embedding function. We then obtain a set of equations that, while defined on the whole Euclidean space, are intrinsic to the implicitly defined target manifold and map into it. This permits the use of classical numerical techniques in Cartesian grids, regardless of the geometry of the target manifold. The extension to open surfaces and submanifolds is addressed in this paper as well. 
In the latter case, the submanifold is defined as the intersection of two higher dimensional hypersurfaces, and all the computations are restricted to this intersection. Examples of the applications of the framework here described include harmonic maps in liquid crystals, where the target manifold is a hypersphere; probability maps, where the target manifold is a hyperplane; chroma enhancement; texture mapping; and general geometric mapping between high dimensional manifolds. --- paper_title: Variational problems and partial differential equations on implicit surfaces paper_content: A novel framework for solving variational problems and partial differential equations for scalar and vector-valued data defined on surfaces is introduced in this paper. The key idea is to implicitly represent the surface as the level set of a higher dimensional function and to solve the surface equations in a fixed Cartesian coordinate system using this new embedding function. The equations are then both intrinsic to the surface and defined in the embedding space. This approach thereby eliminates the need for performing complicated and inaccurate computations on triangulated surfaces, as is commonly done in the literature. We describe the framework and present examples in computer graphics and image processing applications, including texture synthesis, flow field visualization, and image and vector field intrinsic regularization for data defined on 3D surfaces. --- paper_title: Discrete surface modelling using partial differential equations paper_content: We use various nonlinear partial differential equations to efficiently solve several surface modelling problems, including surface blending, N-sided hole filling and free-form surface fitting. The nonlinear equations used include two second order flows, two fourth order flows and two sixth order flows. These nonlinear equations are discretized based on discrete differential geometry operators. The proposed approach is simple, efficient and gives very desirable results, for a range of surface models, possibly having sharp creases and corners. --- paper_title: Geometric Fairing of Irregular Meshes for Free-Form Surface Design paper_content: In this paper we present a new algorithm for smoothing arbitrary triangle meshes while satisfying G^1 boundary conditions. The algorithm is based on solving a nonlinear fourth order partial differential equation (PDE) that only depends on intrinsic surface properties instead of being derived from a particular surface parameterization. This continuous PDE has a (representation-independent) well-defined solution which we approximate by our triangle mesh. Hence, changing the mesh complexity (refinement) or the mesh connectivity (remeshing) leads to just another discretization of the same smooth surface and doesn't affect the resulting geometric shape beyond this. This is typically not true for filter-based mesh smoothing algorithms. To simplify the computation we factorize the fourth order PDE into a set of two nested second order problems thus avoiding the estimation of higher order derivatives. Further acceleration is achieved by applying multigrid techniques on a fine-to-coarse hierarchical mesh representation. --- paper_title: Efficient parametrization of generic aircraft geometry paper_content: A new method is presented for the parametrization of aircraft geometry; it is efficient in the sense that a relatively small number of "design" parameters are required to describe a complex surface geometry. 
The method views surface generation as a boundary-value problem and produces surfaces as the solutions to elliptic partial differential equations, hence its name, the PDE method. The use of the PDE method will be illustrated in this article by the parametrization of double delta geometries; it will be shown that it is possible to capture the basic features of the large-scale geometry of the aircraft in terms of a small set of design variables. --- paper_title: Parametric Design and Optimisation of Thin-Walled Structures for Food Packaging paper_content: In this paper the parametric design and functional optimisation of thin-walled structures made from plastics for food packaging is considered. These objects are produced in such vast numbers each year that one important task in the design of these objects is to minimise the amount of plastic used, subject to functional constraints, to reduce the costs of production and to conserve raw materials. By means of performing an automated optimisation on the possible shapes of the food containers, where the geometry is parametrised succinctly, a strategy to create the optimal design of the containers subject to a given set of functional constraints is demonstrated. --- paper_title: Efficient Shape Parametrisation for Automatic Design Optimisation using a Partial Differential Equation Formulation paper_content: Abstract This paper presents a methodology for efficient shape parametrisation for automatic design optimisation using a partial differential equation (PDE) formulation. It is shown how the choice of an elliptic PDE enables one to define and parametrise geometries corresponding to complex shapes. By using the PDE formulation it is shown how the shape definition and parametrisation can be based on a boundary value approach by which complex shapes can be created and parametrised based on the shape information at the boundaries or the character lines defining the shape. Furthermore, this approach to shape definition allows complex shapes to be parametrised intuitively using a very small set of design parameters. Thus, it is shown that the PDE based approach to shape parametrisation when combined with a standard method for numerical optimisation is capable of setting up automatic design optimisation problems allowing practical design optimisation to be more feasible. --- paper_title: Efficient parametrization of generic aircraft geometry paper_content: A new method is presented for the parametrization of aircraft geometry; it is efficient in the sense that a relatively small number of "design" parameters are required to describe a complex surface geometry. The method views surface generation as a boundary-value problem and produces surfaces as the solutions to elliptic partial differential equations, hence its name, the PDE method. The use of the PDE method will be illustrated in this article by the parametrization of double delta geometries; it will be shown that it is possible to capture the basic features of the large-scale geometry of the aircraft in terms of a small set of design variables. --- paper_title: Parametric Design and Optimisation of Thin-Walled Structures for Food Packaging paper_content: In this paper the parametric design and functional optimisation of thin-walled structures made from plastics for food packaging is considered. 
These objects are produced in such vast numbers each year that one important task in the design of these objects is to minimise the amount of plastic used, subject to functional constraints, to reduce the costs of production and to conserve raw materials. By means of performing an automated optimisation on the possible shapes of the food containers, where the geometry is parametrised succinctly, a strategy to create the optimal design of the containers subject to a given set of functional constraints is demonstrated. --- paper_title: Efficient Shape Parametrisation for Automatic Design Optimisation using a Partial Differential Equation Formulation paper_content: Abstract This paper presents a methodology for efficient shape parametrisation for automatic design optimisation using a partial differential equation (PDE) formulation. It is shown how the choice of an elliptic PDE enables one to define and parametrise geometries corresponding to complex shapes. By using the PDE formulation it is shown how the shape definition and parametrisation can be based on a boundary value approach by which complex shapes can be created and parametrised based on the shape information at the boundaries or the character lines defining the shape. Furthermore, this approach to shape definition allows complex shapes to be parametrised intuitively using a very small set of design parameters. Thus, it is shown that the PDE based approach to shape parametrisation when combined with a standard method for numerical optimisation is capable of setting up automatic design optimisation problems allowing practical design optimisation to be more feasible. --- paper_title: Subdivision surfaces in character animation paper_content: The creation of believable and endearing characters in computer graphics presents a number of technical challenges, including the modeling, animation and rendering of complex shapes such as heads, hands, and clothing. Traditionally, these shapes have been modeled with NURBS surfaces despite the severe topological restrictions that NURBS impose. In order to move beyond these restrictions, we have recently introduced subdivision surfaces into our production environment. Subdivision surfaces are not new, but their use in high-end CG production has been limited. Here we describe a series of developments that were required in order for subdivision surfaces to meet the demands of high-end production. First, we devised a practical technique for constructing provably smooth variable-radius fillets and blends. Second, we developed methods for using subdivision surfaces in clothing simulation including a new algorithm for efficient collision detection. Third, we developed a method for constructing smooth scalar fields on subdivision surfaces, thereby enabling the use of a wider class of programmable shaders. These developments, which were used extensively in our recently completed short film Geri’s game, have become a highly valued feature of our production environment. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.3 [Computer Graphics]: Picture/Image Generation. --- paper_title: Discrete surface modelling using partial differential equations paper_content: We use various nonlinear partial differential equations to efficiently solve several surface modelling problems, including surface blending, N-sided hole filling and free-form surface fitting. The nonlinear equations used include two second order flows, two fourth order flows and two sixth order flows. 
These nonlinear equations are discretized based on discrete differential geometry operators. The proposed approach is simple, efficient and gives very desirable results, for a range of surface models, possibly having sharp creases and corners. --- paper_title: On the Spine of a PDE Surface paper_content: The spine of an object is an entity that can characterise the object's topology and describes the object by a lower dimension. It has an intuitive appeal for supporting geometric modelling operations. The aim of this paper is to show how a spine for a PDE surface can be generated. For the purpose of the work presented here an analytic solution form for the chosen PDE is utilised. It is shown that the spine of the PDE surface is then computed as a by-product of this analytic solution. This paper also discusses how the of a PDE surface can be used to manipulate the shape. The solution technique adopted here caters for periodic surfaces with general boundary conditions allowing the possibility of the spine based shape manipulation for a wide variety of free-form PDE surface shapes. --- paper_title: Subdivision surfaces in character animation paper_content: The creation of believable and endearing characters in computer graphics presents a number of technical challenges, including the modeling, animation and rendering of complex shapes such as heads, hands, and clothing. Traditionally, these shapes have been modeled with NURBS surfaces despite the severe topological restrictions that NURBS impose. In order to move beyond these restrictions, we have recently introduced subdivision surfaces into our production environment. Subdivision surfaces are not new, but their use in high-end CG production has been limited. Here we describe a series of developments that were required in order for subdivision surfaces to meet the demands of high-end production. First, we devised a practical technique for constructing provably smooth variable-radius fillets and blends. Second, we developed methods for using subdivision surfaces in clothing simulation including a new algorithm for efficient collision detection. Third, we developed a method for constructing smooth scalar fields on subdivision surfaces, thereby enabling the use of a wider class of programmable shaders. These developments, which were used extensively in our recently completed short film Geri’s game, have become a highly valued feature of our production environment. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.3 [Computer Graphics]: Picture/Image Generation. --- paper_title: Animation and rendering of complex water surfaces paper_content: We present a new method for the animation and rendering of photo-realistic water effects. Our method is designed to produce visually plausible three dimensional effects, for example the pouring of water into a glass (see figure 1) and the breaking of an ocean wave, in a manner which can be used in a computer animation environment. In order to better obtain photorealism in the behavior of the simulated water surface, we introduce a new "thickened" front tracking technique to accurately represent the water surface and a new velocity extrapolation method to move the surface in a smooth, water-like manner. The velocity extrapolation method allows us to provide a degree of control to the surface motion, e.g. to generate a windblown look or to force the water to settle quickly. 
To ensure that the photorealism of the simulation carries over to the final images, we have integrated our method with an advanced physically based rendering system. --- paper_title: Visual simulation of smoke paper_content: In this paper, we propose a new approach to numerical smoke simulation for computer graphics applications. The method proposed here exploits physics unique to smoke in order to design a numerical method that is both fast and efficient on the relatively coarse grids traditionally used in computer graphics applications (as compared to the much finer grids used in the computational fluid dynamics literature). We use the inviscid Euler equations in our model, since they are usually more appropriate for gas modeling and less computationally intensive than the viscous Navier-Stokes equations used by others. In addition, we introduce a physically consistent vorticity confinement term to model the small scale rolling features characteristic of smoke that are absent on most coarse grid simulations. Our model also correctly handles the inter-action of smoke with moving objects. --- paper_title: Physically based modeling and animation of fire paper_content: We present a physically based method for modeling and animating fire. Our method is suitable for both smooth (laminar) and turbulent flames, and it can be used to animate the burning of either solid or gas fuels. We use the incompressible Navier-Stokes equations to independently model both vaporized fuel and hot gaseous products. We develop a physically based model for the expansion that takes place when a vaporized fuel reacts to form hot gaseous products, and a related model for the similar expansion that takes place when a solid fuel is vaporized into a gaseous state. The hot gaseous products, smoke and soot rise under the influence of buoyancy and are rendered using a blackbody radiation model. We also model and render the blue core that results from radicals in the chemical reaction zone where fuel is converted into products. Our method allows the fire and smoke to interact with objects, and flammable objects can catch on fire. ---
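A recurring ingredient in the reference block that closes here is the finite-difference discretization of an elliptic PDE with prescribed boundary data. The sketch below shows the simplest version of that idea, relaxing a scalar height field toward a solution of Laplace's equation under Dirichlet boundary conditions; it is a toy illustration under our own assumptions (second-order operator, scalar Monge-form surface, plain Jacobi iteration), not code from any of the cited systems, which use fourth- and sixth-order operators, vector-valued surface patches, and multigrid or spectral solvers.

```python
import numpy as np

def laplace_surface(top, bottom, left, right, iters=5000, tol=1e-8):
    """Relax an n x n height field toward a solution of Laplace's equation
    z_uu + z_vv = 0 with the four edges held fixed (Dirichlet data).
    A toy stand-in for the higher-order PDE-surface formulations cited above."""
    n = len(top)
    z = np.zeros((n, n))
    z[0, :], z[-1, :], z[:, 0], z[:, -1] = top, bottom, left, right
    for _ in range(iters):
        z_new = z.copy()
        # Jacobi update: each interior node becomes the mean of its four neighbours.
        z_new[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                                    z[1:-1, :-2] + z[1:-1, 2:])
        if np.max(np.abs(z_new - z)) < tol:
            z = z_new
            break
        z = z_new
    return z

# Example: one sinusoidal "character line" on the top edge, flat edges elsewhere.
v = np.linspace(0.0, 1.0, 64)
edge, flat = np.sin(np.pi * v), np.zeros(64)
surface = laplace_surface(edge, flat, flat, flat)
print(surface.shape, float(surface.max()))
```

Swapping the five-point Laplacian for a biharmonic stencil and adding derivative boundary data gives the discrete analogue of the fourth-order boundary-value formulation used by the PDE-surface papers above.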
Title: A Survey of Partial Differential Equations in Geometric Design
Section 1: Introduction
Description 1: This section introduces the historical and modern context of geometric design and outlines the evolution and usage of surface generation techniques, focusing on the role of PDEs.
Section 2: Common Surface Generation Techniques for Geometric Design
Description 2: This section describes the different methods used for surface generation in geometric design, such as B-splines, Bézier surfaces, NURBS, and other parametric surfaces.
Section 3: Partial Differential Equations
Description 3: This section defines what partial differential equations (PDEs) are and provides a general overview of their importance and application in surface generation.
Section 4: Geometric PDE Surfaces
Description 4: This section discusses the generation and advantages of PDE surfaces, emphasizing the types of PDEs used and their respective benefits.
Section 5: Implicit PDE Surfaces
Description 5: This section explains implicit PDE surfaces, describing various geometric flows and boundary conditions, and highlights studies relevant to computer-aided geometric design.
Section 6: Accounting for the Use of Elliptic PDEs to Obtain Implicit PDE Surfaces
Description 6: This section details the use of elliptic PDEs in generating implicit surfaces and discusses boundary conditions, numerical methods, and practical applications.
Section 7: Parametric PDE Surfaces
Description 7: This section covers parametric PDE surfaces, their advantages, and methods for their generation, including a detailed look at the Bloor-Wilson PDE method.
Section 8: Alternatives to the Bloor-Wilson PDE Method
Description 8: This section presents variations and alternatives to the Bloor-Wilson PDE method, including alternative elliptic PDE formulations and physics-based models.
Section 9: Applications of PDE Surfaces
Description 9: This section provides an overview of applications of PDE surfaces in computer-aided geometric design, including interactive design, shape blending, surface processing, and design analysis and optimization.
Section 10: Surface Generation
Description 10: This section discusses PDE-based surface generation techniques for interactive design and blending, detailing the methods and advantages.
Section 11: Surface Processing
Description 11: This section discusses processing existing surfaces using PDEs, with applications such as image inpainting, noise reduction, N-sided hole filling, and surface fairing.
Section 12: Design Analysis and Optimization
Description 12: This section explains how PDE surfaces can be used for design analysis and optimization by characterizing and optimizing object designs based on physical properties.
Section 13: Other Applications
Description 13: This section discusses additional applications of PDE surfaces in subdivision, geometric manipulations, and animation.
Section 14: Other Aspects of Computer Graphics Related to PDEs
Description 14: This section touches on the application of PDEs in simulating natural phenomena in computer graphics, such as water, smoke, and fire.
Section 15: Conclusions
Description 15: This section provides concluding remarks, summarizing the benefits and potential future of PDEs in geometric design and computer graphics.
Microfluidics-Based Lab-on-Chip Systems in DNA-Based Biosensing: An Overview
7
--- paper_title: Design Automation and Test Solutions for Digital Microfluidic Biochips paper_content: Microfluidics-based biochips are revolutionizing high-throughput sequencing, parallel immunoassays, blood chemistry for clinical diagnostics, and drug discovery. These devices enable the precise control of nanoliter volumes of biochemical samples and reagents. They combine electronics with biology, and they integrate various bioassay operations, such as sample preparation, analysis, separation, and detection. Compared to conventional laboratory procedures, which are cumbersome and expensive, miniaturized biochips offer the advantages of higher sensitivity, lower cost due to smaller sample and reagent volumes, system integration, and less likelihood of human error. This tutorial paper provides an overview of droplet-based ?digital? microfluidic biochips. It describes emerging computer-aided design (CAD) tools for the automated synthesis and optimization of biochips from bioassay protocols. Recent advances in fluidic-operation scheduling, module placement, droplet routing, pin-constrained chip design, and testing are presented. These CAD techniques allow biochip users to concentrate on the development of nanoscale bioassays, leaving chip optimization and implementation details to design-automation tools. --- paper_title: Microfluidic diagnostic technologies for global public health paper_content: The developing world does not have access to many of the best medical diagnostic technologies; they were designed for air-conditioned laboratories, refrigerated storage of chemicals, a constant supply of calibrators and reagents, stable electrical power, highly trained personnel and rapid transportation of samples. Microfluidic systems allow miniaturization and integration of complex functions, which could move sophisticated diagnostic tools out of the developed-world laboratory. These systems must be inexpensive, but also accurate, reliable, rugged and well suited to the medical and social contexts of the developing world. --- paper_title: Polymer microfabrication technologies for microfluidic systems. paper_content: Polymers have assumed the leading role as substrate materials for microfluidic devices in recent years. They offer a broad range of material parameters as well as material and surface chemical properties which enable microscopic design features that cannot be realised by any other class of materials. A similar range of fabrication technologies exist to generate microfluidic devices from these materials. This review will introduce the currently relevant microfabrication technologies such as replication methods like hot embossing, injection molding, microthermoforming and casting as well as photodefining methods like lithography and laser ablation for microfluidic systems and discuss academic and industrial considerations for their use. A section on back-end processing completes the overview. --- paper_title: Microfluidics for Biological Applications paper_content: Microfluidics for Biological Applications provides researchers and scientists in the biotechnology, pharmaceutical, and life science industries with an introduction to the basics of microfluidics and also discusses how to link these technologies to various biological applications at the industrial and academic level. Readers will gain insight into a wide variety of biological applications for microfluidics. 
The material presented here is divided into four parts, Part I gives perspective on the history and development of microfluidic technologies, Part II presents overviews on how microfluidic systems have been used to study and manipulate specific classes of components, Part III focuses on specific biological applications of microfluidics: biodefense, diagnostics, high throughput screening, and tissue engineering and finally Part IV concludes with a discussion of emerging trends in the microfluidics field and the current challenges to the growth and continuing success of the field. --- paper_title: A 1.5 µL microbial fuel cell for on-chip bioelectricity generation paper_content: We have developed a dual-chamber microfluidic microbial fuel cell (MFC) system that allows on-chip bacterial culture and conversion of bacterial metabolism into electricity. The micro-MFC contains a vertically stacked 1.5 µL anode chamber and 4 µL cathode chamber, and represents the smallest MFC device to our knowledge. Microfluidic deliveries of growth medium and catholyte were achieved in separate flow channels without cross-channel mass exchange. After inoculation of electrogenic Shewanella oneidensis strain MR-1, current generation was observed on an external load for up to two weeks. Current production was repeatable with replenishment of organic substrates. A maximum current density of 1300 A/m3 and power density of 15 W/m3 were achieved. Electron microscopic studies confirmed large-scale, uniform biofilm growth on the gold anode, and suggested that the enhanced cell/anode interaction in the small volume may accelerate start-up. Our result demonstrates a versatile platform for studying the fundamental issues in MFCs on the micro-scale, and suggests the possibility of powering nanodevices using on-chip bioenergy. --- paper_title: Capillary electrophoresis chips with a sheath-flow supported electrochemical detection system. paper_content: Microfabricated capillary electrophoresis chips containing an integrated sheath-flow electrochemical detector are developed with the goal of minimizing the influence of separation voltages on end-column detection while maintaining optimum performance. The microdevice consists of an upper glass wafer carrying the etched separation, injection, and sheath-flow channels and a lower glass wafer on which gold- and silver-plated electrodes have been fabricated. The sheath-flow channels join the end of the separation channel from each side, and gravity-driven flow carries the analytes to the electrochemical detector placed at working distances of 100, 150, 200, and 250 microm from the separation channel exit. The performance of this detector is evaluated using catechol and a detection limit of 4.1 microM obtained at a working distance of 250 microm. Detection of DNA restriction fragments and PCR product sizing is demonstrated using the electroactive intercalating dye, iron phenanthroline. Additionally, an allele-specific, PCR-based single-nucleotide polymorphism typing assay for the C282Y substitution diagnostic for hereditary hemochromatosis is developed and evaluated using ferrocene-labeled primers. This study advances the feasibility of high-speed, high-throughput chemical and genetic analysis using microchip electrochemical detection. 
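The microbial fuel cell abstract above reports its output as volumetric densities (a maximum of 1300 A/m³ and 15 W/m³) for a device with a 1.5 µL anode chamber. The short calculation below converts those densities to absolute figures, under the assumption (not stated explicitly in the abstract) that they are normalised to the anode-chamber volume, so the result should be read as an order-of-magnitude estimate only.

```python
# Order-of-magnitude conversion of the micro-MFC figures quoted above.
anode_volume_m3 = 1.5e-6 * 1e-3   # 1.5 uL expressed in m^3 (assumed reference volume)
power_density   = 15.0            # W/m^3, from the abstract
current_density = 1300.0          # A/m^3, from the abstract

power_w   = power_density * anode_volume_m3    # ~2.3e-8 W, i.e. tens of nanowatts
current_a = current_density * anode_volume_m3  # ~2.0e-6 A, i.e. a few microamps
print(f"absolute power   ~ {power_w:.2e} W")
print(f"absolute current ~ {current_a:.2e} A")
```

Tens of nanowatts and a few microamps are small in absolute terms, which is consistent with the authors' framing of the device as a possible on-chip power source for nanodevices rather than for macroscopic loads.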
--- paper_title: High Purity DNA Extraction with a SPE Microfluidic Chip Using KI as the Binding Salt paper_content: Based on solid phase extraction method, a novel silicon-PDMS-glass microchip for high purity DNA extraction has been developed by using KI as the binding salt. The microfluidic chip fabricated by MEMS technology was composed of a silicon substrate with a coiled channel and a compounded PDMS-glass cover. With this microfluidic chip, the wall of the coiled channel was used as solid phase matrix for binding DNA and DNA was extracted by the fluxion of the binding buffer, washing buffer and elution buffer. KI as a substitute for guanidine, was used successfully as binding salt for purification DNA, obtaining higher purity of genomic DNA and about 13.9 ng DNA from 1 μL rat whole blood in 35 minutes. --- paper_title: Mammalian electrophysiology on a microfluidic platform paper_content: Abstract ::: The recent development of automated patch clamp technology has increased the throughput of electrophysiology but at the expense of visual access to the cells being studied. To improve visualization and the control of cell position, we have developed a simple alternative patch clamp technique based on microfluidic junctions between a main chamber and lateral recording capillaries, all fabricated by micromolding of polydimethylsiloxane (PDMS). PDMS substrates eliminate the need for vibration isolation and allow direct cell visualization and manipulation using standard microscopy. Microfluidic integration allows recording capillaries to be arrayed 20 μm apart, for a total chamber volume of <0.5 nl. The geometry of the recording capillaries permits high-quality, stable, whole-cell seals despite the hydrophobicity of the PDMS surface. Using this device, we are able to demonstrate reliable whole-cell recording of mammalian cells on an inexpensive microfluidic platform. Recordings of activation of the voltage-sensitive potassium channel Kv2.1 in mammalian cells compare well with traditional pipette recordings. The results make possible the integration of whole-cell electrophysiology with easily manufactured microfluidic lab-on-a-chip devices. ::: ::: microfluidics ::: patch clamp ::: drug screening ::: single-cell assay --- paper_title: Microfluidics-based systems biology. paper_content: Systems biology seeks to develop a complete understanding of cellular mechanisms by studying the functions of intra- and inter-cellular molecular interactions that trigger and coordinate cellular events. However, the complexity of biological systems causes accurate and precise systems biology experimentation to be a difficult task. Most biological experimentation focuses on highly detailed investigation of a single signaling mechanism, which lacks the throughput necessary to reconstruct the entirety of the biological system, while high-throughput testing often lacks the fidelity and detail necessary to fully comprehend the mechanisms of signal propagation. Systems biology experimentation, however, can benefit greatly from the progress in the development of microfluidic devices. Microfluidics provides the opportunity to study cells effectively on both a single- and multi-cellular level with high-resolution and localized application of experimental conditions with biomimetic physiological conditions. 
Additionally, the ability to massively array devices on a chip opens the door for high-throughput, high fidelity experimentation to aid in accurate and precise unraveling of the intertwined signaling systems that compose the inner workings of the cell. --- paper_title: Automated Design and Programming of a Microfluidic DNA Computer paper_content: Previously, we described ways to implement the functions AND and OR in a DNA computer consisting of microreactors with attached heating elements that control annealing of DNA. Based on these findings, we have devised a similar device that can solve a satisfiability problem in any form. The device occupies linear space and operates in quadratic time, while a previously described competing device is built in quadratic space and operates in quadratic time or greater. Reducing the number of reactors in a DNA computer reduces the loss of DNA through binding to the surfaces of the system. --- paper_title: A thermopneumatic dispensing micropump paper_content: Abstract A micropump for dispensing microliter liquid volumes is realized by warming a thermopneumatic fluid that expands a membrane to pressurize a liquid reservoir with an outlet flow restrictor. The temperature of this liquid–vapor perfluorocarbon mixture is controlled by a thin film heater. The outlet flow restrictors, used in this work, are 75–100 μm silica capillaries, which result in microliter per minute flow rates at 7–42 kPa liquid pressure levels. We investigate both open loop designs and closed loop designs with pressure feedback. The closed loop designs offer increased transient and steady state flow control and allow a much larger range of pump geometries compared to open loop designs. The micropump dispenses 1.4 μl/min for 4.5 h with an average power of 200 mW. Due to its design with no moving parts, this thermopneumatic pump is low cost and simple in construction. The basic principles appear suited to MEMS fabrication, and so would have potential applications for micro total chemical analysis systems (μTAS), biosensors, and “lab-on-a-chip” devices. --- paper_title: Microfluidics: Fluid physics at the nanoliter scale paper_content: Microfabricated integrated circuits revolutionized computation by vastly reducing the space, labor, and time required for calculations. Microfluidic systems hold similar promise for the large-scale automation of chemistry and biology, suggesting the possibility of numerous experiments performed rapidly and in parallel, while consuming little reagent. While it is too early to tell whether such a vision will be realized, significant progress has been achieved, and various applications of significant scientific and practical interest have been developed. Here a review of the physics of small volumes (nanoliters) of fluids is presented, as parametrized by a series of dimensionless numbers expressing the relative importance of various physical phenomena. Specifically, this review explores the Reynolds number Re, addressing inertial effects; the Peclet number Pe, which concerns convective and diffusive transport; the capillary number Ca expressing the importance of interfacial tension; the Deborah, Weissenberg, and elasticity numbers De, Wi, and El, describing elastic effects due to deformable microstructural elements like polymers; the Grashof and Rayleigh numbers Gr and Ra, describing density-driven flows; and the Knudsen number, describing the importance of noncontinuum molecular effects. 
Furthermore, the long-range nature of viscous flows and the small device dimensions inherent in microfluidics mean that the influence of boundaries is typically significant. A variety of strategies have been developed to manipulate fluids by exploiting boundary effects; among these are electrokinetic effects, acoustic streaming, and fluid-structure interactions. The goal is to describe the physics behind the rich variety of fluid phenomena occurring on the nanoliter scale using simple scaling arguments, with the hopes of developing an intuitive sense for this occasionally counterintuitive world. --- paper_title: A PMMA valveless micropump using electromagnetic actuation paper_content: We have fabricated and characterized a polymethylmethacrylate (PMMA) valveless micropump. The pump consists of two diffuser elements and a polydimethylsiloxane (PDMS) membrane with an integrated composite magnet made of NdFeB magnetic powder. A large-stroke membrane deflection (~200 μm) is obtained using external actuation by an electromagnet. We present a detailed analysis of the magnetic actuation force and the flow rate of the micropump. Water is pumped at flow rates of up to 400 µl/min and backpressures of up to 12 mbar. We study the frequency-dependent flow rate and determine a resonance frequency of 12 and 200 Hz for pumping of water and air, respectively. Our experiments show that the models for valveless micropumps of A. Olsson et al. (J Micromech Microeng 9:34, 1999) and L.S. Pan et al. (J Micromech Microeng 13:390, 2003) correctly predict the resonance frequency, although additional modeling of losses is necessary. --- paper_title: A valveless micro impedance pump driven by electromagnetic actuation paper_content: Over the past two decades, a variety of micropumps have been explored for various applications in microfluidics such as control of pico- and nanoliter flows for drug delivery as well as chemical mixing and analysis. We present the fabrication and preliminary experimental studies of flow performance on the micro impedance pump, a previously unexplored method of pumping fluid on the microscale. The micro impedance pump was constructed of a simple thin-walled tube coupled at either end to glass capillary tubing and actuated electromagnetically. Through the cumulative effects of wave propagation and reflection originating from an excitation located asymmetrically along the length of the elastic tube, a pressure head can be established to drive flow. Flow rates were observed to be reversible and highly dependent on the profile of the excitation. Micro impedance pump flow studies were conducted in open and closed circuit flow configurations. Maximum flow rates of 16 ml min-1 have been achieved under closed loop flow conditions with an elastic tube diameter of 2 mm. Two size scales with channel diameters of 2 mm and 250 µm were also examined in open circuit flow, resulting in flow rates of 191 µl min-1 and 17 µl min-1, respectively. --- paper_title: A Soft-Polymer Piezoelectric Bimorph Cantilever-Actuated Peristaltic Micropump paper_content: For this work, a peristaltic micropump was fabricated. Actuation of the micropump was accomplished with piezoelectric cantilevers. To date, a minimal number of soft polymer-based micropump designs, have explored the use of piezoelectric materials as actuators. The fluidic channel for the micropump was fabricated using PDMS and soft lithography. 
A novel and very simple template fabrication process was employed, where the use of a mask and clean room facilities was not required. Replica molding to the template produces both a channel measuring ∼95 μm in height and a rounded cross-sectional geometry, the latter of which is known to be favorable for complete valve shutoff. Clamps were adhered to the tips of the cantilevers and used to secure in place aluminum valves. The valves had finely machined tips [3 mm × 200 μm (L × W)] on one surface. These tips served as contact points for the valve making contact with the PDMS membrane surface, and were used for the purpose of opening and closing the channels. The cantilevers were secured in place with in-house manufactured micropositioners, which were used to position the valves directly over the PDMS channel. The micropump was thoroughly tested, where the variables characterized were maximum attainable backpressure, flow rate, valve open/close characteristics, and valve leakage. The effect of the phase difference (60°, 90°, and 120°) between the square wave signals delivered to each of the three cantilevers was investigated for flow rate and maximum attainable backpressure. Of the three signal phases, the 120° signal demonstrated the largest flow rate range of 52–575 nL/min (0.1–25 Hz), as well as the highest attainable backpressure value of 36,800 Pa (5.34 psi). The valve shutoff characteristics for this micropump were also examined. Fluorescein was trapped inside the microchannel, where the fluorescent signal was monitored throughout the valve's open/close cycle with the aid of an epifluorescent microscope. It was found that the fluorescent signal went to zero with the valve fully closed, supporting the conclusion that the valve completely closes off the channel. Further evidence of this claim was demonstrated by observing the valve leakage characteristics. An electronic pressure sensor was used to collect data for this experiment, where it was found that the valve was able to hold off 36,800 Pa (5.34 psi), losing only 2% of this pressure over 10 minutes. In conclusion, it has been shown that this micropump outperforms many existing micropump designs and is suitable for integration into a variety of both macro- and microdevice platforms. Experiments are currently underway to examine how the flow and valving characteristics change for valves with different tip dimensions. A discussion will also be given for improved fabrication techniques, where injection molding is currently being used as the fabrication method to examine the performance changes associated with different cross-sectional PDMS channel geometries. The end goal for use of this micropump is twofold: 1) integration into a micro-free flow separation device, and 2) integration into a capillary electrophoresis instrument for use in direct-sampling neuroscience experiments. --- paper_title: Microfluidic on-chip fluorescence-activated interface control system paper_content: A microfluidic dynamic fluorescence-activated interface control system was developed for lab-on-a-chip applications. The system consists of a straight rectangular microchannel, a fluorescence excitation source, a detection sensor, a signal conversion circuit, and a high-voltage feedback system. Aqueous NaCl as conducting fluid and aqueous glycerol as nonconducting fluid were introduced to flow side by side into the straight rectangular microchannel. Fluorescent dye was added to the aqueous NaCl to work as a signal representing the interface position.
Automatic control of the liquid interface was achieved by controlling the electroosmotic effect, which exists only in the conducting fluid, using a high-voltage feedback system. A LabVIEW program was developed to adjust the output of the high-voltage power supply according to the actual interface position, which in turn modifies the interface position. In this way, the interface can be moved to the desired position automatically using the feedback system. The results show that the system presented in this paper can control an arbitrary interface location in real time. The effects of viscosity ratio, flow rates, and polarity of the electric field were discussed. This technique can be extended to switch the sample flow and droplets automatically. --- paper_title: Nucleic Acid-based Detection of Bacterial Pathogens Using Integrated Microfluidic Platform Systems paper_content: The advent of nucleic acid-based pathogen detection methods offers increased sensitivity and specificity over traditional microbiological techniques, driving the development of portable, integrated biosensors. The miniaturization and automation of integrated detection systems present a significant advantage for rapid, portable field-based testing. In this review, we highlight current developments and directions in nucleic acid-based micro total analysis systems for the detection of bacterial pathogens. Recent progress in the miniaturization of microfluidic processing steps for cell capture, DNA extraction and purification, polymerase chain reaction, and product detection is detailed. Discussions include strategies and challenges for implementation of an integrated portable platform. --- paper_title: Mixing with bubbles: a practical technology for use with portable microfluidic devices paper_content: This paper demonstrates a methodology for micromixing that is sufficiently simple that it can be used in portable microfluidic devices. It illustrates the use of the micromixer by incorporating it into an elementary, portable microfluidic system that includes sample introduction, sample filtration, and valving. This system has the following characteristics: (i) it is powered with a single hand-operated source of vacuum, (ii) it allows samples to be loaded easily by depositing them into prefabricated wells, (iii) the samples are filtered in situ to prevent clogging of the microchannels, (iv) the structure of the channels ensures mixing of the laminar streams by interaction with bubbles of gas introduced into the channels, (v) the device is prepared in a single-step soft-lithographic process, and (vi) the device can be prepared to be resistant to the adsorption of proteins, and can be used with or without surface-active agents. --- paper_title: Investigation of active interface control of pressure driven two-fluid flow in microchannels paper_content: We report a novel concept to control the interface location of a pressure-driven multi-phase flow in a microchannel by using electroosmotic flow effects. This concept has potential applications in flow switching and cell sorting in bio-analytical systems. In an H-shaped microchannel structure, aqueous sodium chloride (NaCl) solution and glycerol diluted with water were pumped through two inlets at the same flow rate. The electric field was applied on the electrolyte solution side. Adjusting the magnitude and direction of the electric field successfully controlled the interface position between the two phases.
This technique provides a new approach to control the interface position between the two fluids. --- paper_title: Droplet microfluidics for characterizing the neurotoxin-induced responses in individual Caenorhabditis elegans paper_content: A droplet-based microfluidic device integrated with a novel floatage-based trap array and a tapered immobilization channel array was presented for characterizing the neurotoxin-induced multiple responses in individual Caenorhabditis elegans (C. elegans) continuously. The established device enabled the evaluations of movement and fluorescence imaging analysis of individual C. elegans simultaneously. The utility of this device was demonstrated by the pharmacological evaluation of neurotoxin (6-hydroxydopamine, 6-OHDA) triggered mobility defects, neuron degeneration and oxidative stress in individual worms. Exposure of living worms to 6-OHDA could cause obvious mobility defects, selective degeneration of dopaminergic (DAergic) neurons, and increased oxidative stress in a dose dependent manner. These results are important towards the understanding of mechanisms leading to DAergic toxicity by neurotoxin and will be of benefit for the screening of new therapeutics for neurodegenerative diseases. This device was simple, stable and easy to operate, with the potential to facilitate whole-animal assays and drug screening in a high throughput manner at single animal resolution. --- paper_title: Development and modeling of electrically triggered hydrogels for microfluidic applications paper_content: In this paper, we present progress in the development of electrically triggered hydrogels as components in microfluidic systems. Stimuli-responsive hydrogels are fabricated using liquid-phase photopolymerization techniques and are subjected to different voltage signals in order to determine their volume change response characteristics. A chemoelectromechanical model has been developed to predict the swelling and deswelling kinetics of these hydrogels. The Nernst-Planck equation, Poisson equation, and mechanical equations are the basic governing relationships, and these are solved in an iterative manner to compute the deformation of the hydrogel in response to varied electrical input. --- paper_title: A simple PDMS-based microfluidic channel design that removes bubbles for long-term on-chip culture of mammalian cells paper_content: This report shows methods to fabricate polydimethylsiloxane (PDMS) microfluidic systems for long-term (up to 10 day) cell culture. Undesired bubble accumulation in microfluidic channels abruptly changes the microenvironment of adherent cells and leads to the damage and death of cells. Existing bubble trapping approaches have drawbacks such as the need to pause fluid flow, requirement for external vacuum or pressure source, and possible cytotoxicity. This study reports two kinds of integrated bubble trap (IBT) which have excellent properties, including simplicity in structure, ease in fabrication, no interference with the flow, and long-term stability. IBT-A provides the simplest solution to prevent bubbles from entering microfluidic channels. In situ time-lapse imaging experiments indicate that IBT-B is an excellent device both for bubble trapping and debubbling in cell-loaded microfluidics. MC 3T3 E1 cells, cultured in a long and curved microfluidic channel equipped with IBT-B, showed high viability and active proliferation after 10 days of continuous fluid flow. 
The comprehensive measures taken in our experiments have led to successful long-term, bubble-free, on-chip culture of cells. --- paper_title: A membrane micropump electrostatically actuated across the working fluid paper_content: A novel electrostatically actuated valveless micropump is presented whereby an actuation voltage is applied across a working fluid, which takes advantage of the higher relative electrical permittivity of water and many other fluids with respect to air. The device is fabricated in silicon and the diaphragm is made of electroplated nickel, while the assembly is carried out using flip–chip bonding. A reduced-order model is used to describe the micropump's performance in terms of electrical properties of the fluid, the residual stress in the diaphragm, geometrical features and the actuation voltage. The tested prototype featured a ~1 µl min−1 flow rate at 50 V actuation voltage. The model predictions show the possibility of achieving flow rates >1 µl min−1 with the actuation voltage <10 V for devices with 3 mm diaphragm size. --- paper_title: Logic control of microfluidics with smart colloid. paper_content: We report the successful realization of a microfluidic chip with switching and corresponding inverting functionalities. The chips are identical logic control components incorporating a type of smart colloid, giant electrorheological fluid (GERF), which possesses reversible characteristics via a liquid-solid phase transition under external electric field. Two pairs of electrodes embedded on the sides of two microfluidic channels serve as signal input and output, respectively. One, located in the GERF micro-channel is used to control the flow status of GERF, while another one in the ither micro-fluidic channel is used to detect the signal generated with a passing-by droplet (defined as a signal droplet). Switching of the GERF from the suspended state (off-state) to the flowing state (on-state) or vice versa in the micro-channel is controlled by the appearance of signal droplets whenever they pass through the detection electrode. The output on-off signals can be easily demonstrated, clearly matching with GERF flow status. Our results show that such a logic switch is also a logic IF gate, while its inverter functions as a NOT gate. --- paper_title: Point-of-care testing of proteins paper_content: Point-of-care testing (POCT) is a fast developing area in clinical diagnostics that is considered to be one of the main driving forces for the future in vitro diagnostic market. POCT means decentralized testing at the site of patient care. The most important POCT devices are handheld blood glucose sensors. In some of these sensors, after the application of less than 1 microl whole blood, the results are displayed in less than 10 s. For protein determination, the most commonly used devices are based on lateral flow technology. Although these devices are convenient to use, the results are often only qualitative or semiquantitative. The review will illuminate some of the current methods employed in POCT for proteins and will discuss the outlook for techniques (e.g., electrochemical immunosensors) that could have a great impact on future POCT of proteins. --- paper_title: Design Tools for Digital Microfluidic Biochips: Toward Functional Diversification and More Than Moore paper_content: Microfluidics-based biochips enable the precise control of nanoliter volumes of biochemical samples and reagents. 
They combine electronics with biology, and they integrate various bioassay operations, such as sample preparation, analysis, separation, and detection. Compared to conventional laboratory procedures, which are cumbersome and expensive, miniaturized biochips offer the advantages of higher sensitivity, lower cost due to smaller sample and reagent volumes, system integration, and less likelihood of human error. This paper first describes the droplet-based “digital” microfluidic technology platform and emerging applications. The physical principles underlying droplet actuation are next described. Finally, the paper presents computer-aided design tools for simulation, synthesis and chip optimization. These tools target modeling and simulation, scheduling, module placement, droplet routing, pin-constrained chip design, and testing. --- paper_title: Microfluidics: Fluid physics at the nanoliter scale paper_content: Microfabricated integrated circuits revolutionized computation by vastly reducing the space, labor, and time required for calculations. Microfluidic systems hold similar promise for the large-scale automation of chemistry and biology, suggesting the possibility of numerous experiments performed rapidly and in parallel, while consuming little reagent. While it is too early to tell whether such a vision will be realized, significant progress has been achieved, and various applications of significant scientific and practical interest have been developed. Here a review of the physics of small volumes (nanoliters) of fluids is presented, as parametrized by a series of dimensionless numbers expressing the relative importance of various physical phenomena. Specifically, this review explores the Reynolds number Re, addressing inertial effects; the Peclet number Pe, which concerns convective and diffusive transport; the capillary number Ca expressing the importance of interfacial tension; the Deborah, Weissenberg, and elasticity numbers De, Wi, and El, describing elastic effects due to deformable microstructural elements like polymers; the Grashof and Rayleigh numbers Gr and Ra, describing density-driven flows; and the Knudsen number, describing the importance of noncontinuum molecular effects. Furthermore, the long-range nature of viscous flows and the small device dimensions inherent in microfluidics mean that the influence of boundaries is typically significant. A variety of strategies have been developed to manipulate fluids by exploiting boundary effects; among these are electrokinetic effects, acoustic streaming, and fluid-structure interactions. The goal is to describe the physics behind the rich variety of fluid phenomena occurring on the nanoliter scale using simple scaling arguments, with the hopes of developing an intuitive sense for this occasionally counterintuitive world. --- paper_title: Biomicrofluidics: Recent trends and future challenges paper_content: Biomicrofluidics is an active area of research at present, exploring the synergy of microfluidics with cellular and molecular biology, biotechnology, and biomedical engineering. The present article outlines the recent advancements in these areas, including the development of novel lab-on-a-chip based applications. Particular emphasis is given on the microfluidics-based handling of DNA, cells, and proteins, as well as fundamental microfluidic considerations for design of biomedical microdevices. Future directions of research on these topics are also discussed. 
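The dimensionless-number scaling arguments summarized in the fluid-physics reference above (and picked up in the Dimensionless Numbers section of the outline below) lend themselves to a quick worked example. The short Python sketch below evaluates the Reynolds and Peclet numbers for a generic water-filled microchannel; every dimension, velocity, and material property in it is an assumed illustrative value, not a figure taken from any of the cited papers.

```python
# Back-of-the-envelope Reynolds and Peclet numbers for a generic microchannel.
# All values below are assumed, illustrative inputs (water-like fluid, ~100 um
# channel, ~1 mm/s flow), not data from the references in this section.

rho = 1000.0   # fluid density [kg/m^3] (assumed: water)
mu = 1.0e-3    # dynamic viscosity [Pa*s] (assumed: water)
D = 1.0e-9     # solute diffusivity [m^2/s] (assumed: small molecule)
L = 100e-6     # characteristic channel dimension [m] (assumed)
U = 1e-3       # mean flow velocity [m/s] (assumed)

Re = rho * U * L / mu   # inertial vs. viscous forces
Pe = U * L / D          # convective vs. diffusive transport

print(f"Re = {Re:.3g}")   # ~0.1 -> viscous-dominated, laminar flow
print(f"Pe = {Pe:.3g}")   # ~100 -> mixing by diffusion alone is slow

# Downstream distance needed for diffusion to span the channel width:
t_diff = L**2 / D
print(f"Diffusive mixing length ~ {U * t_diff * 1e3:.1f} mm")
```

With Re well below unity and Pe well above it, the sketch lands in the familiar microfluidic regime where flow is laminar yet purely diffusive mixing is slow, which is the motivation for the bubble-assisted and droplet-based mixing strategies cited elsewhere in this section.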
--- paper_title: Design Tools for Digital Microfluidic Biochips: Toward Functional Diversification and More Than Moore paper_content: Microfluidics-based biochips enable the precise control of nanoliter volumes of biochemical samples and reagents. They combine electronics with biology, and they integrate various bioassay operations, such as sample preparation, analysis, separation, and detection. Compared to conventional laboratory procedures, which are cumbersome and expensive, miniaturized biochips offer the advantages of higher sensitivity, lower cost due to smaller sample and reagent volumes, system integration, and less likelihood of human error. This paper first describes the droplet-based “digital” microfluidic technology platform and emerging applications. The physical principles underlying droplet actuation are next described. Finally, the paper presents computer-aided design tools for simulation, synthesis and chip optimization. These tools target modeling and simulation, scheduling, module placement, droplet routing, pin-constrained chip design, and testing. --- paper_title: Laminar flow and convective transport processes : scaling principles and asymptotic analysis paper_content: Basic Principles Unidirectional Flows Creeping Flows Further Results in the Creeping Flow Limit Asymptotic Approximations for Unidirectional, One-Dimensional, and Nearly Unidirectional Flows Thin Films, Lubrication, and Related Problems Weak Convection Effects Strong Convection Effects in Heat and Mass Transfer at Low Reynolds Number Laminar Boundary-Layer theory Thermal Boundary-Layer Theory at Large Reynolds Number Natural and Mixed Convection Flows. --- paper_title: RNA biosensor for the rapid detection of viable Escherichia coli in drinking water. paper_content: A highly sensitive and specific RNA biosensor was developed for the rapid detection of viable Escherichia coli as an indicator organism in water. The biosensor is coupled with protocols developed earlier for the extraction and amplification of mRNA molecules from E. coli [Anal. Biochem. 303 (2002) 186]. However, in contrast to earlier detection methods, the biosensor allows the rapid detection and quantification of E. coli mRNA in only 15-20 min. In addition, the biosensor is portable, inexpensive and very easy to use, which makes it an ideal detection system for field applications. Viable E. coli are identified and quantified via a 200 nt-long target sequence from mRNA (clpB) coding for a heat shock protein. For sample preparation, a heat shock is applied to the cells prior to disruption. Then, mRNA is extracted, purified and finally amplified using the isothermal amplification technique Nucleic acid sequence-based amplification (NASBA). The amplified RNA is then quantified with the biosensor. The biosensor is a membrane-based DNA/RNA hybridization system using liposome amplification. The various biosensor components such as DNA probe sequences and concentration, buffers, incubation times have been optimized, and using a synthetic target sequence, a detection limit of 5 fmol per sample was determined. An excellent correlation to a much more elaborate and expensive laboratory based detection system was demonstrated, which can detect as few as 40 E. coli cfu/ml. Finally, the assay was tested regarding its specificity; no false positive signals were obtained from other microorganisms or from nonviable E. coli cells. --- paper_title: Electrical detection of viral DNA using ultramicroelectrode arrays. 
paper_content: A fully electrical array for voltammetric detection of redox molecules produced by enzyme-labeled affinity binding complexes is shown. The electronic detection is based on ultramicroelectrode arrays manufactured in silicon technology. The 200-μm circular array positions have 800-nm-wide interdigitated gold ultramicroelectrodes embedded in silicon dioxide. Immobilization of oligonucleotide capture probes onto the gold electrodes surfaces is accomplished via thiol−gold self-assembling. Spatial separation of probes at different array positions is controlled by polymeric rings around each array position. The affinity bound complexes are labeled with alkaline phosphatase, which converts the electrochemically inactive substrate 4-aminophenyl phosphate into the active 4-hydroxyaniline (HA). The nanoscaled electrodes are used to perform a sensitive detection of enzyme activity by signal enhancing redox recycling of HA resulting in local and position-specific current signals. Multiplexing and serial readout is rea... --- paper_title: Development of a microfluidic biosensor module for pathogen detection paper_content: The development of a microfluidic biosensor module with fluorescence detection for the identification of pathogenic organisms and viruses is presented in this article. The microfluidic biosensor consists of a network of microchannels fabricated in polydimethylsiloxane (PDMS) substrate. The microchannels are sealed with a glass substrate and packed in a Plexiglas housing to provide connection to the macro-world and ensure leakage-free flow operation. Reversible sealing permits easy disassembly for cleaning and replacing the microfluidic channels. The fluidic flow is generated by an applied positive pressure gradient, and the module can be operated under continuous solution flow of up to 80 µL min−1. The biosensor recognition principle is based on DNA/RNA hybridization and liposome signal amplification. Superparamagnetic beads are incorporated into the system as a mobile solid support and are an essential part of the analysis scheme. In this study, the design, fabrication and the optimization of concentrations and amounts of the different biosensor components are carried out. The total time required for an assay is only 15 min including sample incubation time. The biosensor module is designed so that it can be easily integrated with a micro total analysis system, which will combine sample preparation and detection steps onto a single chip. --- paper_title: Principles of Bacterial Detection: Biosensors, Recognition Receptors and Microsystems paper_content: Principles of Bacterial Detection: Biosensors, Recognition Receptors and Microsystems will cover the up-to-date biosensor technologies used for the detection of bacteria. Written by the world's mos ... --- paper_title: Multi-analyte single-membrane biosensor for the serotype-specific detection of Dengue virus. paper_content: A multi-analyte biosensor based on nucleic acid hybridization and liposome signal amplification was developed for the rapid serotype-specific detection of Dengue virus. After RNA amplification, detection of Dengue virus specific serotypes can be accomplished using a single analysis within 25 min. The multi-analyte biosensor is based on single-analyte assays (see Baeumner et al (2002) Anal Chem 74:1442-1448) developed earlier in which four analyses were required for specific serotype identification of Dengue virus samples. 
The multi-analyte biosensor employs generic and serotype-specific DNA probes, which hybridize with Dengue RNA that is amplified by the isothermal nucleic acid sequence based amplification (NASBA) reaction. The generic probe (reporter probe) is coupled to dye-entrapping liposomes and can hybridize to all four Dengue serotypes, while the serotype-specific probes (capture probes) are immobilized through biotin-streptavidin interaction on the surface of a polyethersulfone membrane strip in separate locations. A mixture of amplified Dengue virus RNA sequences and liposomes is applied to the membrane and allowed to migrate up along the test strip. After the liposome-target sequence complexes hybridize to the specific probes immobilized in the capture zones of the membrane strip, the Dengue serotype present in the sample can be determined. The amount of liposomes immobilized in the various capture zones directly correlates to the amount of viral RNA in the sample and can be quantified by a portable reflectometer. The specific arrangement of the capture zones and the use of unlabeled oligonucleotides (cold probes) enabled us to dramatically reduce the cross-reactivity of Dengue virus serotypes. Therefore, a single biosensor can be used to detect the exact Dengue serotype present in the sample. In addition, the biosensor can simultaneously detect two serotypes and so it is useful for the identification of possible concurrent infections found in clinical samples. The various biosensor components have been optimized with respect to specificity and sensitivity, and the system has been ultimately tested using blind coded samples. The biosensor demonstrated 92% reliability in Dengue serotype determination. Following isothermal amplification of the target sequences, the biosensor had a detection limit of 50 RNA molecules for serotype 2, 500 RNA molecules for serotypes 3 and 4, and 50,000 molecules for serotype 1. The multi-analyte biosensor is portable, inexpensive, and very easy to use and represents an alternative to current detection methods coupled with nucleic acid amplification reactions such as electrochemiluminescence, or those based on more expensive and time consuming methods such as ELISA or tissue culture. --- paper_title: Nucleic Acid-based Detection of Bacterial Pathogens Using Integrated Microfluidic Platform Systems paper_content: The advent of nucleic acid-based pathogen detection methods offers increased sensitivity and specificity over traditional microbiological techniques, driving the development of portable, integrated biosensors. The miniaturization and automation of integrated detection systems presents a significant advantage for rapid, portable field-based testing. In this review, we highlight current developments and directions in nucleic acid-based micro total analysis systems for the detection of bacterial pathogens. Recent progress in the miniaturization of microfluidic processing steps for cell capture, DNA extraction and purification, polymerase chain reaction, and product detection are detailed. Discussions include strategies and challenges for implementation of an integrated portable platform. --- paper_title: Esterase 2-oligodeoxynucleotide conjugates as sensitive reporter for electrochemical detection of nucleic acid hybridization. paper_content: A thermostable, single polypeptide chain enzyme, esterase 2 from Alicyclobacillus acidocaldarius, was covalently conjugated in a site specific manner with an oligodeoxynucleotide. 
The conjugate served as a reporter enzyme for electrochemical detection of DNA hybridization. Capture oligodeoxynucleotides were assembled on a gold electrode via thiol–gold interaction. The esterase 2-oligodeoxynucleotide conjugates were brought to the electrode surface by DNA hybridization. The p-aminophenol formed by esterase 2-catalyzed hydrolysis of p-aminophenylbutyrate was determined amperometrically. The esterase 2 reporter allows detection of approximately 1.5 x 10(-18) mol of oligodeoxynucleotide per 0.6 mm2 electrode, or 3 pM oligodeoxynucleotide in a volume of 0.5 microL. Chemically targeted, single-site covalent attachment of esterase 2 to an oligodeoxynucleotide significantly increases the selectivity of mismatch detection as compared to the widely used, rather unspecific, streptavidin/biotin-conjugated proteins. Artificial single-nucleotide mismatches in a 510-nucleotide ssDNA could be reliably determined using esterase 2-oligodeoxynucleotide conjugates as a reporter. --- paper_title: Fungal pathogenic nucleic acid detection achieved with a microfluidic microarray device. paper_content: Detection of polymerase chain reaction (PCR) products obtained from cultured greenhouse fungal pathogens, Botrytis cinerea and Didymella bryoniae, has been achieved using a previously developed microfluidic microarray assembly (MMA) device. The flexible probe construction and rapid DNA detection resulted from the use of centrifugal pumping in the steps of probe introduction and sample delivery, respectively. The line arrays of the oligonucleotide probes were "printed" on a CD-like glass chip using a polydimethylsiloxane (PDMS) polymer plate with radial microfluidic channels, and the sample hybridizations were conducted within the spiral channels on the second plate. The experimental conditions of probe immobilization and sample hybridization were optimized, and both complementary oligonucleotides and PCR products were tested. We were able to achieve adequate fluorescent signals with a sample load as small as 0.5 nM (1 microL) for oligonucleotide samples; for PCR products, we achieved detection at the level of 3 ng. --- paper_title: Detection of viable oocysts of Cryptosporidium parvum following nucleic acid sequence based amplification. paper_content: A reliable method using nucleic acid sequence based amplification (NASBA) with subsequent electrochemiluminescent detection for the specific and sensitive detection of viable oocysts of Cryptosporidium parvum in environmental samples was developed. The target molecule was a 121-nt sequence from the C. parvum heat shock protein hsp70 mRNA. Oocysts of C. parvum were isolated from environmental water via vortex flow filtration and immunomagnetic separation. A brief heat shock was applied to the oocysts, and the nucleic acid was purified using an optimized, very simple but efficient nucleic acid extraction method. The nucleic acid was amplified in a water bath for 60-90 min with NASBA, an isothermal technique that specifically amplifies RNA molecules. Amplified RNA was hybridized with specific DNA probes and quantified with an electrochemiluminescence (ECL) detection system. We optimized the nucleic acid extraction and purification, the NASBA reaction, amplification, and detection probes. We were able to amplify and detect as few as 10 mRNA molecules. The NASBA primers as well as the ECL probes were highly specific for C. parvum in buffer and in environmental samples.
Our detection limit was approximately 5 viable oocysts/sample for the assay procedure, including nucleic acid extraction, NASBA, and ECL detection. Nonviable oocysts were not detected. --- paper_title: Biomicrofluidics: Recent trends and future challenges paper_content: Biomicrofluidics is an active area of research at present, exploring the synergy of microfluidics with cellular and molecular biology, biotechnology, and biomedical engineering. The present article outlines the recent advancements in these areas, including the development of novel lab-on-a-chip based applications. Particular emphasis is given on the microfluidics-based handling of DNA, cells, and proteins, as well as fundamental microfluidic considerations for design of biomedical microdevices. Future directions of research on these topics are also discussed. --- paper_title: A nanoliter rotary device for polymerase chain reaction paper_content: Polymerase chain reaction (PCR) has revolutionized a variety of assays in biotechnology. The ability to implement PCR in disposable and reliable microfluidic chips will facilitate its use in applications such as rapid medical diagnostics, food control testing, and biological weapons detection. We fabricated a microfluidic chip with integrated heaters and plumbing in which various forms of PCR have been successfully demonstrated. The device uses only 12 nL of sample, one of the smallest sample volumes demonstrated to date. Minimizing the sample volume allows low power consumption, reduced reagent costs, and ultimately more rapid thermal cycling. --- paper_title: Integrated Microfluidic CustomArray Device for Bacterial Genotyping and Identification paper_content: The ongoing threat of the potential use of biothreat agents (such as Bacillus anthracis) as a biochemical weapon emphasizes the need for a rapid, miniature, fully automated, and highly specific detection assay. An integrated and self-contained microfluidic device has been developed to rapidly detect B. anthracis and many other bacteria. The device consists of a semiconductor-based DNA microarray chip with 12,000 features and a microfluidic cartridge that automates the fluid handling steps required to carry out a genotyping assay for pathogen identification. This fully integrated and disposable device consists of low-cost microfluidic pumps, mixers, valves, fluid channels, reagent storage chambers, and DNA microarray silicon chip. Microarray hybridization and subsequent fluid handling and reactions were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. The genotyping results showed that the device was able to identify and distinguish B. anthrac... --- paper_title: A rapid biosensor for viable B. anthracis spores. paper_content: A simple membrane-strip-based biosensor assay has been combined with a nucleic acid sequence-based amplification (NASBA) reaction for rapid (4 h) detection of a small number (ten) of viable B. anthracis spores. The biosensor is based on identification of a unique mRNA sequence from one of the anthrax toxin genes, the protective antigen (pag), encoded on the toxin plasmid, pXO1, and thus provides high specificity toward B. anthracis. Previously, the anthrax toxins activator (atxA) mRNA had been used in our laboratory for the development of a biosensor for the detection of a single B. anthracis spore within 12 h. Changing the target sequence to the pag mRNA provided the ability to shorten the overall assay time significantly. 
The vaccine strain of B. anthracis (Sterne strain) was used in all experiments. A 500-μL sample containing as few as ten spores was mixed with 500 μL growth medium and incubated for 30 min for spore germination and mRNA production. Thus, only spores that are viable were detected. Subsequently, RNA was extracted from lysed cells, selectively amplified using NASBA, and rapidly identified by the biosensor. While the biosensor assay requires only 15 min assay time, the overall process takes 4 h for detection of ten viable B. anthracis spores, and is shortened significantly if more spores are present. The biosensor is based on an oligonucleotide sandwich-hybridization assay format. It uses a membrane flow-through system with an immobilized DNA probe that hybridizes with the target sequence. Signal amplification is provided when the target sequence hybridizes to a second DNA probe that has been coupled to liposomes encapsulating the dye sulforhodamine B. The amount of liposomes captured in the detection zone can be read visually or quantified with a hand-held reflectometer. The biosensor can detect as little as 1 fmol target mRNA (1 nmol L−1). Specificity analysis revealed no cross-reactivity with 11 organisms tested, among them closely related species such as B. cereus, B. megaterium, B. subtilis, B. thuringiensis, Lactococcus lactis, Lactobacillus plantarum, and Chlostridium butyricum. Also, no false positive signals were obtained from nonviable spores. We suggest that this inexpensive biosensor is a viable option for rapid, on-site analysis providing highly specific data on the presence of viable B. anthracis spores. --- paper_title: High-throughput SPR sensor for food safety. paper_content: High-throughput surface plasmon resonance (SPR) biosensor for rapid and parallelized detection of nucleic acids identifying specific bacterial pathogens is reported. The biosensor consists of a high-performance SPR imaging sensor with polarization contrast and internal referencing (refractive index resolution 2 x 10(-7) RIU) and an array of DNA probes microspotted on the surface of the SPR sensor. It is demonstrated that short sequences of nucleic acids (20-23 bases) characteristic for bacterial pathogens such as Brucella abortus, Escherichia coli, and Staphylococcus aureus can be detected at 100 pM levels. Detection of specific DNA or RNA sequences can be performed in less than 15 min by the reported SPR sensor. --- paper_title: Nucleic acid approaches for detection and identification of biological warfare and infectious disease agents. paper_content: Biological warfare agents are the most problematic of the weapons of mass destruction and terror. Both civilian and military sources predict that over the next decade the threat from proliferation of these agents will increase significantly. In this review we summarize the state of the art in detection and identification of biological threat agents based on PCR technology with emphasis on the new technology of microarrays. The advantages and limitations of real-time PCR technology and a review of the literature as it applies to pathogen and virus detection are presented. The paper covers a number of issues related to the challenges facing biological threat agent detection technologies and identifies critical components that must be overcome for the emergence of reliable PCR-based DNA technologies as bioterrorism countermeasures and for environmental applications. 
The review evaluates various system components developed for an integrated DNA microchip and the potential applications of the next generation of fully automated DNA analyzers with integrated sample preparation and biosensing elements. The article also reviews promising devices and technologies that are near to being, or have been, commercialized. --- paper_title: DNA sensor based on an Escherichia coli lac Z gene probe immobilization at self-assembled monolayers-modified gold electrodes paper_content: Abstract A novel approach to construct an electrochemical DNA sensor based on immobilization of a 25-base single-stranded probe, specific to the E. coli lac Z gene, onto a gold disk electrode is described. The capture probe is covalently attached using a self-assembled monolayer of 3,3′-dithiodipropionic acid di(N-succinimidyl ester) (DTSP) and mercaptohexanol (MCH) as a spacer. Hybridization of the immobilized probe with the target DNA at the electrode surface was monitored by square wave voltammetry (SWV), using methylene blue (MB) as electrochemical indicator. Variables involved in the sensor performance, such as the DTSP concentration in the modification solution, the self-assembled monolayer (SAM) formation time, the DNA probe drying time atop the electrode surface, and the amount of probe immobilized, were optimized. A good stability of the single- and double-stranded oligonucleotides immobilized on the DTSP-modified electrode was demonstrated, and a target DNA detection limit of 45 nM was achieved without signal amplification. Hybridization specificity was checked with non-complementary and mismatch oligonucleotides. A single-base mismatch oligonucleotide gave a hybridization response only 7 ± 3% higher than the signal obtained for the capture probe before hybridization. The possibility of reusing the electrochemical genosensor was also tested. --- paper_title: A DNA electrochemical sensor based on nanogold-modified poly-2,6-pyridinedicarboxylic acid film and detection of PAT gene fragment. paper_content: Abstract A new DNA electrochemical biosensor is described for electrochemical impedance spectroscopy (EIS) detection of the sequence-specific DNA related to the PAT transgene in transgenic plants. Poly-2,6-pyridinedicarboxylic acid film (PDC) was fabricated by electropolymerizing 2,6-pyridinedicarboxylic acid on the glassy carbon electrode (GCE). The gold nanoparticles (NG) were modified on the PDC/GCE to prepare NG/PDC/GCE, and then the DNA probe (ssDNA) was immobilized on the NG/PDC/GCE by the interaction of NG with DNA. The immobilization of NG and the immobilization and hybridization of the DNA probe were characterized with differential pulse voltammetry (DPV) and cyclic voltammetry (CV) using methylene blue (MB) as indicator and EIS. MB had a couple of well-defined CV peaks at the NG/PDC/GCE, and these redox peak currents increased after the immobilization of the DNA probe. After the hybridization of the DNA probe with the complementary single-stranded DNA (cDNA), the redox peak currents of MB decreased greatly. The electron transfer resistance (Ret) of the electrode surface in EIS in [Fe(CN)6]3–/4– solution increased after the immobilization of the DNA probe on the NG/PDC/GCE. The hybridization of the DNA probe with cDNA made Ret increase further. EIS was used for the label-free detection of the target DNA. The NG modified on the PDC dramatically enhanced the immobilization amount of the DNA probe and greatly improved the sensitivity of DNA detection.
The difference between the Ret value at the ssDNA/NG/PDC/GCE and that at the hybridized DNA-modified electrode (dsDNA/NG/PDC/GCE) was used as the signal for detecting the PAT gene fragment, with a dynamic range from 1.0 × 10−10 to 1.0 × 10−5 mol/L. A detection limit of 2.4 × 10−11 mol/L could be estimated. --- paper_title: Two electrophoreses in different pH buffers to purify forest soil DNA contaminated with humic substances paper_content: Direct extraction of DNA from soils is a useful way to gain genetic information on the soil source. However, DNA extraction from soils, especially forest soils, may be contaminated by humic substances due to their similar physical and chemical characteristics to soil. Even commercial soil DNA extraction kits fail to retrieve DNA from these soils. Using the potential changes of the specific charge of DNA and humic substances in a pH solution, we performed two electrophoreses in different pH buffers to eliminate the interfering substances. The method produced high-quality soil DNA, which is applicable for PCR amplification. --- paper_title: DNA single-base mismatch study with an electrochemical enzymatic genosensor paper_content: Abstract A thorough selectivity study of DNA hybridization employing an electrochemical enzymatic genosensor is discussed here. After immobilizing on a gold film a 30-mer 3′-thiolated DNA strand, hybridization with a biotinylated complementary one takes place. Then, alkaline phosphatase is incorporated into the duplex through the streptavidin–biotin interaction. Enzymatic generation of indigo blue from 3-indoxyl phosphate and subsequent electrochemical detection was carried out. The influence of hybridization conditions was studied in order to better discern between fully complementary and mismatched strands. Detection of 3, 2, and 1 mismatches was possible. The type and location of the single-base mismatch, as well as the influence of the length of the strands, were studied too. Mutations that cause displacement of the reading frame were also considered. The effect of concentration on the selectivity was tested, resulting in a highly selective genosensor with adequate sensitivity and stability. --- paper_title: Microfluidic device architecture for electrochemical patterning and detection of multiple DNA sequences. paper_content: Electrochemical biosensors pose an attractive solution for point-of-care diagnostics because they require minimal instrumentation and they are scalable and readily integrated with microelectronics. The integration of electrochemical biosensors with microscale devices has, however, proven to be challenging due to significant incompatibilities among biomolecular stability, operation conditions of electrochemical sensors, and microfabrication techniques. Toward a solution to this problem, we have demonstrated here an electrochemical array architecture that supports the following processes in situ, within a self-enclosed microfluidic device: (a) electrode cleaning and preparation, (b) electrochemical addressing, patterning, and immobilization of sensing biomolecules at selected sensor pixels, (c) sequence-specific electrochemical detection from multiple pixels, and (d) regeneration of the sensing pixels. The architecture we have developed is general, and it should be applicable to a wide range of biosensing schemes that utilize gold-thiol self-assembled monolayer chemistry.
As a proof-of-principle, we demonstrate the detection and differentiation of polymerase chain reaction (PCR) amplicons diagnostic of human (H1N1) and avian (H5N1) influenza. --- paper_title: Optical study of DNA surface hybridization reveals DNA surface density as a key parameter for microarray hybridization kinetics paper_content: We investigate the kinetics of DNA hybridization reactions on glass substrates, where one 22 mer strand (bound-DNA) is immobilized via phenylene-diisothiocyanate linker molecule on the substrate, the dye-labeled (Cy3) complementary strand (free-DNA) is in solution in a reaction chamber. We use total internal reflection fluorescence for surface detection of hybridization. As a new feature we perform a simultaneous real-time measurement of the change of free-DNA concentration in bulk parallel to the total internal reflection fluorescence measurement. We observe that the free-DNA concentration decreases considerably during hybridization. We show how the standard Langmuir kinetics needs to be extended to take into account the change in bulk concentration and explain our experimental results. Connecting both measurements we can estimate the surface density of accessible, immobilized bound-DNA. We discuss the implications with respect to DNA microarray detection. --- paper_title: A nucleic acid biosensor for the detection of a short sequence related to the hepatitis B virus using bis(benzimidazole)cadmium(II) dinitrate as an electrochemical indicator paper_content: Abstract A novel hybridization indicator, bis(benzimidazole)cadmium(II) dinitrate (Cd(bzim)2(NO3)2), was utilized to develop an electrochemical DNA biosensor for the detection of a short DNA sequence related to the hepatitis B virus (HBV). The sensor relies on the immobilization and hybridization of the 21-mer single-stranded oligonucleotide from the HBV long repeat at the glassy carbon electrode (GCE). The hybridization between the probe and its complementary sequence as the target was studied by enhancement of the peak of the Cd(bzim)2^2+ indicator using cyclic voltammetry (CV) and differential pulse voltammetry (DPV). Numerous factors affecting the probe immobilization, target hybridization, and indicator binding reactions were optimized to maximize the sensitivity and speed of the assay time. With this approach, a sequence of the HBV could be quantified over the range from 1.49 × 10−7 M to 1.06 × 10−6 M, with a linear correlation of r = 0.9973 and a detection limit of 8.4 × 10−8 M. The Cd(bzim)2^2+ signal observed from the probe sequence before and after hybridization with a four-base mismatch containing sequence was lower than that observed after hybridization with a complementary sequence, showing good selectivity. These results demonstrate that the Cd(bzim)2^2+ indicator provides great promise for the rapid and specific measurement of the target DNA. --- paper_title: A DNA electrochemical sensor prepared by electrodepositing zirconia on composite films of single-walled carbon nanotubes and poly(2,6-pyridinedicarboxylic acid), and its application to detection of the PAT gene fragment. paper_content: Carboxyl group-functionalized single-walled carbon nanotubes (SWNTs) and 2,6-pyridinedicarboxylic acid (PDC) were electropolymerized by cyclic voltammetry on a glassy-carbon electrode (GCE) surface to form composite films (SWNTs/PDC).
Zirconia was then electrodeposited on the SWNTs/PDC/GCE from an aqueous electrolyte containing ZrOCl2 and KCl by cycling the potential between -1.1 V and +0.7 V at a scan rate of 20 mV s(-1). DNA probes with a phosphate group at the 5' end were easily immobilized on the zirconia thin films because of the strong affinity between zirconia and phosphate groups. The sensors were characterized by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). EIS was used for label-free detection of the target DNA by measuring the increase of the electron transfer resistance (R(et)) of the electrode surface after the hybridization of the probe DNA with the target DNA. The PAT gene fragment and polymerase chain reaction (PCR) amplification of the NOS gene from transgenically modified beans were satisfactorily detected by use of this DNA electrochemical sensor. The dynamic range of detection of the sensor for the PAT gene fragment was from 1.0 x 10(-11) to 1.0 x 10(-6) mol L(-1), and the detection limit was 1.38 x 10(-12) mol L(-1). --- paper_title: Characterization of insecticidal crystal protein cry gene of Bacillus thuringiensis from soil of Sichuan Basin, China and cloning of novel haplotypes cry gene paper_content: Sichuan Basin, situated in the west of China and the fourth-largest basin of China, is a special area with complicated geomorphology (mountain, pasture, gorge, virgin forest, highland, hurst, glacier, and plain), and contains a rich and unique biodiversity. In order to describe a systematic study of cry gene resources from Bacillus thuringiensis (Bt) strains of different ecological regions in Sichuan Basin, a total of 791 Bt strains have been screened from 2650 soil samples. The analysis of the cry genes was based on the method of PCR-restriction fragment length polymorphism (PCR-RFLP). cry1, cry2, cry3, cry4/10, cry9, cry30, and cry40-type genes were found in this basin. Strains containing cry1 genes were the most abundant in our collection (66%), and twenty-one different cry1-type gene combinations were found. Bt strains harboring cry2 genes were the second most abundant (39.5%), and the strains containing cry3, cry9, cry4/10, cry30, and cry40 genes were found in 2.5, 3.5, 4.2, 4.2, and < 1%, respectively. Furthermore, several novel haplotype cry genes were found, and the full-length sequences of three novel cry genes were obtained, which were designated as cry52Ba1, cry54Aa1, and cry30Fa1 by the B. thuringiensis Pesticide Crystal Protein Nomenclature Committee, respectively. Sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) assay of 80 strains which did not produce any PCR products indicated that these strains may harbour potentially novel Cry proteins. All these results revealed the diversity and particularity of cry gene resources from B. thuringiensis strains in the Sichuan Basin. --- paper_title: Integrated Microfluidic Electrochemical DNA Sensor paper_content: Effective systems for rapid, sequence-specific nucleic acid detection at the point of care would be valuable for a wide variety of applications, including clinical diagnostics, food safety, forensics, and environmental monitoring. Electrochemical detection offers many advantages as a basis for such platforms, including portability and ready integration with electronics.
Toward this end, we report the Integrated Microfluidic Electrochemical DNA (IMED) sensor, which combines three key biochemical functionalities--symmetric PCR, enzymatic single-stranded DNA generation, and sequence-specific electrochemical detection--in a disposable, monolithic chip. Using this platform, we demonstrate detection of genomic DNA from Salmonella enterica serovar Typhimurium LT2 with a limit of detection of <10 aM, which is approximately 2 orders of magnitude lower than that from previously reported electrochemical chip-based methods. --- paper_title: Tracking the influence of long-term chromium pollution on soil bacterial community structures by comparative analyses of 16S rRNA gene phylotypes. paper_content: Abstract Bacterial community structures of highly chromium-polluted industrial landfill sites (G1 and G2) and a nearby control site (G3) were assessed using cultivation-dependent and cultivation-independent analyses. Sequencing of 16S rRNA genes discerned a total of 141 distinct operational taxonomic units (OTUs). Twelve different bacterial phyla were represented amongst 35, 34 and 72 different bacterial genera retrieved from sites G1, G2 and G3, respectively. The bacterial community of site G1 consisted of Firmicutes (52.75%), Gammaproteobacteria (18%), Actinobacteria (14.5%), Bacteriodetes (9.5%) and Deinococcus-Thermus (5.25%) and that of site G2 consisted of Firmicutes (31.25%), Alphaproteobacteria (7%), Betaproteobacteria (8%), Gammaproteobacteria (19%), Deltaproteobacteria (9.5%), Epsilonproteobacteria (3%), Actinobacteria (13%), Bacteriodetes (7.75%) and Deinococcus-Thermus (1.5%). The bacterial community of site G3 consisted of Firmicutes (6.25%), Alphaproteobacteria (7.5%), Betaproteobacteria (17.25%), Gammaproteobacteria (29.75%), Deltaproteobacteria (7.5%), Epsilonproteobacteria (4%), Actinobacteria (9.5%), Bacteriodetes (11.25%), Gemmatimonadetes (2.5%), Deinococcus-Thermus (1.8%), Chloroflexi (1.5%) and Planctomycetes (1.2%). The phyla of Gemmatimonadetes , Chloroflexi and Planctomycetes were not detected in sites G1 and G2; likewise, Alpha , Beta , Delta and Epsilon subdivisions of Proteobacteria were not recovered from site G1. These findings reveal that long-term chromium-induced perturbation results in community shifts towards a dominance of Firmicutes from Proteobacteria in the soil environment. --- paper_title: Cylinder-shaped conducting polypyrrole for labelless electrochemical multidetection of DNA paper_content: Abstract A new multidetection biosensor has been developed using the electrochemical properties of cylinder-shaped conducting polypyrrole grown on miniaturized graphite electrodes. Our objective was to conceive a sensitive, labelless and real-time DNA sensor for biomedical diagnosis. In a first step, copolymers bearing both ferrocene redox markers and oligonucleotide probes were selectively electro-addressed on microchip electrodes. Then, the study of their voltammetric response upon the addition of DNA targets revealed that the hybridization was efficiently transduced through the variation of ferrocene oxidation intensity. Using this technique, a good selectivity between Human Immunodeficiency Virus and Hepatitis B Virus targets was obtained. It was indeed possible to directly follow the hybridization. Complementary DNA detection limit reached 100 pM (3 fmol in 30 μL), which represents a good performance for such a practical, labelless and real-time sensor. 
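The polypyrrole sensor entry above reports its detection limit both as a concentration (100 pM) and as an absolute amount (3 fmol in 30 μL). The sketch below is a simple unit-conversion check, using Avogadro's number, of how those two figures relate and how many target molecules they correspond to; the amount and volume are taken from the abstract, the rest is routine arithmetic.

```python
# Unit-conversion check for the detection limit quoted above:
# 3 fmol of target in 30 uL should correspond to 100 pM.
N_A = 6.022e23           # Avogadro's number [1/mol]

amount_mol = 3e-15       # 3 fmol (value quoted in the abstract)
volume_L = 30e-6         # 30 uL  (value quoted in the abstract)

conc_M = amount_mol / volume_L
print(f"Concentration = {conc_M:.1e} M = {conc_M * 1e12:.0f} pM")  # -> 100 pM

copies = amount_mol * N_A
print(f"Number of target molecules = {copies:.1e}")                # ~1.8e9 copies
```

The same arithmetic puts the <10 aM limit of detection claimed for the IMED sensor below at only on the order of six target copies per microliter, a reminder that an on-chip amplification step (symmetric PCR in that device) precedes the electrochemical readout.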
--- paper_title: Selective Photoelectrochemical Detection of DNA with High-Affinity Metallointercalator and Tin Oxide Nanoparticle Electrode paper_content: Selective detection of double-stranded DNA (ds-DNA) in solution was achieved by photoelectrochemistry using a high-affinity DNA intercalator, Ru(bpy)2dppz (bpy = 2,2‘-bipyridine, dppz = dipyrido[3,2-a:2‘,3‘-c]phenazine) as the signal indicator and tin oxide nanoparticle as electrode material. When Ru(bpy)2dppz alone was irradiated with 470-nm light, anodic photocurrent was detected on the semiconductor electrode due to electron injection from its excited state into the conduction band of the electrode. The current was sustained in the presence of oxalate in solution, which acted as a sacrificial electron donor to regenerate the ground-state metal complex. After addition of double-stranded calf thymus DNA into the solution, photocurrent dropped substantially. The drop was attributed to the intercalation of Ru(bpy)2dppz into DNA and, consequently, the reduced mass diffusion of the indicator to the electrode, as well as electrostatic repulsion between oxalate anion and negative charges on DNA. The degree of ... --- paper_title: Microfluidic device architecture for electrochemical patterning and detection of multiple DNA sequences. paper_content: Electrochemical biosensors pose an attractive solution for point-of-care diagnostics because they require minimal instrumentation and they are scalable and readily integrated with microelectronics. The integration of electrochemical biosensors with microscale devices has, however, proven to be challenging due to significant incompatibilities among biomolecular stability, operation conditions of electrochemical sensors, and microfabrication techniques. Toward a solution to this problem, we have demonstrated here an electrochemical array architecture that supports the following processes in situ, within a self-enclosed microfluidic device: (a) electrode cleaning and preparation, (b) electrochemical addressing, patterning, and immobilization of sensing biomolecules at selected sensor pixels, (c) sequence-specific electrochemical detection from multiple pixels, and (d) regeneration of the sensing pixels. The architecture we have developed is general, and it should be applicable to a wide range of biosensing schemes that utilize gold-thiol self-assembled monolayer chemistry. As a proof-of-principle, we demonstrate the detection and differentiation of polymerase chain reaction (PCR) amplicons diagnostic of human (H1N1) and avian (H5N1) influenza. --- paper_title: Integrated Microfluidic Electrochemical DNA Sensor paper_content: Effective systems for rapid, sequence-specific nucleic acid detection at the point of care would be valuable for a wide variety of applications, including clinical diagnostics, food safety, forensics, and environmental monitoring. Electrochemical detection offers many advantages as a basis for such platforms, including portability and ready integration with electronics. Toward this end, we report the Integrated Microfluidic Electrochemical DNA (IMED) sensor, which combines three key biochemical functionalities--symmetric PCR, enzymatic single-stranded DNA generation, and sequence-specific electrochemical detection--in a disposable, monolithic chip. 
Using this platform, we demonstrate detection of genomic DNA from Salmonella enterica serovar Typhimurium LT2 with a limit of detection of <10 aM, which is approximately 2 orders of magnitude lower than that from previously reported electrochemical chip-based methods. ---
Title: Microfluidics-Based Lab-on-Chip Systems in DNA-Based Biosensing: An Overview Section 1: Introduction Description 1: Write an introductory overview of advances in microfluidics for nanotechnology-based sensing methods and the challenges faced in developing diagnostic devices for environmental monitoring. Section 2: The Physics of Microfluidics Description 2: Discuss the fundamental flow physics and interfacial phenomena in microfluidics, emphasizing design considerations and the critical importance of fluid control and flow stability. Section 3: Dimensionless Numbers Description 3: Explain the key dimensionless numbers relevant to fluid mechanics and species transport in microfluidics, including the Reynolds number and the Peclet number, and their significance in microfluidic device design. Section 4: Droplet Flow Description 4: Introduce the concept of droplet-based microfluidics (digital microfluidics), detailing the advantages of droplet flow over traditional continuous-flow systems. Section 5: Microfluidics-Based Pathogen Detection Description 5: Explore the application of microfluidic biochips for pathogen detection, covering different electrochemical techniques for DNA hybridization and pathogen identification. Section 6: DNA-Based Biosensor Description 6: Describe the functioning and types of DNA-based biosensors, focusing on electrochemical detection methods and advancements in the field. Section 7: Conclusions Description 7: Summarize the key points covered in the paper, highlighting the importance of microfluidics-based LOC systems in environmental pathogen detection and discussing future directions in the field.
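For reference, Section 3 of the outline above ("Dimensionless Numbers") focuses on the Reynolds and Peclet numbers; the standard textbook definitions are sketched below in LaTeX. The symbols (ρ density, μ dynamic viscosity, ν kinematic viscosity, u characteristic velocity, L characteristic length, D species diffusivity) follow the usual conventions and are not taken from any specific surveyed paper.

\mathrm{Re} = \frac{\rho u L}{\mu} = \frac{u L}{\nu} \quad \text{(inertial vs. viscous forces; } \mathrm{Re} \ll 1 \text{ in typical microchannels, hence laminar flow)}

\mathrm{Pe} = \frac{u L}{D} = \mathrm{Re} \cdot \mathrm{Sc}, \qquad \mathrm{Sc} = \frac{\nu}{D} \quad \text{(convective vs. diffusive species transport)}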
Applying CBM and PHM concepts with reliability approach for Blowout Preventer (BOP): a literature review
10
--- paper_title: Improved methods for reliability assessments of safety-critical systems: An application example for BOP systems paper_content: The failure of the Deepwater Horizon drilling rig's blowout preventer has been pointed to as one of the main causes of the Macondo accident on April 10th 2010. The blowout preventer system is one of the most important safety barriers in a hydrocarbon well. The accident has created a demand for improved methods of assessing the reliability of blowout preventer systems. The objective of this master thesis is to propose improvements to current reliability assessment methods for complex safety critical systems such as the blowout preventer. The report begins by describing the blowout preventer system. It is a system consisting of two main subsea parts containing the annular and ram blowout preventer valves which are used to seal off a well in the event of a subsea well kick. These annular and ram type preventers are governed by an electro-hydraulic control system which is operated by human interaction from control panels located on the rig floor. A functional analysis of the blowout preventer system is presented next. Essential functions are defined, and performance criteria for these functions identified. An approach to classification of blowout preventer functions is also presented, before the report moves on to the analysis of four main operational situations to which the blowout preventer is exposed, and whose characteristics have implications for the system's ability to act as a safety barrier. The pros and cons of different widely used blowout preventer system configurations are also discussed. Three main types of configurations are mentioned in the report: the modern configuration, the traditional configuration and the Deepwater Horizon blowout preventer system configuration. A literature survey which documents previous blowout preventer reliability studies performed by Per Holand on behalf of SINTEF is presented. An evaluation of the validity of the operational assumptions which have been made in these previous studies is also provided, such as assumptions regarding operational situations, failure input data, and several important assumptions regarding testing of blowout preventer systems. Regulations and guidelines which are relevant to blowout preventer reliability are also described here. The report further discusses how the blowout preventer may fail, and which types of failure modes are considered critical from a safety perspective. Some theoretic principles behind common cause failures are presented, along with a description of how common cause failures should be included in reliability assessments of safety critical systems through an approach called the PDS approach. This is followed by a discussion of possible sources for common cause failures in the blowout preventer system. As a suggestion towards how reliability assessments of blowout preventers can be improved, and some of the identified challenges solved, a reliability quantification method is presented. The method is based on post-processing of minimal cut sets from a fault tree analysis of the blowout preventer system, and produces more conservative and accurate approximations of the reliability than those produced through conventional methods. The method is also capable of taking into account common cause failures. The results from the calculations are presented and discussed.
An event tree which illustrates the effect from an escalated well control situation on the blowout preventer's ability to act as a safety barrier is also presented, along with a discussion of how blowout preventer reliability could possibly be more appropriately assessed through event tree analysis. Finally, the conclusions from the thesis are provided. The main conclusions are that the approach based on fault trees and post-processing of minimal cut sets can certainly be used to improve the quality of blowout preventer reliability estimates, and also provides a sound platform for including common cause failures in the analysis. Another key finding is that the fault tree, which is a "static model", poorly illustrates the criticality of "preventer-specific" components in escalated well control situations, since the unavailability of certain functions due to operational conditions has little or no implication for the reliability estimates produced. In contrast, the criticality of common control system components is certainly emphasised by the fault tree model. The author suggests that quantification of blowout preventer systems through fault tree analysis should be supplemented by event tree analysis to better evaluate the effect from escalation of the well control situation. Furthermore, the author recommends that a test coverage factor should be included when calculating the safety unavailability of components exclusive to shearing rams, since these cannot be fully function tested through conventional, non-destructive blowout preventer tests. It is also recommended that the industry investigate the accuracy with which the location of tool joints in the wellbore annulus can be determined through current methods. Improper spacing of tool joints is critical in a well control situation where the shear rams must be activated. --- paper_title: Strategies for Diagnosis paper_content: Diagnosis is among the dominant applications of expert systems technology today. In the past, most diagnostic systems were built using some form of rule-based production system. Recently, however, many new strategies have emerged to support much more complex reasoning for diagnosis. A survey of the architectures for the complex strategies now available for diagnosis is presented. --- paper_title: Sensor Systems for Prognostics and Health Management paper_content: Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented. --- paper_title: Improved methods for reliability assessments of safety-critical systems: An application example for BOP systems paper_content: The failure of the Deepwater Horizon drilling rig's blowout preventer has been pointed to as one of the main causes of the Macondo accident on April 10th 2010.
The blowout preventer system is one of the most important safety barriers in a hydrocarbon well. The accident has created a demand for improved methods of assessing the reliability of blowout preventer systems. The objective of this master thesis is to propose improvements to current reliability assessment methods for complex safety critical systems such as the blowout preventer. The report begins by describing the blowout preventer system. It is a system consisting of two main subsea parts containing the annular and ram blowout preventer valves which are used to seal off a well in the event of a subsea well kick. These annular and ram type preventers are governed by an electro-hydraulic control system which is operated by human interaction from control panels located on the rig floor. A functional analysis of the blowout preventer system is presented next. Essential functions are defined, and performance criteria for these functions identified. An approach to classification of blowout preventer functions is also presented, before the report moves on to the analysis of four main operational situations to which the blowout preventer is exposed, and whose characteristics have implications for the system's ability to act as a safety barrier. The pros and cons of different widely used blowout preventer system configurations are also discussed. Three main types of configurations are mentioned in the report: the modern configuration, the traditional configuration and the Deepwater Horizon blowout preventer system configuration. A literature survey which documents previous blowout preventer reliability studies performed by Per Holand on behalf of SINTEF is presented. An evaluation of the validity of the operational assumptions which have been made in these previous studies is also provided, such as assumptions regarding operational situations, failure input data, and several important assumptions regarding testing of blowout preventer systems. Regulations and guidelines which are relevant to blowout preventer reliability are also described here. The report further discusses how the blowout preventer may fail, and which types of failure modes are considered critical from a safety perspective. Some theoretic principles behind common cause failures are presented, along with a description of how common cause failures should be included in reliability assessments of safety critical systems through an approach called the PDS approach. This is followed by a discussion of possible sources for common cause failures in the blowout preventer system. As a suggestion towards how reliability assessments of blowout preventers can be improved, and some of the identified challenges solved, a reliability quantification method is presented. The method is based on post-processing of minimal cut sets from a fault tree analysis of the blowout preventer system, and produces more conservative and accurate approximations of the reliability than those produced through conventional methods. The method is also capable of taking into account common cause failures. The results from the calculations are presented and discussed. An event tree which illustrates the effect from an escalated well control situation on the blowout preventer's ability to act as a safety barrier is also presented, along with a discussion of how blowout preventer reliability could possibly be more appropriately assessed through event tree analysis. Finally, the conclusions from the thesis are provided.
The main conclusions are that the approach based on fault trees and post-processing of minimal cut sets can certainly be used to improve the quality of blowout preventer reliability estimates, and also provides a sound platform for including common cause failures in the analysis. Another key finding is that the fault tree, which is a "static model", poorly illustrates the criticality of "preventer-specific" components in escalated well control situations, since the unavailability of certain functions due to operational conditions has little or no implication for the reliability estimates produced. In contrast, the criticality of common control system components is certainly emphasised by the fault tree model. The author suggests that quantification of blowout preventer systems through fault tree analysis should be supplemented by event tree analysis to better evaluate the effect from escalation of the well control situation. Furthermore, the author recommends that a test coverage factor should be included when calculating the safety unavailability of components exclusive to shearing rams, since these cannot be fully function tested through conventional, non-destructive blowout preventer tests. It is also recommended that the industry investigate the accuracy with which the location of tool joints in the wellbore annulus can be determined through current methods. Improper spacing of tool joints is critical in a well control situation where the shear rams must be activated. --- paper_title: The EDF failure reporting system process, presentation and prospects paper_content: Abstract This paper describes the procedure Electricite de France uses to exploit the information on pressurized water reactor operation it receives back from the field (operation feedback). The first requirement in analyzing such data is a knowledge of past records. The first step, therefore, is to record the data, particularly events occurring on the plant and failures occurring on equipment, in large reliability data banks. However, the 'raw' information stored is rarely usable directly. The first step in the second stage—analysis—is to review and qualify the data before using it for any purpose. This difficult, but essential, review provides valuable information on the improvement of equipment reliability. The greater knowledge of plant and equipment behaviour, and the damage mechanisms involved, allows: • safety to be kept at a high level: operation feedback is also essential for probabilistic safety studies, • improvement of availability and preventive maintenance practices, • correction of the initial design (design changes) and help for designing future plants. Finally, operation feedback is a source of progress. Although it requires heavy initial investment, it is also a source of profit. It is a source of learning. The analysis results make it possible to define more suitable procedures and better preventive maintenance practices and thus improve the operation and safety of existing and future plants. --- paper_title: Reliability centered maintenance paper_content: Abstract Reliability centered maintenance (RCM) is a method for maintenance planning developed within the aircraft industry and later adapted to several other industries and military branches. This paper presents a structured approach to RCM, and discusses the various steps in the approach. The RCM method provides a framework for utilizing operating experience in a more systematic way.
The requirements for reliability models and data are therefore highlighted. The gap between maintenance practitioners and scientists working with maintenance optimization models is discussed, together with some future challenges for RCM. --- paper_title: Sensor Systems for Prognostics and Health Management paper_content: Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented. --- paper_title: MULTIPLE FAILURE MODES ANALYSIS AND WEIGHTED RISK PRIORITY NUMBER EVALUATION IN FMEA paper_content: Traditionally, failure mode and effects analysis (FMEA) only considers the impact of single failure on the system. For large and complex systems, since multiple failures of components exist, assessing multiple failure modes with all possible combinations is impractical. Pickard et al. [1] introduced a useful method to simultaneously analyze multiple failures for complex systems. However, they did not indicate which failures need to be considered and how to combine them appropriately. This paper extends Pickard’s work by proposing a minimum cut set based method for assessing the impact of multiple failure modes. In addition, traditional FMEA is made by addressing problems in an order from the biggest risk priority number (RPN) to the smallest ones. However, one disadvantage of this approach is that it ignores the fact that three factors (Severity (S), Occurrence (O), Detection (D)) (S, O, D) have the different weights in system rather than equality. For examples, reasonable weights for factors S, O are higher than the weight of D for some non-repairable systems. In this paper, we extended the definition of RPN by multiplying it with a weight parameter, which characterize the importance of the failure causes within the system. Finally, the effectiveness of the method is demonstrated with numerical examples. --- paper_title: Fuzzy logic prioritization of failures in a system failure mode, effects and criticality analysis paper_content: Abstract This paper describes a new technique, based on fuzzy logic, for prioritizing failures for corrective actions in a Failure Mode, Effects and Criticality Analysis (FMECA). As in a traditional criticality analysis, the assessment is based on the severity, frequency of occurrence, and detectability of an item failure. However, these parameters are here represented as members of a fuzzy set, combined by matching them against rules in a rule base, evaluated with min-max inferencing, and then defuzzified to assess the riskiness of the failure. 
This approach resolves some of the problems in traditional methods of evaluation and it has several advantages compared to strictly numerical methods: 1) it allows the analyst to evaluate the risk associated with item failure modes directly using the linguistic terms that are employed in making the criticality assessment; 2) ambiguous, qualitative, or imprecise information, as well as quantitative data, can be used in the assessment and they are handled in a consistent manner; and 3) it gives a more flexible structure for combining the severity, occurrence, and detectability parameters. Two fuzzy logic based approaches for assessing criticality are presented. The first is based on the numerical rankings used in a conventional Risk Priority Number (RPN) calculation and uses crisp inputs gathered from the user or extracted from a reliability analysis. The second, which can be used early in the design process when less detailed information is available, allows fuzzy inputs and also illustrates the direct use of the linguistic rankings defined for the RPN calculations. --- paper_title: Risk analysis of analytical validations by probabilistic modification of FMEA paper_content: Abstract Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling not only detecting technical risks, but also risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequency and maintaining the categorical scoring of severity. In an example, the results of traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeited tablets are re-interpretated by this probabilistic modification of FMEA. Using this probabilistic modification of FMEA, the frequency of occurrence of undetected failure mode(s) can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and the full analytical procedure. --- paper_title: A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA paper_content: The traditional Failure Mode and Effects Analysis (FMEA) uses Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of severity, occurrence and detection indexes. The most critically debated disadvantage of this approach is that various sets of these three indexes may produce an identical value of RPN. This research paper seeks to address the drawbacks in traditional FMEA and to propose a new approach to overcome these shortcomings. The Risk Priority Code (RPC) is used to prioritize failure modes, when two or more failure modes have the same RPN. A new method is proposed to prioritize failure modes, when there is a disagreement in ranking scale for severity, occurrence and detection. An Analysis of Variance (ANOVA) is used to compare means of RPN values. SPSS (Statistical Package for the Social Sciences) statistical analysis package is used to analyze the data. The results presented are based on two case studies. It is found that the proposed new methodology/approach resolves the limitations of traditional FMEA approach. 
unacceptable failure effects occur, design changes are made to mitigate those effects. The criticality part of the analysis prioritizes the failures for corrective action based on the probability of the item's failure mode and the severity of its effects. It uses linguistic terms to rank the probability of the failure mode occurrence, the severity of its failure effect and the probability of the failure being detected on a numeric scale from 1 to 10. These rankings are then multiplied to give the Risk Priority Number. Failure modes having a high RPN are assumed to be more important and given a higher priority than those having a lower RPN (2). --- paper_title: System failure behavior and maintenance decision making using, RCA, FMEA and FM paper_content: Purpose – The purpose of this paper is to permit the system reliability analysts/managers/engineers to model, analyze and predict the behavior of industrial systems in a more realistic and consistent manner and plan suitable maintenance strategies accordingly. Design/methodology/approach – Root cause analysis (RCA), failure mode effect analysis (FMEA) and fuzzy methodology (FM) have been used by the authors to build an integrated framework, to facilitate the reliability/system analysts in maintenance planning. The factors contributing to system unreliability were analyzed using RCA and FMEA. The uncertainty related to performance of system is modeled using fuzzy synthesis of information. Findings – The in-depth analysis of system is carried out using RCA and FMEA. The discrepancies associated with the traditional procedure of risk ranking in FMEA are modeled using decision making system based on fuzzy methodology. Further, to cope up with imprecise, uncertain and subjective information related to system per... --- paper_title: Utility Priority Number Evaluation for FMEA paper_content: Traditionally, decisions on how to improve an operation are based on risk priority number (RPN) in the failure mode and effects analysis (FMEA). Many scholars questioned the RPN method and proposed some new methods to improve the decision process, but these methods are only measuring from the risks viewpoint while ignoring the importance of corrective actions. The corrective actions may be interdependent; hence, if the implementation of corrective actions is in proper order, selection may maximize the improvement effect, bring favorable results in the shortest times, and provide the lowest cost. This study aims to evaluate the structure of hierarchy and interdependence of corrective action by interpretive structural model (ISM), then to calculate the weight of a corrective action through the analytic network process (ANP), then to combine the utility of corrective actions and make a decision on improvement priority order of FMEA by utility priority number (UPN). Finally, it verifies the feasibility and effectiveness of this method by application to a case study. --- paper_title: Intelligent Predictive Decision Support System for Condition-Based Maintenance paper_content: The high costs in maintaining today's complex and sophisticated equipment make it necessary to enhance modern maintenance management systems. Conventional condition-based maintenance (CBM) reduces the uncertainty of maintenance according to the needs indicated by the equipment condition.
The intelligent predictive decision support system (IPDSS) for condition-based maintenance (CBM) supplements the conventional CBM approach by adding the capability of intelligent condition-based fault diagnosis and the power of predicting the trend of equipment deterioration. An IPDSS model, based on the recurrent neural network (RNN) approach, was developed and tested and run for the critical equipment of a power plant. The results showed that the IPDSS model provided reliable fault diagnosis and strong predictive power for the trend of equipment deterioration. These valuable results could be used as input to an integrated maintenance management system to pre-plan and pre-schedule maintenance work, to reduce inventory costs for spare parts, to cut down unplanned forced outage and to minimise the risk of catastrophic failure. --- paper_title: Modelling using UML and BPMN the integration of open reliability, maintenance and condition monitoring management systems: An application in an electric transformer system paper_content: Maintenance management of an industrial plant has been always a complex activity. Nowadays Computerized Maintenance Management Systems (CMMSs) help to organize information and thus to carry out maintenance activities in a more efficient way. The emergence of new ICT has increased also the use of Condition Based Maintenance (CBM) systems and the application of Reliability Centred Maintenance (RCM) analysis. Each system is proved to provide benefits to the maintenance management. However when all the systems are adopted, the lack of integration among them can prevent the maximum exploitation of their capabilities. This work aims at fulfilling this gap, proposing an e-maintenance integration platform that combines the features of the three main systems. The methodology and the reference open standards used to develop the platform are exposed. UML-BPMN diagrams represent the emerging algorithms of the designed system. The final product, a software demo is implemented in an electric transformer. --- paper_title: Common cause failures in safety instrumented systems on oil and gas installations: Implementing defense measures through function testing paper_content: This paper presents a common cause failure (CCF) defense approach for safety instrumented systems (SIS) in the oil and gas industry. The SIS normally operates in the low demand mode, which means that regular testing and inspection are required to reveal SIS failures. The CCF defense approach comprises checklists and analytical tools which may be integrated with current approaches for function testing, inspection and follow-up. The paper focuses on how defense measures may be implemented to increase awareness of CCFs, to improve the ability to detect CCFs, and to avoid introducing new CCFs. The CCF defense approach may also be applicable for other industry sectors. --- paper_title: On condition based maintenance policy paper_content: Abstract In the case of a high-valuable asset, the Operation and Maintenance (O&M) phase requires heavy charges and more efforts than the installation (construction) phase, because it has long usage life and any accident of an asset during this period causes catastrophic damage to an industry. Recently, with the advent of emerging Information Communication Technologies (ICTs), we can get the visibility of asset status information during its usage period. It gives us new challenging issues for improving the efficiency of asset operations. 
One issue is to implement the Condition-Based Maintenance (CBM) approach that makes a diagnosis of the asset status based on wire or wireless monitored data, predicts the assets abnormality, and executes suitable maintenance actions such as repair and replacement before serious problems happen. In this study, we have addressed several aspects of CBM approach: definition, related international standards, procedure, and techniques with the introduction of some relevant case studies that we have carried out. --- paper_title: Reliability centered maintenance paper_content: Abstract Reliability centered maintenance (RCM) is a method for maintenance planning developed within the aircraft industry and later adapted to several other industries and military branches. This paper presents a structured approach to RCM, and discusses the various steps in the approach. The RCM method provides a framework for utilizing operating experience in a more systematic way. The requirements for reliability models and data are therefore highlighted. The gap between maintenance practitioners and scientists working with maintenance optimization models is discussed, together with some future challenges for RCM. --- paper_title: Condition based maintenance systems : An Investigation of Technical Constituets and Organizational Aspects paper_content: Condition based maintenance systems : An Investigation of Technical Constituets and Organizational Aspects --- paper_title: Sensor Systems for Prognostics and Health Management paper_content: Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented. --- paper_title: Self-maintenance and engineering immune systems: Towards smarter machines and manufacturing systems paper_content: Abstract This paper discusses the state-of-the-art research in the areas of self-maintenance and engineering immune systems (EIS) for machines with smarter adaptability to operating regime changes in future manufacturing systems. Inspired by the biological immune and nervous systems, the authors are introducing the transformation of prognostics and health management (PHM) to engineering immune systems (EIS). First, an overview on PHM is introduced. Its transformation toward resilient systems, self-maintenance systems, and engineering immune systems is also discussed. Finally, new concepts in developing future biological-based smarter machines based on autonomic computing and cloud computing are discussed. --- paper_title: Intelligent Fault Diagnosis and Prognosis for Engineering Systems paper_content: PREFACE. ACKNOWLEDGMENTS. PROLOGUE. 1 INTRODUCTION. 1.1 Historical Perspective. 1.2 Diagnostic and Prognostic System Requirements. 1.3 Designing in Fault Diagnostic and Prognostic Systems. 1.4 Diagnostic and Prognostic Functional Layers. 
1.5 Preface to Book Chapters. 1.6 References. 2 SYSTEMS APPROACH TO CBM/PHM. 2.1 Introduction. 2.2 Trade Studies. 2.3 Failure Modes and Effects Criticality Analysis (FMECA). 2.4 System CBM Test-Plan Design. 2.5 Performance Assessment. 2.6 CBM/PHM Impact on Maintenance and Operations: Case Studies. 2.7 CBM/PHM in Control and Contingency Management. 2.8 References. 3 SENSORS AND SENSING STRATEGIES. 3.1 Introduction. 3.2 Sensors. 3.3 Sensor Placement. 3.4 Wireless Sensor Networks. 3.5 Smart Sensors. 3.6 References. 4 SIGNAL PROCESSING AND DATABASE MANAGEMENT SYSTEMS. 4.1 Introduction. 4.2 Signal Processing in CBM/PHM. 4.3 Signal Preprocessing. 4.4 Signal Processing. 4.5 Vibration Monitoring and Data Analysis. 4.6 Real-Time Image Feature Extraction and Defect/Fault Classification. 4.7 The Virtual Sensor. 4.8 Fusion or Integration Technologies. 4.9 Usage-Pattern Tracking. 4.10 Database Management Methods. 4.11 References. 5 FAULT DIAGNOSIS. 5.1 Introduction. 5.2 The Diagnostic Framework. 5.3 Historical Data Diagnostic Methods. 5.4 Data-Driven Fault Classification and Decision Making. 5.5 Dynamic Systems Modeling. 5.6 Physical Model-Based Methods. 5.7 Model-Based Reasoning. 5.8 Case-Based Reasoning (CBR). 5.9 Other Methods for Fault Diagnosis. 5.10 A Diagnostic Framework for Electrical/Electronic Systems. 5.11 Case Study: Vibration-Based Fault Detection and Diagnosis for Engine Bearings. 5.12 References. 6 FAULT PROGNOSIS. 6.1 Introduction. 6.2 Model-Based Prognosis Techniques. 6.3 Probability-Based Prognosis Techniques. 6.4 Data-Driven Prediction Techniques. 6.5 Case Studies. 6.6 References. 7 FAULT DIAGNOSIS AND PROGNOSIS PERFORMANCE METRICS. 7.1 Introduction. 7.2 CBM/PHM Requirements Definition. 7.3 Feature-Evaluation Metrics. 7.4 Fault Diagnosis Performance Metrics. 7.5 Prognosis Performance Metrics. 7.6 Diagnosis and Prognosis Effectiveness Metrics. 7.7 Complexity/Cost-Benefit Analysis of CBM/PHM Systems. 7.8 References. 8 LOGISTICS: SUPPORT OF THE SYSTEM IN OPERATION. 8.1 Introduction. 8.2 Product-Support Architecture, Knowledge Base, and Methods for CBM. 8.3 Product Support without CBM. 8.4 Product Support with CBM. 8.5 Maintenance Scheduling Strategies. 8.6 A Simple Example. 8.7 References. APPENDIX. INDEX. --- paper_title: Condition based maintenance optimization for wind power generation systems under continuous monitoring paper_content: By utilizing condition monitoring information collected from wind turbine components, condition based maintenance (CBM) strategy can be used to reduce the operation and maintenance costs of wind power generation systems. The existing CBM methods for wind power generation systems deal with wind turbine components separately, that is, maintenance decisions are made on individual components, rather than the whole system. However, a wind farm generally consists of multiple wind turbines, and each wind turbine has multiple components including main bearing, gearbox, generator, etc. There are economic dependencies among wind turbines and their components. That is, once a maintenance team is sent to the wind farm, it may be more economical to take the opportunity to maintain multiple turbines, and when a turbine is stopped for maintenance, it may be more cost-effective to simultaneously replace multiple components which show relatively high risks. In this paper, we develop an optimal CBM solution to the above-mentioned issues. The proposed maintenance policy is defined by two failure probability threshold values at the wind turbine level. 
Based on the condition monitoring and prognostics information, the failure probability values at the component and the turbine levels can be calculated, and the optimal CBM decisions can be made accordingly. A simulation method is developed to evaluate the cost of the CBM policy. A numerical example is provided to illustrate the proposed CBM approach. A comparative study based on commonly used constant-interval maintenance policy demonstrates the advantage of the proposed CBM approach in reducing the maintenance cost. --- paper_title: A Wireless Sensor System for Prognostics and Health Management paper_content: This paper introduces a novel radio-frequency-based wireless sensor system and describes its prognostics and health management functions. The wireless sensor system includes a radio frequency identification sensor tag, a wireless reader, and diagnostic-prognostic software. The software uses the sequential probability ratio test with a cross-validation procedure to detect anomalies, assess degradation, and predict failures. The prognostic performance of the sensor system is demonstrated by a field application. --- paper_title: Sensor Systems for Prognostics and Health Management paper_content: Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented. --- paper_title: Intelligent Predictive Decision Support System for Condition-Based Maintenance paper_content: The high costs in maintaining today’s complex and sophisticated equipment make it necessary to enhance modern maintenance management systems. Conventional condition-based maintenance (CBM) reduces the uncertainty of maintenance according to the needs indicated by the equipment condition. The intelligent predictive decision support system (IPDSS) for condition-based maintenance (CBM) supplements the conventional CBM approach by adding the capability of intelligent condition-based fault diagnosis and the power of predicting the trend of equipment deterioration. An IPDSS model, based on the recurrent neural network (RNN) approach, was developed and tested and run for the critical equipment of a power plant. The results showed that the IPDSS model provided reliable fault diagnosis and strong predictive power for the trend of equipment deterioration. These valuable results could be used as input to an integrated maintenance management system to pre-plan and pre-schedule maintenance work, to reduce inventory costs for spare parts, to cut down unplanned forced outage and to minimise the risk of catastrophic failure. 
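The wireless-sensor PHM abstract above mentions anomaly detection with the sequential probability ratio test (SPRT). The Python sketch below shows only the textbook Wald SPRT on Gaussian observations as a generic illustration of that idea; the distributions, thresholds and cross-validation step of the cited system are not reproduced here, and all parameter names and example values are hypothetical.

import math

def sprt_anomaly(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Generic Wald SPRT: decide between H0 (healthy, mean mu0) and
    H1 (degraded, mean mu1) from a stream of Gaussian observations.
    Returns 'H0', 'H1', or 'undecided' plus the final log-likelihood ratio."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (anomaly) at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (healthy) at or below this
    llr = 0.0
    for x in samples:
        # Log-likelihood ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", llr
        if llr <= lower:
            return "H0", llr
    return "undecided", llr

# Example: a sustained upward drift in the readings eventually trips the H1 (anomaly) decision.
readings = [0.1, 0.3, 0.9, 1.2, 1.1, 1.4, 1.3, 1.5, 1.2, 1.6]
print(sprt_anomaly(readings))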
--- paper_title: On condition based maintenance policy paper_content: Abstract In the case of a high-valuable asset, the Operation and Maintenance (O&M) phase requires heavy charges and more efforts than the installation (construction) phase, because it has long usage life and any accident of an asset during this period causes catastrophic damage to an industry. Recently, with the advent of emerging Information Communication Technologies (ICTs), we can get the visibility of asset status information during its usage period. It gives us new challenging issues for improving the efficiency of asset operations. One issue is to implement the Condition-Based Maintenance (CBM) approach that makes a diagnosis of the asset status based on wire or wireless monitored data, predicts the assets abnormality, and executes suitable maintenance actions such as repair and replacement before serious problems happen. In this study, we have addressed several aspects of CBM approach: definition, related international standards, procedure, and techniques with the introduction of some relevant case studies that we have carried out. --- paper_title: A technical framework and roadmap of embedded diagnostics/prognostics for complex mechanical systems in PHM systems paper_content: Prognostics and Health Management (PHM) technologies have emerged as a key enabler to provide early indications of system faults and perform predictive maintenance actions. While implementing PHM depends on real-time acquiring the present and future health status of a system accurately. For electronic subsystems, built-in-test (BIT) makes it not difficult to achieve these goals. However, reliable prognostics capability is still a bottle-neck problem for mechanical subsystems due to lack of proper on-line sensors. Recent advancements in sensors and micro-electronics technologies have brought about a novel way out for complex mechanical systems, which is called embedded diagnostics/prognostics (ED/EP). ED/EP can provide real-time present condition and prognostic health state by integrating micro-sensors into mechanical structures during design and manufacture, so ED/EP has a revolutionary progress compared to traditional mechanical fault diagnostic/prognostic ways. But how to study ED/EP for complex mechanical systems has not been focused so far. This paper explores the challenges and needs of efforts to implement ED/EP technologies. In particular, this paper presents a technical framework and roadmap of ED/EP for complex mechanical systems. The framework is proposed based on the methodology of system integration and parallel design, which include six key elements (embedded sensors, embedded sensing design, embedded sensors placement, embedded signals transmission, ED/EP algorithms and embedded self-power). Relationships among these key elements are outlined and they should be considered simultaneously during the design of a complex mechanical system. Technical challenges of each key element are emphasized and existed or potential ways to solve each challenge are summarized in detail. Then the development roadmap of ED/EP in complex mechanical systems is brought forward according to potential advancements in related areas, which can be divided into three different stages: individual technology development, system integration and prototype design, and autonomous mechanical systems. 
In the end, the presented framework is exemplified with a gearbox --- paper_title: Modelling using UML and BPMN the integration of open reliability, maintenance and condition monitoring management systems: An application in an electric transformer system paper_content: Maintenance management of an industrial plant has been always a complex activity. Nowadays Computerized Maintenance Management Systems (CMMSs) help to organize information and thus to carry out maintenance activities in a more efficient way. The emergence of new ICT has increased also the use of Condition Based Maintenance (CBM) systems and the application of Reliability Centred Maintenance (RCM) analysis. Each system is proved to provide benefits to the maintenance management. However when all the systems are adopted, the lack of integration among them can prevent the maximum exploitation of their capabilities. This work aims at fulfilling this gap, proposing an e-maintenance integration platform that combines the features of the three main systems. The methodology and the reference open standards used to develop the platform are exposed. UML-BPMN diagrams represent the emerging algorithms of the designed system. The final product, a software demo is implemented in an electric transformer. ---
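Several of the FMEA abstracts in the reference list above describe the same arithmetic: severity, occurrence and detection are each ranked on a 1-10 scale and multiplied into a Risk Priority Number, and one abstract extends this by multiplying the RPN with a weight parameter. The Python sketch below restates only that multiplication; the exact weighting scheme of the cited paper is not reproduced, and the example failure modes and values are illustrative assumptions, not numbers taken from the cited work.

def rpn(severity, occurrence, detection):
    """Classic FMEA Risk Priority Number: the product of three 1-10 rankings."""
    for name, value in (("severity", severity), ("occurrence", occurrence), ("detection", detection)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} ranking must be on the 1-10 scale, got {value}")
    return severity * occurrence * detection

def weighted_rpn(severity, occurrence, detection, weight=1.0):
    """Weighted variant: the plain RPN scaled by an importance weight for the
    failure cause (the weighting scheme here is an illustrative assumption)."""
    return weight * rpn(severity, occurrence, detection)

# Illustrative failure modes for a generic valve (all values made up):
failure_modes = {
    "seal leakage":    (8, 4, 6),
    "actuator sticks": (9, 2, 7),
    "sensor drift":    (5, 6, 3),
}
ranked = sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
for mode, (s, o, d) in ranked:
    print(f"{mode}: RPN = {rpn(s, o, d)}")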
Title: Applying CBM and PHM concepts with reliability approach for Blowout Preventer (BOP): a literature review Section 1: INTRODUCTION Description 1: Write an overview of the significance of operational safety in oil and gas drilling, focusing on the critical role of Blowout Preventer (BOP) systems in preventing blowouts and ensuring well control. Section 2: BOP reliability and condition monitoring approach Description 2: Discuss the historical development of BOP reliability studies, the importance of high-quality failure data, and the implementation challenges of BOP condition monitoring and real-time monitoring technology. Section 3: METHODOLOGY Description 3: Provide a detailed explanation of the literature review methodology, including the systematic subdivision of the research theme, theoretical framework organization, literature search strategy, and selection criteria. Section 4: Blowout Preventer System Description 4: Describe the structure and components of the Blowout Preventer (BOP) system, highlighting the types of preventers, valves, lines, hydraulic connectors, primary control system, and backup system. Section 5: Reliability concepts Description 5: Explain the various reliability concepts critical for CBM and PHM, including failure data and analysis, Reliability Centered Maintenance (RCM), and Failure Mode and Effect Analysis (FMEA). Section 6: Condition Monitoring / Detection Description 6: Define condition monitoring, its significance, and how it helps in detecting and reporting abnormal events in machinery or systems. Section 7: Condition-based Maintenance (CBM) Description 7: Elaborate on the Condition-Based Maintenance (CBM) approach, including its definitions, importance, and implementation challenges. Section 8: Prognostic Health Management (PHM) Description 8: Discuss the evolution of CBM into Prognostics and Health Management (PHM), highlighting its capabilities, importance, and reliance on sensor systems for accurate information and prediction of equipment health. Section 9: Reliability approach for CBM and PHM Description 9: Explore the relationship between CBM and Reliability Centered Maintenance (RCM), detailing the steps required for developing a CBM framework and integrating it with sensors and monitoring systems for diagnostics and prognostics. Section 10: CONCLUSIONS Description 10: Summarize the current state and future prospects of real-time monitoring technology, BOP reliability, and the importance of integrating prognostics and diagnostics in maintenance strategy to enhance operational safety and reduce downtimes.
Software Defined Optical Access Networks (SDOANs): A Comprehensive Survey
5
--- paper_title: OpenFlow: The Next Generation of the Network? paper_content: Software-defined-network technologies like OpenFlow could change how datacenters, cloud systems, and perhaps even the Internet handle tomorrow's heavy network loads. --- paper_title: Challenges to support edge-as-a-service paper_content: A new era in telecommunications is emerging. Virtualized networking functions and resources will offer network operators a way to shift the balance of expenditure from capital to operational, opening up networks to new and innovative services. This article introduces the concept of edge as a service (EaaS), a means of harnessing the flexibility of virtualized network functions and resources to enable network operators to break the tightly coupled relationship they have with their infrastructure and enable more effective ways of generating revenue. To achieve this vision, we envisage a virtualized service access interface that can be used to programmatically alter access network functions and resources available to service providers in an elastic fashion. EaaS has many technically and economically difficult challenges that must be addressed before it can become a reality; the main challenges are summarized in this article. --- paper_title: Network Function Virtualization: State-of-the-art and Research Challenges paper_content: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products. --- paper_title: A Survey of Network Virtualization paper_content: Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area. 
--- paper_title: A Survey on Software-Defined Network and OpenFlow: From Concept to Implementation paper_content: Software-defined network (SDN) has become one of the most important architectures for the management of largescale complex networks, which may require repolicing or reconfigurations from time to time. SDN achieves easy repolicing by decoupling the control plane from data plane. Thus, the network routers/switches just simply forward packets by following the flow table rules set by the control plane. Currently, OpenFlow is the most popular SDN protocol/standard and has a set of design specifications. Although SDN/OpenFlow is a relatively new area, it has attracted much attention from both academia and industry. In this paper, we will conduct a comprehensive survey of the important topics in SDN/OpenFlow implementation, including the basic concept, applications, language abstraction, controller, virtualization, quality of service, security, and its integration with wireless and optical networks. We will compare the pros and cons of different schemes and discuss the future research trends in this exciting area. This survey can help both industry and academia R&D people to understand the latest progress of SDN/OpenFlow designs. --- paper_title: Software-Defined Access Network (SDAN) paper_content: Control-plane functions are migrating from dedicated network equipment into software running on commodity hardware. The Software-Defined Access Network (SDAN) concept is introduced here that extends the benefits of Software-Defined Networking (SDN) into broadband access. The SDAN virtualizes access-network control and management functions for broadband access, to enable network optimizations, streamline operations, and encourage innovative services creation, particularly in multi-operator environments. This paper identifies software-definable control and management functions for broadband access, and presents some specific network optimizations using the SDAN. --- paper_title: Network Function Virtualization: State-of-the-art and Research Challenges paper_content: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products. --- paper_title: A Survey of Network Virtualization paper_content: Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. 
To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area. --- paper_title: Femtocell Networks: A Survey paper_content: The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them. --- paper_title: Random access for machine-to-machine communication in LTE-advanced networks: issues and approaches paper_content: Machine-to-machine communication, a promising technology for the smart city concept, enables ubiquitous connectivity between one or more autonomous devices without or with minimal human interaction. M2M communication is the key technology to support data transfer among sensors and actuators to facilitate various smart city applications (e.g., smart metering, surveillance and security, infrastructure management, city automation, and eHealth). To support massive numbers of machine type communication (MTC) devices, one of the challenging issues is to provide an efficient way for multiple access in the network and to minimize network overload. In this article, we review the M2M communication techniques in Long Term Evolution- Advanced cellular networks and outline the major research issues. Also, we review the different random access overload control mechanisms to avoid congestion caused by random channel access of MTC devices. To this end, we propose a reinforcement learning-based eNB selection algorithm that allows the MTC devices to choose the eNBs (or base stations) to transmit packets in a self-organizing fashion. --- paper_title: Downlink Packet Scheduling in LTE Cellular Networks: Key Design Issues and a Survey paper_content: Future generation cellular networks are expected to provide ubiquitous broadband access to a continuously growing number of mobile users. In this context, LTE systems represent an important milestone towards the so called 4G cellular networks. A key feature of LTE is the adoption of advanced Radio Resource Management procedures in order to increase the system performance up to the Shannon limit. Packet scheduling mechanisms, in particular, play a fundamental role, because they are responsible for choosing, with fine time and frequency resolutions, how to distribute radio resources among different stations, taking into account channel condition and QoS requirements. 
This goal should be accomplished by providing, at the same time, an optimal trade-off between spectral efficiency and fairness. In this context, this paper provides an overview on the key issues that arise in the design of a resource allocation algorithm for LTE networks. It is intended for a wide range of readers as it covers the topic from basics to advanced aspects. The downlink channel under frequency division duplex configuration is considered as object of our study, but most of the considerations are valid for other configurations as well. Moreover, a survey on the most recent techniques is reported, including a classification of the different approaches presented in literature. Performance comparisons of the most well-known schemes, with particular focus on QoS provisioning capabilities, are also provided for complementing the described concepts. Thus, this survey would be useful for readers interested in learning the basic concepts before going into the details of a particular scheduling strategy, as well as for researchers aiming at deepening more specific aspects. --- paper_title: Pushing the Limits of LTE: A Survey on Research Enhancing the Standard paper_content: Cellular networks are currently experiencing a tremendous growth of data traffic. To cope with this demand, a close cooperation between academic researchers and industry/standardization experts is necessary, which hardly exists in practice. In this paper, we try to bridge this gap between researchers and engineers by providing a review of current standard-related research efforts in wireless communication systems. Furthermore, we give an overview about our attempt in facilitating the exchange of information and results between researchers and engineers, via a common simulation platform for 3GPP long term evolution (LTE) and a corresponding webforum for discussion. Often, especially in signal processing, reproducing results of other researcher is a tedious task, because assumptions and parameters are not clearly specified, which hamper the consideration of the state-of-the-art research in the standardization process. Also, practical constraints, impairments imposed by technological restrictions and well-known physical phenomena, e.g., signaling overhead, synchronization issues, channel fading, are often disregarded by researchers, because of simplicity and mathematical tractability. Hence, evaluating the relevance of research results under practical conditions is often difficult. To circumvent these problems, we developed a standard-compliant opensource simulation platform for LTE that enables reproducible research in a well-defined environment. We demonstrate that innovative research under the confined framework of a real-world standard is possible, sometimes even encouraged. With examples of our research work, we investigate on the potential of several important research areas under typical practical conditions, and highlight consistencies as well as differences between theory and practice. --- paper_title: Towards realization of the ABC vision: A comparative survey of Access Network Selection paper_content: Access Network Selection (ANS) providing the most appropriate networking technology for accessing and using services in a heterogeneous wireless environment constitutes the heart of the overall handover management procedure. The aim of this paper is to survey representative vertical handover schemes proposed in related research literature with emphasis laid on the design of the ANS mechanism. 
Schemes' distinct features are analyzed and the authors discuss on their relative merits and weaknesses. --- paper_title: A Survey of Recent Developments in Home M2M Networks paper_content: Recent years have witnessed the emergence of machine-to-machine (M2M) networks as an efficient means for providing automated communications among distributed devices. Automated M2M communications can offset the overhead costs of conventional operations, thus promoting their wider adoption in fixed and mobile platforms equipped with embedded processors and sensors/actuators. In this paper, we survey M2M technologies for applications such as healthcare, energy management and entertainment. In particular, we examine the typical architectures of home M2M networks and discuss the performance tradeoffs in existing designs. Our investigation covers quality of service, energy efficiency and security issues. Moreover, we review existing home networking projects to better understand the real-world applicability of these systems. This survey contributes to better understanding of the challenges in existing M2M networks and further shed new light on future research directions. --- paper_title: Stochastic Geometry for Modeling, Analysis, and Design of Multi-Tier and Cognitive Cellular Wireless Networks: A Survey paper_content: For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded to develop tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models provide high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions. --- paper_title: Is the Random Access Channel of LTE and LTE-A Suitable for M2M Communications? A Survey of Alternatives paper_content: The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. 
A comprehensive discussion of the different alternatives is provided, identifying strengths and weaknesses of each one of them, while drawing future trends to steer the efforts over the same shooting line. In addition, while existing literature has been focused on the performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future. --- paper_title: Green Cellular Networks: A Survey, Some Research Issues and Challenges paper_content: Energy efficiency in cellular networks is a growing concern for cellular operators to not only maintain profitability, but also to reduce the overall environment effects. This emerging trend of achieving energy efficiency in cellular networks is motivating the standardization authorities and network operators to continuously explore future technologies in order to bring improvements in the entire network infrastructure. In this article, we present a brief survey of methods to improve the power efficiency of cellular networks, explore some research issues and challenges and suggest some techniques to enable an energy efficient or "green" cellular network. Since base stations consume a maximum portion of the total energy used in a cellular system, we will first provide a comprehensive survey on techniques to obtain energy savings in base stations. Next, we discuss how heterogeneous network deployment based on micro, pico and femto-cells can be used to achieve this goal. Since cognitive radio and cooperative relaying are undisputed future technologies in this regard, we propose a research vision to make these technologies more energy efficient. Lastly, we explore some broader perspectives in realizing a "green" cellular network technology --- paper_title: Survey on transport control in data center networks paper_content: Traditional fair bandwidth sharing by leveraging AIMD-based congestion control mechanisms faces great challenges in data center networks. Much work has been done to solve one of the various challenges. However, no single transport layer protocol can solve all of them. In this article, we focus on the transport layer in data centers, and present a comprehensive survey of existing problems and their current solutions. We hope that this article can help readers quickly understand the causes of each problem and learn about current research progress, so as to help them make new contributions in this field. --- paper_title: OpenRAN: a software-defined ran architecture via virtualization paper_content: With the rapid growth of the demands for mobile data, wireless network faces several challenges, such as lack of efficient interconnection among heterogeneous wireless networks, and shortage of customized QoS guarantees between services. The fundamental reason for these challenges is that the radio access network (RAN) is closed and ossified. We propose OpenRAN, an architecture for software-defined RAN via virtualization. It achieves complete virtualization and programmability vertically, and benefits the convergence of heterogeneous network horizontally. It provides open, controllable, flexible and evolvable wireless networks. 
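As a concrete, if highly simplified, illustration of the RAN slicing that an OpenRAN-style virtualized controller (as in the entry above) could perform, the following Python sketch splits a pool of LTE physical resource blocks among virtual slices in proportion to their weights. It is a toy model written for this survey; the slice names, weights, demands and PRB counts are invented and do not come from any cited paper.

```python
# Toy sketch (not from any cited paper): proportional sharing of LTE physical
# resource blocks (PRBs) among virtual RAN slices, in the spirit of the
# OpenRAN-style virtualization surveyed above. Names and numbers are illustrative.

def allocate_prbs(total_prbs, slices):
    """Split total_prbs among slices proportionally to their weights,
    never exceeding a slice's demand; leftover PRBs go to unsatisfied slices."""
    alloc = {name: 0 for name in slices}
    remaining = total_prbs
    total_weight = sum(s["weight"] for s in slices.values())
    # First pass: weighted proportional share, capped by each slice's demand.
    for name, s in slices.items():
        share = int(total_prbs * s["weight"] / total_weight)
        alloc[name] = min(share, s["demand"])
        remaining -= alloc[name]
    # Second pass: hand leftover PRBs to slices that still have unmet demand.
    for name, s in sorted(slices.items(), key=lambda kv: -kv[1]["weight"]):
        if remaining <= 0:
            break
        extra = min(remaining, s["demand"] - alloc[name])
        alloc[name] += extra
        remaining -= extra
    return alloc

if __name__ == "__main__":
    slices = {
        "operator_a": {"weight": 2, "demand": 60},
        "operator_b": {"weight": 1, "demand": 50},
        "iot_slice":  {"weight": 1, "demand": 10},
    }
    # -> {'operator_a': 60, 'operator_b': 30, 'iot_slice': 10}
    print(allocate_prbs(100, slices))
```

A real virtualized RAN scheduler would of course also account for channel quality, QoS classes and isolation guarantees; the sketch only captures the basic sharing idea.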
--- paper_title: Distributed scheduling schemes for wireless mesh networks: A survey paper_content: An efficient scheduling scheme is a crucial part of Wireless Mesh Networks (WMNs)—an emerging communication infrastructure solution for autonomy, scalability, higher throughput, lower delay metrics, energy efficiency, and other service-level guarantees. Distributed schedulers are preferred due to better scalability, smaller setup delays, smaller management overheads, no single point of failure, and for avoiding bottlenecks. Based on the sequence in which nodes access the shared medium, repetitiveness, and determinism, distributed schedulers that are supported by wireless mesh standards can be classified as either random, pseudo-random, or cyclic schemes. We performed qualitative and quantitative studies that show the strengths and weaknesses of each category, and how the schemes complement each other. We discuss how wireless standards with mesh definitions have evolved by incorporating and enhancing one or more of these schemes. Emerging trends and research problems remaining for future research also have been identified. --- paper_title: Routing, modulation level, and spectrum assignment in optical metro ring networks using elastic transceivers paper_content: For decades, optical networks have provided larger bandwidths than could be utilized, but with the increasing growth of the global Internet traffic demand, new optical transmission technologies are required to provide a much higher data rate per channel and to enable more flexibility in the allocation of traffic flow. Currently, researchers are investigating innovative transceiver architectures capable of dynamically adapting the modulation format to the transmission link properties. These transceivers are referred to as elastic and enable flexible allocation of optical bandwidth resources. To exploit their capabilities, the conventional fixed spectrum grid has to evolve to provide a more scalable and flexible system that can provide the spectral resources requireded to serve the client demand. The benefits of elastic transceivers with distance-adaptive data rates have been evaluated in optical core networks, but their application to metro ring networks has still not been addressed. This paper proposes methods based on integer linear programs and heuristic approaches to solve the routing, modulation level, and spectrum assignment problem in optical rings with elastic transceivers and rate-adaptive modulation formats. Moreover, we discuss how to analytically compute feasible solutions that provide useful upper bounds. Results show a significant reduction in terms of transceiver utilization and spectrum occupation. --- paper_title: Next-generation optical access seamless evolution: concluding results of the European FP7 Project OASE paper_content: Increasing bandwidth demand drives the need for next-generation optical access (NGOA) networks that can meet future end-user service requirements. This paper gives an overview of NGOA solutions, the enabling optical access network technologies, architecture principles, and related economics and business models. NGOA requirements (including peak and sustainable data rate, reach, cost, node consolidation, and open access) are proposed, and the different solutions are compared against such requirements in different scenarios (in terms of population density and system migration). Unsurprisingly, it is found that different solutions are best suited for different scenarios. 
The conclusions drawn from such findings allow us to formulate recommendations in terms of technology, strategy, and policy. The paper is based on the main results of the European FP7 OASE Integrated Project that ran between January 1, 2010 and February 28, 2013. --- paper_title: Fiber-wireless (FiWi) access networks: A survey paper_content: This article provides an up-to-date survey of hybrid fiber-wireless (FiWi) access networks that leverage on the respective strengths of optical and wireless technologies and converge them seamlessly. FiWi networks become rapidly mature and give rise to new powerful access network solutions and paradigms. The survey first overviews the state of the art, enabling technologies and future developments of wireless and optical access networks, respectively, paying particular attention to wireless mesh networks and fiber to the home networks. After briefly reviewing some generic integration approaches of EPON and WiMAX networks, several recently proposed FiWi architectures based on different optical network topologies and WiFi technology are described. Finally, technological challenges toward the realization and commercial adoption of future FiWi access networks are identified. --- paper_title: Resource Allocation Optimization for Delay-Sensitive Traffic in Fronthaul Constrained Cloud Radio Access Networks paper_content: The cloud radio access network (C-RAN) provides high spectral and energy efficiency performances, low expenditures, and intelligent centralized system structures to operators, which have attracted intense interests in both academia and industry. In this paper, a hybrid coordinated multipoint transmission (H-CoMP) scheme is designed for the downlink transmission in C-RANs and fulfills the flexible tradeoff between cooperation gain and fronthaul consumption. The queue-aware power and rate allocation with constraints of average fronthaul consumption for the delay-sensitive traffic are formulated as an infinite horizon constrained partially observed Markov decision process, which takes both the urgent queue state information and the imperfect channel state information at transmitters (CSIT) into account. To deal with the curse of dimensionality involved with the equivalent Bellman equation, the linear approximation of postdecision value functions is utilized. A stochastic gradient algorithm is presented to allocate the queue-aware power and transmission rate with H-CoMP, which is robust against unpredicted traffic arrivals and uncertainties caused by the imperfect CSIT. Furthermore, to substantially reduce the computing complexity, an online learning algorithm is proposed to estimate the per-queue postdecision value functions and update the Lagrange multipliers. The simulation results demonstrate performance gains of the proposed stochastic gradient algorithms and confirm the asymptotical convergence of the proposed online learning algorithm. --- paper_title: EPON versus APON and GPON: a detailed performance comparison paper_content: Feature Issue on Optical Ethernet (OE) Ethernet passive optical network (EPON) efficiency issues in both upstream and downstream directions are discussed in detail, describing each component of the overall transmission overhead as well as quantifying their effect on the system's performance and comparing them with the other existing passive optical network (PON) access systems, namely, asynchronous transfer mode PON (APON) and generic framing PON (GPON). 
For EPON, two main transmission overhead groups are defined, namely, Ethernet encapsulation overhead and EPON-specific scheduling overhead. Simulations are performed using the source aggregation algorithm (SAA) to verify the Ethernet encapsulation overhead for various synthetic and measured packet size distributions (PSDs). An SAA-based EPON simulator is used to verify both upstream and downstream overall channel efficiencies. The obtained simulation results closely match the theoretical limits estimated based on the IEEE 802.3ah standard. An estimated throughput of 820 to 900 Mbits/s is available in the upstream direction, whereas in the downstream direction effective throughput ranges from 915 to 935 Mbits/s. --- paper_title: Survey on converged data center networks with DCB and FCoE: standards and protocols paper_content: Data center networks today face exciting new challenges in supporting cloud computing and other data-intensive applications. In conventional DCNs, different types of traffic are carried by different types of networks, such as Ethernet and Fibre Channel. Typically, Ethernet carries data traffic among servers in LANs, and Fibre Channel connects servers and storages in storage area networks. Due to the existence of multiple networks, the network cost, power consumption, wiring complexity, and management overhead are often high. The concept of a converged DCN is therefore appealing, carrying both types of traffic in a single converged Ethernet. Recent standards have been proposed for unified data center bridging (DCB) Ethernet and Fibre Channel over Ethernet (FCoE) protocols by the DCB Task Group of IEEE and the T11 Technical Committee of INCITS. In this article, we give a survey of the standards and protocols on converged DCNs, focusing mainly on their motivations and key functionalities. The technologies are discussed mainly from a practical perspective and may serve as a foundation for future research in this area. --- paper_title: A Survey on Radio-and-Fiber FiWi Network Architectures paper_content: The ultimate goal of Fiber-Wireless (FiWi) networks is the convergence of various optical and wireless technologies under a single infrastructure in order to take advantage of their complementary features and therefore provide a network capable of supporting bandwidth-hungry emerging applications in a seamless way for both fixed and mobile clients. This article surveys possible FiWi network architectures that are based on a Radio-and-Fiber (R&F) network integration, an approach that is different compared to the Radio-over-Fiber (RoF) proposal. The survey distinguishes FiWi R&F architectures based on a three-level network deployment of different optical or wireless technologies and classifies them into three main categories based on the technology used in the first level network. Future research challenges that should be explored in order to achieve a feasible FiWi R&F architecture are also discussed. --- paper_title: Wireless mesh networks: a survey paper_content: Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers.
Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, testbeds, industrial practice, and current standard activities related to WMNs are highlighted. --- paper_title: A survey of dynamic bandwidth allocation algorithms for Ethernet Passive Optical Networks paper_content: Ethernet Passive Optical Network (EPON) has been widely considered as a promising technology for implementing the FTTx solutions to the "last mile" bandwidth bottleneck problem. Bandwidth allocation is one of the critical issues in the design of EPON systems. In an EPON system, multiple optical network units (ONUs) share a common upstream channel for data transmission. To efficiently utilize the limited bandwidth of the upstream channel, an EPON system must dynamically allocate the upstream bandwidth among multiple ONUs based on the instantaneous bandwidth demands and quality of service requirements of end users. This paper introduces the fundamental concepts on EPONs, discusses the major issues related to bandwidth allocation in EPON systems, and presents a survey of the state-of-the-art dynamic bandwidth allocation (DBA) algorithms for EPONs. --- paper_title: PON/xDSL hybrid access networks paper_content: We discuss hybrid fiber/copper access networks with a focus on XG-PON/VDSL2 hybrid access networks. We present tutorial material on the XG-PON and VDSL2 protocols as standardized by the ITU. We investigate mechanisms to reduce the functional logic at the device that bridges the fiber and copper segments of the hybrid fiber/copper access network. This device is called a drop-point device. Reduced functional logic translates into lower energy consumption and cost for the drop-point device. We define and analyze the performance of several mechanisms to move some of the VDSL2 functional logic blocks from the drop-point device into the XG-PON Optical Line Terminal. Our analysis uncovers that silence suppression mechanisms are necessary to achieve statistical multiplexing gain when carrying synchronous intermediate VDSL2 data formats across the XG-PON. --- paper_title: Cost, power consumption and performance evaluation of metro networks paper_content: We provide models for evaluating the performance, cost and power consumption of different architectures suitable for a metropolitan area network (MAN).
We then apply these models to compare today's synchronous optical network/synchronous digital hierarchy metro rings with different alternatives envisaged for next-generation MAN: an Ethernet carrier grade ring, an optical hub-based architecture and an optical time-slotted wavelength division multiplexing (WDM) ring. Our results indicate that the optical architectures are likely to decrease power consumption by up to 75% when compared with present day MANs. Moreover, by allowing the capacity of each wavelength to be dynamically shared among all nodes, a transparent slotted WDM yields throughput performance that is practically equivalent to that of today's electronic architectures, for equal capacity. --- paper_title: Scalable Network Virtualization in Software-Defined Networks paper_content: Network virtualization gives each "tenant" in a data center its own network topology and control over its traffic flow. Software-defined networking offers a standard interface between controller applications and switch-forwarding tables, and is thus a natural platform for network virtualization. Yet, supporting numerous tenants with different topologies and controller applications raises scalability challenges. The FlowN architecture gives each tenant the illusion of its own address space, topology, and controller, and leverages database technology to efficiently store and manipulate mappings between virtual networks and physical switches. --- paper_title: Software defined networking and virtualization for broadband satellite networks paper_content: Satellite networks have traditionally been considered for specific purposes. Recently, new satellite technologies have been pushed to the market enabling high-performance satellite access networks. On the other hand, network architectures are taking advantage of emerging technologies such as software-defined networking (SDN), network virtualization and network functions virtualization (NFV). Therefore, benefiting communications services over satellite networks from these new technologies at first, and their seamless integration with terrestrial networks at second, are of great interest and importance. In this paper, and through comprehensive use cases, the advantages of introducing network programmability and virtualization using SDN and/or NFV in satellite networks are investigated. The requirements to be fulfilled in each use case are also discussed. --- paper_title: Security in Software Defined Networks: A Survey paper_content: Software defined networking (SDN) decouples the network control and data planes. The network intelligence and state are logically centralized and the underlying network infrastructure is abstracted from applications. SDN enhances network security by means of global visibility of the network state where a conflict can be easily resolved from the logically centralized control plane. Hence, the SDN architecture empowers networks to actively monitor traffic and diagnose threats to facilitates network forensics, security policy alteration, and security service insertion. The separation of the control and data planes, however, opens security challenges, such as man-in-the middle attacks, denial of service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to application, control, and data planes of SDN. The security platforms that secure each of the planes are described followed by various security approaches for network-wide security in SDN. 
SDN security is analyzed according to security dimensions of the ITU-T recommendation, as well as by the costs of security solutions. In a nutshell, this paper highlights the present and future security challenges in SDN and future directions for secure SDN. --- paper_title: FlowVisor: A Network Virtualization Layer paper_content: Network virtualization has long been a goal of the network research community. With it, multiple isolated logical networks each with potentially different addressing and forwarding mechanisms can share the same physical infrastructure. Typically this is achieved by taking advantage of the flexibility of software (e.g. [20, 23]) or by duplicating components in (often specialized) hardware [19]. In this paper we present a new approach to switch virtualization in which the same hardware forwarding plane can be shared among multiple logical networks, each with distinct forwarding logic. We use this switch-level virtualization to build a research platform which allows multiple network experiments to run side-by-side with production traffic while still providing isolation and hardware forwarding speeds. We also show that this approach is compatible with commodity switching chipsets and does not require the use of programmable hardware such as FPGAs or network processors. We build and deploy this virtualization platform on our own production network and demonstrate its use in practice by running five experiments simultaneously within a campus network. Further, we quantify the overhead of our approach and evaluate the completeness of the isolation between virtual slices. --- paper_title: Network virtualization: a hypervisor for the Internet? paper_content: Network virtualization is a relatively new research topic. A number of articles propose that certain benefits can be realized by virtualizing links between network elements as well as adding virtualization on intermediate network elements. In this article we argue that network virtualization may bring nothing new in terms of technical capabilities and theoretical performance, but it provides a way of organizing networks such that it is possible to overcome some of the practical issues in today's Internet. We strengthen our case by an analogy between the concept of network virtualization as it is currently presented in research, and machine virtualization as proven useful in deployments in recent years. First we make an analogy between the functionality of an operating system and that of a network, and identify similar concepts and elements. Then we emphasize the practical benefits realized by machine virtualization, and we exploit the analogy to derive potential benefits brought by network virtualization. We map the established applications for machine virtualization to network virtualization, thus identifying possible use cases for network virtualization. We also use this analogy to structure the design space for network virtualization. --- paper_title: Network Innovation using OpenFlow: A Survey paper_content: OpenFlow is currently the most commonly deployed Software Defined Networking (SDN) technology. SDN consists of decoupling the control and data planes of a network. A software-based controller is responsible for managing the forwarding information of one or more switches; the hardware only handles the forwarding of traffic according to the rules set by the controller. OpenFlow is an SDN technology proposed to standardize the way that a controller communicates with network devices in an SDN architecture.
It was proposed to enable researchers to test new ideas in a production environment. OpenFlow provides a specification to migrate the control logic from a switch into the controller. It also defines a protocol for the communication between the controller and the switches. As discussed in this survey paper, OpenFlow-based architectures have specific capabilities that can be exploited by researchers to experiment with new ideas and test novel applications. These capabilities include software-based traffic analysis, centralized control, dynamic updating of forwarding rules and flow abstraction. OpenFlow-based applications have been proposed to ease the configuration of a network, to simplify network management and to add security features, to virtualize networks and data centers and to deploy mobile systems. These applications run on top of networking operating systems such as Nox, Beacon, Maestro, Floodlight, Trema or Node.Flow. Larger scale OpenFlow infrastructures have been deployed to allow the research community to run experiments and test their applications in more realistic scenarios. Also, studies have measured the performance of OpenFlow networks through modelling and experimentation. We describe the challenges facing the large scale deployment of OpenFlow-based networks and we discuss future research directions of this technology. --- paper_title: Network Virtualization: Technologies, Perspectives, and Frontiers paper_content: Network virtualization refers to a broad set of technologies. Commercial solutions have been offered by the industry for years, while more recently the academic community has emphasized virtualization as an enabler for network architecture research, deployment, and experimentation. We review the entire spectrum of relevant approaches with the goal of identifying the underlying commonalities. We offer a unifying definition of the term “network virtualization” and examine existing approaches to bring out this unifying perspective. We also discuss a set of challenges and research directions that we expect to come to the forefront as network virtualization technologies proliferate. --- paper_title: A roadmap for traffic engineering in SDN-OpenFlow networks paper_content: Software Defined Networking (SDN) is an emerging networking paradigm that separates the network control plane from the data forwarding plane with the promise to dramatically improve network resource utilization, simplify network management, reduce operating cost, and promote innovation and evolution. Although traffic engineering techniques have been widely exploited in the past and current data networks, such as ATM networks and IP/MPLS networks, to optimize the performance of communication networks by dynamically analyzing, predicting, and regulating the behavior of the transmitted data, the unique features of SDN require new traffic engineering techniques that exploit the global network view, status, and flow patterns/characteristics available for better traffic control and management. This paper surveys the state-of-the-art in traffic engineering for SDNs, and mainly focuses on four thrusts including flow management, fault tolerance, topology update, and traffic analysis/characterization. In addition, some existing and representative traffic engineering tools from both industry and academia are explained. Moreover, open research issues for the realization of SDN traffic engineering solutions are discussed in detail. 
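Several of the entries above describe the core OpenFlow idea of a controller installing match-action rules that the switch data plane then applies to every packet. The short Python sketch below models that lookup logic so the mechanism is concrete; it is a toy model rather than the OpenFlow wire protocol, and the header fields, priorities and actions are invented for illustration.

```python
# Minimal sketch of the match-action idea behind OpenFlow-style flow tables,
# as described in the surveys above. This is a toy model, not the OpenFlow
# protocol itself; field names, priorities and actions are illustrative only.

WILDCARD = None  # a rule field set to None matches any packet value

class FlowRule:
    def __init__(self, priority, match, actions):
        self.priority = priority      # higher priority wins when several rules match
        self.match = match            # dict: header field -> required value (or WILDCARD)
        self.actions = actions        # e.g. ["output:2"] or ["drop"]

    def matches(self, pkt):
        return all(v == WILDCARD or pkt.get(f) == v for f, v in self.match.items())

class FlowTable:
    def __init__(self, table_miss=("send_to_controller",)):
        self.rules = []
        self.table_miss = list(table_miss)

    def install(self, rule):          # what a controller would push down
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, pkt):            # what the switch data plane would do per packet
        for rule in self.rules:
            if rule.matches(pkt):
                return rule.actions
        return self.table_miss        # table miss -> ask the controller

if __name__ == "__main__":
    table = FlowTable()
    table.install(FlowRule(10, {"ip_dst": "10.0.0.2", "tcp_dst": WILDCARD}, ["output:2"]))
    table.install(FlowRule(100, {"ip_dst": "10.0.0.2", "tcp_dst": 80}, ["output:3"]))
    print(table.lookup({"ip_dst": "10.0.0.2", "tcp_dst": 80}))   # ['output:3']
    print(table.lookup({"ip_dst": "10.0.0.2", "tcp_dst": 22}))   # ['output:2']
    print(table.lookup({"ip_dst": "10.0.0.9", "tcp_dst": 80}))   # ['send_to_controller']
```

The table-miss path in the sketch mirrors the reactive behaviour that the traffic-engineering entry above discusses: unmatched traffic is punted to the controller, which can then install new rules.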
--- paper_title: Resource Discovery and Allocation in Network Virtualization paper_content: Network virtualization is considered an important potential solution to the gradual ossification of the Internet. In a network virtualization environment, a set of virtual networks share the resources of a common physical network although each virtual network is isolated from others. Benefits include increased flexibility, diversity, security and manageability. Resource discovery and allocation are fundamental steps in the process of creating new virtual networks. This paper surveys previous work on, and the present status of, resource discovery and allocation in network virtualization. We also describe challenges and suggest future directions for this area of research. --- paper_title: Interfaces, attributes, and use cases: A compass for SDN paper_content: The term Software Defined Networking (SDN) is prevalent in today's discussion about future communication networks. As with any new term or paradigm, however, no consistent definition regarding this technology has formed. The fragmented view on SDN results in legacy products being passed off by equipment vendors as SDN, academics mixing up the attributes of SDN with those of network virtualization, and users not fully understanding the benefits. Therefore, establishing SDN as a widely adopted technology beyond laboratories and insular deployments requires a compass to navigate the multitude of ideas and concepts that make up SDN today. The contribution of this article represents an important step toward such an instrument. It gives a thorough definition of SDN and its interfaces as well as a list of its key attributes. Furthermore, a mapping of interfaces and attributes to SDN use cases is provided, highlighting the relevance of the interfaces and attributes for each scenario. This compass gives guidance to a potential adopter of SDN on whether SDN is in fact the right technology for a specific use case. --- paper_title: Software-defined networking: management requirements and challenges paper_content: SDN is an emerging paradigm currently evidenced as a new driving force in the general area of computer networks. Many investigations have been carried out in the last few years about the benefits and drawbacks in adopting SDN. However, there are few discussions on how to manage networks based on this new paradigm. This article contributes to this discussion by identifying some of the main management requirements of SDN. Moreover, we describe current proposals and highlight major challenges that need to be addressed to allow wide adoption of the paradigm and related technology.
--- paper_title: Survey of virtual machine research paper_content: The complete instruction-by-instruction simulation of one computer system on a different system is a well-known computing technique. It is often used for software development when a hardware base is being altered. For example, if a programmer is developing software for some new special purpose (e.g., aerospace) computer X which is under construction and as yet unavailable, he will likely begin by writing a simulator for that computer on some available general-purpose machine G. The simulator will provide a detailed simulation of the special-purpose environment X, including its processor, memory, and I/O devices. Except for possible timing dependencies, programs which run on the “simulated machine X” can later run on the “real machine X” (when it is finally built and checked out) with identical effect. The programs running on X can be arbitrary — including code to exercise simulated I/O devices, move data and instructions anywhere in simulated memory, or execute any instruction of the simulated machine. The simulator provides a layer of software filtering which protects the resources of the machine G from being misused by programs on X. --- paper_title: Survey on Network Virtualization Hypervisors for Software Defined Networking paper_content: Software defined networking (SDN) has emerged as a promising paradigm for making the control of communication networks flexible. SDN separates the data packet forwarding plane, i.e., the data plane, from the control plane and employs a central controller. Network virtualization allows the flexible sharing of physical networking resources by multiple users (tenants). Each tenant runs its own applications over its virtual network, i.e., its slice of the actual physical network. The virtualization of SDN networks promises to allow networks to leverage the combined benefits of SDN networking and network virtualization and has therefore attracted significant research attention in recent years. A critical component for virtualizing SDN networks is an SDN hypervisor that abstracts the underlying physical SDN network into multiple logically isolated virtual SDN networks (vSDNs), each with its own controller. We comprehensively survey hypervisors for SDN networks in this paper. We categorize the SDN hypervisors according to their architecture into centralized and distributed hypervisors. We furthermore sub-classify the hypervisors according to their execution platform into hypervisors running exclusively on general-purpose compute platforms, or on a combination of general-purpose compute platforms with general- or special-purpose network elements. We exhaustively compare the network attribute abstraction and isolation features of the existing SDN hypervisors. As part of the future research agenda, we outline the development of a performance evaluation framework for SDN hypervisors.
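The FlowVisor and SDN-hypervisor entries above both hinge on flowspace isolation: a hypervisor sitting between tenant controllers and the physical switches, admitting only rules that stay inside each tenant's slice. The following Python sketch illustrates that admission check under the simplifying assumption that a slice is just an IP prefix; it is my own toy simplification, not FlowVisor's actual mechanism (which slices on richer flowspace dimensions), and all tenant names and prefixes are invented.

```python
# Toy illustration (not FlowVisor's actual code) of the flowspace-isolation job
# an SDN hypervisor performs: each tenant controller may only install rules whose
# match fields fall inside the slice it was assigned.

import ipaddress

class SliceHypervisor:
    def __init__(self):
        self.slices = {}           # tenant -> ip_network object (its flowspace)
        self.physical_table = []   # rules actually accepted toward the switch

    def add_slice(self, tenant, prefix):
        self.slices[tenant] = ipaddress.ip_network(prefix)

    def install_rule(self, tenant, ip_dst, actions):
        """Accept the rule only if ip_dst lies inside the tenant's slice."""
        space = self.slices[tenant]
        if ipaddress.ip_address(ip_dst) not in space:
            raise PermissionError(f"{tenant} may not touch {ip_dst} (slice {space})")
        self.physical_table.append({"tenant": tenant, "ip_dst": ip_dst, "actions": actions})

if __name__ == "__main__":
    hv = SliceHypervisor()
    hv.add_slice("tenant_a", "10.1.0.0/16")
    hv.add_slice("tenant_b", "10.2.0.0/16")
    hv.install_rule("tenant_a", "10.1.0.7", ["output:1"])        # accepted
    try:
        hv.install_rule("tenant_a", "10.2.0.7", ["output:1"])    # rejected: outside slice
    except PermissionError as err:
        print("blocked:", err)
    print(hv.physical_table)
```

A production hypervisor would additionally translate virtual switch/port identifiers to physical ones and police control-channel load, which the surveys above discuss in detail.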
--- paper_title: A Survey on OFDM-Based Elastic Core Optical Networking paper_content: Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed. --- paper_title: Software-Defined and Virtualized Future Mobile and Wireless Networks: A Survey paper_content: With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of MWN and significantly benefit the future mobile and wireless network. --- paper_title: SDN and Virtualization-Based LTE Mobile Network Architectures: A Comprehensive Survey paper_content: Software-defined networking (SDN) features the decoupling of the control plane and data plane, a programmable network and virtualization, which enables network infrastructure sharing and the "softwarization" of the network functions. Recently, many research works have tried to redesign the traditional mobile network using two of these concepts in order to deal with the challenges faced by mobile operators, such as the rapid growth of mobile traffic and new services. In this paper, we first provide an overview of SDN, network virtualization, and network function virtualization, and then describe the current LTE mobile network architecture as well as its challenges and issues. 
By analyzing and categorizing a wide range of the latest research works on SDN and virtualization in LTE mobile networks, we present a general architecture for SDN and virtualization in mobile networks (called SDVMN) and then propose a hierarchical taxonomy based on the different levels of the carrier network. We also present an in-depth analysis about changes related to protocol operation and architecture when adopting SDN and virtualization in mobile networks. In addition, we list specific use cases and applications that benefit from SDVMN. Last but not least, we discuss the open issues and future research directions of SDVMN. --- paper_title: Optical network evolution for 5G mobile applications and SDN-based control paper_content: The tight connection between advanced mobile techniques and optical networking has already been made by emerging cloud radio access network architectures, wherein fiber-optic links to/from remote cell sites have been identified as the leading high-speed, low-latency connectivity solution. By taking such fiber-optic mobile fronthaul networks as the reference case, this paper will consider their scaling to meet 5G demands as driven by key 5G mobile techniques, including massive multiple input multiple output (MIMO) and coordinated multipoint (CoMP), network densification via small/pico/femto cells, device-to-device (D2D) connectivity, and an increasingly heterogeneous bring-your-own-device (BYOD) networking environment. Ramifications on mobile fronthaul signaling formats, optical component selection and wavelength management, topology evolution and network control will be examined, highlighting the need to move beyond raw common public radio interface (CPRI) solutions, support all wavelength division multiplexing (WDM) optics types, enable topology evolution towards a meshed architecture, and adopt a software-defined networking (SDN)-based network control plane. The proposed optical network evolution approaches are viewed as opportunities for both optimizing user-side quality-of-experience (QoE) and monetizing the underlying optical network. --- paper_title: SDN for Optical Access Networks paper_content: This paper discusses SDN for optical access networks, with a focus on SDN overlays for existing networks, a unified control plane for next-generation optical access, and an overview of recent research progress in this area. --- paper_title: Software Defined Networking and applicability to access networks paper_content: This paper explores the applicability of the Software Defined Networking (SDN) paradigm to access networks. In particular, it describes Broadband and Enterprise use cases where SDN can play a role in enabling new network services. --- paper_title: A critical review of OpenFlow/SDN-based networks paper_content: The separation of the data and control planes simplify the implementation of SDN applications. The centralised architecture of a controller based on the OpenFlow protocol is appealing to the network operators. We have reviewed the concept of SDNs and its extension to optical networks, and constrained and unconstrained wireless access networks. The current status of the proposed and implemented SDN architectures is such that the fulfilment of a SLA is an open issue. This aspect is left to be tackled by the SDN applications and the proposed architectures do not provide means to describe the interplay between different technology domains. 
In this paper we make an in depth analysis of the current proposed architectures and identify important challenges to be addressed by a novel integrated SDN architecture. --- paper_title: Wireless Network Virtualization: A Survey, Some Research Issues and Challenges paper_content: Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. Despite the potential vision of wireless network virtualization, several significant research challenges remain to be addressed before widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security as well as non-technical issues such as governance regulations, etc. In this paper, we provide a brief survey on some of the works that have already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization. --- paper_title: Wireless Sensor Network Virtualization: A Survey paper_content: Wireless Sensor Networks (WSNs) are the key components of the emerging Internet-of-Things (IoT) paradigm. They are now ubiquitous and used in a plurality of application domains. WSNs are still domain specific and usually deployed to support a specific application. However, as WSNs' nodes are becoming more and more powerful, it is getting more and more pertinent to research how multiple applications could share a very same WSN infrastructure. Virtualization is a technology that can potentially enable this sharing. This paper is a survey on WSN virtualization. It provides a comprehensive review of the state-of-the-art and an in-depth discussion of the research issues. We introduce the basics of WSN virtualization and motivate its pertinence with carefully selected scenarios. Existing works are presented in detail and critically evaluated using a set of requirements derived from the scenarios. The pertinent research projects are also reviewed. Several research issues are also discussed with hints on how they could be tackled. --- paper_title: Software-defined optical networks (SDONs): a survey paper_content: This paper gives an overview of software-defined optical networks (SDONs). It explains the general concepts on software-defined networks (SDNs), their relationship with network function virtualization, and also about OpenFlow, which is a pioneer protocol for SDNs. It then explains the benefits and challenges of extending SDNs to multilayer optical networks, including flexible grid and elastic optical networks, and how it compares to generalized multi-protocol label switching for implementing a unified control plane. An overview on the industry and research efforts on SDON standardization and implementation is given next, to bring the reader up to speed with the current state of the art in this field. 
Finally, the paper outlines the benefits achieved by SDONs for network operators, and also some of the important and relevant research problems that need to be addressed. --- paper_title: Field Trial of an OpenFlow-Based Unified Control Plane for Multilayer Multigranularity Optical Switching Networks paper_content: Software defined networking and OpenFlow, which allow operators to control the network using software running on a network operating system within an external controller, provide the maximum flexibility for the operator to control a network, and match the carrier's preferences given its centralized architecture, simplicity, and manageability. In this paper, we report a field trial of an OpenFlow-based unified control plane (UCP) for multilayer multigranularity optical switching networks, verifying its overall feasibility and efficiency, and quantitatively evaluating the latencies for end-to-end path creation and restoration. To the best of our knowledge, the field trial of an OpenFlow-based UCP for optical networks is a world first. --- paper_title: Software defined optical networks technology and infrastructure: Enabling software-defined optical network operations paper_content: Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed.
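Several of the optical entries above (the OFDM-based elastic networking survey, the routing/modulation/spectrum assignment work, and the optical SDN control planes) revolve around assigning a block of contiguous frequency slots that is free on every link of a path. The Python sketch below shows a first-fit version of that decision, the kind an optical SDN controller could automate; it is a rough illustration under the assumptions of a fixed 16-slot grid and invented link names, not an algorithm taken from any of the cited papers.

```python
# Rough sketch of first-fit spectrum assignment in a flexi-grid elastic optical
# network. Link names and the slot count are made up for illustration.

NUM_SLOTS = 16  # frequency slots per link

def first_fit(link_usage, route, demand_slots):
    """Return the first starting slot index where demand_slots contiguous slots
    are free on every link of route, or None if the demand is blocked.
    Enforces spectrum continuity (same slots end to end) and contiguity."""
    for start in range(NUM_SLOTS - demand_slots + 1):
        window = range(start, start + demand_slots)
        if all(not link_usage[link][s] for link in route for s in window):
            for link in route:              # reserve the chosen slots
                for s in window:
                    link_usage[link][s] = True
            return start
    return None

if __name__ == "__main__":
    links = {name: [False] * NUM_SLOTS for name in ("A-B", "B-C", "C-D")}
    print(first_fit(links, ["A-B", "B-C"], 4))           # 0: slots 0-3 free everywhere
    print(first_fit(links, ["B-C", "C-D"], 3))           # 4: slots 0-3 now busy on B-C
    print(first_fit(links, ["A-B", "B-C", "C-D"], 16))   # None: not enough free spectrum
```

Distance-adaptive modulation, as discussed in the elastic-transceiver entries, would change demand_slots per request depending on path length; the allocation skeleton stays the same.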
--- paper_title: Defining optical software-defined networks (SDN): From a compilation of demos to network model synthesis paper_content: We propose the first optical SDN model enabling performance optimization and comparison of heterogeneous SDN scenarios. We exploit it to minimize latency and compare cost for non-SDN, partial-SDN and full-SDN variants of the same network. --- paper_title: New devices enabling software-defined optical networks paper_content: Next-generation ROADM networks are incorporating an extensive range of new features and capabilities including colorless, directionless, and contentionless multiplexing and demultiplexing, flexible spectrum channel definition, and higher-order modulation formats. To efficiently support these new features, both new ROADM node architectures along with complementary optical components and technologies are being synergistically designed. In this article, we describe these new architectures, components, and technologies, and how they work together to support these features in a compact and cost-efficient manner. --- paper_title: Flexible next-generation optical access paper_content: We propose a fibre access network paradigm achieving low latency, high throughput and energy efficiency, by combining the best of PON and AON, optical and electrical forwarding, and the concepts of software defined networks, flexible grid, and cache assisted networking. --- paper_title: Future Internet Infrastructure Based on the Transparent Integration of Access and Core Optical Transport Networks paper_content: It is increasingly recognized that the Internet is transforming into a platform providing services beyond today's expectations. To successfully realize this transformation, the structural limitations of current networking architectures must be raised so that information transport infrastructure gracefully evolves to address transparent core-access integration, optical flow/packet transport, and end-to-end service delivery capability, overcoming the limitations of segmentation between access, metro, and core networks and domains. We propose and evaluate an integrated control plane for optical access and core networks, which addresses the above consideration. The proposed control plane can lead to a unified transport infrastructure integrating state-of-the-art components and technologies including wavelength division multiplexing, passive optical networking, and optical packet routers with inherent traffic grooming capabilities. The performance of the proposed architecture is assessed by means of simulation in terms of cost, resource utilization, and delay. --- paper_title: Cognitive dynamic optical networks paper_content: Cognitive networks are a promising solution for the control of heterogeneous optical networks.
We review their fundamentals as well as a number of applications developed in the framework of the EU FP7 CHRON project. --- paper_title: A control plane framework for future cognitive heterogeneous optical networks paper_content: Future optical networks are expected to provide an efficient infrastructure able to deliver a growing number of services, which have to meet various requirements in terms of quality of service. To achieve this objective the physical network is going through an evolution aimed at increasing its flexibility in terms of spectrum utilization and its level of heterogeneity in terms of supported services and technologies. In this context, cognitive optical networks represent a viable solution to fill the gap between the intelligence required by the future networks and the current optical technology. This paper proposes a control plane framework developed to coordinate the interactions among the elements of the future cognitive optical networks. The building blocks of the framework and the involved protocols are presented. Moreover, this paper provides an insight of the control plane issues related to the introduction of the flexible optical technology. --- paper_title: Software-defined optical access networks for multiple broadband access solutions paper_content: The principles of software-defined networking as applied to multi-service broadband optical access systems are discussed, with an emphasis on centralized software-reconfigurable resource management, digital signal processing (DSP)-enhanced transceivers and multi-service support via software-reconfigurable network “apps”. --- paper_title: Reconfigurable Long-Reach UltraFlow Access Network: A Flexible, Cost-Effective, and Energy-Efficient Solution paper_content: In this paper, we propose and experimentally demonstrate a reconfigurable long-reach (R-LR) UltraFlow access network to provide flexible dual-mode (IP and Flow) service with lower capital expenditure (CapEx) and higher energy efficiency. UltraFlow is a research project involves the collaboration of Stanford, MIT, and UT-Dallas. The design of the R-LR UltraFlow access network enables seamless integration of the Flow service with IP passive optical networks deployed with different technologies. To fulfill the high-wavelength demand incurred by the extended service reach, we propose the use of multiple feeder fibers to form subnets within the UltraFlow access network. Two layers of custom switching devices are installed at the central office (CO) and remote node to provide flexibility in resource allocation and user grouping. With a centralized software-defined network (SDN) controller at the CO to control the dual-mode service, numerical analysis indicates that the reconfiguration architecture is able to reduce the CapEx during initial deployment by about 30%. A maximum of around 50% power savings is also achieved during low traffic period. The feasibility of the new architecture and the operation of the SDN controller are both successfully demonstrated on our experimental testbed. --- paper_title: UltraFlow access testbed: Experimental exploration of dual-mode access networks paper_content: Electrical packet switching is well known as a flexible solution for small data transfers, whereas optical flow switching (OFS) might be an effective solution for large Internet file transfers. 
The UltraFlow project, a joint effort of three universities, Stanford, Massachusetts Institute of Technology, and University of Texas-Dallas, aims at providing an efficient dual-mode solution (i.e., IP and OFS) to the current network. In this paper, we propose and experimentally demonstrate UltraFlow Access, a novel optical access network that enables dual-mode service to the end users: IP and OFS. The new architecture cooperates with legacy passive optical networks (PONs) to provide both IP and novel OFS services. The latter is facilitated by a novel optical flow network unit (OFNU) that we have proposed, designed, and experimentally demonstrated. Different colored and colorless OFNU designs are presented, and their impact on the network performance is explored. Our testbed experiments demonstrate concurrent bidirectional 1.25 Gbps IP and 10 Gbps per-wavelength Flow error-free communication delivered over the same infrastructure. The support of intra-PON OFS communication, that is, between two OFNUs in the same PON, is also explored and experimentally demonstrated. --- paper_title: Optical Flow Switching Networks paper_content: Present-day networks are being challenged by dramatic increases in data rate demands of emerging applications. A new network architecture, incorporating “optical flow switching,” will enable significant rate growth, power efficiency, and cost-effective scalability of next-generation networks. We will explore architecture concepts germinated 22 years ago, technology and testbed demonstrations performed in the last 17 years, and the architecture construct from the physical layer to the transport layer of an implementable optical flow switching network that is scalable and manageable. --- paper_title: Integrated packet/circuit hybrid network field trial with production traffic [invited] paper_content: We report the first field trials of an integrated packet/circuit hybrid optical network. In a long-haul field trial with production traffic, the mean capacity utilization of an Ethernet wavelength is doubled. The transport shares a single lightpath between the circuit and packet layers. Router bypassing is demonstrated at sub-wavelength granularity in a metro network field trial. In both trials the circuit quality of service is shown to be independent of the load of the network. The vacant resources in the circuit are utilized by the packet layer's statistical multiplexing in an interleaved manner without affecting the timing of the circuit. Inaddition, an analytical model that provides an upper bound on the maximum achievable utilization is presented. --- paper_title: Data center optical networks (DCON) with OpenFlow based Software Defined Networking (SDN) paper_content: As the infrastructure of cloud computing and big data, data centers have been deployed widely. Then, how to make full use of the computing and storage resources in data centers will be the focus. Data center networks are considered important solution for the problem above, which include intra-data center and inter-data center networks. Both of them will depend on the optical networking due to its advantages, such as high bandwidth, low latency, and low energy consumption. Data center interconnected by flexi-grid optical networks is a promising scenario to allocate spectral resources for applications in a dynamic, tunable and efficient control manner. 
Compared with inter-data center, optical interconnect in intra-data center networks is a more pressing need and promising scenario to accommodate these applications in a dynamic, flexible and efficient manner. OpenFlow based Software Defined Networking (SDN) is considered as a good technology, which is very suitable for data center networks. This paper mainly focuses on the data center optical networks based on software defined networking (SDN), which can control the heterogeneous networks with unified resource interface. Architecture and experimental demonstration of OpenFlow-based optical interconnects in intra-data center Networks and OpenFlow-based flexi-grid optical networks for inter-data center are presented in the paper respectively. Some future works are listed finally. --- paper_title: Scalability analysis of SDN-controlled optical ring MAN with hybrid traffic paper_content: The development of software defined networking (SDN) has instigated a growing number of experimental studies which demonstrate the flexibility in network control and management introduced by this technique. Optical networks add new challenges for network designers and operators to successfully dimension and deploy an SDN-based in the optical domain. At present, few performance evaluations and scalability studies that consider the high-bandwidth of the optical domain and the flow characterization from current Internet statistics have been developed. In this paper these parameters are taken as key inputs to study SDN scalability in the optical domain. As a relevant example an optical ring Metropolitan Area Network (MAN) is analyzed with circuit and packet traffic integrated at the wavelength level. The numerical results characterize the limitations in network dimensioning when considering an SDN controller implementation in the presence of different flow mixes. Employing flow aggregation and/or parallel distributed controllers is outlined as potential solution to achieve SDN network scalability. --- paper_title: Packet and circuit network convergence with OpenFlow paper_content: IP and Transport networks are controlled and operated independently today, leading to significant Capex and Opex inefficiencies for the providers. We discuss a unified approach with OpenFlow, and present a recent demonstration of a unified control plane for OpenFlow enabled IP/Ethernet and TDM switched networks. --- paper_title: Realizing packet-optical integration with SDN and OpenFlow 1.1 extensions paper_content: This paper discusses the benefits of applying software defined networking (SDN) to circuit based transport networks. It first establishes the need for SDN in the context of transport networks. This paper argues that the use of SDN in the transport layers could be the enabler for both packet-optical integration and improved transport network applications. Then, this paper proposes extensions to OpenFlow 1.1 to achieve control of switches in multi-technology transport layers. The approach presented in this paper is simple, yet it distinguishes itself from similar work by its friendliness with respect to the current transport layer control plane based on generalized multiprotocol label switching (GMPLS). This is important as it will enable an easier and gradual injection of SDN into existing transport networks. This paper is completed with a few use case applications of SDN in transport networks. 
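Several of the preceding entries (the OpenFlow-based unified control plane, packet/circuit convergence, and the OpenFlow 1.1 transport extensions) share one idea: representing a packet flow and an optical circuit under a single flow abstraction inside the controller. The following minimal Python sketch illustrates that idea only; the class names, fields and the toy controller are assumptions for illustration and do not reproduce the actual OpenFlow circuit-switching extensions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlowEntry:
    """Common base: where the entry is installed and how traffic is forwarded."""
    switch_id: str
    out_port: int

@dataclass
class PacketFlow(FlowEntry):
    """Packet-layer rule keyed on header fields (subset shown)."""
    match: Dict[str, str] = field(default_factory=dict)

@dataclass
class CircuitFlow(FlowEntry):
    """Circuit-layer rule: a wavelength cross-connect on a ROADM/TDM switch."""
    in_port: int = 0
    wavelength_nm: float = 1550.12  # illustrative fixed-grid channel

class UnifiedController:
    """Toy controller that keeps one table for both flow types."""
    def __init__(self) -> None:
        self.flow_table: List[FlowEntry] = []

    def install(self, entry: FlowEntry) -> None:
        # A real controller would encode this into OpenFlow (or an extended
        # variant) and send it south-bound; here we only record and print it.
        self.flow_table.append(entry)
        print(f"installed on {entry.switch_id}: {entry}")

if __name__ == "__main__":
    ctl = UnifiedController()
    # Map an IP flow onto a wavelength that bypasses intermediate routers.
    ctl.install(CircuitFlow(switch_id="roadm-1", out_port=7, in_port=3,
                            wavelength_nm=1550.12))
    ctl.install(PacketFlow(switch_id="edge-sw-1", out_port=2,
                           match={"eth_type": "0x0800", "ipv4_dst": "10.0.0.2"}))
```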
--- paper_title: A Survey on Optical Interconnects for Data Centers paper_content: Data centers are experiencing an exponential increase in the amount of network traffic that they have to sustain due to cloud computing and several emerging web applications. To face this network load, large data centers are required with thousands of servers interconnected with high bandwidth switches. Current data center networks, based on electronic packet switches, consume excessive power to handle the increased communication bandwidth of emerging applications. Optical interconnects have gained attention recently as a promising solution offering high throughput, low latency and reduced energy consumption compared to current networks based on commodity switches. This paper presents a thorough survey on optical interconnects for next generation data center networks. Furthermore, the paper provides a qualitative categorization and comparison of the proposed schemes based on their main features such as connectivity and scalability. Finally, the paper discusses the cost and the power consumption of these schemes that are of primary importance in the future data center networks. --- paper_title: Integrated OpenFlow — GMPLS control plane: An overlay model for software defined packet over optical networks paper_content: A novel software-defined packet over optical networks solution based on the OpenFlow and GMPLS control plane integration is demonstrated. The proposed architecture, experimental setup, and average flow setup time for different optical flows is reported. --- paper_title: SDN and OpenFlow for Dynamic Flex-Grid Optical Access and Aggregation Networks paper_content: We propose and discuss the extension of software-defined networking (SDN) and OpenFlow principles to optical access/aggregation networks for dynamic flex-grid wavelength circuit creation. The first experimental demonstration of an OpenFlow1.0-based flex-grid λ-flow architecture for dynamic 150 Mb/s per-cell 4 G Orthogonal Frequency Division Multiple Access (OFDMA) mobile backhaul (MBH) overlays onto 10 Gb/s passive optical networks (PON) without optical network unit (ONU)-side optical filtering, amplification, or coherent detection, over 20 km standard single mode fiber (SSMF) with a 1:64 passive split is also detailed. The proposed approach can be attractive for monetizing optical access/aggregation networks via on-demand support for high-speed, low latency, high quality of service (QoS) applications over legacy fiber infrastructure. --- paper_title: A Survey on OFDM-Based Elastic Core Optical Networking paper_content: Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. 
Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed. --- paper_title: Optical wireless network convergence in support of energy-efficient mobile cloud services paper_content: Mobile computation offloading has been identified as a key-enabling technology to overcome the inherent processing power and storage constraints of mobile end devices. To satisfy the low-latency requirements of content-rich mobile applications, existing mobile cloud computing solutions allow mobile devices to access the required resources by accessing a nearby resource-rich cloudlet, suffering increased capital and operational expenditures. To address this issue, in this paper, we propose an infrastructure and architectural approach based on the orchestrated planning and operation of optical data center networks and wireless access networks. To this end, a novel formulation based on a multi-objective nonlinear programming model is presented that considers energy-efficient virtual infrastructure planning over the converged wireless, optical network interconnecting DCs with mobile devices, taking a holistic view of the infrastructure. Our modelling results identify trends and trade-offs relating to end-to-end service delay, mobility, resource requirements and energy consumption levels of the infrastructure across the various technology domains. --- paper_title: Planning of dynamic virtual optical cloud infrastructures: The GEYSERS approach paper_content: This article focuses on planning and replanning of virtual infrastructures over optical cloud infrastructures comprising integrated optical network and IT resources. This concept has been developed in the context of the European project GEYSERS. GEYSERS has proposed a novel multi-layer architecture, described in detail, that employs optical networking capable of provisioning optical network and IT resources for end-to-end cloud service delivery. The procedures required to perform virtual infrastructure planning and replanning at the different architecture layers are also detailed. An optimization scheme suitable to dynamically plan and replan virtual infrastructures is presented and compared to conventional approaches, and the benefits of dynamic replanning are discussed and quantified. The final project demonstration, focusing on planning, replanning, and dynamically establishing virtual infrastructures over the physical resources, is presented, while some emulation results are provided to further evaluate the performance of the GEYSERS solution. --- paper_title: Software Defined Flexible Optical Access Networks Enabling Throughput Optimization and OFDM-Based Dynamic Service Provisioning for Future Mobile Backhaul paper_content: In this invited paper, software defined network (SDN)-based approaches for future cost-effective optical mobile backhaul (MBH) networks are discussed, focusing on key principles, throughput optimization and dynamic service provisioning as its use cases.
We propose a novel physical-layer aware throughput optimization algorithm that confirms > 100 Mb/s end-to-end per-cell throughputs with ≥2.5 Gb/s optical links deployed at legacy cell sites. We also demonstrate the first optical line terminal (OLT)-side optical Nyquist filtering of legacy 10G on-offkeying (OOK) signals, enabling dynamic >10 Gb/s Orthogonal Frequency Domain Multiple Access (OFDMA) λ-overlays for MBH over passive optical network (PON) with 40-km transmission distances and 1:128 splitting ratios, without any ONU-side equipment upgrades. The software defined flexible optical access network architecture described in this paper is thus highly promising for future MBH networks. --- paper_title: HYDRA: A Scalable Ultra Long Reach/High Capacity Access Network Architecture Featuring Lower Cost and Power Consumption paper_content: This paper proposes HYbriD long-Reach fiber Access network (HYDRA), a novel network architecture that overcomes many limitations of the current WDM/TDM PON approaches leading to significantly improved cost and power consumption figures. The key concept is the introduction of an active remote node that interfaces to end-users by means of the lowest cost/power consumption technology (short-range xPON, wireless, etc.) while on the core network side it employs adaptive ultra-long reach links to bypass the metropolitan area network. The scheme leads to a higher degree of node consolidation and access-core integration. We demonstrate that HYDRA can achieve very high performance based on mature component technologies ensuring very low cost end-user terminals, reduced complexity, and high scalability. --- paper_title: Data center optical networks (DCON) with OpenFlow based Software Defined Networking (SDN) paper_content: As the infrastructure of cloud computing and big data, data centers have been deployed widely. Then, how to make full use of the computing and storage resources in data centers will be the focus. Data center networks are considered important solution for the problem above, which include intra-data center and inter-data center networks. Both of them will depend on the optical networking due to its advantages, such as high bandwidth, low latency, and low energy consumption. Data center interconnected by flexi-grid optical networks is a promising scenario to allocate spectral resources for applications in a dynamic, tunable and efficient control manner. Compared with inter-data center, optical interconnect in intra-data center networks is a more pressing need and promising scenario to accommodate these applications in a dynamic, flexible and efficient manner. OpenFlow based Software Defined Networking (SDN) is considered as a good technology, which is very suitable for data center networks. This paper mainly focuses on the data center optical networks based on software defined networking (SDN), which can control the heterogeneous networks with unified resource interface. Architecture and experimental demonstration of OpenFlow-based optical interconnects in intra-data center Networks and OpenFlow-based flexi-grid optical networks for inter-data center are presented in the paper respectively. Some future works are listed finally. --- paper_title: Activities, drivers and benefits of extending PON over other media paper_content: This paper reviews the latest effort of extending PON over other media. After looking at standards progress in ITU-T and BBF, we explore possible interworking solutions for the hybrid access. 
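The mobile-backhaul entries above couple throughput assignment to what the optical physical layer can actually deliver at legacy cell sites. The sketch below is a hedged illustration of that coupling: a crude derating of a PON feeder's line rate as a function of reach, followed by proportional scaling of per-cell demands. The derating model and all numbers are invented placeholders, not the cited physical-layer aware algorithm.

```python
from typing import Dict

def pon_capacity_mbps(line_rate_gbps: float, reach_km: float) -> float:
    """Crude physical-layer derating: assume ~0.5% of throughput is lost per km
    of reach (FEC/overhead margin), floored at 50% (illustrative model only)."""
    return line_rate_gbps * 1000.0 * max(0.5, 1.0 - 0.005 * reach_km)

def allocate_cell_rates(demands_mbps: Dict[str, float],
                        capacity_mbps: float) -> Dict[str, float]:
    """Give every cell its demand if the PON can carry it; otherwise scale all
    demands down proportionally so the total matches the usable capacity."""
    total = sum(demands_mbps.values())
    scale = 1.0 if total <= capacity_mbps else capacity_mbps / total
    return {cell: round(req * scale, 1) for cell, req in demands_mbps.items()}

if __name__ == "__main__":
    capacity = pon_capacity_mbps(line_rate_gbps=2.5, reach_km=40)  # legacy 2.5G feeder
    cells = {"cell-A": 150.0, "cell-B": 300.0, "cell-C": 600.0, "cell-D": 1200.0}
    print(f"usable PON capacity: {capacity:.0f} Mb/s")
    for cell, rate in allocate_cell_rates(cells, capacity).items():
        print(f"{cell}: granted {rate} Mb/s")
```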
--- paper_title: Wireless network virtualization: The CONTENT project approach paper_content: Undoubtedly, SDN has the potential to fundamentally change the way end-to-end networks are provisioned and designed. By introducing programmability into networking, the handling of network changes and network provisioning via open software interfaces has become a reality; together with the hardware/software virtualization advances in server systems and software platforms, cloud services can now be rapidly deployed and efficiently managed. The CONTENT project is an EU funded effort for network and infrastructure virtualization over heterogeneous, wireless and metro optical networks, that can be used to provide end-to-end cloud services. In this work we present the wireless network virtualization solution, in the light of the CONTENT technical approach, where a convergent LTE/Wi-Fi network is virtualized and interconnected with an optical TSON metro network. We present our approach in designing and implementing converged virtual 802.11 and LTE wireless networks and the corresponding efficient wireless data-plane mechanisms required, in order to satisfy strict QoS requirements. --- paper_title: Experimental testbed of reconfigurable flexgrid optical network with virtualized GMPLS control plane and autonomic controls towards SDN paper_content: This paper demonstrates a testbed of a reconfigurable optical network composed by four ROADMs equipped with flexgrid WSS modules, optical amplifiers, optical channel monitors, and supervisor boards. A controller daemon implements a node abstraction layer based in the YANG language, providing NETCONF and CLI interfaces. Additionally we demonstrate the virtualization of GMPLS control plane, while supporting automatic topology discovery and TE-Link instantiation, enabling a path towards SDN. GMPLS have been extended to collect specific DWDM measurement data allowing the implementation of adaptive/cognitive controls and policies for autonomic operation, based on global network view. --- paper_title: Locking the sky: a survey on IaaS cloud security paper_content: Cloud computing is expected to become a common solution for deploying applications thanks to its capacity to leverage developers from infrastructure management tasks, thus reducing the overall costs and services’ time to market. Several concerns prevent players’ entry in the cloud; security is arguably the most relevant one. Many factors have an impact on cloud security, but it is its multitenant nature that brings the newest and more challenging problems to cloud settings. Here, we analyze the security risks that multitenancy induces to the most established clouds, Infrastructure as a service clouds, and review the literature available to present the most relevant threats, state of the art of solutions that address some of the associated risks. A major conclusion of our analysis is that most reported systems employ access control and encryption techniques to secure the different elements present in a virtualized (multitenant) datacenter. Also, we analyze which are the open issues and challenges to be addressed by cloud systems in the security field. 
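A common thread in the virtualization entries above (CONTENT slices, the virtualized GMPLS control plane, multi-tenant IaaS security) is isolating tenants that share the same physical resources. The toy sketch below shows one such admission rule, assuming wavelength-granular isolation: a virtual link is only granted if a wavelength is still free on the requested physical link, and a wavelength is never shared between tenants. The class and method names are illustrative, not taken from any of the cited systems.

```python
from typing import Dict, List, Tuple

class SliceManager:
    """Toy slicing layer: per-link wavelength pools, exclusively leased to tenants."""
    def __init__(self, links: Dict[Tuple[str, str], List[int]]) -> None:
        # links: (node_a, node_b) -> list of free wavelength indices
        self.free = {edge: list(waves) for edge, waves in links.items()}
        self.leases: Dict[str, List[Tuple[Tuple[str, str], int]]] = {}

    def request_vlink(self, tenant: str, edge: Tuple[str, str]) -> bool:
        """Grant one dedicated wavelength on `edge` to `tenant`, if any is left."""
        pool = self.free.get(edge, [])
        if not pool:
            return False            # no capacity -> reject, isolation preserved
        wave = pool.pop(0)
        self.leases.setdefault(tenant, []).append((edge, wave))
        return True

if __name__ == "__main__":
    mgr = SliceManager({("olt1", "metro1"): [0, 1], ("metro1", "dc1"): [0]})
    print(mgr.request_vlink("tenant-A", ("olt1", "metro1")))  # True  -> wavelength 0
    print(mgr.request_vlink("tenant-B", ("olt1", "metro1")))  # True  -> wavelength 1
    print(mgr.request_vlink("tenant-B", ("metro1", "dc1")))   # True  -> wavelength 0
    print(mgr.request_vlink("tenant-C", ("metro1", "dc1")))   # False -> pool exhausted
    print(mgr.leases)
```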
--- paper_title: A novel SDN enabled hybrid optical packet/circuit switched data centre network: The LIGHTNESS approach paper_content: Current over-provisioned and multi-tier data centre networks (DCN) deploy rigid control and management platforms, which are not able to accommodate the ever-growing workload driven by the increasing demand of high-performance data centre (DC) and cloud applications. In response to this, the EC FP7 project LIGHTNESS (Low Latency and High Throughput Dynamic Network Infrastructures for High Performance Datacentre Interconnects) is proposing a new flattened optical DCN architecture capable of providing dynamic, programmable, and highly available DCN connectivity services while meeting the requirements of new and emerging DC and cloud applications. LIGHTNESS DCN comprises all-optical switching technologies (Optical Packet Switching (OPS) and Optical Circuit Switching (OCS)) and hybrid Top-of-the-Rack (ToR) switches, controlled and operated by a Software Defined Networking (SDN) based control plane for enhanced programmability of heterogeneous network functions and protocols. Harnessing the power of optics enables DCs to effectively cope with the high-performance applications' demands. The programmability and flexibility provided by the SDN based control plane allow to fully exploit the benefits of the LIGHTNESS multi-technology optical DCN, while provisioning on-demand, dynamic, flexible and highly resilient network services inside DCs. --- paper_title: The GEYSERS optical testbed: A platform for the integration, validation and demonstration of cloud-based infrastructure services paper_content: The recent evolution of cloud services is leading to a new service transformation paradigm to accommodate network infrastructures in a cost-scalable way. In this transformation, the network constitutes the key to efficiently connect users to services and applications. In this paper we describe the deployment, validation and demonstration of the optical integrated testbed for the ''GEneralized architecture for dYnamic infrastructure SERviceS'' (GEYSERS) project to accommodate such cloud based Infrastructure Services. The GEYSERS testbed is composed of a set of local physical testbeds allocated in the facilities of the GEYSERS partners. It is built up based on the requirements specification, architecture definition and per-layer development that constitutes the whole GEYSERS ecosystem, and validates the procedures on the GEYSERS prototypes. The testbed includes optical devices (layer 1), switches (layer 2), and IT resources deployed in different local testbeds provided by the project partners and interconnected among them to compose the whole testbed layout. The main goal of the GEYSERS testbed is twofold. On one hand, it aims at providing a validation ground for the architecture, concepts and business models proposed by GEYSERS, sustained by two main paradigms: Infrastructure as a Service (IaaS) and the coupled provisioning of optical network and IT resources. On the other hand, it is used as a demonstration platform for testing the software prototypes within the project and to demonstrate to the research and business community the project approach and solutions. In this work, we discuss our experience in the deployment of the testbed and share the results and insights learned from our trials in the process. Additionally, the paper highlights the most relevant experiments carried out in the testbed, aimed at the validation of the overall GEYSERS architecture. 
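Both the LIGHTNESS and GEYSERS entries provision network capacity and IT resources as one coupled service. The minimal sketch below captures only that coupling: a request is admitted only if a data centre with enough VM slots and a connecting path with enough spare bandwidth can be reserved together, otherwise nothing is committed. The two-resource model, names and numbers are assumptions for illustration, not the orchestration logic of either project.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Request:
    vms: int
    gbps: float

class Orchestrator:
    def __init__(self, dc_slots: Dict[str, int], path_gbps: Dict[str, float]) -> None:
        self.dc_slots = dict(dc_slots)    # free VM slots per data centre
        self.path_gbps = dict(path_gbps)  # spare capacity of the path towards each DC

    def provision(self, req: Request) -> Optional[str]:
        """Reserve VMs and bandwidth atomically; return the chosen DC or None."""
        for dc in self.dc_slots:
            if self.dc_slots[dc] >= req.vms and self.path_gbps[dc] >= req.gbps:
                self.dc_slots[dc] -= req.vms      # commit both resources together
                self.path_gbps[dc] -= req.gbps
                return dc
        return None                               # reject: never commit only one side

if __name__ == "__main__":
    orch = Orchestrator(dc_slots={"dc-east": 8, "dc-west": 2},
                        path_gbps={"dc-east": 40.0, "dc-west": 100.0})
    print(orch.provision(Request(vms=4, gbps=10.0)))  # dc-east
    print(orch.provision(Request(vms=6, gbps=50.0)))  # None: no DC has 6 slots AND 50 Gb/s
```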
--- paper_title: Routing and Spectrum Allocation in Elastic Optical Networks: A Tutorial paper_content: Flexgrid technology is now considered to be a promising solution for future high-speed network design. In this context, we need a tutorial that covers the key aspects of elastic optical networks. This tutorial paper starts with a brief introduction of the elastic optical network and its unique characteristics. The paper then moves to the architecture of the elastic optical network and its operation principle. To complete the discussion of network architecture, this paper focuses on the different node architectures, and compares their performance in terms of scalability and flexibility. Thereafter, this paper reviews and classifies routing and spectrum allocation (RSA) approaches including their pros and cons. Furthermore, various aspects, namely, fragmentation, modulation, quality-of-transmission, traffic grooming, survivability, energy saving, and networking cost related to RSA, are presented. Finally, the paper explores the experimental demonstrations that have tested the functionality of the elastic optical network, and follows that with the research challenges and open issues posed by flexible networks. --- paper_title: Spectrum engineering in flexible grid data center optical networks paper_content: Abstract Data centers provide a volume of computation and storage resources for cloud-based services, and generate very huge traffic in data center networks. Usually, data centers are connected by ultra-long-haul WDM optical transport networks due to its advantages, such as high bandwidth, low latency, and low energy consumption. However, since the rigid bandwidth and coarse granularity, it shows inefficient spectrum utilization and inflexible accommodation of various types of traffic. Based on OFDM, a novel architecture named flexible grid optical network has been proposed, and becomes a promising technology in data center interconnections. In flexible grid optical networks, the assignment and management of spectrum resources are more flexible, and agile spectrum control and management strategies are needed. In this paper, we introduce the concept of Spectrum Engineering, which could be used to maximize spectral efficiency in flexible grid optical networks. Spectrum Defragmentation, as one of the most important aspect in Spectrum Engineering, is demonstrated by OpenFlow in flexible grid optical networks. Experimental results are reported and verify the feasibility of Spectrum Engineering. --- paper_title: A survey of dynamic bandwidth allocation algorithms for Ethernet Passive Optical Networks paper_content: Ethernet Passive Optical Network (EPON) has been widely considered as a promising technology for implementing the FTTx solutions to the ''last mile'' bandwidth bottleneck problem. Bandwidth allocation is one of the critical issues in the design of EPON systems. In an EPON system, multiple optical network units (ONUs) share a common upstream channel for data transmission. To efficiently utilize the limited bandwidth of the upstream channel, an EPON system must dynamically allocate the upstream bandwidth among multiple ONUs based on the instantaneous bandwidth demands and quality of service requirements of end users. This paper introduces the fundamental concepts on EPONs, discusses the major issues related to bandwidth allocation in EPON systems, and presents a survey of the state-of-the-art dynamic bandwidth allocation (DBA) algorithms for EPONs. 
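The routing and spectrum allocation (RSA) tutorial cited above revolves around two constraints that separate elastic optical networks from fixed-grid WDM: spectrum continuity (the same slots must be free on every link of the path) and contiguity (those slots must be adjacent). The snippet below is a minimal first-fit spectrum-assignment sketch over a precomputed path with per-link occupancy arrays; it illustrates the constraints only and is not one of the surveyed algorithms.

```python
from typing import List, Optional

def first_fit_slot(path_links: List[List[bool]], demand_slots: int) -> Optional[int]:
    """Return the lowest starting index such that `demand_slots` contiguous slots
    are free on *every* link of the path, or None if no such block exists.
    Each link is a list of booleans (True = slot occupied)."""
    total_slots = len(path_links[0])
    for start in range(total_slots - demand_slots + 1):
        block_free = all(
            not link[s]
            for link in path_links
            for s in range(start, start + demand_slots)
        )
        if block_free:
            return start
    return None

def allocate(path_links: List[List[bool]], demand_slots: int) -> Optional[int]:
    start = first_fit_slot(path_links, demand_slots)
    if start is not None:
        for link in path_links:                 # mark the block busy on all links
            for s in range(start, start + demand_slots):
                link[s] = True
    return start

if __name__ == "__main__":
    # Two-link path, 8 slots per link; slot 1 busy on link 0, slots 2-3 busy on link 1.
    links = [[False, True, False, False, False, False, False, False],
             [False, False, True, True, False, False, False, False]]
    print(allocate(links, demand_slots=3))  # -> 4 (first block free on both links)
    print(allocate(links, demand_slots=3))  # -> None (no 3-slot block left on both links)
```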
--- paper_title: Unified control system for heterogeneous networks with Software Defined Networking (SDN) paper_content: Driven by various broadband applications, data center has become one of the most important service resources, connected by IP and optical networks. Then how to use the service resource and network resource together effectively will become the research focus. Towards realizing this goal, this paper proposes a unified control system for heterogeneous networks, which is implemented with Software Defined Networking (SDN) enabled by OpenFlow protocol. Data center, IP network and optical network resources can be abstracted as unified resource interface. NOX based controller can make full use of these resources, and provide the user with different kind of services. Remote demonstration is first accessed and presented with large scale multi-layer and multi-domain networks. --- paper_title: PON/xDSL hybrid access networks paper_content: We discuss hybrid fiber/copper access networks with a focus on XG-PON/VDSL2 hybrid access networks. We present tutorial material on the XG-PON and VDSL2 protocols as standardized by the ITU. We investigate mechanisms to reduce the functional logic at the device that bridges the fiber and copper segments of the hybrid fiber/copper access network. This device is called a drop-point device. Reduced functional logic translates into lower energy consumption and cost for the drop-point device. We define and analyze the performance of several mechanisms to move some of the VDSL2 functional logic blocks from the drop-point device into the XG-PON Optical Line Terminal. Our analysis uncovers that silence suppression mechanisms are necessary to achieve statistical multiplexing gain when carrying synchronous intermediate VDSL2 data formats across the XG-PON. --- paper_title: Multi-Tenant Software-Defined Hybrid Optical Switched Data Centre paper_content: We introduce a holistic solution for software-defined optical data centres (DC). Hybrid optical circuit/packet switching technologies are employed in the data plane, while a software-defined networking (SDN) controller based on OpenDaylight with significant extensions is adopted for the data centre network (DCN) control and management. Novel functional modules in the SDN controller together with its northbound (NBI) and southbound interfaces (SBI) are designed and developed. The OpenFlow protocol is extended at the SBI to support communication between the extended OpenDaylight SDN controller and the optical DCN devices. Over the NBIs, DC applications and the cloud management system directly interact with the optical DCN. A virtual data centre (VDC) application is designed and developed that dynamically creates and provisions multiple coexisting but isolated VDCs. An optical network-aware virtual machine (VM) placement method is proposed and implemented for a single-step deployment of both network and IT (VM) resources to accommodate the VDC requests. The VDC deployment process is extensively simulated and experimentally demonstrated. --- paper_title: IaaS Cloud Architecture: From Virtualized Datacenters to Federated Cloud Infrastructures paper_content: As a key component in a modern datacenter, the cloud operating system is responsible for managing the physical and virtual infrastructure, orchestrating and commanding service provisioning and deployment, and providing federation capabilities for accessing and deploying virtual resources in remote cloud infrastructures. 
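The unified control system entry above abstracts data-centre, IP and optical resources behind one resource interface for the controller. The sketch below shows what such an abstraction layer could look like in miniature; the ResourceDomain base class and the reserve/apply split are invented for illustration and do not correspond to the API of NOX, OpenDaylight or any other specific controller.

```python
from abc import ABC, abstractmethod

class ResourceDomain(ABC):
    """Uniform north-bound view a controller could use for any technology domain."""
    def __init__(self, name: str, free: float) -> None:
        self.name, self.free = name, free

    def reserve(self, amount: float) -> bool:
        """Common book-keeping; real domains would push configuration south-bound."""
        if amount > self.free:
            return False
        self.free -= amount
        self.apply(amount)
        return True

    @abstractmethod
    def apply(self, amount: float) -> None: ...

class PacketDomain(ResourceDomain):
    def apply(self, amount: float) -> None:
        print(f"[packet]  install queues/flows for {amount} Gb/s")

class OpticalDomain(ResourceDomain):
    def apply(self, amount: float) -> None:
        print(f"[optical] configure {int(amount)} wavelength cross-connect(s)")

class ComputeDomain(ResourceDomain):
    def apply(self, amount: float) -> None:
        print(f"[IT]      boot {int(amount)} virtual machine(s)")

if __name__ == "__main__":
    domains = [PacketDomain("packet Gb/s", 40.0),
               OpticalDomain("wavelengths", 8),
               ComputeDomain("VM slots", 16)]
    # The controller treats every domain identically through the same interface.
    for dom, ask in zip(domains, (10.0, 2, 4)):
        ok = dom.reserve(ask)
        print(f"{dom.name}: asked {ask}, granted={ok}, {dom.free} left")
```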
--- paper_title: Next generation optical network architecture featuring distributed aggregation, network processing and information routing paper_content: The ontology of communications is rapidly changing, shifting interest to machine-to-machine (M2M) interactions and the Internet of Things (IoT). These are becoming vital for the sustainability of social life and the revitalization of the economy, providing the infrastructure for new production forms like distributed manufacturing and cloud robotics while becoming important to grid-based energy systems. Adding to these the voracious data needs of traditional broadband users, residential or business, together with the back/front hauling requirements of mobile operators, a significant strain on the access is expected. A multitude of heterogeneous access networks are emerging, and their integration in a single platform ensuring seamless data exchange with data centres is of major importance. In this paper we describe HYDRA (HYbriD long-Reach fiber Access network), a novel network architecture that overcomes the limitations of both long-reach PONs as well as mobile backhauling schemes, leading to significantly improved cost and power consumption figures. The key concept is the introduction of an Active Remote Node (ARN) that interfaces to end-users by means of the lowest cost/power consumption technology (short-range xPON, wireless, etc.) whilst on the core network side it employs adaptive ultra-long reach links to bypass the Metropolitan Area Network. The scheme leads to a higher degree of node consolidation, network convergence and Access-Core integration. The proposed architecture can enhance performance while supporting network virtualization and efficient resource orchestration based on Software Defined Networking (SDN) principles and open access networking models. --- paper_title: NOX: towards an operating system for networks paper_content: As anyone who has operated a large network can attest, enterprise networks are difficult to manage. That they have remained so despite significant commercial and academic efforts suggests the need for a different network management paradigm. Here we turn to operating systems as an instructive example in taming management complexity. In the early days of computing, programs were written in machine languages that had no common abstractions for the underlying physical resources. This made programs hard to write, port, reason about, and debug. Modern operating systems facilitate program development by providing controlled access to high-level abstractions for resources (e.g., memory, storage, communication) and information (e.g., files, directories). These abstractions enable programs to carry out complicated tasks safely and efficiently on a wide variety of computing hardware. In contrast, networks are managed through low-level configuration of individual components. Moreover, these configurations often depend on the underlying network; for example, blocking a user’s access with an ACL entry requires knowing the user’s current IP address. More complicated tasks require more extensive network knowledge; forcing guest users’ port 80 traffic to traverse an HTTP proxy requires knowing the current network topology and the location of each guest.
In this way, an enterprise network resembles a computer without an operating system, with network-dependent component configuration playing the role of hardware-dependent machine-language programming. What we clearly need is an “operating system” for networks, one that provides a uniform and centralized programmatic interface to the entire network. Analogous to the read and write access to various resources provided by computer operating systems, a network operating system provides the ability to observe and control a network. A network operating system does not manage the network itself; it merely provides a programmatic interface. Applications implemented on top of the network operating system perform the actual management tasks. The programmatic interface should be general enough to support a broad spectrum of network management applications. Such a network operating system represents two major conceptual departures from the status quo. First, the network operating system presents programs with a centralized programming model; programs are written as if the entire network were present on a single machine (i.e., one would use Dijkstra to compute shortest paths, not Bellman-Ford). This requires (as in [3, 8, 14] and elsewhere) centralizing network state. Second, programs are written in terms of high-level abstractions (e.g., user and host names), not low-level configuration parameters (e.g., IP and MAC addresses). This allows management directives to be enforced independent of the underlying network topology, but it requires that the network operating system carefully maintain the bindings (i.e., mappings) between these abstractions and the low-level configurations. Thus, a network operating system allows management applications to be written as centralized programs over highlevel names as opposed to the distributed algorithms over low-level addresses we are forced to use today. While clearly a desirable goal, achieving this transformation from distributed algorithms to centralized programming presents significant technical challenges, and the question we pose here is: Can one build a network operating system at significant scale? --- paper_title: A Survey on OFDM-Based Elastic Core Optical Networking paper_content: Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. 
The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed. --- paper_title: An optical SDN Controller for Transport Network virtualization and autonomic operation paper_content: This paper proposes an architecture for Software Defined Optical Transport Networks. The SDN Controller includes a network abstraction layer allowing the implementation of cognitive controls and policies for autonomic operation, based on a global network view. Additionally, the controller implements a virtualized GMPLS control plane, offloading and simplifying the network elements, while unlocking the implementation of new services such as optical VPNs, optical network slicing, and keeping standard OIF interfaces, such as UNI and NNI. The concepts have been implemented and validated in a real testbed network formed by five DWDM nodes equipped with flexgrid WSS ROADMs. --- paper_title: Software defined networking (SDN) over space division multiplexing (SDM) optical networks: features, benefits and experimental demonstration paper_content: We present results from the first demonstration of a fully integrated SDN-controlled bandwidth-flexible and programmable SDM optical network utilizing sliceable self-homodyne spatial superchannels to support dynamic bandwidth and QoT provisioning, infrastructure slicing and isolation. Results show that SDN is a suitable control plane solution for the high-capacity flexible SDM network. It is able to provision end-to-end bandwidth and QoT requests according to user requirements, considering the unique characteristics of the underlying SDM infrastructure. --- paper_title: Space-division multiplexing in optical fibres paper_content: This Review summarizes the simultaneous transmission of several independent spatial channels of light along optical fibres to expand the data-carrying capacity of optical communications. Recent results achieved in both multicore and multimode optical fibres are documented. --- paper_title: Centralized Lightwave WDM-PON Employing 16-QAM Intensity Modulated OFDM Downstream and OOK Modulated Upstream Signals paper_content: We have proposed and experimentally demonstrated a novel architecture for orthogonal frequency-division-multiplexing (OFDM) wavelength-division-multiplexing passive optical network with centralized lightwave. In this architecture, 16 quadrature amplitude modulation intensity-modulated OFDM signals at 10 Gb/s are utilized for downstream transmission. A wavelength-reuse scheme is employed to carry the upstream data to reduce the cost at optical network unit. By using one intensity modulator, the downstream signal is remodulated for upstream on-off keying (OOK) data at 2.5 Gb/s based on its return-to-zero shape waveform. We have also studied the fading effect caused by double-sideband (DSB) downstream signals. Measurement results show that 2.5-dB power penalty is caused by the fading effect. The fading effect can be removed when the DSB OFDM downstream signals are converted to single sideband (SSB) after vestigial filtering. The power penalty is negligible for both SSB OFDM downstream and the remodulated OOK upstream signals after over 25-km standard single-mode-fiber transmission. --- paper_title: Flexible PON Key technologies: Digital advanced modulation formats and devices paper_content: Flexible PONs are a future access paradigm, investigated in parallel with the flexible and elastic optical networks under research for core networks. In the same way that those backbone optical networks can be significantly improved by following software-defined network (SDN) techniques, it is described how SDN PONs can be implemented with highly spectrally efficient digital modulation formats. A main challenge is implementation with cost-effective devices. We show the progress of alternative implementations and the suitability of diverse modulation formats for cost-effective, bandwidth-limited optical sources and receivers. --- paper_title: Using OOK Modulation for Symmetric 40-Gb/s Long-Reach Time-Sharing Passive Optical Networks paper_content: Due to the requirement of broad bandwidth for next-generation access networks, present passive optical networks (PONs) will be upgraded to 40 Gb/s or higher data rate PONs. Hence, we propose and experimentally demonstrate a simple and efficient scheme to achieve a symmetric 40-Gb/s long-reach (LR) time-division-multiplexed PON by using four wavelength-division-multiplexed 10-Gb/s external on-off keying format channels to serve as the optical transmitter for downstream and upstream traffic simultaneously. Moreover, the system performance of LR transmission and split ratio has also been analyzed and discussed without dispersion compensation. --- paper_title: Spectrum defragmentation implementation based on software defined networking (SDN) in flexi-grid optical networks paper_content: OFDM has been considered as a promising candidate for future high-speed optical transmission technology. Based on OFDM, a novel architecture named flexi-grid optical network has been proposed, and it has drawn increasing attention in both academia and industry. In flexi-grid optical networks, with connections setting up and tearing down, the spectrum resources are separated into small non-contiguous spectrum bands, which may lead to inefficient spectrum utilization. The key requirement is spectrum defragmentation, which refers to periodically reconfiguring the network to return it to its optimal state. Spectrum defragmentation should be carried out at minimum cost in terms of interrupted services or degraded QoS (i.e., delay, bandwidth, bit rate). In this paper, we demonstrate for the first time spectrum defragmentation based on software defined networking (SDN) in flexi-grid optical networks. Experimental results are reported on our testbed and verify the feasibility of our proposed architecture. --- paper_title: First demonstration of software defined networking (SDN) over space division multiplexing (SDM) optical networks paper_content: We demonstrate for the first time a fully integrated SDN-controlled bandwidth-flexible and programmable SDM optical network utilising sliceable self-homodyne spatial superchannels to support dynamic bandwidth and QoT provisioning, infrastructure slicing and isolation. --- paper_title: Flexible TDMA access optical networks enabled by burst-mode software defined coherent transponders paper_content: We propose a concept of flexible PON and show with experiments and network dimensioning how burst-mode, software-defined coherent transponders can more than double the average capacity per user in TDMA access networks. --- paper_title: Poster: SDN based energy management system for optical access network paper_content: In recent years, the Passive Optical Network (PON) has been developing rapidly in the access network, where its high energy consumption is attracting more and more attention.
In the paper, SDN (Software Defined Network) is first introduced in optical access networks to implement an energy-efficient control mechanism through OpenFlow protocol. Some theoretical analysis work for the energy consumption of this architecture has been conducted. Numeric results show that the proposed SDN based control architecture can reduce the energy consumption of the access network, and facilitates integration of access and metro networks. --- paper_title: Novel optical access network virtualization and dynamic resource allocation algorithms for the Internet of Things paper_content: Novel optical access network virtualization and resource allocation algorithms for Internet-of-Things support are proposed and implemented on a real-time SDN-controller platform. 30–50% gains in served request number, traffic prioritization, and revenue are demonstrated. --- paper_title: Software-defined dynamic bandwidth optimization (SD-DBO) algorithm for optical access and aggregation networks paper_content: The optical access networks and aggregation networks are necessary to be controlled together to improve the bandwidth resource availability globally. Unified control architecture for optical access networks and aggregation networks is designed based on software-defined networking controller, the function modules of which have been described and the related extended protocol solution has been given. A software-defined dynamic bandwidth optimization (SD-DBO) algorithm is first proposed for optical access and aggregation networks, which can support unified optimizations and efficient scheduling by allocating bandwidth resources from a global network view in real time. The performance of the proposed algorithm has been verified and compared with traditional DBA algorithm in terms of resource utilization rate and average delay time. Simulation result shows that SD-DBO algorithm performs better. --- paper_title: Dynamic Software-Defined Resource Optimization in Next-Generation Optical Access Enabled by OFDMA-Based Meta-MAC Provisioning paper_content: In order to address a diverse and demanding set of service and network drivers, several technology candidates with inherent physical layer (PHY) differences are emerging for future optical access networks. To overcome this PHY divide and enable both cost and bandwidth efficient heterogeneous technology co-existence in future optical access, we propose a novel Orthogonal Frequency Division Multiple Access (OFDMA)-based “meta-MAC”, which encapsulates PHY variations and enables fair inter-technology bandwidth arbitration. The new software-defined meta-MAC is envisioned to work on top of constituent MAC protocols, and exploit virtual OFDMA subcarriers as both finely granular and scalable bandwidth assignment units. We introduce important OFDMA meta-MAC design principles, and propose an elaborate three-stage dynamic resource provisioning scheme that satisfies the key requirements. The performance benefits of the meta-MAC concept and the proposed dynamic resource provisioning schemes in terms of spectrum management flexibility and support of diverse services are verified via real-time traffic simulation, confirming the attractiveness of the new approach for future optical access systems. 
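The SD-DBO and OFDMA meta-MAC entries above both come down to one scheduling step repeated every cycle: sharing a fixed pool of transmission units (time slots or virtual OFDMA subcarriers) among ONUs, honouring a guaranteed minimum and splitting the remainder in proportion to outstanding demand. The function below is a generic, hedged sketch of that policy, not the cited SD-DBO or meta-MAC algorithm; pool size and guarantees are placeholders.

```python
from typing import Dict

def dba_grants(requests: Dict[str, int], pool: int, guaranteed: int) -> Dict[str, int]:
    """Split `pool` units (slots/subcarriers) per cycle: each ONU first gets up to
    `guaranteed` units (capped by its request); leftover units are then shared in
    proportion to the still-unserved demand."""
    grants = {onu: min(req, guaranteed) for onu, req in requests.items()}
    leftover = pool - sum(grants.values())
    residual = {onu: requests[onu] - grants[onu] for onu in requests}
    total_residual = sum(residual.values())
    if leftover > 0 and total_residual > 0:
        for onu in requests:
            extra = min(residual[onu], leftover * residual[onu] // total_residual)
            grants[onu] += extra
    return grants

if __name__ == "__main__":
    reqs = {"onu-1": 10, "onu-2": 80, "onu-3": 200, "onu-4": 0}
    print(dba_grants(reqs, pool=256, guaranteed=32))
    # -> {'onu-1': 10, 'onu-2': 72, 'onu-3': 173, 'onu-4': 0} (255 of 256 units used)
```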
--- paper_title: Global dynamic bandwidth optimization for software defined optical access and aggregation networks paper_content: We propose a global dynamic bandwidth optimization algorithm for software defined optical access and aggregation networks, which can support unified optimizations and efficient scheduling by allocating bandwidth resources from a global network view in real-time. The performance benefits of the proposed algorithm in terms of resource utilization rate, average delay and delay of a single mobile user are verified through network simulation. --- paper_title: An SDN-driven approach to a flat Layer-2 telecommunications network paper_content: In this paper, we propose a design for a flat Layer 2 Metro-Core network as part of a Long Reach PON architecture that meets the demands of scalability, efficiency and economy within a modern telecommunications network. We introduce the concept of Mac Address Translation, which is equivalent to Network Address translation at Layer 3 but applied instead to layer 2. This allows layer 2 address space to be structured and fits well with the table driven approach of OpenFlow and the wider Software Defined Networks. Without structure at the layer 2 addressing level, the number of flow table rules to support a moderately sized layer 2 network would be very significant, for which there are few if any OpenFlow switch available with adequate TCAM tables. --- paper_title: Heterogeneous bandwidth provisioning for virtual machine migration over SDN-enabled optical networks paper_content: Virtual machine migration in cloud-computing environments is an important operational technique, and requires significant network bandwidth. We demonstrate that heterogeneous bandwidth (vs. homogeneous bandwidth) for migration reduces significant resource consumption in SDN-enabled optical networks. --- paper_title: Fast restoration in SDN-based flexible optical networks paper_content: The benefits of the SDN control plane to drive fast restoration are demonstrated on flexible optical networks. Required OpenFlow extensions are detailed. Simulations report improved recovery time with respect to GMPLS/PCE restoration. --- paper_title: Software defined optical network paper_content: Software define network, which has good potential for packet-switched IP network, is currently not available on circuit-switched transport network. This paper introduces the concept of software define optical network that applies SDN-like features to optical transport network, and reviews key enabling technologies at various layers, such as variable transponder, flexible switching node, control applications, and open interface with circuit extension. --- paper_title: Resonance: dynamic access control for enterprise networks paper_content: Enterprise network security is typically reactive, and it relies heavily on host security and middleboxes. This approach creates complicated interactions between protocols and systems that can cause incorrect behavior and slow response to attacks. We argue that imbuing the network layer with mechanisms for dynamic access control can remedy these ills. We propose Resonance, a system for securing enterprise networks, where the network elements themselves enforce dynamic access control policies based on both flow-level information and real-time alerts. 
Resonance uses programmable switches to manipulate traffic at lower layers; these switches take actions (e.g., dropping or redirecting traffic) to enforce high-level security policies based on input from both higherlevel security policies and distributed monitoring and inference systems. We describe the design of Resonance, apply it to Georgia Tech's network access control system, show how it can both overcome the current shortcomings and provide new security functions, describe our proposed deployment, and discuss open research questions. --- paper_title: FlowNAC: Flow-based Network Access Control paper_content: This paper presents FlowNAC, a Flow-based Network Access Control solution that allows to grant users the rights to access the network depending on the target service requested. Each service, defined univocally as a set of flows, can be independently requested and multiple services can be authorized simultaneously. Building this proposal over SDN principles has several benefits: SDN adds the appropriate granularity (fine-or coarse-grained) depending on the target scenario and flexibility to dynamically identify the services at data plane as a set of flows to enforce the adequate policy. FlowNAC uses a modified version of IEEE 802.1X (novel EAPoL-in-EAPoL encapsulation) to authenticate the users (without the need of a captive portal) and service level access control based on proactive deployment of flows (instead of reactive). Explicit service request avoids misidentifying the target service, as it could happen by analyzing the traffic (e.g. private services). The proposal is evaluated in a challenging scenario (concurrent authentication and authorization processes) with promising results. --- paper_title: Open transport switch: a software defined networking architecture for transport networks paper_content: There have been a lot of proposals to unify the control and management of packet and circuit networks but none have been deployed widely. In this paper, we propose a simple programmable architecture that abstracts a core transport node into a programmable virtual switch, that meshes well with the software-defined network paradigm while leveraging the OpenFlow protocol for control. A demonstration use-case of an OpenFlow-enabled optical virtual switch implementation managing a small optical transport network for big-data applications is described. With appropriate extensions to OpenFlow, we discuss how the programmability and flexibility SDN brings to packet-optical backbone networks will be substantial in solving some of the complex multi-vendor, multi-layer, multi-domain issues service providers face today. --- paper_title: Which is more suitable for the control over large scale optical networks, GMPLS or OpenFlow? paper_content: Two testbeds based on GMPLS and OpenFlow are built respectively to validate their performance over large scale optical networks. Blocking probability, wavelength utilization and lightpath setup time are shown on the topology with 1000 nodes. --- paper_title: Backup reprovisioning with partial protection for disaster-survivable software-defined optical networks paper_content: As networks grow in size, large-scale failures caused by disasters may lead to huge data loss, especially in an optical network employing wavelength-division multiplexing (WDM). 
Providing 100 % protection against disasters would require massive and economically unsustainable bandwidth overprovisioning, as disasters are difficult to predict, statistically rare, and may create large-scale failures. Backup reprovisioning schemes are proposed to remedy this problem, but in case of a large-scale disaster, even the flexibility provided by backup reprovisioning may not be enough, given the sudden reduction in available network resource, i.e., resource crunch. To mitigate the adverse effects of resource crunch, an effective resource reallocation is possible by exploiting service heterogeneity, specifically degraded-service tolerance, which makes it possible to provide some level of service, e.g., reduced capacity, to connections that can tolerate degraded service, versus no service at all. Software-Defined Networking (SDN) is a promising approach to perform such dynamic changes (redistribution of network resources) as it simplifies network management via centralized control logic. By exploiting these new opportunities, we propose a Backup Reprovisioning with Partial Protection (BRPP) scheme supporting dedicated-path protection, where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups as well as their routings are subject to dynamic changes, given the network state, to increase utilization. The performance of the proposed scheme is evaluated by means of SDN emulation using Mininet environment and OpenDaylight as the controller. --- paper_title: Design and test of a software defined hybrid network architecture paper_content: Circuit and packet switching convergence offers significant advantages in core networks to exploit their complementary characteristics in terms of flexibility, scalability and quality of service. This paper considers the possibility of unifying the two different types of transport using the Software Defined Networking (SDN) approach. The proposed architecture applies a modular design to the whole set of node functions, representing the key enabler for a fully programmable network implementation. This paper also proposes a possible extension to the basic concept of flow defined by the current OpenFlow standard to properly support a hybrid network. A set of experiments are performed to assess the main functionality and the performance of the hybrid node where packet and circuit switching are assumed to be configured through the OpenFlow protocol in a fully automated way. --- paper_title: Demonstration of SDN orchestration in optical multi-vendor scenarios paper_content: SDN brings automation to network operation and abstracts the complexity of optical networks. An orchestration layer is required to support multivendor interoperability scenarios. This work demonstrates that an ABNO architecture enables SDN controlled domain interoperability. --- paper_title: Towards networks of the future: SDN paradigm introduction to PON networking for business applications paper_content: The paper is devoted to consideration of an innovative access network dedicated to B2B (Business To Business) applications. We present a network design based on passive optical LAN architecture utilizing proven GPON technology. The major advantage of the solution is an introduction of SDN paradigm to PON networking. Thanks to such approach network configuration can be easily adapted to business customers' demands and needs that can change dynamically. 
The proposed solution provides a high level of service flexibility and supports sophisticated methods allowing user traffic forwarding in effective way within the considered architecture. --- paper_title: Unifying Packet and Circuit Switched Networks paper_content: There have been many attempts to unify the control and management of circuit and packet switched networks, but none have taken hold. In this paper we propose a simple way to unify both types of network using OpenFlow. The basic idea is that a simple flow abstraction fits well with both types of network, provides a common paradigm for control, and makes it easy to insert new functionality into the network. OpenFlow provides a common API to the underlying hardware, and allows all of the routing, control and management to be defined in software outside the datapath. --- paper_title: Providing Optical Network as a Service with Policy-based Transport SDN paper_content: This paper presents a novel policy-based mechanism to provide context-aware network-wide policies to Software Defined Networking (SDN) applications, implemented with a policy flow based on property graph models. The proposal has been validated in a transport SDN controller, supporting optical network virtualization via slicing of physical resources such as nodes, links and wavelengths, through use case testbed demonstrations of policy enforcement for SDN applications, including optical equalization and virtual optical network control. Additionally, the policy engine incorporates a simulation-assisted pre-setting mechanism for local policy decisions in case of problems in communication with the controller. --- paper_title: Experimental demonstration of OpenFlow control of packet and circuit switches paper_content: OpenFlow is presented as a unified control plane and architecture for packet and circuit switched networks. We demonstrate a simple proof-of-concept testbed, where a bidirectional wavelength circuit is dynamically created to transport a TCP flow. --- paper_title: Design and experimental test of 1:1 end-to-end protection for LR-PON using an SDN multi-tier control plane paper_content: We test an end-to-end 1:1 protection scheme for a combined LR-PON access and core networks using separate but loosely coupled SDN controllers, over a Pan-European network. Fast recovery is achieved in 7ms in the access and 52ms in the core. --- paper_title: Software defined optical networks technology and infrastructure: Enabling software-defined optical network operations paper_content: Software-defined networking (SDN) enables programmable SDN control and management functions at a number of layers, allowing applications to control network resources or information across different technology domains, e.g., Ethernet, wireless, and optical. Current cloud-based services are pushing networks to new boundaries by deploying cutting edge optical technologies to provide scalable and flexible services. SDN combined with the latest optical transport technologies, such as elastic optical networks, enables network operators and cloud service providers to customize their infrastructure dynamically to user/application requirements and therefore minimize the extra capital and operational costs required for hosting new services. In this paper a unified control plane architecture based on OpenFlow for optical SDN tailored to cloud services is introduced. Requirements for its implementation are discussed considering emerging optical transport technologies. 
Implementations of the architecture are proposed and demonstrated across heterogeneous state-of-the-art optical, packet, and IT resource integrated cloud infrastructure. Finally, its performance is evaluated using cloud use cases and its results are discussed. --- paper_title: Ethane: taking control of the enterprise paper_content: This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. Our operational Ethane network has supported over 300 hosts for the past four months in a large university network, and this deployment experience has significantly affected Ethane's design. --- paper_title: Integrated OpenFlow — GMPLS control plane: An overlay model for software defined packet over optical networks paper_content: A novel software-defined packet over optical networks solution based on the OpenFlow and GMPLS control plane integration is demonstrated. The proposed architecture, experimental setup, and average flow setup time for different optical flows is reported. --- paper_title: Future Proof Access Networks for B2B Applications paper_content: The paper offers an innovative approach for building future proof access network dedicated to B2B (Business To Business) applications. The conceptual model of considered network is based on three main assumptions. Firstly, we present a network design based on passive optical LAN architecture utilizing proven GPON (Gigabit-capable Passive Optical Network) technology. Secondly, the new business model is proposed. Finally, the major advantage of the solution is an introduction of SDN (Software-Defined Networking) paradigm to GPON area. Thanks to such approach network configuration can be easily adapted to business customers' demands and needs that can change dynamically over the time. The proposed solution provides a high level of service flexibility and supports sophisticated methods allowing users' traffic forwarding in efficient way. The paper extends a description of the OpenFlowPLUS protocol proposed in [18] . Additionally it provides an exemplary logical scheme of traffic forwarding relevant for GPON devices employing the OpenFlowPLUS solution. --- paper_title: Flexible traffic management in broadband access networks using Software Defined Networking paper_content: Over the years, the demand for high bandwidth services, such as live and on-demand video streaming, steadily increased. The adequate provisioning of such services is challenging and requires complex network management mechanisms to be implemented by Internet service providers (ISPs). In current broadband network architectures, the traffic of subscribers is tunneled through a single aggregation point, independent of the different service types it belongs to. While having a single aggregation point eases the management of subscribers for the ISP, it implies huge bandwidth requirements for the aggregation point and potentially high end-to-end latency for subscribers. An alternative would be a distributed subscriber management, adding more complexity to the management itself. 
In this paper, a new traffic management architecture is proposed that uses the concept of Software Defined Networking (SDN) to extend the existing Ethernet-based broadband network architecture, enabling a more efficient traffic management for an ISP. By using SDN-enabled home gateways, the ISP can configure traffic flows more dynamically, optimizing throughput in the network, especially for bandwidth-intensive services. Furthermore, a proof-of-concept implementation of the approach is presented to show the general feasibility and study configuration tradeoffs. Analytic considerations and testbed measurements show that the approach scales well with an increasing number of subscriber sessions. --- paper_title: An SDN-based integration of green TWDM-PONs and metro networks preserving end-to-end delay paper_content: A novel latency-aware aggregation node architecture supporting TWDM-PONs is successfully demonstrated. The node, performing traffic scheduling according to sleep-mode operations, includes a lightweight SDN solution, scaled to operate as intra-node controller. --- paper_title: Software-Defined Networks and the Interface to the Routing System (I2RS) paper_content: The key challenges facing network architecture today are the ability to change rapidly with business needs and to control complexity. The Interface to the Routing System is one form of software-defined networks designed to address specific problems at the Internet scale. --- paper_title: Quality of service management based on Software Defined Networking approach in wide GbE networks paper_content: This work experimentally demonstrates how to control and manage user Quality of Service (QoS) by acting on the switching on-off of the optical Gigabit Ethernet (GbE) interfaces in a wide area network test bed including routers and GPON accesses. The QoS is monitored at the user location by means of active probes developed in the framework of the FP7 MPLANE project. The network topology is managed according to some current Software Defined Network issues and in particular an Orchestrator checks the user quality, the traffic load in the GbE links and manages the network interface reconfiguration when congestion occurs in some network segments. --- paper_title: Energy efficiency with QoS control in dynamic optical networks with SDN enabled integrated control plane paper_content: The paper presents energy efficient routing algorithms based on a novel integrated control plane platform. The centralized control plane structure enables the use of flexible heuristic algorithms for route selection in optical networks. Differentiated routing for various traffic types is used in our previous work. The work presented in this paper further optimizes the energy performance in the whole network by utilizing a multi-objective evolutionary algorithm for route selection. The trade-off between energy optimization and QoS for high priority traffic is examined and results show an overall improvement in energy performance whilst maintaining satisfactory QoS. Energy savings are obtained on the low priority traffic whilst the QoS for the high priority traffic is not degraded. --- paper_title: Generalized SDN control for access/metro/core integration in the framework of the interface to the Routing System (I2RS) paper_content: Software defined networking (SDN), originally designed to operate on access Ethernet-based networks, has been recently proposed for different specific networking scenarios, including core or metro/aggregation networks. 
In this study, we extend this concept to enable comprehensive control of converged access and metro networks and the edges of a core network. In particular, a generalized SDN controller is proposed for upstream global QoS traffic engineering of passive optical networks (PONs), the Ethernet metro/aggregation segment and IP/MPLS networks through the adoption of a unique interface, in the framework of the Interface to the Routing System (I2RS). Extended OpenFlow functionalities and Path Computation Element Protocol (PCEP) interfaces are encompassed to achieve effective dynamic flow control. --- paper_title: Time-aware software defined networking (Ta-SDN) for flexi-grid optical networks supporting data center application paper_content: Data centers interconnected by flexi-grid optical networks are a promising scenario for meeting the high burstiness and high-bandwidth requirements of data center applications, because flexi-grid optical networks can allocate spectral resources for applications in a dynamic, tunable and efficient manner. Meanwhile, as a centralized control architecture, software defined networking (SDN) enabled by the OpenFlow protocol can provide maximum flexibility for the networks and unified control over various resources for the joint optimization of data center and network resources. The time factor is first introduced into the SDN-based control architecture for flexi-grid optical networks supporting data center applications. A traffic model considering the time factor is built, and a requirement parameter, i.e., the bandwidth-delay product, is adopted to measure the service requirement. Then, a time-aware software defined networking (Ta-SDN) based control architecture is designed with OpenFlow protocol extensions. A novel time-correlated PCE (TC-PCE) algorithm is proposed for time-correlated services under the Ta-SDN based control architecture, which completes data center selection, path computation and bandwidth resource allocation. Finally, simulation results show that the proposed Ta-SDN control architecture and TC-PCE algorithm can substantially improve application and network performance in terms of blocking probability. --- paper_title: A Survey on the Path Computation Element (PCE) Architecture paper_content: Quality of Service-enabled applications and services rely on Traffic Engineering-based (TE) Label Switched Paths (LSP) established in core networks and controlled by the GMPLS control plane. The path computation process is crucial to achieving the desired TE objective. Its actual effectiveness depends on a number of factors. Mechanisms utilized to update topology and TE information, as well as the latency between path computation and resource reservation, which is typically distributed, may affect path computation efficiency. Moreover, TE visibility is limited in many network scenarios, such as multi-layer, multi-domain and multi-carrier networks, and it may negatively impact resource utilization. The Internet Engineering Task Force (IETF) has promoted the Path Computation Element (PCE) architecture, proposing a dedicated network entity devoted to the path computation process. The PCE represents a flexible instrument to overcome visibility and distributed provisioning inefficiencies.
Communications between path computation clients (PCCs) and PCEs, realized through the PCE Protocol (PCEP), also enable inter-PCE communications, offering an attractive way to perform TE-based path computation among cooperating PCEs in multi-layer/domain scenarios while preserving scalability and confidentiality. This survey presents the state of the art of the PCE architecture for GMPLS-controlled networks as developed by the research and standardization community. In this work, packet (i.e., MPLS-TE and MPLS-TP) and wavelength/spectrum (i.e., WSON and SSON) switching capabilities are the considered technological platforms, in which the PCE is shown to achieve a number of evident benefits. --- paper_title: PCE: What is It, How Does It Work and What are Its Limitations? paper_content: In GMPLS-controlled optical networks, the utilization of source-based path computation has some limitations, especially in large networks with stringent constraints (e.g., optical impairments) or in multilayer and multidomain networks, which leads to suboptimal routing solutions. The path computation element (PCE) can mitigate some weaknesses of GMPLS-controlled optical networks. The main idea behind the PCE is to decouple the path computation function from the GMPLS controllers into a dedicated entity with an open and well-defined interface and protocol. A (stateless) PCE is capable of computing a network path or route based on a network graph (i.e., the traffic engineering database, TED) and applying computational constraints. First, we present an overview of the PCE architecture and its communication protocol (PCEP). Then, we present in detail the considered source-routing shortcomings in GMPLS-controlled networks, namely, impairment-aware path computation, multidomain path computation and multilayer path computation, as well as the different PCE-based solutions that have been proposed to overcome each one of these problems. However, PCE-based computation also presents some limitations that lead to an increase in path computation blocking or to suboptimal path computations. The stateful PCE overcomes the limitations of the stateless PCE, such as the outdated TED, the lack of global LSP state (i.e., the set of computed paths and reserved resources in use in the network), and the lack of control of path reservations. A passive stateful PCE allows optimal path computation and increased path computation success, considering both the network state (TED) and the Label Switched Path (LSP) state (LSP database, LSPDB). Additionally, an active stateful PCE can modify existing LSPs (i.e., connections) and, optionally, set up and/or release LSPs. Finally, the formal decoupling of the path computation allows more flexibility in the deployment of PCEs in other control paradigms outside their original scope (MPLS/GMPLS). In this sense, we provide an overview of three PCE deployment models in the software defined network (SDN) control architecture. --- paper_title: Group Sparse Beamforming for Green Cloud-RAN paper_content: A cloud radio access network (Cloud-RAN) is a network architecture that holds the promise of meeting the explosive growth of mobile data traffic. In this architecture, all the baseband signal processing is shifted to a single baseband unit (BBU) pool, which enables efficient resource allocation and interference management. Meanwhile, conventional powerful base stations can be replaced by low-cost, low-power remote radio heads (RRHs), producing a green and low-cost infrastructure.
However, as all the RRHs need to be connected to the BBU pool through optical transport links, the transport network power consumption becomes significant. In this paper, we propose a new framework to design a green Cloud-RAN, which is formulated as a joint RRH selection and power minimization beamforming problem. To efficiently solve this problem, we first propose a greedy selection algorithm, which is shown to provide near- optimal performance. To further reduce the complexity, a novel group sparse beamforming method is proposed by inducing the group-sparsity of beamformers using the weighted $\ell_1/\ell_2$-norm minimization, where the group sparsity pattern indicates those RRHs that can be switched off. Simulation results will show that the proposed algorithms significantly reduce the network power consumption and demonstrate the importance of considering the transport link power consumption. --- paper_title: Green Cellular Networks: A Survey, Some Research Issues and Challenges paper_content: Energy efficiency in cellular networks is a growing concern for cellular operators to not only maintain profitability, but also to reduce the overall environment effects. This emerging trend of achieving energy efficiency in cellular networks is motivating the standardization authorities and network operators to continuously explore future technologies in order to bring improvements in the entire network infrastructure. In this article, we present a brief survey of methods to improve the power efficiency of cellular networks, explore some research issues and challenges and suggest some techniques to enable an energy efficient or "green" cellular network. Since base stations consume a maximum portion of the total energy used in a cellular system, we will first provide a comprehensive survey on techniques to obtain energy savings in base stations. Next, we discuss how heterogeneous network deployment based on micro, pico and femto-cells can be used to achieve this goal. Since cognitive radio and cooperative relaying are undisputed future technologies in this regard, we propose a research vision to make these technologies more energy efficient. Lastly, we explore some broader perspectives in realizing a "green" cellular network technology --- paper_title: A survey of mobile cloud computing: architecture, applications, and approaches paper_content: Together with an explosive growth of the mobile applications and emerging of cloud computing concept, mobile cloud computing (MCC) has been introduced to be a potential technology for mobile services. MCC integrates the cloud computing into the mobile environment and overcomes obstacles related to the performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of the MCC including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed. Copyright © 2011 John Wiley & Sons, Ltd. --- paper_title: Demonstrating inter-testbed network virtualization in OFELIA SDN experimental facility paper_content: OFELIA is an experimental network designed to offer a diverse OpenFlow-enabled infrastructure to allow Software Defined Networking (SDN) experimentation. 
OFELIA is currently composed of ten sub-testbeds (called islands), most of them in Europe and one in Brazil. An experimenter gets access to a so-called slice: a subset of the testbed resources, such as nodes and links, including the OpenFlow programmable switches, on which to carry out an experiment. A new network virtualization tool called VeRTIGO has recently been presented to extend the way isolation is achieved between slices (slicing), allowing each experimenter to instantiate an arbitrary virtual network topology on top of a physical testbed. In this paper we present preliminary results obtained by deploying and using VeRTIGO in an experiment running across several OFELIA islands, which has proven to increase flexibility for experimenters willing to explore novel SDN concepts at large scale. --- paper_title: SDN-based cloud computing networking paper_content: Software Defined Networking (SDN) is a concept that allows network operators and data centres to flexibly manage their networking equipment using software running on external servers. According to the SDN framework, the control and management of the networks, which is usually implemented in software, is decoupled from the data plane. On the other hand, cloud computing materializes the vision of utility computing. Tenants can benefit from on-demand provisioning of networking, storage and compute resources according to a pay-per-use business model. In this work we present the networking issues in IaaS as well as the networking and federation challenges that are currently addressed with existing technologies. We also present innovative software-defined networking proposals, which are applied to some of the challenges and could be used in future deployments as efficient solutions. Cloud computing networking and the potential contribution of software-defined networking, along with some performance evaluation results, are presented in this paper. --- paper_title: Dynamic resource pooling and trading mechanism in flexible-grid optical network virtualization paper_content: Optical networks are ideal candidates for future intra- and inter-data center networks, due to their merits of high throughput, low energy consumption, high reliability, and so on. Optical network virtualization is a key technology to realize the deployment of various types of network-based applications on a single optical network infrastructure. Current virtual optical network embedding allocates resources in an exclusive and excessive manner. Typically, the spectrum amount of the virtual optical network's peak traffic is reserved along the optical paths. This may lead to high user cost and low carrier revenue. In this paper, we propose a dynamic resource pooling and trading mechanism, in which users do not need to reserve the spectrum amount of their peak traffic demands. We compare the user cost and carrier revenue of our dynamic mechanism with the traditional exclusive resource allocation, by formulating our mechanism as a Stackelberg game and finding the Subgame Perfect Equilibrium. The numerical results show that our proposed dynamic mechanism can reduce user cost while increasing carrier revenue under certain conditions.
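The Stackelberg formulation mentioned in the resource pooling and trading work above can be illustrated with a toy leader-follower model. The sketch below is only a minimal illustration under assumed utility functions and parameters (a logarithmic user utility, a finite price grid, and made-up weights), not the paper's actual model: for each candidate spectrum price the carrier (leader) anticipates the users' (followers') best-response demands and then selects the price that maximizes its revenue, which is the backward-induction logic behind a Subgame Perfect Equilibrium.

```python
# Toy Stackelberg pricing sketch for shared spectrum (illustrative assumptions only).
# Leader: the carrier sets a unit spectrum price p.
# Followers: each user i picks demand d_i maximizing a_i*log(1+d_i) - p*d_i,
# whose closed-form best response is d_i = max(0, a_i/p - 1).

def best_response(a, price, d_max=100.0):
    """Follower's optimal demand for the utility a*log(1+d) - price*d."""
    return min(d_max, max(0.0, a / price - 1.0))

def carrier_revenue(price, users):
    return price * sum(best_response(a, price) for a in users)

def solve_stackelberg(users, price_grid):
    """Backward induction over a finite price grid: the leader anticipates the
    followers' best responses and picks the revenue-maximizing price."""
    best_price = max(price_grid, key=lambda p: carrier_revenue(p, users))
    demands = [best_response(a, best_price) for a in users]
    return best_price, demands, carrier_revenue(best_price, users)

if __name__ == "__main__":
    users = [2.0, 3.5, 5.0]                    # assumed utility weights a_i
    prices = [0.1 * k for k in range(1, 51)]   # candidate prices 0.1 .. 5.0
    p, d, rev = solve_stackelberg(users, prices)
    print(f"price={p:.1f}, demands={[round(x, 2) for x in d]}, revenue={rev:.2f}")
```

Comparing the revenue at the chosen price with the revenue under peak-rate exclusive reservation would reproduce, in miniature, the kind of comparison the paper carries out numerically.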
--- paper_title: OpenFlow and Multi-layer Extensions: Overview and Next Steps paper_content: Even though software-defined networking (SDN) and the OpenFlow protocol have demonstrated great practicality in the packet domain, there has been some hesitance in extending the OpenFlow specification to circuit and optical switched domains that constitute wide area multi-layer transport networks. This paper presents an overview of various proposals with regards to extending OpenFlow to support circuit switched multi-layer networks. The goal is to shed light on these ideas and propose a way forward. This paper favors a top-down approach, which relies on transport network's main SDN use case: packet-optical integration, to help identify the sufficient extensions for OpenFlow to support circuit/optical switching. --- paper_title: Orion: A Hybrid Hierarchical Control Plane of Software-Defined Networking for Large-Scale Networks paper_content: The decoupled architecture and the fine-grained flow control feature of SDN limit the scalability of SDN network. In order to address this problem, some studies construct the flat control plane architecture, other studies build the hierarchical control plane architecture to improve the scalability of SDN. However, the two kinds of structure still have unresolved issues: the flat control plane structure can not solve the super-linear computational complexity growth of the control plane when SDN network scales to large size, the centralized abstracted hierarchical control plane structure brings path stretch problem. To address the two issues, we propose Orion, a hybrid hierarchical control plane for large-scale networks. Orion can effectively reduce the computational complexity growth of SDN control plane from super-linear to linear. Meanwhile, we design an abstracted hierarchical routing method to solve the path stretch problem. Further, Orion is implemented to verify the feasibility of the hybrid hierarchical approach. Finally, we verify the effectiveness of Orion both from the theoretical and experimental aspects. --- paper_title: Network virtualization controller for abstraction and control of OpenFlow-enabled multi-tenant multi-technology transport networks paper_content: A network hypervisor is introduced to dynamically deploy multi-tenant virtual networks on top of multi-technology optical networks. It provides an abstract view of each virtual network and enables its control through an independent SDN controller. --- paper_title: Optimal allocation of virtual optical networks for the future internet paper_content: Optical network infrastructures can be partitioned into multiple parallel, dedicated virtual networks for a physical infrastructure sharing purpose. However, different transport technologies may impact in both the amount and the characteristics of the different virtual instances that can be built on top of a single physical infrastructure. To analyse the impact of the transport technology in this regard, we present exact Integer Linear Programming (ILP) formulations that address the off-line problem of optimally allocate a set of virtual networks in two kind of substrates: wavelength switching and spectrum switching. Both formulations serve the purpose to provide opaque transport services from the virtual network point of view, where electronic terminations are assumed in the virtual network nodes. 
We carry out a series of experiments to validate the presented formulations and determine which is the impact of both substrates in the number of virtual networks that can be optimally allocated in the transport network. --- paper_title: Demonstration of SDN Based Optical Network Virtualization and Multidomain Service Orchestration paper_content: This paper describes a demonstration of SDN-based optical transport network virtualization and orchestration. Two scenarios are demonstrated: a dynamic setup of optical connectivity services inside a single domain as well as a multidomain service orchestration over a shared optical infrastructure using the architecture defined in the STRAUSS project. --- paper_title: Efficient wide area data transfer protocols for 100 Gbps networks and beyond paper_content: Due to a number of recent technology developments, now is the right time to re-examine the use of TCP for very large data transfers. These developments include the deployment of 100 Gigabit per second (Gbps) network backbones, hosts that can easily manage 40 Gbps, and higher, data transfers, the Science DMZ model, the availability of virtual circuit technology, and wide-area Remote Direct Memory Access (RDMA) protocols. In this paper we show that RDMA works well over wide-area virtual circuits, and uses much less CPU than TCP or UDP. We also characterize the limitations of RDMA in the presence of other traffic, including competing RDMA flows. We conclude that RDMA for Science DMZ to Science DMZ transfers of massive data is a viable and desirable option for high-performance data transfer. --- paper_title: WiMAX-VPON: A Framework of Layer-2 VPNs for Next-Generation Access Networks paper_content: This paper proposes WiMAX-VPON, a novel framework for establishing layer-2 virtual private networks (VPNs) over the integration of WiMAX and Ethernet passive optical networks, which has lately been considered as a promising candidate for next-generation fiber-wireless backhaul-access networks. With WiMAX-VPON, layer-2 VPNs support a bundle of service requirements to the respective registered wireless/wired users. These requirements are stipulated in the service level agreement and should be fulfilled by a suite of effective bandwidth management solutions. To achieve this, we propose a novel VPN-based admission control and bandwidth allocation scheme that provides per-stream quality-of-service protection and bandwidth guarantee for real-time flows. The bandwidth allocation is performed via a common medium access control protocol working in both the optical and wireless domains. An event-driven simulation model is implemented to study the effectiveness of the proposed framework. --- paper_title: Optical Network Design With Mixed Line Rates and Multiple Modulation Formats paper_content: With the growth of traffic volume and the emergence of various new applications, future telecom networks are expected to be increasingly heterogeneous with respect to applications supported and underlying technologies employed. To address this heterogeneity, it may be most cost effective to set up different lightpaths at different bit rates in such a backbone telecom mesh network employing optical wavelength-division multiplexing. This approach can be cost effective because low-bit-rate services will need less grooming (i.e., less multiplexing with other low-bit-rate services onto high-capacity wavelengths), while a high-bit-rate service can be accommodated directly on a wavelength itself. 
Optical networks with mixed line rates (MLRs), e.g., 10/40/100 Gb/s over different wavelength channels, are a new networking paradigm. The unregenerated reach of a lightpath depends on its line rate, so the assignment of a line rate to a lightpath is a tradeoff between its capacity and transparent reach. Thus, based on their signal-quality constraints (threshold bit error rate), intelligent assignment of line rates to lightpaths can minimize the need for signal regeneration. This constraint on the transparent reach based on threshold signal quality can be relaxed by employing more advanced modulation formats, but with more investment. We propose a design method for MLR optical networks with transceivers employing different modulation formats. Our results demonstrate the tradeoff between a transceiver's cost and its optical reach in overall network design. --- paper_title: From static to software-defined optical networks paper_content: Software-defined optical transceivers, a fully programmable optical express layer, and control plane-assisted network automation are key constituents of a new generation of optical core networks. This paper explains enabling technologies, reviews emerging applications, and discusses new questions arising for network design and modeling. It also examines the integration of the optical wavelength with the OTN and MPLS layers. --- paper_title: Optical FlowVisor: An OpenFlow-based optical network virtualization approach paper_content: A novel impairment-aware optical network virtualization mechanism (Optical FlowVisor) based on the software defined networking (OpenFlow-based) paradigm is experimentally demonstrated. End-to-end flow setup time and the performance of virtual switches and the OpenFlow controller are reported. --- paper_title: Pushing Software Defined Networking to the Access paper_content: As the availability of Software Defined Networking (SDN), and in particular OpenFlow (OF), increases, the desire to extend its use to other scenarios grows. It would be appealing to bring substantial parts of the network under OF control, but until recently this implied replacing much of the hardware with OF-enabled versions. There are some cases, such as access networks, in which the benefits could be considerable but which involve a great amount of legacy equipment that is difficult to replace. In this case an alternative method of enabling OF on these devices would be useful. In this paper we describe an architecture and software which could enable OF on many access technologies with minimal changes. The software has been written and tested on a Gigabit Ethernet Passive Optical Network (GEPON). The approach is engineered to be easily ported to any access technology with minimal requirements made on that hardware. --- paper_title: Intelligent Multipath Access in Fiber-Wireless (FiWi) Network with Network Virtualization paper_content: We apply network virtualization to remove the differences between heterogeneous networks in a Fiber-Wireless (FiWi) network and establish intelligent multipath access through the flexible use of virtual networks (VNs) deployed in a virtual resource manager (VRM). --- paper_title: Role of optical network virtualization in cloud computing [invited] paper_content: New and emerging Internet applications are increasingly becoming high-performance and network-based, relying on optical network and cloud computing services.
Due to the accelerated evolution of these applications, the flexibility and efficiency of the underlying optical network infrastructure as well as the cloud computing infrastructure [i.e., data centers (DCs)] become more and more crucial. In order to achieve the required flexibility and efficiency, coordinated provisioning of DCs and optical network interconnecting DCs is essential. In this paper, we address the role of high-performance dynamic optical networks in cloud computing environments. A DC as a service architecture for future cloud computing is proposed. Central to the proposed architecture is the coordinated virtualization of optical network and IT resources of distributed DCs, enabling the composition of virtual infrastructures (VIs). During the composition process of the multiple coexisting but isolated VIs, the unique characteristics of optical networks (e.g., optical layer constraints and impairments) are addressed and taken into account. The proposed VI composition algorithms are evaluated over various network topologies and scenarios. The results provide a set of guidelines for the optical network and DC infrastructure providers to be able to effectively and optimally provision VI services to users and satisfy their requirements. --- paper_title: RPR-EPON-WiMAX hybrid network: A solution for access and metro networks paper_content: The integration of Ethernet passive optical networks (EPONs) with wireless worldwide interoperability for microwave access (WiMAX) is an approved solution for an access network. A resilient packet ring (RPR) is a good candidate for a metro network. Hence RPR, EPON, and WiMAX integration is a viable solution for metro-access network bridging. The present paper examines such integration, including an architecture and a joint media access control (MAC) protocol, as a solution for both access and metro networks. The proposed architecture is reliable due to the dependability of the RPR standard and the protection mechanism employed in the EPON. Moreover, the architecture contains a high fault tolerance against node and connection failure. The suggested MAC protocol includes a multi-level dynamic bandwidth allocation algorithm, a distributed admission control, a scheduler, and a routing algorithm. This MAC protocol aims at maximizing the advantages of the proposed architecture by distributing its functionalities over different parts of the architecture and jointly executing the parts of the MAC protocol. --- paper_title: SDN/NFV orchestration for dynamic deployment of virtual SDN controllers as VNF for multi-tenant optical networks paper_content: We propose to virtualize the SDN control functions and move them to the cloud. We experimentally evaluate the first SDN/NFV orchestration architecture to dynamically deploy independent SDN controller instances for each deployed virtual optical network. --- paper_title: OpenSlice: An OpenFlow-based control plane for spectrum sliced elastic optical path networks paper_content: A control plane is a key enabling technique for dynamic and intelligent end-to-end path provisioning in optical networks. In this paper, we present an OpenFlow-based control plane for spectrum sliced elastic optical path networks, called OpenSlice, for dynamic end-to-end path provisioning and IP traffic offloading. Experimental demonstration and numerical evaluation show its overall feasibility and efficiency. 
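To make the kind of OpenFlow extension described for OpenSlice more concrete, the sketch below models a bandwidth-variable cross-connect entry that carries a central frequency and a slot width, together with a small controller helper that installs one entry per hop of a precomputed route. The field names, the in-memory flow tables, and the install_path helper are assumptions made for illustration; they are not the actual OpenSlice protocol extension.

```python
# Minimal sketch of elastic optical "flow" entries, loosely inspired by the idea of
# extending OpenFlow with spectrum fields (assumed fields, not the real extension).

from dataclasses import dataclass

@dataclass(frozen=True)
class OpticalFlowEntry:
    in_port: int
    out_port: int
    central_freq_thz: float   # central frequency of the allocated spectrum slice
    slot_width_ghz: float     # total width of the allocated frequency slots

def install_path(switch_tables, route_ports, central_freq_thz, slot_width_ghz):
    """Install one cross-connect entry per node along the route.

    switch_tables: dict node -> list of OpticalFlowEntry (toy flow tables)
    route_ports:   list of (node, in_port, out_port) tuples for the route
    """
    for node, in_port, out_port in route_ports:
        entry = OpticalFlowEntry(in_port, out_port, central_freq_thz, slot_width_ghz)
        switch_tables.setdefault(node, []).append(entry)

if __name__ == "__main__":
    tables = {}
    route = [("A", 1, 3), ("B", 2, 4), ("C", 1, 2)]   # assumed nodes and ports
    install_path(tables, route, central_freq_thz=193.1, slot_width_ghz=37.5)
    for node, entries in tables.items():
        print(node, entries)
```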
--- paper_title: Performance improvement for applying network virtualization in fiber-wireless (FiWi) access networks paper_content: Fiber-wireless (FiWi) access networks, which are a combination of fiber networks and wireless networks, have the advantages of both networks, such as high bandwidth, high security, low cost, and flexible access. However, with the increasing need for bandwidth and types of service from users, FiWi networks are still relatively incapable and ossified. To alleviate bandwidth tension and facilitate new service deployment, we attempt to apply network virtualization in FiWi networks, in which the network’s control plane and data plane are separated from each other. Based on a previously proposed hierarchical model and service model for FiWi network virtualization, the process of service implementation is described. The performances of the FiWi access networks applying network virtualization are analyzed in detail, including bandwidth for links, throughput for nodes, and multipath flow transmission. Simulation results show that the FiWi network with virtualization is superior to that without. --- paper_title: Using SDN technology to enable cost-effective bandwidth-on-demand for cloud services paper_content: We describe bandwidth-on-demand in an evolved multi-layer, SDN-based Cloud Services model. We also show an initial proof-of-concept demonstration of this capability. --- paper_title: SDN orchestration of OpenFlow and GMPLS flexi-grid networks with a stateful hierarchical PCE [invited] paper_content: New and emerging use cases, such as the interconnection of geographically remote data centers, are drawing attention to the need for provisioning end-to-end connectivity services spanning multiple and heterogeneous network domains. This heterogeneity is due not only to the data transmission and switching technology (the so-called data plane) but also to the deployed control plane, which may be used within each domain to automate the setup and recovery of such services, dynamically. The choice of a control plane is affected by factors such as availability, maturity, operator's preference, and the ability to satisfy a list of functional requirements. Given the current developments around OpenFlow and software-defined networking (SDN) along with the need to account for existing deployments based on GMPLS, the problem of heterogeneous control plane interworking needs to be solved. The retained solution must equally address the specific issues of multidomain networks, such as limited domain topology visibility, given the scalability and confidentiality constraints that characterize them. In this setting, we propose a functional and protocol architecture for such interworking, based on the key concepts of network abstraction and overarching control, implemented in terms of a hierarchical stateful path computation element (PCE), which provides the orchestration and coordination layer. In the proposed architecture, the PCEP and BGP-LS protocols are extended to support OpenFlow addresses and datapath identifiers, unifying both GMPLS and OpenFlow domains. The solution is deployed in an experimental testbed and validated. Although the main scope of the approach is the interworking of OpenFlow and GMPLS, the same approach can be directly applied to a wide range of multidomain scenarios, with either homogeneous or heterogeneous control technologies. 
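The hierarchical stateful PCE used above for multi-domain orchestration can be sketched as a parent PCE that only sees an abstracted graph of domains and delegates intra-domain expansion to child PCEs. The code below is a simplified illustration over assumed toy topologies and border nodes; it omits PCEP, statefulness, and BGP-LS entirely.

```python
# Simplified hierarchical path computation sketch (assumed topologies; no PCEP).

from collections import deque

def bfs_path(graph, src, dst):
    """Shortest path by hop count in an adjacency-dict graph, or None if unreachable."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

# Parent PCE view: abstracted graph of domains.
domain_graph = {"D1": ["D2"], "D2": ["D1", "D3"], "D3": ["D2"]}

# Child PCE views: intra-domain graphs with assumed border nodes.
child_graphs = {
    "D1": {"a1": ["a2"], "a2": ["a1", "bd12"], "bd12": ["a2"]},
    "D2": {"bd12": ["b1"], "b1": ["bd12", "bd23"], "bd23": ["b1"]},
    "D3": {"bd23": ["c1"], "c1": ["bd23", "c2"], "c2": ["c1"]},
}
borders = {("D1", "D2"): "bd12", ("D2", "D3"): "bd23"}  # assumed inter-domain border nodes

def hierarchical_path(src_domain, dst_domain, src_node, dst_node):
    """Parent PCE picks the domain sequence; each child PCE expands its own segment."""
    sequence = bfs_path(domain_graph, src_domain, dst_domain)
    end_to_end, entry = [], src_node
    for i, dom in enumerate(sequence):
        exit_node = dst_node if i == len(sequence) - 1 else borders[(dom, sequence[i + 1])]
        segment = bfs_path(child_graphs[dom], entry, exit_node)
        end_to_end += segment if not end_to_end else segment[1:]
        entry = exit_node
    return end_to_end

print(hierarchical_path("D1", "D3", "a1", "c2"))  # a1 -> a2 -> bd12 -> b1 -> bd23 -> c1 -> c2
```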
--- paper_title: Resilient design of a cloud system over an optical backbone paper_content: Cloud computing enables users to receive infrastructure/platform/software as a service (XaaS) via a shared pool of resources in a pay-as-you-go fashion. Data centers, as the hosts of physical servers, play a key role in the delivery of cloud services. Therefore, interconnection of data centers over a backbone network is one of the major challenges affecting the performance of the cloud system, as well as the operational expenditures of service providers. This article proposes a resilient design of a cloud backbone through demand profile-based network virtualization where the data centers are located at the core nodes of an IP over elastic optical network. Three approaches, MOPIC, MRPIC, and RS-MOPIC, are proposed. MOPIC aims to design the cloud backbone with minimum outage probability per demand. MRPIC aims to minimize the usage of network resources while routing the cloud demands toward data centers. RS-MOPIC is a hybrid of both approaches aiming to reduce network resource usage while minimizing outage probability. Through simulations of a small-scale cloud scenario, we show that incorporation of manycast provisioning ensures significantly low outage probability, on the order of 10^-7. Furthermore, integration of a resource-saving objective into MOPIC can make a compromise between network resource consumption and the outage probability of the workloads submitted to the cloud. --- paper_title: Invited paper: The audacity of fiber-wireless (FiWi) networks: revisited for clouds and cloudlets paper_content: There is a growing awareness among industry players of reaping the benefits of mobile-cloud convergence by extending today's unmodified cloud to a decentralized two-level cloud-cloudlet architecture based on emerging mobile-edge computing (MEC) capabilities. In light of future 5G mobile networks moving toward decentralization based on cloudlets, intelligent base stations, and MEC, the inherent distributed processing and storage capabilities of radio-and-fiber (R&F) networks may be exploited for new applications, e.g., cognitive assistance, augmented reality, or cloud robotics. In this paper, we first revisit fiber-wireless (FiWi) networks in the context of conventional clouds and emerging cloudlets, thereby highlighting the limitations of conventional radio-over-fiber (RoF) networks such as China Mobile's centralized cloud radio access network (C-RAN) to meet the aforementioned trends. Furthermore, we pay close attention to the specific design challenges of data center networks and revisit our switchless arrayed-waveguide grating (AWG) based network with efficient support of east-west flows and enhanced scalability. --- paper_title: A survey of recent developments on flexible/elastic optical networking paper_content: There is a growing awareness that the utilized bandwidth of deployed optical fiber is rapidly approaching its maximum limit. Given the possibility of such a capacity crunch, the research community has focused on seeking solutions that make the most out of the scarce network resources (such as the fiber bandwidth) and allow accommodating the ever-increasing traffic demands. In this context, new spectrum-efficient optical networking techniques have been introduced as a way to offer efficient utilization of the available optical resources.
"Flexible", "elastic", "tunable", "gridless" or "adaptive" are few examples of the terms used in literature to describe solutions that migrate from the fixed WDM single line rate systems to systems that provide support for the most efficient bandwidth utilization. In this paper, we review the recent developments on the research topic of flexible/elastic networking and we highlight the future research challenges. --- paper_title: Virtualized optical network (VON) for agile cloud computing environment paper_content: A virtualized optical network is proposed as a key to implementing increased agility and flexibility into a cloud computing environment by providing any-to-any connectivity with the appropriate optical bandwidth at the appropriate time. --- paper_title: Network virtualization based seamless networking scheme for fiber-wireless (FiWi) networks paper_content: In order to reduce cost and complexity, fiber-wireless (FiWi) networks emerge, combining the huge amount of available bandwidth of fiber networks and the flexibility, mobility of wireless networks. However, there is still a long way to go before taking fiber and wireless systems as fully integrated networks. In this paper, we propose a network virtualization based seamless networking scheme for FiWi networks, including hierarchical model, service model, service implementation and dynamic bandwidth assignment (DBA). Then, we evaluate the performance changes after network virtualization is introduced. Throughput for nodes, bandwidth for links and overheads leaded by network virtualization are analyzed. The performance of our proposed networking scheme is evaluated by simulation and real implementations, respectively. The results show that, compared to traditional networking scheme, our scheme has a better performance. --- paper_title: Clouds of virtual machines in edge networks paper_content: This article addresses the potential impact of emerging technologies and solutions, such as software defined networking and network function virtualization, on carriers' network evolution. It is argued that standard hardware advances and these emerging paradigms can bring the most impactful disruption at the network's edge, enabling the deployment of clouds of nodes using standard hardware: it will be possible to virtualize network and service functions, which are provided today by expensive middleboxes, and move them to the edge, as close as possible to users. Specifically, this article identifies some of key technical challenges behind this vision, such as dynamic allocation, migration, and orchestration of ensembles of virtual machines across wide areas of interconnected edge networks. This evolution of the network will profoundly affect the value chain: it will create new roles and business opportunities, reshaping the entire ICT world. --- paper_title: Spectrum management techniques for elastic optical networks: A survey☆ paper_content: In recent years, OFDM has been the focus of extensive research efforts in optical transmission and networking, initially as a means to overcome physical impairments in optical communications. However, unlike, say, in wireless LANs or xDSL systems where OFDM is deployed as a transmission technology in a single link, in optical networks it is being considered as the technology underlying the novel elastic network paradigm. Consequently, network-wide spectrum management arises as the key challenge to be addressed in network design and control. 
In this work, we review and classify a range of spectrum management techniques for elastic optical networks, including offline and online routing and spectrum assignment (RSA), distance-adaptive RSA, fragmentation-aware RSA, traffic grooming, and survivability. --- paper_title: Integrated SDN/NFV management and orchestration architecture for dynamic deployment of virtual SDN control instances for virtual tenant networks [invited] paper_content: Software-defined networking (SDN) and network function virtualization (NFV) have emerged as the most promising candidates for improving network function and protocol programmability and dynamic adjustment of network resources. On the one hand, SDN is responsible for providing an abstraction of network resources through well-defined application programming interfaces. This abstraction enables SDN to perform network virtualization, that is, to slice the physical infrastructure and create multiple coexisting application-specific virtual tenant networks (VTNs) with specific quality-of-service and service-levelagreement requirements, independent of the underlying optical transport technology and network protocols. On the other hand, the notion of NFV relates to deploying network functions that are typically deployed in specialized and dedicated hardware, as software instances [called virtual network functions (VNFs)] running on commodity servers (e.g., in data centers) through software virtualization techniques. Despite all the attention that has been given to virtualizing IP functions (e.g., firewall; authentication, authorization, and accounting) or Long-Term Evolution control functions (e.g., mobility management entity, serving gateway, and packet data network gateway), some transport control functions can also be virtualized and moved to the cloud as a VNF. In this work we propose virtualizing the tenant SDN control functions of a VTN and moving them into the cloud. The control of a VTN is a key requirement associated with network virtualization, since it allows the dynamic programming (i.e., direct control and configuration) of the virtual resources allocated to the VTN. We experimentally assess and evaluate the first SDN/NFV orchestration architecture in a multipartner testbed to dynamically deploy independent SDN controller instances for each instantiated VTN and to provide the required connectivity within minutes. --- paper_title: Enhancing restoration performance using service relocation in PCE-based resilient optical clouds paper_content: This paper investigates the benefits of dynamic restoration with service relocation in resilient optical clouds. Results from the proposed optimization model show that service availability can be significantly improved by allowing a few service relocations. --- paper_title: Towards a carrier SDN: an example for elastic inter-datacenter connectivity paper_content: We propose a network-driven transfer mode for cloud operations in a step towards a carrier SDN. Inter-datacenter connectivity is requested in terms of volume of data and completion time. The SDN controller translates and forwards requests to an ABNO controller in charge of a flexgrid network. 
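The translation step described in the carrier SDN work above, from an application-level request expressed as a data volume and a completion time into a flexi-grid connectivity request, can be sketched as a small helper. The 12.5 GHz slot granularity, the assumed per-slot capacity, and the function name are illustrative choices, not parameters taken from the paper or from ABNO.

```python
# Sketch: translate a (volume, completion time) transfer request into a number of
# frequency slots for a flexi-grid connection (assumed slot size and capacity).

import math

SLOT_CAPACITY_GBPS = 25.0   # assumed net rate carried per 12.5 GHz frequency slot

def slots_for_transfer(volume_gbytes, completion_time_s):
    """Return (required rate in Gb/s, number of frequency slots) for the request."""
    required_rate_gbps = volume_gbytes * 8.0 / completion_time_s
    n_slots = math.ceil(required_rate_gbps / SLOT_CAPACITY_GBPS)
    return required_rate_gbps, n_slots

if __name__ == "__main__":
    # Example: move 10 TB between data centers within one hour.
    rate, slots = slots_for_transfer(volume_gbytes=10_000, completion_time_s=3600)
    print(f"required rate ~ {rate:.1f} Gb/s -> {slots} slot(s) of 12.5 GHz")
```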
--- paper_title: PONIARD: A Programmable Optical Networking Infrastructure for Advanced Research and Development of Future Internet paper_content: Motivated by the design goals of Global Environment for Network Innovation (GENI), we consider how to support the slicing of link bandwidth resources as well as the virtualization of optical access networks and optical backbone mesh networks. Specifically, in this paper, we study a novel programmable mechanism called optical orthogonal frequency division multiplexing (OFDM)/orthogonal frequency division multiple access (OFDMA) for link virtualization. Unlike conventional time division multiplexing (TDM)/time division multiple access (TDMA) and wavelength division multiplexing (WDM)/wavelength division multiple access (WDMA) methods, optical OFDM/OFDMA utilizes advanced digital signal processing (DSP), parallel signal detection (PSD), and flexible resource management schemes for subwavelength level multiplexing and grooming. Simulations as well as experiments are conducted to demonstrate performance improvements and system benefits including cost-reduction and service transparency. --- paper_title: Design and implementation of the OFELIA FP7 facility: The European OpenFlow testbed paper_content: The growth of the Internet in terms of number of devices, the number of networks associated to each device and the mobility of devices and users makes the operation and management of the Internet network infrastructure a very complex challenge. In order to address this challenge, innovative solutions and ideas must be tested and evaluated in real network environments and not only based on simulations or laboratory setups. OFELIA is an European FP7 project and its main objective is to address the aforementioned challenge by building and operating a multi-layer, multi-technology and geographically distributed Future Internet testbed facility, where the network itself is precisely controlled and programmed by the experimenter using the emerging OpenFlow technology. This paper reports on the work done during the first half of the project, the lessons learned as well as the key advantages of the OFELIA facility for developing and testing new networking ideas. An overview on the challenges that have been faced on the design and implementation of the testbed facility is described, including the OFELIA Control Framework testbed management software. In addition, early operational experience of the facility since it was opened to the general public, providing five different testbeds or islands, is described. --- paper_title: Renewable Energy-Aware Inter-Datacenter Virtual Machine Migration over Elastic Optical Networks paper_content: Datacenters (DCs) are deployed in a large scale to support the ever increasing demand for data processing to support various applications. The energy consumption of DCs becomes a critical issue. Powering DCs with renewable energy can effectively reduce the brown energy consumption and thus alleviates the energy consumption problem. Owing to geographical deployments of DCs, the renewable energy generation and the data processing demands usually vary in different DCs. Migrating virtual machines (VMs) among DCs according to the availability of renewable energy helps match the energy demands and the renewable energy generation in DCs, and thus maximizes the utilization of renewable energy. Since migrating VMs incurs additional traffic in the network, the VM migration is constrained by the network capacity. 
The inter-datacenter (inter-DC) VM migration with network capacity constraints is an NP-hard problem. In this paper, we propose two heuristic algorithms that approximate the optimal VM migration solution. Through extensive simulations, we show that the proposed algorithms, by migrating VM among DCs, can reduce up to 31% of brown energy consumption. --- paper_title: SDN and OpenFlow for converged access/aggregation networks paper_content: This paper discusses necessary steps for the migration from today's residential network model to a converged access/aggregation platform based on software defined networks (SDN). --- paper_title: Traffic Optimization in Multi-layered WANs Using SDN paper_content: Wide area networks (WAN) forward traffic through a mix of packet and optical data planes, composed by a variety of devices from different vendors. Multiple forwarding technologies and encapsulation methods are used for each data plane (e.g. IP, MPLS, ATM, SONET, Wavelength Switching). Despite standards defined, the control planes of these devices are usually not interoperable, and different technologies are used to manage each forwarding segment independently (e.g. Open Flow, TL-1, GMPLS). The result is lack of coordination between layers and inefficient resource usage. In this paper we discuss the design and implementation of a system that uses unmodified Open Flow to optimize network utilization across layers, enabling practical bandwidth virtualization. We discuss strategies for scalable traffic monitoring and to minimize losses on route updates across layers. A prototype of the system was built using a traditional circuit reservation application and an unmodified SDN controller, and its evaluation was performed on a multi-vendor test bed. --- paper_title: Cloudnet: A platform for optimized wan migration of virtual machines paper_content: Cloud computing platforms are growing from clusters of machines within a data center to networks of data centers with resources spread across the globe. Virtual machine migration within the LAN has changed the scale of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration to likewise transform the scope of provisioning from a single data center to multiple data centers spread across the country or around the world. In this paper we propose a cloud computing platform linked with a VPN based network infrastructure that provides seamless connectivity between enterprise and data center sites, as well as support for live WAN migration of virtual machines. We describe a set of optimizations that minimize the cost of transferring persistent storage and moving virtual machine memory during migrations over low bandwidth, high latency Internet links. Our evaluation on both a local testbed and across two real data centers demonstrates that these improvements can reduce total migration and pause time by over 30%. During simultaneous migrations of four VMs between Texas and Illinois, CloudNet’s optimizations reduce memory migration time by 65% and lower bandwidth consumption for the storage and memory transfer by 20GB, a 57% reduction. --- paper_title: Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies paper_content: The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. 
In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum-efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of the SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture. --- paper_title: Efficient load balancing multipath algorithm for fiber-wireless network virtualization paper_content: The fiber-wireless network, which is a combination of the fiber subnetwork and wireless subnetwork, has provided high bandwidth access with ubiquity and mobility. But traditional single-path transmission cannot satisfy people's requirements for network performance due to various services. Multipath algorithms have been proposed as a solution to the network congestion. The multipath algorithm in traditional networks has some limits in such heterogeneous networks. The application of network virtualization, which can hide the differences of underlying physical infrastructures, provides a potential method to solve this problem. In this paper we propose a Modified Weighted Round Robin (MWRR) algorithm based on the model of fiber-wireless network virtualization. Specific scheduling schemes will be arranged according to the quality-of-service requests and the states of links via the global view in the control plane. The simulation results show smaller end-to-end delay and better load balancing. --- paper_title: Routing and Spectrum Allocation in Elastic Optical Networks: A Tutorial paper_content: Flexgrid technology is now considered to be a promising solution for future high-speed network design. In this context, we need a tutorial that covers the key aspects of elastic optical networks. This tutorial paper starts with a brief introduction of the elastic optical network and its unique characteristics. The paper then moves to the architecture of the elastic optical network and its operation principle. To complete the discussion of network architecture, this paper focuses on the different node architectures, and compares their performance in terms of scalability and flexibility. Thereafter, this paper reviews and classifies routing and spectrum allocation (RSA) approaches including their pros and cons. Furthermore, various aspects, namely, fragmentation, modulation, quality-of-transmission, traffic grooming, survivability, energy saving, and networking cost related to RSA, are presented. Finally, the paper explores the experimental demonstrations that have tested the functionality of the elastic optical network, and follows that with the research challenges and open issues posed by flexible networks.
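Several of the elastic-optical-network references above (distance-adaptive RSA, the SLICE flexible-rate transponder work, the RSA tutorial) rely on choosing a modulation format from the path length and deriving the spectrum needed for a demand. The sketch below illustrates only that distance-adaptive selection step; the reach limits, baud rate and slot accounting are assumed round numbers for illustration, not values from the cited studies.

```python
# Distance-adaptive modulation selection sketch for an elastic optical path.
# Reach limits and slot arithmetic are assumed, illustrative figures.

# (modulation, bits per symbol, maximum transparent reach in km) - assumed values
MODULATIONS = [("16QAM", 4, 500), ("8QAM", 3, 1000), ("QPSK", 2, 2000), ("BPSK", 1, 4000)]
BAUD_RATE_GBAUD = 32  # per carrier, assumed

def select_modulation(path_km):
    """Pick the most spectrally efficient format whose reach covers the path."""
    for name, bits, reach in MODULATIONS:
        if path_km <= reach:
            return name, bits
    return None  # regeneration would be needed

def required_carriers(demand_gbps, bits_per_symbol):
    """Carriers needed, each assumed to occupy one frequency slot (simplified)."""
    capacity_per_carrier = BAUD_RATE_GBAUD * bits_per_symbol
    return -(-demand_gbps // capacity_per_carrier)  # ceiling division

# Usage: a 400 Gb/s demand over an 800 km path.
mod, bits = select_modulation(800)
print(mod, required_carriers(400, bits))  # -> 8QAM 5
```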
--- paper_title: ABNO: A feasible SDN approach for multi-vendor IP and optical networks paper_content: ABNO architecture is proposed in IETF as a framework which enables network automation and programmability thanks to the utilization of standard protocols and components. This work not only justifies the architecture but also presents the first experimental demonstration. --- paper_title: Performance Analysis of QoS-Aware Layer-2 VPNs over Fiber-Wireless (FiWi) Networks paper_content: The integration of Ethernet Passive Optical Networks (EPONs) and IEEE 802.16 (WiMAX) has been lately presented as a promising fiber-wireless (FiWi) broadband access network. Conversely, lightweight layer-2 virtual private networks (VPNs) over FiWi, which can provide bandwidth guarantee to the respective users, were only recently addressed by Dhaini et. al. In this paper, WiMAX-VPON, the framework proposed by Dhaini et. al to support layer-2 VPNs over EPON-WiMAX, is improved to take into account the polling control overhead when distributing the VPN bandwidth. A new generic analytical model is also presented to evaluate the performance of each registered VPN service. Our proposed model, which can also be used to analyze any polling-based FiWi network, applies for wireless and optical domains and provides performance measurements such as packet queuing delay, end-to-end (from wireless user to optical server) packet delay and average queue size. Numerical results are compared with simulation experiments, and show consistency between both outcomes. --- paper_title: Physical layer impairment aware routing (PLIAR) in WDM optical networks: issues and challenges paper_content: In WDM optical networks, the physical layer impairments (PLIs) and their significance depend on network type-opaque, translucent, or transparent; the reach-access, metro, or core/long-haul; the number and type of network elements-fiber, wavelengths, amplifiers, switching elements, etc.; and the type of applications-real-time, non-real time, missioncritical, etc. In transparent optical networks, PLIs incurred by non-ideal optical transmission media accumulate along an optical path, and the overall effect determines the feasibility of the lightpaths. If the received signal quality is not within the receiver sensitivity threshold, the receiver may not be able to correctly detect the optical signal and this may result in high bit-error rates. Hence, it is important to understand various PLIs and their effect on optical feasibility, analytical models, and monitoring and mitigation techniques. Introducing optical transparency in the physical layer on one hand leads to a dynamic, flexible optical layer with the possibility of adding intelligence such as optical performance monitoring, fault management, etc. On the other hand, transparency reduces the possibility of client layer interaction with the optical layer at intermediate nodes along the path. This has an impact on network design, planning, control, and management. Hence, it is important to understand the techniques that provide PLI information to the control plane protocols and that use this information efficiently to compute feasible routes and wavelengths. The purpose of this article is to provide a comprehensive survey of various PLIs, their effects, and the available modeling and mitigation techniques. 
We then present a comprehensive survey of various PLI-aware network design techniques, regenerator placement algorithms, routing and wavelength assignment algorithms, and PLI-aware failure recovery algorithms. Furthermore, we identify several important research issues that need to be addressed to realize dynamically reconfigurable next-generation optical networks. We also argue the need for PLI-aware control plane protocol extensions and present several interesting issues that need to be considered in order for these extensions to be deployed in real-world networks. --- paper_title: Performance of Multipath in Fiber-Wireless (FiWi) Access Network with Network Virtualization paper_content: Nowadays, multipath routing algorithms and resource distribution strategies of the homogeneous network are the research focus of the Fiber-Wireless (FiWi) network which is the combination of optical subnetwork and wireless subnetwork. However few studies concerned on the efficient way to set up multipath in FiWi network as the affiliation of heterogeneous networks and packet reordering are tangled problems. Separating the Internet service provider (ISP) into two independent sections, nfrastructure provider (InP) and service provider (SP), the proposal of network virtualization provides a potential method to solve this problem. As a starting point, in this paper we apply network virtualization to remove the differences between heterogeneous networks to take FiWi network as a whole. Moreover, we propose a viable way to establish multipath access in the FiWi network through the flexible use of virtual networks (VNs) which can be deployed in the virtual resource manager (VRM). Besides, we present the superior performance of multipath in the FiWi network with network virtualization based on the simulation results by using the multipath scheduling policy Weighted Round Robin (WRR). --- paper_title: SDN-Based Network Orchestration of Variable-Capacity Optical Packet Switching Network Over Programmable Flexi-Grid Elastic Optical Path Network paper_content: A multidomain and multitechnology optical network orchestration is demonstrated in an international testbed located in Japan, the U.K., and Spain. The application-based network operations architecture is proposed as a carrier software-defined network solution for provisioning end-to-end optical transport services through a multidomain multitechnology network scenario, consisting of a 46–108 Gb/s variable-capacity OpenFlow-capable optical packet switching network and a programmable, flexi-grid elastic optical path network. --- paper_title: An impairment-aware virtual optical network composition mechanism for future Internet paper_content: In this paper, a novel Infrastructure as a Service architecture for future Internet enabled by optical network virtualization is proposed. Central to this architecture is a novel virtual optical network (VON) composition mechanism capable of taking physical layer impairments (PLIs) into account. The impact of PLIs on VON composition is investigated based on both analytical model of PLIs and industrial parameters. Furthermore, the impact of network topology on VON composition is evaluated. --- paper_title: Impairment-aware optical network virtualization in single-line-rate and mixed-line-rate WDM networks paper_content: Optical network virtualization enables network operators to compose and operate multiple independent and application-specific virtual optical networks (VONs) sharing a common physical infrastructure. 
To achieve this capability, the virtualization mechanism must guarantee isolation between coexisting VONs. In order to satisfy this fundamental requirement, the VON composition mechanism must take into account the impact of physical layer impairments (PLIs). In this paper we propose a new infrastructure as a service architecture utilizing optical network virtualization. We introduce novel PLI-aware VON composition algorithms suitable for single-line-rate (SLR) and mixed-line-rate (MLR) network scenarios. In order to assess the impact of PLIs and guarantee the isolation of multiple coexisting VONs, PLI assessment models for intra- and inter-VON impairments are proposed and adopted in the VON composition process for both SLR and MLR networks. In the SLR networks, the PLI-aware VON composition mechanisms with both heuristic and optimal (MILP) mapping methods are proposed. A replanning strategy is proposed for the MILP mapping method in order to increase its efficiency. In the MLR networks, a new virtual link mapping method suitable for the MLR network scenario and two line rate distribution methods are proposed. With the proposed PLI-aware VON composition methods, multiple coexisting and cost-effective VONs with guaranteed transmission quality can be dynamically composed. We evaluate and compare the performance of the proposed VON composition methods through extensive simulation studies with various network scenarios. --- paper_title: Adaptive IP/optical OFDM networking design paper_content: A new networking approach based on IP/optical OFDM technologies is proposed, providing an adaptive mechanism of bandwidth provisioning and pipe resizing for dynamic traffic flows. A comparison study is presented to demonstrate its advantages. --- paper_title: Cloudlets: bringing the cloud to the mobile user paper_content: Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application. --- paper_title: Survivable IP/MPLS-Over-WSON Multilayer Network Optimization paper_content: Network operators are facing the problem of dimensioning their networks for the expected huge IP traffic volumes while keeping constant or even reducing the connectivity prices. Therefore, new architectural solutions able to cope with the expected traffic increase in a more cost-effective way are needed. In this work, we study the survivable IP/multi-protocol label switching (MPLS) over wavelength switched optical network (WSON) multilayer network problem as a capital expenditure (CAPEX) minimization problem. 
Two network approaches providing survivability against optical links, IP/MPLS nodes, and opto-electronic port failures are compared: the classical overlay approach where two redundant IP/MPLS networks are deployed, and the new joint multilayer approach which provides the requested survivability through an orchestrated interlayer recovery scheme which minimizes the over-dimensioning of IP/MPLS nodes. Mathematical programming models are developed for both approaches. Solving these models, however, becomes impractical for realistic networks. In view of this, evolutionary heuristics based on the biased random-key genetic algorithm framework are also proposed. Exhaustive experiments on several reference network scenarios illustrate the effectiveness of the proposed approach in minimizing network CAPEX. --- paper_title: A converged network architecture for energy efficient mobile cloud computing paper_content: Mobile computation offloading has been identified as a key enabling technology to overcome the inherent processing power and storage constraints of mobile end devices. To satisfy the low-latency requirements of content-rich mobile applications, existing mobile cloud computing solutions allow mobile devices to access the required resources by accessing a nearby resource-rich cloudlet, suffering increased capital and operational expenditures. To address this issue, in this paper we propose an infrastructure and architectural approach based on the orchestrated planning and operation of Optical Data Center networks and Wireless Access networks. To this end, a novel formulation based on a multi-objective Non Linear Programming model is presented that considers energy efficient virtual infrastructure planning over the converged wireless, optical network interconnecting DCs with mobile devices, taking a holistic view of the infrastructure. Our modelling results identify trends and trade-offs related to end-to-end service delay, resource requirements and energy consumption levels of the infrastructure across the various technology domains. --- paper_title: Evaluation of Technology Options for Software-Defined Transceivers in Fixed WDM Grid versus Flexible WDM Grid Optical Transport Networks paper_content: Spectrum-efficient optical transmission with bitrates of 400 Gb/s and beyond can be achieved using flexible modulation with advanced DSPs. The technology options include modulation format, signal baud rate, number of subcarriers, and spectral bandwidth. A fine-granular spectral bandwidth requires a flexible WDM grid as recently defined by ITU-T. Transmission of a signal with multiple optical carriers, each potentially with their own set of modulation options, allows bandwidth-variable multi-flow transceivers. This can reduce the spectrum continuity constraint in the network. At the network layer, these new degrees of freedom create additional levels of complexity and constraints during network design, planning and operation. Which modulation constellation should be chosen for a new optical connection? What are the impacts on transmission reach, spectrum continuity constraints, and network utilization? Routing and spectrum assignment is becoming more complex, and the inevitable spectrum fragmentation reduces the spectral efficiency gained though efficient modulation schemes. Dynamic spectrum defragmentation requires transceivers supporting hitless defragmentation, or it is traffic affecting for the reallocated signals. 
The sheer number of technology options will increase the operational complexity of the network. In this paper, we give an overview of technology options for software-defined transceivers for fixed-grid and flex-grid optical transport networks, and their impact for network planning and operation. We evaluate the spectral network efficiency, and operational complexity of selected technology options, such as multi-carrier transmission in fixed WDM grid, bandwidth-variable transponders with multiple subcarriers in a flexible WDM grid, and fully flexible multi-flow transponders. The evaluation is done based on a network planning study on a national European and US reference network. Based on the evaluation result, a guideline is given for a technology strategy with a good balance between flexibility, spectrum efficiency, and network cost. --- paper_title: PGA: Using Graphs to Express and Automatically Reconcile Network Policies paper_content: Software Defined Networking (SDN) and cloud automation enable a large number of diverse parties (network operators, application admins, tenants/end-users) and control programs (SDN Apps, network services) to generate network policies independently and dynamically. Yet existing policy abstractions and frameworks do not support natural expression and automatic composition of high-level policies from diverse sources. We tackle the open problem of automatic, correct and fast composition of multiple independently specified network policies. We first develop a high-level Policy Graph Abstraction (PGA) that allows network policies to be expressed simply and independently, and leverage the graph structure to detect and resolve policy conflicts efficiently. Besides supporting ACL policies, PGA also models and composes service chaining policies, i.e., the sequence of middleboxes to be traversed, by merging multiple service chain requirements into conflict-free composed chains. Our system validation using a large enterprise network policy dataset demonstrates practical composition times even for very large inputs, with only sub-millisecond runtime latencies. --- paper_title: An intent-based approach for network virtualization paper_content: Virtualizing resources for easy pooling and accounting, as well as for rapid provisioning and release, is essential for the effective management of modern data centers. Although the compute and storage resources can be virtualized quite effectively, a comprehensive solution for network virtualization has yet to be developed. Our analysis of the requirements for a comprehensive network virtualization solution identified two complimentary steps of ultimate importance. One is specifying the network-related requirements, another is carrying out the requirements of multiple independent tenants in an efficient and scalable manner. We introduce a novel intent-based modeling abstraction for specifying the network as a policy governed service and present an efficient network virtualization architecture, Distributed Overlay Virtual Ethernet network (DOVE), realizing the proposed abstraction. We describe the working prototype of DOVE architecture and report the results of the extensive simulation-based performance study, demonstrating the scalability and the efficiency of the solution. --- paper_title: InSeRt An Intent-Based Service Request API for Service Exposure in Next Generation Networks paper_content: Modern telecommunication networks and classical roles of operators are subject to fundamental change. 
Many network operators are currently seeking new sources to generate revenue by exposing network capabilities to 3rd party service providers. At the same time we can observe that applications on the World Wide Web (WWW) are becoming more mature in terms of the definition of APIs that are offered towards other services. The combinations of those services are commonly referred to as Web 2.0 mash-ups. This report describes our approach to prototype a policy-based service broker function for Next Generation Networks (NGN)-based telecommunications service delivery platforms to provide flexible service exposure anchor points for service integration into so-called mash-ups. The defined exposure API uses Intent-based request constructs to allow a description of services in business terms, i.e. intentions and strategies to achieve them, and to organize their publication, search and composition on the basis of these descriptions. --- paper_title: Future access architecture: Software-defined access networking paper_content: In the cloud era, booming broadband access, and the new services that it facilitates, are posing greater demands on access technologies. Legacy access technologies are facing lots of challenges: Complexity for multi-point access management; Scalability and energy efficiency in remote nodes; Painfulness of choosing a technology; Difficulty of access network wholesale. To address these challenges, this paper discusses Software-defined Access Networks (SDAN) as the next-gen architecture for access networking. With its simplified access nodes, flexible and programmable line technologies, and cloud home gateway & services, SDAN helps operators construct a simple, agile, elastic, and value-added access network. --- paper_title: Security in Software Defined Networks: A Survey paper_content: Software defined networking (SDN) decouples the network control and data planes. The network intelligence and state are logically centralized and the underlying network infrastructure is abstracted from applications. SDN enhances network security by means of global visibility of the network state, where a conflict can be easily resolved from the logically centralized control plane. Hence, the SDN architecture empowers networks to actively monitor traffic and diagnose threats to facilitate network forensics, security policy alteration, and security service insertion. The separation of the control and data planes, however, opens security challenges, such as man-in-the-middle attacks, denial of service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to the application, control, and data planes of SDN. The security platforms that secure each of the planes are described, followed by various security approaches for network-wide security in SDN. SDN security is analyzed according to security dimensions of the ITU-T recommendation, as well as by the costs of security solutions. In a nutshell, this paper highlights the present and future security challenges in SDN and future directions for secure SDN. --- paper_title: Novel optical access network virtualization and dynamic resource allocation algorithms for the Internet of Things paper_content: Novel optical access network virtualization and resource allocation algorithms for Internet-of-Things support are proposed and implemented on a real-time SDN-controller platform. 30–50% gains in served request number, traffic prioritization, and revenue are demonstrated.
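The intent- and policy-based abstractions surveyed above (PGA policy graphs, the DOVE intent model, the InSeRt exposure API) share one idea: a tenant states a high-level connectivity requirement and the controller compiles it into device-level rules. The following sketch shows that compilation idea with a deliberately tiny data model; all field names and values are hypothetical and do not follow any of the cited systems' APIs.

```python
# Minimal intent-based request sketch: a declarative connectivity intent is
# "compiled" into per-endpoint allow rules. Field names are hypothetical and
# do not mirror any specific controller's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Intent:
    src_group: str                          # e.g. a tenant's web tier
    dst_group: str                          # e.g. the tenant's database tier
    allowed_ports: List[int] = field(default_factory=list)
    bandwidth_mbps: int = 0                 # 0 means best effort

def compile_intent(intent, group_members):
    """Expand a group-level intent into concrete (src, dst, port) allow rules."""
    rules = []
    for src in group_members[intent.src_group]:
        for dst in group_members[intent.dst_group]:
            for port in intent.allowed_ports:
                rules.append({"src": src, "dst": dst, "dport": port,
                              "min_bw_mbps": intent.bandwidth_mbps})
    return rules

# Usage with made-up group membership.
members = {"web": ["10.0.1.1", "10.0.1.2"], "db": ["10.0.2.1"]}
intent = Intent("web", "db", allowed_ports=[3306], bandwidth_mbps=100)
for rule in compile_intent(intent, members):
    print(rule)
```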
--- paper_title: A Flexible OpenFlow-Controller Benchmark paper_content: Software defined networking (SDN) promises a way to more flexible networks that can adapt to changing demands. At the same time these networks should also benefit from simpler management mechanisms. This is achieved by moving the network control out of the forwarding devices to purpose-tailored software applications on top of a "networking operating system". Currently, the most notable representative of this approach is OpenFlow. In the OpenFlow architecture the operating system is represented by the OpenFlow controller. As the key component of the OpenFlow ecosystem, the behavior and performance of the controller are significant for the entire network. Therefore, it is important to understand these influence factors when planning an OpenFlow-based SDN deployment. In this work, we introduce a tool to help achieve just that: a flexible OpenFlow controller benchmark. The benchmark creates a set of message-generating virtual switches, which can be configured independently from each other to emulate a certain scenario and also keep their own statistics. This way a granular controller performance analysis is possible. --- paper_title: OFCProbe: A platform-independent tool for OpenFlow controller analysis paper_content: The key component in the Software Defined Networking architecture is the controller or "networking operating system". The controller provides a platform for the operation of diverse network control and management applications. However, little is known about the stability and performance of current controller applications, which is a requirement for a smooth operation of the network. In the case of OpenFlow, currently the most popular realization of SDN, the controller is not specified by the standard. Its performance depends on the specific implementation. As a consequence, some controllers are more suitable for certain tasks than others. Choosing the right controller for a task requires a thorough analysis of the available candidates in terms of system behavior and performance. In this paper, we present the extended platform-independent and flexible OpenFlow controller performance analyzer "OFCProbe" as a follow-up to our previous work on "OFCBenchmark". The new tool features a scalable and modular architecture that allows a deep granular analysis of a controller's behavior and characteristics. It allows the emulation of virtual switches that each provide sophisticated statistics about different aspects of the controller's performance. The virtual switches are arrangeable into topologies to emulate different scenarios and traffic patterns. This way a detailed insight and deep analysis of possible bottlenecks concerning the controller performance or unexpected behavior is possible. Key features of the re-implementation are a more flexible, simulation-style packet generation system as well as Java Selector-based connection handling. In order to highlight the tool's features, we perform some experiments for the Nox and Floodlight controllers in different scenarios.
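The controller benchmarks above (the flexible OpenFlow controller benchmark and OFCProbe) share a measurement pattern: many emulated switches issue requests toward the controller and per-switch latency statistics are collected. The sketch below reproduces only that measurement pattern against a stand-in controller function; it does not speak the OpenFlow protocol and none of it is taken from the cited tools.

```python
# Controller-benchmark measurement sketch: emulated "switches" issue requests
# and record response latency. The controller here is a stand-in function with
# a fake processing delay; real tools exchange actual OpenFlow messages over TCP.
import random
import statistics
import time

def fake_controller(request):
    """Stand-in for a controller handling a packet-in message."""
    time.sleep(random.uniform(0.0005, 0.002))  # simulated processing delay
    return {"switch": request["switch"], "action": "flow_mod"}

def run_switch(switch_id, n_requests):
    """One emulated switch: send n_requests and record per-request latency."""
    latencies = []
    for i in range(n_requests):
        start = time.perf_counter()
        fake_controller({"switch": switch_id, "pkt": i})
        latencies.append(time.perf_counter() - start)
    return latencies

# Emulate a few switches and report per-switch statistics.
for sw in range(3):
    lat = run_switch(sw, 50)
    p95 = sorted(lat)[int(0.95 * len(lat))]
    print(f"switch {sw}: mean {statistics.mean(lat) * 1000:.2f} ms, p95 {p95 * 1000:.2f} ms")
```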
--- paper_title: An Open Testing Framework for Next-Generation Openflow Switches paper_content: The deployment experience of OpenFlow support in production networks has highlighted variable limitations between network devices and vendors, while the recent integration of OpenFlow control abstractions in 10 GbE switches, increases further the performance requirements to support the switch control plane. This paper presents OFLOPS-Turbo, an effort to integrate OFLOPS, the OpenFlow switch evaluation platform, with OSNT, a hardware-accelerated traffic generation and capture system. ---
Title: Software Defined Optical Access Networks (SDOANs): A Comprehensive Survey
Section 1: INTRODUCTION
Description 1: This section introduces the motivation, scope, and structure of the survey.
Section 2: BACKGROUND AND RELATED SURVEYS
Description 2: This section provides background information and reviews related surveys on access networks, SDN, and network virtualization.
Section 3: SDOAN ARCHITECTURES
Description 3: This section presents a comprehensive review of studies focused on network architecture aspects of Software Defined Optical Access Networks (SDOANs).
Section 4: SDOAN PROTOCOLS
Description 4: This section reviews studies that focus on the network protocol aspects of SDOANs, classified by protocol layer.
Section 5: OPEN CHALLENGES AND FUTURE SDOAN RESEARCH DIRECTIONS
Description 5: This section outlines the overall cross-cutting open challenges and future research directions for SDOANs.
Section 6: CONCLUSION
Description 6: This section provides a summary of the survey findings and concludes the paper.
A review of statistical methods for prediction of proteolytic cleavage
14
--- paper_title: A new method for predicting signal sequence cleavage sites paper_content: A new method for identifying secretory signal sequences and for predicting the site of cleavage between a signal sequence and the mature exported protein is described. The predictive accuracy is estimated to be around 75-80% for both prokaryotic and eukaryotic proteins. --- paper_title: HIVcleave: a web-server for predicting human immunodeficiency virus protease cleavage sites in proteins. paper_content: According to the "distorted key theory" [K.C. Chou, Analytical Biochemistry, 233 (1996) 1–14], information on the cleavage sites of proteins by HIV (human immunodeficiency virus) protease is very useful for finding effective inhibitors against HIV, the culprit of AIDS (acquired immunodeficiency syndrome). To meet the increasing need in this regard, a web-server called HIVcleave was established at http://chou.med.harvard.edu/bioinf/HIV/. In this note we provide a step-by-step guide for how to use HIVcleave to identify the cleavage sites of a query protein sequence by HIV-1 and HIV-2 proteases, respectively. --- paper_title: Genetic variation in the gene encoding calpain-10 is associated with type 2 diabetes mellitus paper_content: Type 2 or non-insulin-dependent diabetes mellitus (NIDDM) is the most common form of diabetes worldwide, affecting approximately 4% of the world's adult population. It is multifactorial in origin with both genetic and environmental factors contributing to its development. A genome-wide screen for type 2 diabetes genes carried out in Mexican Americans localized a susceptibility gene, designated NIDDM1, to chromosome 2. Here we describe the positional cloning of a gene located in the NIDDM1 region that shows association with type 2 diabetes in Mexican Americans and a Northern European population from the Botnia region of Finland. This putative diabetes-susceptibility gene encodes a ubiquitously expressed member of the calpain-like cysteine protease family, calpain-10 (CAPN10). This finding suggests a novel pathway that may contribute to the development of type 2 diabetes. --- paper_title: Why Neural Networks Should Not Be Used for HIV-1 Protease Cleavage Site Prediction paper_content: Summary: Several papers have been published where nonlinear machine learning algorithms, e.g. artificial neural networks, support vector machines and decision trees, have been used to model the specificity of the HIV-1 protease and extract specificity rules. We show that the dataset used in these studies is linearly separable and that it is a misuse of nonlinear classifiers to apply them to this problem. The best solution on this dataset is achieved using a linear classifier like the simple perceptron or the linear support vector machine, and it is straightforward to extract rules from these linear models. We identify key residues in peptides that are efficiently cleaved by the HIV-1 protease and list the most prominent rules, relating them to experimental results for the HIV-1 protease. Motivation: Understanding HIV-1 protease specificity is important when designing HIV inhibitors and several different machine learning algorithms have been applied to the problem. However, little progress has been made in understanding the specificity because nonlinear and overly complex models have been used.
Results: We show that the problem is much easier than what has previously been reported and that linear classifiers like the simple perceptron or linear support vector machines are at least as good predictors as nonlinear algorithms. We also show how sets of specificity rules can be generated from the resulting linear classifiers. Availability: The datasets used are available at http://www.hh.se/staff/bioinf/ --- paper_title: Calpain-dependent proteolysis of NF2 protein: Involvement in schwannomas and meningiomas paper_content: The neurofibromatosis type 2 (NF2) protein, known as merlin or schwannomin, is a tumor suppressor, and the NF2 gene has been found to be mutated in the majority of schwannomas and meningiomas, including both sporadically occurring and familial NF2 cases. Although the development of these tumors depends on the loss of merlin, the presence of tumors lacking detectable NF2 mutations suggests different mechanisms for inactivating merlin. Recent studies have demonstrated cleavage of merlin by calpain, a calcium-dependent neutral cysteine protease, and marked activation of the calpain system resulting in the degradation of merlin in these tumors. Increased turnover of merlin by calpain in some schwannomas and meningiomas exemplifies tumorigenesis linked to the calpain-mediated proteolytic pathway. --- paper_title: Calpains and Their Multiple Roles in Diabetes Mellitus paper_content: Type 2 diabetes mellitus (T2DM) can lead to death without treatment and it has been predicted that the condition will affect 215 million people worldwide by 2010. T2DM is a multifactorial disorder whose precise genetic causes and biochemical defects have not been fully elucidated, but at both levels, calpains appear to play a role. Positional cloning studies mapped T2DM susceptibility to CAPN10, the gene encoding the intracellular cysteine protease, calpain 10. Further studies have shown a number of noncoding polymorphisms in CAPN10 to be functionally associated with T2DM, while the identification of coding polymorphisms suggested that mutant calpain 10 proteins may also contribute to the disease. Here we review recent studies which, in addition to the latter enzyme, have linked calpain 5, calpain 3 and its splice variants, calpain 2 and calpain 1 to T2DM-related metabolic pathways along with T2DM-associated phenotypes, such as obesity and impaired insulin secretion, and T2DM-related complications, such as epithelial dysfunction and diabetic cataract. --- paper_title: Calpain proteases in cell adhesion and motility. paper_content: Cell adhesion and its role during cell spreading and motility are central to normal development and homeostasis, including its effects on immune response and wound repair and tissue regeneration. Disruption of cell adhesion impacts not only the healing process but promotes tumor invasion and metastasis. A family of intracellular, limited proteases, the calpains, has recently been shown to be a key molecular control point in attachment of cells to the surrounding matrix. Herein, the two main and ubiquitously expressed calpain isoforms will be introduced as to their modes of regulation and the current status of research will be discussed as to how these calpains might function in the biophysical process of adhesion and biological cellular responses of spreading and motility.
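The linear-separability result above suggests that an orthogonal (one-hot) encoding of the octamer around the scissile bond combined with a plain linear classifier is enough on that dataset. The sketch below shows such an encoding and a simple perceptron trained on two placeholder peptides (SQNYPIVQ corresponds to the frequently cited MA/CA junction octamer; the negative example is made up); it illustrates the encoding-plus-linear-model idea and is not a reimplementation of the cited work.

```python
# Sketch: one-hot ("orthogonal") encoding of octamer peptides plus a simple
# perceptron, in the spirit of linear models for HIV-1 protease cleavage
# prediction. The two training peptides below are placeholders, not a dataset.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    """Encode an 8-residue peptide as a 160-dimensional binary vector."""
    vec = []
    for aa in peptide:
        vec.extend(1.0 if aa == a else 0.0 for a in AMINO_ACIDS)
    return vec

def train_perceptron(data, epochs=50, lr=0.1):
    """data: list of (octamer, label) with label +1 (cleaved) or -1 (not cleaved)."""
    dim = len(one_hot(data[0][0]))
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for pep, label in data:
            x = one_hot(pep)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != label:                      # classic perceptron update
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
                b += lr * label
    return w, b

# Toy usage with placeholder octamers.
toy = [("SQNYPIVQ", +1), ("AAAAAAAA", -1)]
w, b = train_perceptron(toy)
x = one_hot("SQNYPIVQ")
print(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)  # -> True
```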
--- paper_title: Calpain Cleavage Prediction Using Multiple Kernel Learning paper_content: Calpain, an intracellular Ca²⁺-dependent cysteine protease, is known to play a role in a wide range of metabolic pathways through limited proteolysis of its substrates. However, only a limited number of these substrates are currently known, with the exact mechanism of substrate recognition and cleavage by calpain still largely unknown. While previous research has successfully applied standard machine-learning algorithms to accurately predict substrate cleavage by other similar types of proteases, their approach does not extend well to calpain, possibly due to its particular mode of proteolytic action and limited amount of experimental data. Through the use of Multiple Kernel Learning, a recent extension to the classic Support Vector Machine framework, we were able to train complex models based on rich, heterogeneous feature sets, leading to significantly improved prediction quality (6% over the highest AUC score produced by state-of-the-art methods). In addition to producing a stronger machine-learning model for the prediction of calpain cleavage, we were able to highlight the importance and role of each feature of substrate sequences in defining specificity: primary sequence, secondary structure and solvent accessibility. Most notably, we showed there existed significant specificity differences across calpain sub-types, despite previous assumptions to the contrary. Prediction accuracy was further successfully validated using, as an unbiased test set, mutated sequences of calpastatin (endogenous inhibitor of calpain) modified to no longer block calpain's proteolytic action. An online implementation of our prediction tool is available at http://calpain.org. --- paper_title: Neural network prediction of the HIV-1 protease cleavage sites. paper_content: A back propagation neural network method has been developed to study the pattern of polypeptides that can be cleaved by the HIV-1 protease. This method can incorporate many characteristics of the peptides, such as hydrophobicity, β-sheet and α-helix propensities. Mutations can also be applied to probe the most important factors that influence the cleavage. --- paper_title: Redesigning trypsin: alteration of substrate specificity. paper_content: A general method for modifying eukaryotic genes by site-specific mutagenesis and subsequent expression in mammalian cells was developed to study the relation between structure and function of the proteolytic enzyme trypsin. Glycine residues at positions 216 and 226 in the binding cavity of trypsin were replaced by alanine residues, resulting in three trypsin mutants. Computer graphic analysis suggested that these substitutions would differentially affect arginine and lysine substrate binding of the enzyme. Although the mutant enzymes were reduced in catalytic rate, they showed enhanced substrate specificity relative to the native enzyme. This increased specificity was achieved by the unexpected differential effects on the catalytic activity toward arginine and lysine substrates. Mutants containing alanine at position 226 exhibited an altered conformation that may be converted to a trypsin-like structure upon binding of a substrate analog. --- paper_title: Mutations in the proteolytic enzyme calpain 3 cause limb-girdle muscular dystrophy type 2A paper_content: Limb-girdle muscular dystrophies (LGMDs) are a group of inherited diseases whose genetic etiology has yet to be elucidated.
The autosomal recessive forms (LGMD2) constitute a genetically heterogeneous group, with LGMD2A mapping to chromosome 15q15.1–q21.1. The gene encoding the muscle-specific calcium-activated neutral protease 3 (CANP3) large subunit is located in this region. This cysteine protease belongs to the family of intracellular calpains. Fifteen nonsense, splice site, frameshift, or missense calpain mutations cosegregate with the disease in LGMD2A families, six of which were found among La Reunion island patients. A digenic inheritance model is proposed to account for the unexpected presence of multiple independent mutations in this small inbred population. Finally, these results demonstrate an enzymatic rather than a structural protein defect causing a muscular dystrophy, a defect that may have regulatory consequences, perhaps in signal transduction. --- paper_title: FUNCTIONAL DEFECTS OF A MUSCLE-SPECIFIC CALPAIN, P94, CAUSED BY MUTATIONS ASSOCIATED WITH LIMB-GIRDLE MUSCULAR DYSTROPHY TYPE 2A paper_content: p94 (calpain3), a muscle-specific member of the calpain family, has been shown to be responsible for limb-girdle muscular dystrophy type 2A (LGMD2A), a form of autosomal recessive and progressive neuromuscular disorder. To elucidate the molecular mechanism of LGMD2A, we constructed nine p94 missense point mutants found in LGMD2A and analyzed their p94 unique properties. All mutants completely or almost completely lose the proteolytic activity against a potential substrate, fodrin. However, some of the mutants still possess autolytic activity and/or connectin/titin binding ability, indicating these properties are not necessary for the LGMD2A phenotypes. These results provide strong evidence that LGMD2A results from the loss of proteolysis of substrates by p94, suggesting a novel molecular mechanism leading to muscular dystrophies. --- paper_title: Prediction of caspase cleavage sites using Bayesian bio-basis function neural networks paper_content: Motivation: Apoptosis has drawn the attention of researchers because of its importance in treating some diseases through finding a proper way to block or slow down the apoptosis process. Having understood that caspase cleavage is the key to apoptosis, we find novel methods or algorithms are essential for studying the specificity of caspase cleavage activity, and this helps effective drug design. As bio-basis function neural networks have proven to outperform some conventional neural learning algorithms, there is a motivation, in this study, to investigate the application of bio-basis function neural networks for the prediction of caspase cleavage sites. Results: Thirteen protein sequences with experimentally determined caspase cleavage sites were downloaded from NCBI. Bayesian bio-basis function neural networks are investigated and the comparisons with single-layer perceptrons, multilayer perceptrons, the original bio-basis function neural networks and support vector machines are given. The impact of the sliding window size used to generate sub-sequences for modelling on prediction accuracy is studied. The results show that the Bayesian bio-basis function neural network with two Gaussian distributions for model parameters (weights) performed the best and the highest prediction accuracy is 97.15 ± 1.13%. Availability: The package of Bayesian bio-basis function neural network can be obtained by request to the author.
--- paper_title: SVM-based prediction of caspase substrate cleavage sites paper_content: Background: Caspases belong to a class of cysteine proteases which function as critical effectors in apoptosis and inflammation by cleaving substrates immediately after unique sites. Prediction of such cleavage sites will complement structural and functional studies on substrate cleavage as well as the discovery of new substrates. Recently, different computational methods have been developed to predict the cleavage sites of caspase substrates with varying degrees of success. As the support vector machines (SVM) algorithm has been shown to be useful in several biological classification problems, we have implemented an SVM-based method to investigate its applicability to this domain. Results: A set of unique caspase substrate cleavage sites was obtained from the literature and used for evaluating the SVM method. Datasets containing (i) the tetrapeptide cleavage sites, (ii) the tetrapeptide cleavage sites augmented by two adjacent residues, the P1' and P2' amino acids, and (iii) the tetrapeptide cleavage sites with ten additional upstream and downstream flanking sequences (where available) were tested. The SVM method achieved an accuracy ranging from 81.25% to 97.92% on independent test sets. The SVM method successfully predicted the cleavage of a novel caspase substrate and its mutants. Conclusion: This study presents an SVM approach for predicting caspase substrate cleavage sites based on the cleavage sites and the downstream and upstream flanking sequences. The method shows an improvement over existing methods and may be useful for predicting hitherto undiscovered cleavage sites. --- paper_title: Substrate profiling of cysteine proteases using a combinatorial peptide library identifies functionally unique specificities. paper_content: The substrate specificities of papain-like cysteine proteases (clan CA, family C1) papain, bromelain, and human cathepsins L, V, K, S, F, B, and five proteases of parasitic origin were studied using a completely diversified positional scanning synthetic combinatorial library. A bifunctional coumarin fluorophore was used that facilitated synthesis of the library and individual peptide substrates. The library has a total of 160,000 tetrapeptide substrate sequences completely randomizing each of the P1, P2, P3, and P4 positions with 20 amino acids. A microtiter plate assay format permitted a rapid determination of the specificity profile of each enzyme. Individual peptide substrates were then synthesized and tested for a quantitative determination of the specificity of the human cathepsins. Despite the conserved three-dimensional structure and similar substrate specificity of the enzymes studied, distinct amino acid preferences that differentiate each enzyme were identified. The specificities of cathepsins K and S partially match the cleavage site sequences in their physiological substrates. Capitalizing on its unique preference for proline and glycine at the P2 and P3 positions, respectively, selective substrates and a substrate-based inhibitor were developed for cathepsin K. A cluster analysis of the proteases based on the complete specificity profile provided a functional characterization distinct from standard sequence analysis. This approach provides useful information for developing selective chemical probes to study protease-related pathologies and physiologies.
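The window-based predictors above (Bayesian bio-basis function networks and the SVM method using tetrapeptides plus flanking residues) start from the same preprocessing step: treat every Asp as a candidate P1 position, cut a fixed window around it, and encode the window numerically for a classifier. A minimal sketch of that step follows; the window size, padding symbol and example sequence are illustrative choices, not the settings used in the cited studies.

```python
# Sliding-window candidate extraction sketch for caspase cleavage prediction:
# every Asp (D) is treated as a candidate P1 residue and the surrounding window
# is one-hot encoded for a downstream classifier (e.g. an SVM). The example
# sequence below is a placeholder, not a curated substrate.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def windows_around_asp(seq, left=4, right=2):
    """Yield (position, window) for each D, padding sequence ends with 'X'."""
    padded = "X" * left + seq + "X" * right
    for i, aa in enumerate(seq):
        if aa == "D":
            j = i + left                         # index of this D in padded
            yield i, padded[j - left : j + right + 1]

def encode(window):
    """One-hot encode a window; the pad symbol 'X' maps to an all-zero column."""
    vec = []
    for aa in window:
        vec.extend(1.0 if aa == a else 0.0 for a in AMINO_ACIDS)
    return vec

# Usage: the D at index 7 is the P1 of a DEVD|G motif, the canonical
# caspase-3 recognition pattern; the other Asp residues are decoys.
seq = "MKTADEVDGIDALD"
for pos, win in windows_around_asp(seq):
    print(pos, win, len(encode(win)))
```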
--- paper_title: Artificial neural network model for predicting HIV protease cleavage sites in protein paper_content: Knowledge of the polyprotein cleavage sites by HIV protease will refine our understanding of its specificity, and the information thus acquired will be useful for designing specific and efficient HIV protease inhibitors. The search for inhibitors of HIV protease will be greatly expedited if one can find an accurate, robust, and rapid method for predicting the cleavage sites in proteins by HIV protease. In this paper, Kohonen's self-organization model, which uses typical artificial neural networks, is applied to predict the cleavability of oligopeptides by proteases with multiple and extended specificity subsites. We selected HIV-1 protease as the subject of study. We chose 299 oligopeptides for the training set, and another 63 oligopeptides for the test set. Because of its high rate of correct prediction (58/63 = 92.06%) and stronger fault-tolerant ability, the neural network method should be a useful technique for finding effective inhibitors of HIV protease, which is one of the targets in designing potential drugs against AIDS. The principle of the artificial neural network method can also be applied to analyzing the specificity of any multisubsite enzyme. --- paper_title: CutDB: a proteolytic event database paper_content: Beyond the well-known role of proteolytic machinery in protein degradation and turnover, many specialized proteases play a key role in various regulatory processes. Thousands of highly specific proteolytic events are associated with normal and pathological conditions, including bacterial and viral infections. However, the information about individual proteolytic events is dispersed over multiple publications and is not easily available for large-scale analysis. CutDB is one of the first systematic efforts to build an easily accessible collection of documented proteolytic events for natural proteins in vivo or in vitro. A CutDB entry is defined by a unique combination of these three attributes: protease, protein substrate and cleavage site. Currently, CutDB integrates 3070 proteolytic events for 470 different proteases captured from public archives (such as MEROPS and HPRD) and publications. CutDB supports various types of data searches and displays, including clickable network diagrams. Most importantly, CutDB is a community annotation resource based on a Wikipedia approach, providing a convenient user interface to input new data online. A recent contribution of 568 proteolytic events by several experts in the field of matrix metallopeptidases suggests that this approach will significantly accelerate the development of CutDB content. CutDB is publicly available at http://cutdb.burnham.org. --- paper_title: MEROPS: the peptidase database paper_content: Peptidases (proteolytic enzymes) are of great relevance to biology, medicine and biotechnology. This practical importance creates a need for an integrated source of information about them, and also about their natural inhibitors. The MEROPS database (http://merops.sanger.ac.uk) aims to fill this need. The organizational principle of the database is a hierarchical classification in which homologous sets of the proteins of interest are grouped in families and the homologous families are grouped in clans. Each peptidase, family and clan has a unique identifier. 
The database has recently been expanded to include the protein inhibitors of peptidases, and these are classified in much the same way as the peptidases. Forms of information recently added include new links to other databases, summary alignments for peptidase clans, displays to show the distribution of peptidases and inhibitors among organisms, substrate cleavage sites and indexes for expressed sequence tag libraries containing peptidases. A new way of making hyperlinks to the database has been devised and a BlastP search of our library of peptidase and inhibitor sequences has been added. --- paper_title: 202 CaMPDB: A RESOURCE FOR CALPAIN AND MODULATORY PROTEOLYSIS paper_content: While the importance of modulatory proteolysis in research has steadily increased, knowledge on this process has remained largely disorganized, with the nature and role of entities composing modulatory proteolysis still uncertain. We built CaMPDB, a resource on modulatory proteolysis, with a focus on calpain, a well-studied intracellular protease which regulates substrate functions by proteolytic processing. CaMPDB contains sequences of calpains, substrates and inhibitors as well as substrate cleavage sites, collected from the literature. Some cleavage efficiencies were evaluated by biochemical experiments and a cleavage site prediction tool is provided to assist biologists in understanding calpain-mediated cellular processes. CaMPDB is freely accessible at http://calpain.org. --- paper_title: A unique specificity of a calcium activated neutral protease indicated in histone hydrolysis. paper_content: Calf thymus histones were found to be susceptible to a calcium-activated neutral protease [CANP: EC 3.4.22.17] which required a high concentration of calcium ions for its activity (mCANP). The susceptibilities of histones were in the order of relative degradation rate: H2B, H2A, and H3. The major peptide fragments released by CANP from H2A, H2B, and H3 were isolated and the cleavage sites were determined. Examination of amino acid sequences and environmental features around the cleavage site as well as kinetic analysis of the degradation process led us to the following conclusions about the mode of substrate recognition of mCANP: 1) The cleavage sites in histones could not be interpreted in terms of the primary structure around them. Thus, it seems unlikely that the specificity of CANP solely depends on its recognition of any specific amino acid residues or sequences. 2) The susceptible bonds were never located in the midst of either a hydrophobic or hydrophilic alignment of amino acid residues but in the vicinity of the boundary between hydrophilic and hydrophobic clusters. 3) Once a peptide fragment was generated by the proteolytic degradation, no further cleavage occurred even if the peptide still contained a bond corresponding to what was susceptible to CANP in an intact histone. This observation was interpreted to mean that CANP may recognize a certain higher order structure of its substrates. --- paper_title: Predicting the secondary structure of globular proteins using neural networks models paper_content: We present a new method for predicting the secondary structure of globular proteins based on non-linear neural network models. Network models learn from existing protein structures how to predict the secondary structure of local sequences of amino acids. 
The average success rate of our method on a testing set of proteins non-homologous with the corresponding training set was 64.3% on three types of secondary structure (alpha-helix, beta-sheet, and coil), with correlation coefficients of Cα = 0.41, Cβ = 0.31 and Ccoil = 0.41. These quality indices are all higher than those of previous methods. The prediction accuracy for the first 25 residues of the N-terminal sequence was significantly better. We conclude from computational experiments on real and artificial structures that no method based solely on local information in the protein sequence is likely to produce significantly better results for non-homologous proteins. The performance of our method on homologous proteins is much better than for non-homologous proteins, but is not as good as simply assuming that homologous sequences have identical structures. --- paper_title: Bio-support vector machines for computational proteomics paper_content: Motivation: One of the most important issues in computational proteomics is to produce a prediction model for the classification or annotation of biological function of novel protein sequences. In order to improve the prediction accuracy, much attention has been paid to improving the performance of the algorithms used, while little has been done to solve the fundamental issue, namely, amino acid encoding, as most existing pattern recognition algorithms are unable to recognize amino acids in protein sequences. Importantly, the most commonly used amino acid encoding method has a flaw that leads to large computational cost and recognition bias. Results: By replacing kernel functions of support vector machines (SVMs) with amino acid similarity measurement matrices, we have modified SVMs, a new type of pattern recognition algorithm for analysing protein sequences, particularly for proteolytic cleavage site prediction. We refer to the modified SVMs as bio-support vector machines. When applied to the prediction of HIV protease cleavage sites, the new method has shown a remarkable advantage in reducing the model complexity and enhancing the model robustness. --- paper_title: Applying support vector machines to imbalanced datasets paper_content: Support Vector Machines (SVM) have been extensively studied and have shown remarkable success in many applications. However, the success of SVM is very limited when it is applied to the problem of learning from imbalanced datasets in which negative instances heavily outnumber the positive instances (e.g. in gene profiling and detecting credit card fraud). This paper discusses the factors behind this failure and explains why the common strategy of undersampling the training data may not be the best choice for SVM. We then propose an algorithm for overcoming these problems which is based on a variant of the SMOTE algorithm by Chawla et al., combined with Veropoulos et al.'s different error costs algorithm. We compare the performance of our algorithm against these two algorithms, along with undersampling and regular SVM, and show that our algorithm outperforms all of them. --- paper_title: A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems paper_content: The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification.
However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification. --- paper_title: UniProtKB/Swiss-Prot paper_content: The Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI), and the Protein Information Resource (PIR) form the Universal Protein Resource (UniProt) consortium. Its main goal is to provide the scientific community with a central resource for protein sequences and functional information. The UniProt consortium maintains the UniProt KnowledgeBase (UniProtKB) and several supplementary databases including the UniProt Reference Clusters (UniRef) and the UniProt Archive (UniParc). (1) UniProtKB is a comprehensive protein sequence knowledgebase that consists of two sections: UniProtKB/Swiss-Prot, which contains manually annotated entries, and UniProtKB/TrEMBL, which contains computer-annotated entries. UniProtKB/Swiss-Prot entries contain information curated by biologists and provide users with cross-links to about 100 external databases and with access to additional information or tools. (2) The UniRef databases (UniRef100, UniRef90, and UniRef50) define clusters of protein sequences that share 100, 90, or 50% identity. (3) The UniParc database stores and maps all publicly available protein sequence data, including obsolete data excluded from UniProtKB. The UniProt databases can be accessed online (http://www.uniprot.org/) or downloaded in several formats (ftp://ftp.uniprot.org/pub). New releases are published every 2 weeks. The purpose of this chapter is to present a guided tour of a UniProtKB/Swiss-Prot entry, paying particular attention to the specificities of plant protein annotation. We will also present some of the tools and databases that are linked to each entry. --- paper_title: Determination of protease cleavage site motifs using mixture-based oriented peptide libraries paper_content: The number of known proteases is increasing at a tremendous rate as a consequence of genome sequencing projects. Although one can guess at the functions of these novel enzymes by considering sequence homology to known proteases, there is a need for new tools to rapidly provide functional information on large numbers of proteins. We describe a method for determining the cleavage site specificity of proteolytic enzymes that involves pooled sequencing of peptide library mixtures. The method was used to determine cleavage site motifs for six enzymes in the matrix metalloprotease (MMP) family. The results were validated by comparison with previous literature and by analyzing the cleavage of individually synthesized peptide substrates. The library data led us to identify the proteoglycan neurocan as a novel MMP-2 substrate. 
Our results indicate that a small set of libraries can be used to quickly profile an expanding protease family, providing information applicable to the design of inhibitors and to the identification of protein substrates. --- paper_title: CaSPredictor: a new computer-based tool for caspase substrate prediction paper_content: Motivation: In vitro studies have shown that the most remarkable catalytic features of caspases, a family of cysteine proteases, are their stringent specificity for Asp (D) in the S1 subsite and for at least four amino acids to the left of the scissile bond. However, there is little information about the substrate recognition patterns in vivo. The prediction and characterization of proteolytic cleavage sites in natural substrates could be useful for uncovering these structural relationships. Results: PEST-like sequences rich in the amino acids Ser (S), Thr (T), Pro (P), Glu or Asp (E/D), including Asn (N) and Gln (Q), are adjacent structural/sequential elements in the majority of cleavage site regions of the natural caspase substrates described in the literature, supporting their possible implication in substrate selection by caspases. We developed CaSPredictor, a software tool which incorporates a PEST-like index and position-dependent amino acid matrices for prediction of caspase cleavage sites in individual proteins and protein datasets. The program successfully predicted 81% (111/137) of the cleavage sites in experimentally verified caspase substrates not annotated in its internal data file. Its accuracy and confidence were estimated as 80% using ROC methodology. The program was much more efficient in predicting caspase substrates than the PeptideCutter and PEPS software. Finally, the program detected potential cleavage sites in the primary sequences of 1644 proteins in a dataset containing 9986 protein entries. Availability: Requests for software should be made to Dr Jose E. Belizario. Contact: [email protected] Supplementary information: Supplementary information is available for academic users at http://icb.usp.br/~farmaco/Jose/CaSpredictorfiles --- paper_title: The Folding Type of a Protein Is Relevant to the Amino Acid Composition paper_content: The folding types of 135 proteins, the three-dimensional structures of which are known, were analyzed in terms of amino acid composition. The amino acid composition of a protein was expressed as a point in a multidimensional space spanned by 20 axes, on which the corresponding contents of the 20 amino acids in the protein were represented. The distribution pattern of proteins in this composition space was examined in relation to five folding types: alpha, beta, alpha/beta, alpha + beta, and irregular. The results show that amino acid compositions of the alpha, beta, and alpha/beta types are located in different regions in the composition space, thus allowing distinct separation of proteins depending on the folding type. The points representing proteins of the alpha + beta and irregular types, however, are widely scattered in the space, and the regions they occupy overlap with those of the other folding types. A simple method utilizing the "distance" in the space was found to be convenient for classifying proteins into the five folding types. The assignment of folding type with this method gave an accuracy of 70%, measured as agreement with the experimental assignments.
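The composition-space classification just described (each protein a point in a 20-dimensional amino-acid-composition space, assigned to the folding type whose region lies nearest) can be sketched as a simple nearest-centroid rule. The sequences and labels below are invented toy examples; this is a simplified illustration, not the original procedure or dataset.

# Minimal sketch of classification in amino-acid-composition space:
# represent each protein by its 20-dimensional composition vector and
# assign the folding type of the nearest class centroid (Euclidean distance).
# Training sequences and labels below are invented toy examples.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(sequence):
    counts = np.array([sequence.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(sequence), 1)

def fit_centroids(sequences, labels):
    centroids = {}
    for fold in set(labels):
        vectors = [composition(s) for s, y in zip(sequences, labels) if y == fold]
        centroids[fold] = np.mean(vectors, axis=0)
    return centroids

def predict(sequence, centroids):
    x = composition(sequence)
    return min(centroids, key=lambda fold: np.linalg.norm(x - centroids[fold]))

if __name__ == "__main__":
    train_seqs = ["AAKLLEELKAAAKE", "VTVTVSTSYEVKV", "AEKLVDSTGKVLEA"]
    train_labels = ["alpha", "beta", "alpha/beta"]
    centroids = fit_centroids(train_seqs, train_labels)
    print(predict("KAAELLKKLAEEAK", centroids))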
--- paper_title: Using substitution probabilities to improve position-specific scoring matrices paper_content: Each column of amino acids in a multiple alignment of protein sequences can be represented as a vector of 20 amino acid counts. For alignment and searching applications, the count vector is an imperfect representation of a position, because the observed sequences are an incomplete sample of the full set of related sequences. One general solution to this problem is to model unobserved sequences by adding artificial 'pseudo-counts' to the observed counts. We introduce a simple method for computing pseudo-counts that combines the diversity observed in each alignment position with amino acid substitution probabilities. In extensive empirical tests, this position-based method out-performed other pseudo-count methods and was a substantial improvement over the traditional average score method used for constructing profiles. --- paper_title: A cumulative specificity model for proteases from human immunodeficiency virus types 1 and 2, inferred from statistical analysis of an extended substrate data base. paper_content: Statistical analysis of an expanded data base of regions in viral polyproteins and in non-viral proteins that are sensitive to hydrolysis by the protease from human immunodeficiency virus (HIV) type 1 has generated a model which characterizes the substrate specificity of this retroviral enzyme. The model leads to an algorithm for predicting protease-susceptible sites from primary structure. Amino acids in each of the sites from P4 to P4' are tabulated for 40 protein substrates, and the frequency of occurrence for each residue is compared to the natural abundance of that amino acid in a selected data set of globular proteins. The results suggest that the highest stringency for particular amino acid residues is at the P2, P1, and P2' positions of the substrate. The broad specificity of the HIV-1 protease appears to be a consequence of its being able to bind productively substrates in which interactions with only a few Pi or Pi' side-chains need be optimized. The analysis, extended to 22 protein segments cleaved by the HIV-2 protease, delineates marked differences in specificity from that of the HIV-1 enzyme. --- paper_title: Position-based sequence weights paper_content: Sequence weighting methods have been used to reduce redundancy and emphasize diversity in multiple sequence alignment and searching applications. Each of these methods is based on a notion of distance between a sequence and an ancestral or generalized sequence. We describe a different approach, which bases weights on the diversity observed at each position in the alignment, rather than on a sequence distance measure. These position-based weights make minimal assumptions, are simple to compute, and perform well in comprehensive evaluations. --- paper_title: A vector projection method for predicting the specificity of GalNAc-transferase paper_content: The specificity of UDP-Gal-NAc:polypeptide N-acetylgalactosaminytransferase (GalNAc-transferase) is consistent with the existence of an extended site composed of nine subsites, denoted by P4, P3, P2, P1, P0, P1′, P2′, P3′, and P4′, where the acceptor at P0 is being either Ser or Thr. 
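The position-based weighting scheme summarized above admits a compact statement: in each alignment column a sequence is credited 1/(r*s), where r is the number of distinct residue types in the column and s is the number of sequences sharing that sequence's residue, and the per-column credits are summed and normalized. A minimal sketch, using a toy alignment, follows.

# Minimal sketch of position-based sequence weights for a multiple alignment.
# In each column a sequence is credited 1/(r*s): r = number of distinct
# residues in the column, s = number of sequences sharing this sequence's
# residue. Column credits are summed per sequence and normalized to sum to 1.
# The alignment below is a toy example, not data from the papers above.
from collections import Counter

def position_based_weights(alignment):
    n_seq = len(alignment)
    n_col = len(alignment[0])
    weights = [0.0] * n_seq
    for col in range(n_col):
        column = [seq[col] for seq in alignment]
        counts = Counter(column)
        r = len(counts)
        for i, residue in enumerate(column):
            weights[i] += 1.0 / (r * counts[residue])
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    toy_alignment = ["GYVGS", "GFDGF", "GYDGF", "GYQGG"]
    for seq, w in zip(toy_alignment, position_based_weights(toy_alignment)):
        print(seq, round(w, 3))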
To predict whether a peptide will react with the enzyme to form a Ser- or Thr-conjugated glycopeptide, a vector projection method is proposed which uses a training set of amino acid sequences surrounding 90 Ser and 106 Thr O-glycosylation sites extracted from the National Biomedical Research Foundation Protein Database. The model postulates independent interactions of the 9 amino acid moieties with their respective binding sites. The high ratio of correct predictions vs. total predictions for the data in both the training and the testing sets indicates that the method is self-consistent and efficient. It provides a rapid means for predicting O-glycosylation and designing effective inhibitors of GalNAc-transferase. © 1995 Wiley-Liss, Inc. --- paper_title: Substrate profiling of cysteine proteases using a combinatorial peptide library identifies functionally unique specificities. paper_content: The substrate specificities of papain-like cysteine proteases (clan CA, family C1) papain, bromelain, and human cathepsins L, V, K, S, F, B, and five proteases of parasitic origin were studied using a completely diversified positional scanning synthetic combinatorial library. A bifunctional coumarin fluorophore was used that facilitated synthesis of the library and individual peptide substrates. The library has a total of 160,000 tetrapeptide substrate sequences completely randomizing each of the P1, P2, P3, and P4 positions with 20 amino acids. A microtiter plate assay format permitted a rapid determination of the specificity profile of each enzyme. Individual peptide substrates were then synthesized and tested for a quantitative determination of the specificity of the human cathepsins. Despite the conserved three-dimensional structure and similar substrate specificity of the enzymes studied, distinct amino acid preferences that differentiate each enzyme were identified. The specificities of cathepsins K and S partially match the cleavage site sequences in their physiological substrates. Capitalizing on its unique preference for proline and glycine at the P2 and P3 positions, respectively, selective substrates and a substrate-based inhibitor were developed for cathepsin K. A cluster analysis of the proteases based on the complete specificity profile provided a functional characterization distinct from standard sequence analysis. This approach provides useful information for developing selective chemical probes to study protease-related pathologies and physiologies. --- paper_title: Sequence Logos: A New Way to Display Consensus Sequences paper_content: A graphical method is presented for displaying the patterns in a set of aligned sequences. The characters representing the sequence are stacked on top of each other for each position in the aligned sequences. The height of each letter is made proportional to its frequency, and the letters are sorted so the most common one is on top. The height of the entire stack is then adjusted to signify the information content of the sequences at that position. From these 'sequence logos', one can determine not only the consensus sequence but also the relative frequency of bases and the information content (measured in bits) at every position in a site or sequence. The logo displays both significant residues and subtle sequence patterns. --- paper_title: Why Neural Networks Should Not Be Used for HIV-1 Protease Cleavage Site Prediction paper_content: Summary: Several papers have been published where nonlinear machine learning algorithms, e.g. 
artificial neural networks, support vector machines and decision trees, have been used to model the specificity of the HIV-1 protease and extract specificity rules. We show that the dataset used in these studies is linearly separable and that it is a misuse of nonlinear classifiers to apply them to this problem. The best solution on this dataset is achieved using a linear classifier like the simple perceptron or the linear support vector machine, and it is straightforward to extract rules from these linear models. We identify key residues in peptides that are efficiently cleaved by the HIV-1 protease and list the most prominent rules, relating them to experimental results for the HIV-1 protease. Motivation: Understanding HIV-1 protease specificity is important when designing HIV inhibitors and several different machine learning algorithms have been applied to the problem. However, little progress has been made in understanding the specificity because nonlinear and overly complex models have been used. Results: We show that the problem is much easier than what has previously been reported and that linear classifiers like the simple perceptron or linear support vector machines are at least as good predictors as nonlinear algorithms. We also show how sets of specificity rules can be generated from the resulting linear classifiers. Availability: The datasets used are available at http://www.hh.se/staff/bioinf/ --- paper_title: Prediction of proprotein convertase cleavage sites. paper_content: Many secretory proteins and peptides are synthesized as inactive precursors that in addition to signal peptide cleavage undergo post-translational processing to become biologically active polypeptides. Precursors are usually cleaved at sites composed of single or paired basic amino acid residues by members of the subtilisin/kexin-like proprotein convertase (PC) family. In mammals, seven members have been identified, with furin being the one first discovered and best characterized. Recently, the involvement of furin in diseases ranging from Alzheimer's disease and cancer to anthrax and Ebola fever has created additional focus on proprotein processing. We have developed a method for prediction of cleavage sites for PCs based on artificial neural networks. Two different types of neural networks have been constructed: a furin-specific network based on experimental results derived from the literature, and a general PC-specific network trained on data from the Swiss-Prot protein database. The method predicts cleavage sites in independent sequences with a sensitivity of 95% for the furin neural network and 62% for the general PC network. The ProP method is made publicly available at http://www.cbs.dtu.dk/services/ProP. --- paper_title: Approximation by superpositions of a sigmoidal function paper_content: In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity.
The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks. --- paper_title: Neural network studies. 1. Comparison of overfitting and overtraining paper_content: The application of feed forward back propagation artificial neural networks with one hidden layer (ANN) to perform the equivalent of multiple linear regression (MLR) has been examined using artificial structured data sets and real literature data. The predictive ability of the networks has been estimated using a training/ test set protocol. The results have shown advantages of ANN over MLR analysis. The ANNs do not require high order terms or indicator variables to establish complex structure-activity relationships. Overfitting does not have any influence on network prediction ability when overtraining is avoided by cross-validation. Application of ANN ensembles has allowed the avoidance of chance correlations and satisfactory predictions of new data have been obtained for a wide range of numbers of neurons in the hidden layer. --- paper_title: Bio-basis function neural network for prediction of protease cleavage sites in proteins paper_content: The prediction of protease cleavage sites in proteins is critical to effective drug design. One of the important issues in constructing an accurate and efficient predictor is how to present nonnumerical amino acids to a model effectively. As this issue has not yet been paid full attention and is closely related to model efficiency and accuracy, we present a novel neural learning algorithm aimed at improving the prediction accuracy and reducing the time involved in training. The algorithm is developed based on the conventional radial basis function neural networks (RBFNNs) and is referred to as a bio-basis function neural network (BBFNN). The basic principle is to replace the radial basis function used in RBFNNs by a novel bio-basis function. Each bio-basis is a feature dimension in a numerical feature space, to which a nonnumerical sequence space is mapped for analysis. The bio-basis function is designed using an amino acid mutation matrix verified in biology. Thus, the biological content in protein sequences can be maximally utilized for accurate modeling. Mutual information (MI) is used to select the most informative bio-bases and an ensemble method is used to enhance a decision-making process, hence, improving the prediction accuracy further. The algorithm has been successfully verified in two case studies, namely the prediction of Human Immunodeficiency Virus (HIV) protease cleavage sites and trypsin cleavage sites in proteins. --- paper_title: On the Sequential Determinants of Calpain Cleavage paper_content: Abstract The structural clues of substrate recognition by calpain are incompletely understood. In this study, 106 cleavage sites in substrate proteins compiled from the literature have been analyzed to dissect the signal for calpain cleavage and also to enable the design of an ideal calpain substrate and interfere with calpain action via site-directed mutagenesis. In general, our data underline the importance of the primary structure of the substrate around the scissile bond in the recognition process. Significant amino acid preferences were found to extend over 11 residues around the scissile bond, from P4 to P7′. In compliance with earlier data, preferred residues in the P2 position are Leu, Thr, and Val, and in P1 Lys, Tyr, and Arg. 
In position P1′, small hydrophilic residues, Ser and to a lesser extent Thr and Ala, occur most often. Pro dominates the region flanking the P2-P1′ segment, i.e. positions P3 and P2′-P4′; most notable is its occurrence 5.59 times above chance in P3′. Intriguingly, the segment C-terminal to the cleavage site resembles the consensus inhibitory region of calpastatin, the specific inhibitor of the enzyme. Further, the position of the scissile bond correlates with certain sequential attributes, such as secondary structure and PEST score, which, along with the amino acid preferences, suggests that calpain cleaves within rather disordered segments of proteins. The amino acid preferences were confirmed by site-directed mutagenesis of the autolysis sites of Drosophila calpain B; when amino acids at key positions were changed to less preferred ones, autolytic cleavage shifted to other, adjacent sites. Based on these preferences, a new fluorogenic calpain substrate, DABCYLTPLKSPPPSPR-EDANS, was designed and synthesized. In the case of μ- and m-calpain, this substrate is kinetically superior to commercially available ones, and it can be used for the in vivo assessment of the activity of these ubiquitous mammalian calpains. --- paper_title: Toward Computer-Based Cleavage Site Prediction of Cysteine Endopeptidases paper_content: Identification of relevant substrates is essential for elucidation of the in vivo functions of peptidases. The recent availability of the complete genome sequences of many eukaryotic organisms holds the promise of identifying specific peptidase substrates by systematic proteome analyses in combination with computer-based screening of genome databases. Currently available proteomics and bioinformatics tools are not sufficient for reliable endopeptidase substrate predictions. To address these shortcomings the bioinformatics tool 'PEPS' (Prediction of Endopeptidase Substrates) has been developed and is presented here. PEPS uses individual rule-based endopeptidase cleavage site scoring matrices (CSSM). The efficiency of PEPS in predicting putative caspase 3, cathepsin B and cathepsin L cleavage sites is demonstrated in comparison to established algorithms. Mortalin, a member of the heat shock protein family HSP70, was identified by PEPS as a putative cathepsin L substrate. Comparative proteome analyses of cathepsin L-deficient and wild-type mouse fibroblasts showed that mortalin is enriched in the absence of cathepsin L. These results indicate that CSSM/PEPS can correctly predict relevant peptidase substrates. --- paper_title: Neuro-fuzzy Prediction of Biological Activity and Rule Extraction for HIV-1 Protease Inhibitors paper_content: A fuzzy neural network (FNN) and multiple linear regression (MLR) were used to predict the biological activities of 26 newly designed potential HIV-1 protease inhibitory compounds. Molecular descriptors of 151 known inhibitors were used to train and test the FNN and to develop MLR models. The predictive ability of these two models was investigated and compared. We found the predictive ability of the FNN to be generally superior to that of MLR. The fuzzy IF/THEN rules were extracted from the trained network. These rules map chemical structure descriptors to predicted inhibitory values. The obtained rules can be used to analyze the influence of descriptors. Our results indicate that FNN and fuzzy IF/THEN rules are powerful modeling tools for QSAR studies.
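Cleavage-site scoring matrices of the general kind used by PEPS, and by the position-specific matrix methods cited earlier in this section, can be sketched as position-specific log-odds scores estimated from aligned cleavage-site windows. The sketch below uses a uniform background, a simple pseudo-count and invented windows; it illustrates the idea and is not the PEPS CSSM algorithm itself.

# Minimal sketch of a position-specific scoring matrix (PSSM) for aligned
# cleavage-site windows: log-odds of observed residue frequencies (with a
# small pseudo-count) against a uniform background. Candidate windows are
# scored by summing the per-position log-odds. The windows below are toy
# examples, not curated cleavage sites.
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
BACKGROUND = 1.0 / len(AMINO_ACIDS)

def build_pssm(windows, pseudo=1.0):
    length = len(windows[0])
    pssm = []
    for pos in range(length):
        counts = Counter(w[pos] for w in windows)
        total = len(windows) + pseudo * len(AMINO_ACIDS)
        column = {aa: math.log(((counts[aa] + pseudo) / total) / BACKGROUND)
                  for aa in AMINO_ACIDS}
        pssm.append(column)
    return pssm

def score(window, pssm):
    return sum(col.get(aa, 0.0) for aa, col in zip(window, pssm))

if __name__ == "__main__":
    cleavage_windows = ["PLKSPPPS", "TLKSAPES", "PLTSPPKS", "ALKSPLPS"]
    pssm = build_pssm(cleavage_windows)
    print(round(score("PLKSPPPS", pssm), 2), round(score("WWWWCCCC", pssm), 2))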
--- paper_title: Pripper: prediction of caspase cleavage sites from whole proteomes paper_content: Background: Caspases are a family of proteases that have central functions in programmed cell death (apoptosis) and inflammation. Caspases mediate their effects through aspartate-specific cleavage of their target proteins, and at present almost 400 caspase substrates are known. There are several methods developed to predict caspase cleavage sites from individual proteins, but currently none of them can be used to predict caspase cleavage sites from multiple proteins or entire proteomes, or to use several classifiers in combination. The possibility to create a database of predicted caspase cleavage products for the whole genome could significantly aid in identifying novel caspase targets from tandem mass spectrometry based proteomic experiments. Results: Three different pattern recognition classifiers were developed for predicting caspase cleavage sites from protein sequences. Evaluation of the classifiers with quality measures indicated that all three classifiers performed well in predicting caspase cleavage sites, and when combining different classifiers the accuracy increased further. A new tool, Pripper, was developed to utilize the classifiers and predict the caspase cut sites from an arbitrary number of input sequences. A database was constructed with the developed tool, and it was used to identify caspase target proteins from tandem mass spectrometry data from two different proteomic experiments. Both known caspase cleavage products as well as novel cleavage products were identified using the database, demonstrating the usefulness of the tool. Pripper is not restricted to predicting only caspase cut sites, but gives the possibility to scan protein sequences for any given motif(s) and predict cut sites once a suitable cut site prediction model for any other protease has been developed. Pripper is freely available and can be downloaded from http://users.utu.fi/mijopi/Pripper. Conclusions: We have developed Pripper, a tool for reading an arbitrary number of proteins in FASTA format, predicting their caspase cleavage sites and outputting the cleaved sequences to a new FASTA format sequence file. We show that Pripper is a valuable tool in identifying novel caspase target proteins from modern proteomics experiments. --- paper_title: C4.5: Programs for Machine Learning paper_content: From the Publisher: Classifier systems play a major role in machine learning and knowledge-based systems, and Ross Quinlan's work on ID3 and C4.5 is widely acknowledged to have made some of the most significant contributions to their development. This book is a complete guide to the C4.5 system as implemented in C for the UNIX environment. It contains a comprehensive guide to the system's use, the source code (about 8,800 lines), and implementation notes. The source code and sample datasets are also available on a 3.5-inch floppy diskette for a Sun workstation. C4.5 starts with large sets of cases belonging to known classes. The cases, described by any mixture of nominal and numeric properties, are scrutinized for patterns that allow the classes to be reliably discriminated. These patterns are then expressed as models, in the form of decision trees or sets of if-then rules, that can be used to classify new cases, with emphasis on making the models understandable as well as accurate.
The system has been applied successfully to tasks involving tens of thousands of cases described by hundreds of properties. The book starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting. Advantages and disadvantages of the C4.5 approach are discussed and illustrated with several case studies. This book and software should be of interest to developers of classification-based intelligent systems and to students in machine learning and expert systems courses. --- paper_title: Specificity rule discovery in HIV-1 protease cleavage site analysis paper_content: Several machine learning algorithms have recently been applied to modeling the specificity of HIV-1 protease. The problem is challenging because of three issues: (1) datasets with high dimensionality and a small number of samples could misguide classification modeling and its interpretation; (2) symbolic interpretation is desirable because it provides insight into the specificity in the form of human-understandable rules, and thus helps us to design effective HIV inhibitors; (3) the interpretation should take into account complexity or dependency between positions in sequences. Therefore, it is necessary to investigate multivariate and feature-selective methods to model the specificity and to extract rules from the model. We have tested extensively various machine learning methods, and we have found that the combination of neural networks and a decompositional approach can generate a set of effective rules. By validation against experimental results for the HIV-1 protease, the specificity rules outperform those generated by frequency-based, univariate or black-box methods. --- paper_title: A profile hidden Markov model for signal peptides generated by HMMER paper_content: Summary: Although the HMMER package is widely used to produce profile hidden Markov models (profile HMMs) for protein domains, it has been difficult to create a profile HMM for signal peptides. Here we describe an approach for building a complex model of eukaryotic signal peptides by the standard HMMER package. Signal peptide prediction with this model gives a 95.6% sensitivity and 95.7% specificity.
::: ::: ::: AVAILABILITY ::: The profile HMM for signal peptides, data sets, and the scripts for analyzing data are available for non-commercial use at http://share.gene.com/. --- paper_title: Machine learning approaches for the prediction of signal peptides and other protein sorting signals paper_content: Prediction of protein sorting signals from the sequence of amino acids has great importance in the field of proteomics today. Recently, the growth of protein databases, combined with machine learning approaches, such as neural networks and hidden Markov models, have made it possible to achieve a level of reliability where practical use in, for example automatic database annotation is feasible. In this review, we concentrate on the present status and future perspectives of SignalP, our neural network-based method for prediction of the most well-known sorting signal: the secretory signal peptide. We discuss the problems associated with the use of SignalP on genomic sequences, showing that signal peptide prediction will improve further if integrated with predictions of start codons and transmembrane helices. As a step towards this goal, a hidden Markov model version of SignalP has been developed, making it possible to discriminate between cleaved signal peptides and uncleaved signal anchors. Furthermore, we show how SignalP can be used to characterize putative signal peptides from an archaeon, Methanococcus jannaschii. Finally, we briefly review a few methods for predicting other protein sorting signals and discuss the future of protein sorting prediction in general. --- paper_title: Profile Hidden Markov Models paper_content: Summary : The recent literature on profile hidden Markov model (profile HMM) methods and software is reviewed. Profile HMMs turn a multiple sequence alignment into a position-specific scoring system suitable for searching databases for remotely homologous sequences. Profile HMM analyses complement standard pairwise comparison methods for large-scale sequence analysis. Several software implementations and two large libraries of profile HMMs of common protein domains are available, HMM methods performed comparably to threading methods in the CASP2 structure prediction exercise. Contact: [email protected]. --- paper_title: Efficient and Accurate Lp-Norm Multiple Kernel Learning paper_content: Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability. Unfortunately, l1-norm MKL is hardly observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary lp-norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster compared to the traditionally used wrapper approaches. Finally, we apply lp-norm MKL to real-world problems from computational biology, showing that non-sparse MKL achieves accuracies that go beyond the state-of-the-art. --- paper_title: Calpain Cleavage Prediction Using Multiple Kernel Learning paper_content: Calpain, an intracellular Ca²⁺-dependent cysteine protease, is known to play a role in a wide range of metabolic pathways through limited proteolysis of its substrates. 
However, only a limited number of these substrates are currently known, with the exact mechanism of substrate recognition and cleavage by calpain still largely unknown. While previous research has successfully applied standard machine-learning algorithms to accurately predict substrate cleavage by other similar types of proteases, their approach does not extend well to calpain, possibly due to its particular mode of proteolytic action and limited amount of experimental data. Through the use of Multiple Kernel Learning, a recent extension to the classic Support Vector Machine framework, we were able to train complex models based on rich, heterogeneous feature sets, leading to significantly improved prediction quality (6% over highest AUC score produced by state-of-the-art methods). In addition to producing a stronger machine-learning model for the prediction of calpain cleavage, we were able to highlight the importance and role of each feature of substrate sequences in defining specificity: primary sequence, secondary structure and solvent accessibility. Most notably, we showed there existed significant specificity differences across calpain sub-types, despite previous assumption to the contrary. Prediction accuracy was further successfully validated using, as an unbiased test set, mutated sequences of calpastatin (endogenous inhibitor of calpain) modified to no longer block calpain's proteolytic action. An online implementation of our prediction tool is available at http://calpain.org. --- paper_title: Choosing Multiple Parameters for Support Vector Machines paper_content: The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance. --- paper_title: Cascleave: towards more accurate prediction of caspase substrate cleavage sites paper_content: MOTIVATION ::: The caspase family of cysteine proteases play essential roles in key biological processes such as programmed cell death, differentiation, proliferation, necrosis and inflammation. The complete repertoire of caspase substrates remains to be fully characterized. Accordingly, systematic computational screening studies of caspase substrate cleavage sites may provide insight into the substrate specificity of caspases and further facilitating the discovery of putative novel substrates. ::: ::: ::: RESULTS ::: In this article we develop an approach (termed Cascleave) to predict both classical (i.e. following a P(1) Asp) and non-typical caspase cleavage sites. When using local sequence-derived profiles, Cascleave successfully predicted 82.2% of the known substrate cleavage sites, with a Matthews correlation coefficient (MCC) of 0.667. We found that prediction performance could be further improved by incorporating information such as predicted solvent accessibility and whether a cleavage sequence lies in a region that is most likely natively unstructured. 
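The use of heterogeneous inputs mentioned above (sequence context plus predicted structural properties such as solvent accessibility or disorder) usually comes down to concatenating the different feature groups into one vector per candidate site. A minimal sketch follows, with invented auxiliary values standing in for the output of external predictors.

# Minimal sketch: build one feature vector per candidate cleavage window by
# concatenating a one-hot encoding of the window sequence with auxiliary
# numeric features (e.g. a predicted solvent-accessibility value and a
# disorder flag supplied by an external predictor). The auxiliary values
# here are invented placeholders, not the output of any real predictor.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(window):
    vec = []
    for aa in window:
        column = [0.0] * len(AMINO_ACIDS)
        if aa in AA_INDEX:
            column[AA_INDEX[aa]] = 1.0
        vec.extend(column)
    return vec

def feature_vector(window, accessibility, disordered):
    # sequence part + auxiliary structural part in a single flat vector
    return one_hot(window) + [float(accessibility), 1.0 if disordered else 0.0]

if __name__ == "__main__":
    x = feature_vector("DEVDGVDE", accessibility=0.42, disordered=True)
    print(len(x))  # 8 * 20 + 2 = 162 features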
Novel bi-profile Bayesian signatures were found to significantly improve the prediction performance and yielded the best performance with an overall accuracy of 87.6% and a MCC of 0.747, which is higher accuracy than published methods that essentially rely on amino acid sequence alone. It is anticipated that Cascleave will be a powerful tool for predicting novel substrate cleavage sites of caspases and shedding new insights on the unknown caspase-substrate interactivity relationship. ::: ::: ::: AVAILABILITY ::: http://sunflower.kuicr.kyoto-u.ac.jp/ approximately sjn/Cascleave/ ::: ::: ::: CONTACT ::: [email protected]; [email protected]; james; [email protected] ::: ::: ::: SUPPLEMENTARY INFORMATION ::: Supplementary data are available at Bioinformatics online. --- paper_title: Prediction of protease substrates using sequence and structure features paper_content: Motivation:Granzyme B (GrB) and caspases cleave specific protein substrates to induce apoptosis in virally infected and neoplastic cells. While substrates for both types of proteases have been determined experimentally, there are many more yet to be discovered in humans and other metazoans. Here, we present a bioinformatics method based on support vector machine (SVM) learning that identifies sequence and structural features important for protease recognition of substrate peptides and then uses these features to predict novel substrates. Our approach can act as a convenient hypothesis generator, guiding future experiments by high-confidence identification of peptide-protein partners. ::: ::: Results:The method is benchmarked on the known substrates of both protease types, including our literature-curated GrB substrate set (GrBah). On these benchmark sets, the method outperforms a number of other methods that consider sequence only, predicting at a 0.87 true positive rate (TPR) and a 0.13 false positive rate (FPR) for caspase substrates, and a 0.79 TPR and a 0.21 FPR for GrB substrates. The method is then applied to ~25 000 proteins in the human proteome to generate a ranked list of predicted substrates of each protease type. Two of these predictions, AIF-1 and SMN1, were selected for further experimental analysis, and each was validated as a GrB substrate. ::: ::: Availability: All predictions for both protease types are publically available at http://salilab.org/peptide. A web server is at the same site that allows a user to train new SVM models to make predictions for any protein that recognizes specific oligopeptide ligands. ::: ::: Contact:[email protected]; [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online --- paper_title: Kernel methods for predicting protein-protein interactions paper_content: Motivation: Despite advances in high-throughput methods for discovering protein--protein interactions, the interaction networks of even well-studied model organisms are sketchy at best, highlighting the continued need for computational methods to help direct experimentalists in the search for novel interactions. ::: ::: Results: We present a kernel method for predicting protein--protein interactions using a combination of data sources, including protein sequences, Gene Ontology annotations, local properties of the network, and homologous interactions in other species. Whereas protein kernels proposed in the literature provide a similarity between single proteins, prediction of interactions requires a kernel between pairs of proteins. 
We propose a pairwise kernel that converts a kernel between single proteins into a kernel between pairs of proteins, and we illustrate the kernel's effectiveness in conjunction with a support vector machine classifier. Furthermore, we obtain improved performance by combining several sequence-based kernels based on k-mer frequency, motif and domain content and by further augmenting the pairwise sequence kernel with features that are based on other sources of data. We apply our method to predict physical interactions in yeast using data from the BIND database. At a false positive rate of 1% the classifier retrieves close to 80% of a set of trusted interactions. We thus demonstrate the ability of our method to make accurate predictions despite the sizeable fraction of false positives that are known to exist in interaction databases. Availability: The classification experiments were performed using PyML, available at http://pyml.sourceforge.net. Data are available at http://noble.gs.washington.edu/proj/sppi Contact: [email protected] --- paper_title: Regularization Strategies and Empirical Bayesian Learning for MKL paper_content: Multiple kernel learning (MKL), structured sparsity, and multi-task learning have recently received considerable attention. In this paper, we show how different MKL algorithms can be understood as applications of either regularization on the kernel weights or block-norm-based regularization, which is more common in structured sparsity and multi-task learning. We show that these two regularization strategies can be systematically mapped to each other through a concave conjugate operation. When the kernel-weight-based regularizer is separable into components, we can naturally consider a generative probabilistic model behind MKL. Based on this model, we propose learning algorithms for the kernel weights through the maximization of marginal likelihood. We show through numerical experiments that $\ell_2$-norm MKL and Elastic-net MKL achieve comparable accuracy to uniform kernel combination. Although uniform kernel combination might be preferable for its simplicity, $\ell_2$-norm MKL and Elastic-net MKL can learn the usefulness of the information sources represented as kernels. In particular, Elastic-net MKL achieves sparsity in the kernel weights. --- paper_title: SVM-based prediction of caspase substrate cleavage sites paper_content: Background: Caspases belong to a class of cysteine proteases which function as critical effectors in apoptosis and inflammation by cleaving substrates immediately after unique sites. Prediction of such cleavage sites will complement structural and functional studies on substrate cleavage as well as the discovery of new substrates. Recently, different computational methods have been developed to predict the cleavage sites of caspase substrates with varying degrees of success. As the support vector machine (SVM) algorithm has been shown to be useful in several biological classification problems, we have implemented an SVM-based method to investigate its applicability to this domain. Results: A set of unique caspase substrate cleavage sites was obtained from the literature and used for evaluating the SVM method. Datasets containing (i) the tetrapeptide cleavage sites, (ii) the tetrapeptide cleavage sites augmented by the two adjacent residues, the P1' and P2' amino acids, and (iii) the tetrapeptide cleavage sites with ten additional upstream and downstream flanking sequences (where available) were tested.
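A bare-bones version of such a tetrapeptide-window SVM classifier can be sketched in a few lines. The peptides, labels and kernel settings below are illustrative assumptions, not the curated dataset or tuned model described above.

# Minimal sketch: one-hot encode tetrapeptide windows (P4-P1) and train a
# support vector classifier to separate cleaved from non-cleaved examples.
# Peptides and labels are toy examples; kernel and parameters are defaults
# chosen only for illustration.
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide):
    vec = []
    for aa in peptide:
        column = [0.0] * len(AMINO_ACIDS)
        if aa in AMINO_ACIDS:
            column[AMINO_ACIDS.index(aa)] = 1.0
        vec.extend(column)
    return vec

cleaved = ["DEVD", "DQTD", "IETD", "LEHD"]      # toy positives
non_cleaved = ["AAAA", "GSGS", "KKRK", "PPLP"]  # toy negatives

X = [encode(p) for p in cleaved + non_cleaved]
y = [1] * len(cleaved) + [0] * len(non_cleaved)

model = SVC(kernel="rbf", C=1.0, gamma="scale")
model.fit(X, y)
print(model.predict([encode("DEVD"), encode("GSGS")]))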
The SVM method achieved an accuracy ranging from 81.25% to 97.92% on independent test sets. The SVM method successfully predicted the cleavage of a novel caspase substrate and its mutants. Conclusion: This study presents an SVM approach for predicting caspase substrate cleavage sites based on the cleavage sites and the downstream and upstream flanking sequences. The method shows an improvement over existing methods and may be useful for predicting hitherto undiscovered cleavage sites. --- paper_title: The Spectrum Kernel: A String Kernel for SVM Protein Classification paper_content: We introduce a new sequence-similarity kernel, the spectrum kernel, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. Our kernel is conceptually simple and efficient to compute and, in experiments on the SCOP database, performs well in comparison with state-of-the-art methods for homology detection. Moreover, our method produces an SVM classifier that allows linear time classification of test sequences. Our experiments provide evidence that string-based kernels, in conjunction with SVMs, could offer a viable and computationally efficient alternative to other methods of protein classification and homology detection. --- paper_title: A statistical framework for genomic data fusion paper_content: Motivation: During the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data. Results: This paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function.
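A simple baseline for this kind of kernel combination is to sum the individual kernel matrices with fixed weights and train a standard SVM on the result; the data-fusion and multiple kernel learning methods discussed in this section go further and learn the weights. The sketch below uses hand-picked weights and random toy data purely for illustration.

# Minimal sketch: combine several kernel matrices with fixed weights and
# train an SVM on the combined (precomputed) kernel. The weights here are
# assumed by hand, not learned (unlike the SDP- or norm-regularized MKL
# methods discussed above), and the data are random toy vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 5))   # toy "view 1" features (e.g. sequence-based)
X2 = rng.normal(size=(20, 8))   # toy "view 2" features (e.g. expression-based)
y = np.array([0] * 10 + [1] * 10)

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

# Fixed combination weights (illustrative assumption, not learned).
weights = {"view1_linear": 0.6, "view2_rbf": 0.4}
K_train = (weights["view1_linear"] * linear_kernel(X1, X1)
           + weights["view2_rbf"] * rbf_kernel(X2, X2))

model = SVC(kernel="precomputed", C=1.0)
model.fit(K_train, y)
print(model.predict(K_train)[:5])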
These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein--protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins---membrane proteins and ribosomal proteins---performs significantly better than the same algorithm trained on any single type of data. Availability: Supplementary data at http://noble.gs.washington.edu/proj/sdp-svm --- paper_title: GPS-CCD: A Novel Computational Program for the Prediction of Calpain Cleavage Sites paper_content: As one of the most essential post-translational modifications (PTMs) of proteins, proteolysis, especially calpain-mediated cleavage, plays an important role in many biological processes, including cell death/apoptosis, cytoskeletal remodeling, and the cell cycle.
Experimental identification of calpain targets with bona fide cleavage sites is fundamental for dissecting the molecular mechanisms and biological roles of calpain cleavage. In contrast to time-consuming and labor-intensive experimental approaches, computational prediction of calpain cleavage sites might more cheaply and readily provide useful information for further experimental investigation. In this work, we constructed a novel software package of GPS-CCD (Calpain Cleavage Detector) for the prediction of calpain cleavage sites, with an accuracy of 89.98%, sensitivity of 60.87% and specificity of 90.07%. With this software, we annotated potential calpain cleavage sites for hundreds of calpain substrates, for which the exact cleavage sites had not been previously determined. In this regard, GPS-CCD 1.0 is considered to be a useful tool for experimentalists. The online service and local packages of GPS-CCD 1.0 were implemented in JAVA and are freely available at: http://ccd.biocuckoo.org/.
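For reference, the evaluation measures quoted for GPS-CCD and for several other predictors in this section (sensitivity, specificity, accuracy and the Matthews correlation coefficient) all derive from the 2x2 confusion matrix. The sketch below computes them from invented counts.

# Minimal sketch: sensitivity, specificity, accuracy and the Matthews
# correlation coefficient (MCC) from a 2x2 confusion matrix.
# The counts below are invented for illustration only.
import math

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, accuracy, mcc

if __name__ == "__main__":
    sens, spec, acc, mcc = metrics(tp=56, fp=9, tn=82, fn=36)
    print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
          f"accuracy={acc:.3f} MCC={mcc:.3f}")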
--- paper_title: Cascleave: towards more accurate prediction of caspase substrate cleavage sites paper_content: Motivation: The caspase family of cysteine proteases play essential roles in key biological processes such as programmed cell death, differentiation, proliferation, necrosis and inflammation. The complete repertoire of caspase substrates remains to be fully characterized. Accordingly, systematic computational screening studies of caspase substrate cleavage sites may provide insight into the substrate specificity of caspases and further facilitate the discovery of putative novel substrates. Results: In this article we develop an approach (termed Cascleave) to predict both classical (i.e. following a P(1) Asp) and non-typical caspase cleavage sites. When using local sequence-derived profiles, Cascleave successfully predicted 82.2% of the known substrate cleavage sites, with a Matthews correlation coefficient (MCC) of 0.667. We found that prediction performance could be further improved by incorporating information such as predicted solvent accessibility and whether a cleavage sequence lies in a region that is most likely natively unstructured. Novel bi-profile Bayesian signatures were found to significantly improve the prediction performance and yielded the best performance with an overall accuracy of 87.6% and a MCC of 0.747, which is higher accuracy than published methods that essentially rely on amino acid sequence alone. It is anticipated that Cascleave will be a powerful tool for predicting novel substrate cleavage sites of caspases and shedding new insights on the unknown caspase-substrate interactivity relationship. Availability: http://sunflower.kuicr.kyoto-u.ac.jp/~sjn/Cascleave/ Supplementary information: Supplementary data are available at Bioinformatics online. --- paper_title: Artificial neural network model for predicting HIV protease cleavage sites in protein paper_content: Knowledge of the polyprotein cleavage sites by HIV protease will refine our understanding of its specificity, and the information thus acquired will be useful for designing specific and efficient HIV protease inhibitors. The search for inhibitors of HIV protease will be greatly expedited if one can find an accurate, robust, and rapid method for predicting the cleavage sites in proteins by HIV protease. In this paper, Kohonen's self-organization model, which uses typical artificial neural networks, is applied to predict the cleavability of oligopeptides by proteases with multiple and extended specificity subsites. We selected HIV-1 protease as the subject of study. We chose 299 oligopeptides for the training set, and another 63 oligopeptides for the test set. Because of its high rate of correct prediction (58/63 = 92.06%) and stronger fault-tolerant ability, the neural network method should be a useful technique for finding effective inhibitors of HIV protease, which is one of the targets in designing potential drugs against AIDS. The principle of the artificial neural network method can also be applied to analyzing the specificity of any multisubsite enzyme. --- paper_title: Penalized feature selection and classification in bioinformatics paper_content: In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies.
Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifier and to provide more insights into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classification techniques—which belong to the family of embedded feature selection methods—for bioinformatics studies with high-dimensional input. Classification objective functions, penalty functions and computational algorithms are discussed. Our goal is to make interested researchers aware of these feature selection and classification methods that are applicable to high-dimensional bioinformatics data. --- paper_title: Assessment of Proteasomal Cleavage Probabilities from Kinetic Analysis of Time-dependent Product Formation paper_content: Proteasomes are multicatalytic cellular protease complexes that degrade intracellular proteins into smaller peptides. Proteasomal in vitro digests have revealed that the various peptide bonds of a given substrate are cleaved in a highly selective manner. Regarding the key role of proteasomes as the main supplier of antigenic peptides for MHC class I-mediated antigen presentation, it is important to know to what extent these preferences for specific peptide bonds may vary among proteasomes of different cellular origin and of different subunit composition. Here, we quantify such cleavage rates by means of a kinetic proteasome model that relates the time-dependent changes of the amount of any generated peptide to the rates with which this peptide can be either generated from longer precursor peptides or degraded into smaller successor peptides. Numerical values for these rates are estimated by minimizing the distance between simulated and measured time-courses. The proposed method is applied to kinetic data obtained by combining HPLC fractionation and mass spectrometry (MS) to trace the degradation of two model peptides (pp89-25mer and LLO-27mer) by either the constitutive (T2) or immunoproteasome (T2.27). To convert the intensity of the MS signals into the respective peptide amounts, we use two methods leading to similar results: experimental calibration curves and theoretically determined linear scaling functions based on a novel approach using mass conservation rules. Comparison of the cleavage probabilities and procession rates obtained for the two types of proteasomes reveals that the striking differences between the time-dependent peptide profiles can be accounted for mainly by a generally higher turnover rate of the immunoproteasome. For the pp89-25mer, there is no significant change of the cleavage probabilities for any of the ten observed cleavage sites. For the LLO-27mer, there appears to be a significant change in the cleavage probabilities for four of the nine observed cleavage sites when switching from the constitutive to the immunoproteasome. ---
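As a companion to the penalized feature selection review above, here is a minimal sketch of one embedded method in that family: L1-penalized (lasso) logistic regression fitted by proximal gradient descent. The fixed step size, the penalty weight and the plain ISTA loop are illustrative assumptions; practical tools would use a dedicated solver such as coordinate descent, and the function names are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logistic_regression(X, y, lam=0.1, step=0.01, n_iter=2000):
    """Sparse cleavage-site classifier via proximal gradient descent (ISTA).

    X   : (n_samples, n_features) encoded sequence windows
    y   : (n_samples,) labels in {0, 1}, 1 meaning 'cleaved'
    lam : L1 penalty weight; larger values drive more feature weights to zero
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted cleavage probabilities
        grad = X.T @ (p - y) / n_samples      # gradient of the logistic loss
        w = soft_threshold(w - step * grad, step * lam)
    return w  # features with non-zero weight are the ones the penalty retained
```

The surviving non-zero weights indicate which window positions and residues drive the prediction, which is exactly the interpretability argument made for embedded feature selection in the review.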
``` <format> Title: A Review of Statistical Methods for Prediction of Proteolytic Cleavage Section 1: INTRODUCTION Description 1: Write about the importance of understanding proteolytic cleavage in biology, the role of computational methods in advancing this understanding, and an overview of the review's focus on different prediction techniques. Section 2: BACKGROUND Proteolytic enzymes Description 2: Discuss the various proteolytic enzymes, their mechanisms, substrate specificity, and historical context of computational prediction models. Section 3: Data availability Description 3: Address the importance of data availability for supervised learning methods and provide information on relevant databases such as MEROPS, CutDB, and CaMPDB. Section 4: GENERAL APPROACH Problem setting Description 4: Define the general problem of predicting proteolytic cleavage, including problem formulation and common decision problems in prediction. Section 5: Practical issues in supervised learning Description 5: Discuss critical issues in applying supervised learning to cleavage prediction, including choice of features, vector encoding, data set imbalance, and evaluation metrics. Section 6: PREDICTION METHODS Position-based matrices Description 6: Elaborate on the use of position-specific scoring matrices (PSSM) and related methods to predict cleavage sites. Section 7: Artificial neural networks Description 7: Provide an overview of the application of artificial neural networks (ANN) in cleavage prediction, their strengths, weaknesses, and historical significance. Section 8: Decision trees and rule extraction methods Description 8: Describe the use of decision trees and rule extraction methods in cleavage prediction, focusing on their interpretability. Section 9: Hidden Markov models Description 9: Explain the implementation and benefits of hidden Markov models (HMM) for sequence-related predictions and their application in proteolytic cleavage prediction. Section 10: Kernel methods Description 10: Discuss the advantages of using support vector machines (SVM) and kernel methods in cleavage prediction, including various kernel types and their applications. Section 11: Other methods Description 11: Mention additional methods that have been applied to cleavage prediction, such as hierarchical clustering and integrative voting algorithms. Section 12: DISCUSSION Description 12: Summarize the relative performance of different methods, comparing evaluation metrics, and discussing practical considerations like computational complexity. Section 13: FUTURE DIRECTIONS Description 13: Outline new developments and potential future research directions in proteolytic cleavage prediction, including the application of sparse learning techniques and probabilistic models. Section 14: Key Points Description 14: Provide key takeaway messages about choosing appropriate algorithms based on protease type, balancing accuracy, readability, flexibility, and presenting illustrative examples of implementations. </format> ```
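Section 6 of the outline above covers position-based matrices. As a concrete illustration of that family, the sketch below builds a position-specific scoring matrix (PSSM) from aligned cleavage-site windows and scores a candidate window by its summed log-odds; the uniform background distribution, the additive pseudocounts and the helper names are illustrative assumptions rather than the recipe of any specific published tool.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def build_pssm(positive_windows, pseudocount=1.0):
    """Log-odds matrix of shape (window_length, 20) from known cleavage windows.

    Entries are log2(observed frequency / background), with a uniform 1/20
    background and additive smoothing so unseen residues do not give -inf.
    """
    length = len(positive_windows[0])
    counts = np.full((length, len(AMINO_ACIDS)), pseudocount)
    for window in positive_windows:
        for pos, aa in enumerate(window):
            if aa in AMINO_ACIDS:
                counts[pos, AMINO_ACIDS.index(aa)] += 1.0
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs * len(AMINO_ACIDS))

def score_window(pssm, window):
    """Sum per-position log-odds; thresholding this score gives a simple predictor."""
    return sum(pssm[pos, AMINO_ACIDS.index(aa)]
               for pos, aa in enumerate(window) if aa in AMINO_ACIDS)
```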
A survey on coverage path planning for robotics
12
--- paper_title: Region filling operations with random obstacle avoidance for mobile robots paper_content: The article presents a new topic in path planning for mobile robots, region filling. which involves a sweeping operation to fill a whole region with random obstacle avoidance. The approaches for global strip filling and local path searching driven by sensory data procedures are developed. A computer graphic simulation is used to verify the filling strategy available. The research was developed from the program for the design of a robot lawn mower. However, the solution appears generic. The significance is that a problem of wide application and generic solutions for general autonomous mobile robots have been developed. --- paper_title: Cleaning robot control paper_content: A small and lightweight cleaning robot powered from the AC power supply is produced for testing purposes. The robot provides a cable-length control function which prevents tangling of the cables during traveling, an ultrasonic sensor function which detects obstacles and dodges them, and a distance measuring function which makes it possible to run parallel to the wall. In a simple room with few obstacles, the robot can travel even if it does not incorporate information but in areas with complicated placement of obstacles, it is necessary to teach the robot the obstacle positions and the room size in advance. > --- paper_title: Randomized search strategies with imperfect sensors paper_content: In two previous papers we explored some of the systems aspects of applying large numbers of inexpensive robots to real world applications. The concept of coverage can help the user of such a system visualize its overall function and performance in mission-relevant terms, and thereby support necessary system command control functions. An important class of coverage applications are those that involve a search, in which a number of searching elements move about within a prescribed search area in order to find one or more target objects, which may be stationary or mobile. A simple analytical framework was employed in the previous work to demonstrate that the design of a cost-effective many-robot search system can depend sensitively on the interplay of sensor cost and performance levels with mission-specific functional and performance requirements. In the current paper we extend these results: we consider additional measures of effectiveness for area search systems to provide a broader basis for a tradeoff of coordinated versus random search models, and we explore how to deliberately achieve effectively randomized search strategies that provide uniform search coverage over a specified area. --- paper_title: Shortest Watchman Routes in Simple Polygons paper_content: In this paper we present an O(n4, log logn) algorithm to find a shortest watchman route in a simple polygon through a point,s, in its boundary. A watchman route is a route such that each point in the interior of the polygon is visible from at least one point along the route. --- paper_title: Coverage Algorithms for an Under-actuated Car-Like Vehicle in an Uncertain Environment paper_content: A coverage algorithm is an algorithm that deploys a strategy as to how to cover all points in terms of a given area using some set of sensors. In the past decades a lot of research has gone into development of coverage algorithms. 
Initially, the focus was coverage of structured and semi-structured indoor areas, but with time and development of better sensors and introduction of GPS, the focus has turned to outdoor coverage. Due to the unstructured nature of an outdoor environment, covering an outdoor area with all its obstacles and simultaneously performing reliable localization is a difficult task. In this paper, two path planning algorithms suitable for solving outdoor coverage tasks are introduced. The algorithms take into account the kinematic constraints of an under-actuated car-like vehicle, minimize trajectory curvatures, and dynamically avoid detected obstacles in the vicinity, all in real-time. We demonstrate the performance of the coverage algorithm in the field by achieving 95% coverage using an autonomous tractor mower without the aid of any absolute localization system or constraints on the physical boundaries of the area. --- paper_title: Sampling-based coverage path planning for inspection of complex structures paper_content: We present several new contributions in sampling-based coverage path planning, the task of finding feasible paths that give 100% sensor coverage of complex structures in obstacle-filled and visually occluded environments. First, we establish a framework for analyzing the probabilistic completeness of a sampling-based coverage algorithm, and derive results on the completeness and convergence of existing algorithms. Second, we introduce a new algorithm for the iterative improvement of a feasible coverage path; this relies on a sampling-based subroutine that makes asymptotically optimal local improvements to a feasible coverage path based on a strong generalization of the RRT* algorithm. We then apply the algorithm to the real-world task of autonomous in-water ship hull inspection. We use our improvement algorithm in conjunction with redundant roadmap coverage planning algorithm to produce paths that cover complex 3D environments with unprecedented efficiency. --- paper_title: Approximation algorithms for lawn mowing and milling paper_content: We study the problem of finding shortest tours/paths for “lawn mowing” and “milling” problems: Given a region in the plane, and given the shape of a “cutter” (typically, a circle or a square), find a shortest tour/path for the cutter such that every point within the region is covered by the cutter at some position along the tour/path. In the milling version of the problem, the cutter is constrained to stay within the region. The milling problem arises naturally in the area of automatic tool path generation for NC pocket machining. The lawn mowing problem arises in optical inspection, spray painting, and optimal search planning. ::: ::: Both problems are NP-hard in general. We give efficient constant-factor approximation algorithms for both problems. In particular, we give a (3+e)-approximation algorithm for the lawn mowing problem and a 2.5-approximation algorithm for the milling problem. Furthermore, we give a simple 65-approximation algorithm for the TSP problem in simple grid graphs, which leads to an 115-approximation algorithm for milling simple rectilinear polygons. --- paper_title: Robot control system for window cleaning paper_content: Window cleaning is a two-stage process; application of cleaning fluid, which is usually achieved by using a wetted applicator and removal of cleaning fluid by a squeegee blade without spillage on to other areas of the facade or previously cleaned areas of glass. 
This is particularly difficult for example if the window is located on the roof of a building and cleaning is performed from inside by the human window cleaner. Simulation studies were conducted to demonstrate the feasibility of a robot system to act and mimic the human operator; an end effector had to be designed to accommodate different tools such as applicator and squeegee; the pay load for tool handling, sensory feedback requirements; force and compliance control; and finally the cost of the overall system had to be feasible. As a result of the studies it was conceived that the end effector should contain a combined datuming/cleaning head. This arrangement would allow automatic datuming and location of the window pane relative to the robot using a specially designed and constructed compliant head. One advantage of a combined head being the elimination of tool changes between the datuming and wiping operation. A dedicated XYZR robot system was designed which makes use of an Industrial IBM PC connected to a DELTA-7AV systems PMAC card to drive the robot and to: coordinate its actions with those of the OCS roof mounted gantry delivery carrier system. --- paper_title: Coverage for robotics – A survey of recent results paper_content: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms. --- paper_title: Measuring Coverage Performances of a Floor Cleaning Mobile Robot Using a Vision System paper_content: This work introduces a method and apparatus for measuring real mobile robot cleaning performances based on a dedicated vision system. The measurement method can be applied to any kind of floor cleaning mobile robot but in this work it has been applied to a proprietary design. The paper presents the analysis of the first results obtained and ends with the main conclusions. --- paper_title: An Approximate Algorithm for Solving the Watchman Route Problem paper_content: The watchman route problem (WRP) was first introduced in 1988 and is defined as follows: How to calculate a shortest route completely contained inside a simple polygon such that any point inside this polygon is visible from at least one point on the route? So far the best known result for the WRP is an O(n3log n) runtime algorithm (with inherent numerical problems of its implementation). This paper gives an κ(Ɛ) × O(kn) approximate algorithm for WRP by using a rubberband algorithm, where n is the number of vertices of the simple polygon, k the number of essential cuts, Ɛ the chosen accuracy constant for the minimization of the calculated route, and κ(Ɛ) equals the length of the initial route minus the length of the calculated route, divided by Ɛ. 
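The abstracts above repeatedly refer to back-and-forth (boustrophedon) sweeping as the basic coverage primitive. As a concrete illustration, the sketch below generates lawnmower waypoints for a single axis-aligned, obstacle-free rectangular cell, with the inter-lap spacing equal to the tool width; the cell bounds and tool width are illustrative parameters, and decomposition-based planners would apply a routine like this per cell only after the free space has been split into obstacle-free cells.

```python
from typing import List, Tuple

def boustrophedon_waypoints(x_min: float, x_max: float,
                            y_min: float, y_max: float,
                            tool_width: float) -> List[Tuple[float, float]]:
    """Back-and-forth waypoints covering an axis-aligned rectangular cell.

    Laps run parallel to the y-axis, spaced one tool width apart, so a tool of
    width `tool_width` sweeping along the returned polyline covers the cell.
    """
    waypoints: List[Tuple[float, float]] = []
    x = x_min + tool_width / 2.0          # center the first lap half a width in
    going_up = True
    while x <= x_max - tool_width / 2.0 + 1e-9:
        if going_up:
            waypoints += [(x, y_min), (x, y_max)]
        else:
            waypoints += [(x, y_max), (x, y_min)]
        going_up = not going_up
        x += tool_width
    return waypoints

# Example: a 10 m x 6 m cell covered by a 1 m wide tool -> 10 vertical laps.
path = boustrophedon_waypoints(0.0, 10.0, 0.0, 6.0, 1.0)
```

If the cell width is not an integer multiple of the tool width, a real planner would shift the last lap or add one more so that no strip at the far edge is left uncovered.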
--- paper_title: Path Planning for Robotic Demining: Robust Sensor-Based Coverage of Unstructured Environments and Probabilistic Methods paper_content: Demining and unexploded ordnance (UXO) clearance are extremely tedious and dangerous tasks. The use of robots bypasses the hazards and potentially increases the efficiency of both tasks. A first cr... --- paper_title: Uniform Coverage of Automotive Surface Patches paper_content: In spray painting applications, it is essential to generate a spray gun trajectory such that the entire surface is completely covered and receives an acceptably uniform layer of paint deposition; we call this the “uniform coverage” problem. The uniform coverage problem is challenging because the atomizer emits a non-trivial paint distribution, thus making the relationships between the spray gun trajectory and the deposition uniformity complex. To understand the key issues involved in uniform coverage, we consider surface patches that are geodesically convex and topologically simple as representative of subsets of realistic automotive surfaces. In addition to ensuring uniform paint deposition on the surface, our goal is to also minimize the associated process cycle time and paint waste. Based on the relationships between the spray gun trajectory and the output characteristics (i.e., uniformity, cycle time and paint waste), our approach decomposes the coverage trajectory generation problem into three subpro... --- paper_title: Approximation Algorithms For The Geometric Covering Salesman Problem paper_content: Abstract We introduce a geometric version of the Covering Salesman Problem: Each of the n salesman's clients specifies a neighborhood in which they are willing to meet the salesman. Identifying a tour of minimum length that visits all neighboirhoods is an NP-hard problem, since it is a generalization of the Traveling Salesman Problem. We present simple heuristic procedures for constructing tours, for a variety of neighborhood types, whose length is guaranteed to be within a constant factor of the length of an optimal tour. The neighborhoods we consider include parallel unit segments, translates of a polygonal region, and circles. --- paper_title: Coverage path planning: The Boustrophedon Cellular Decomposition paper_content: Coverage path planning is the determination of a path that a robot must take in order to pass over each point in an environment. Applica­ tions include vacuuming, floor scrubbing, and inspection. We developed the boustrophedon cellular decomposition, which is an exact cel­ lular decomposition approach, for the purposes of coverage. Each cell in the boustrophedon is covered with simple back and forth motions. Once each cell is covered, then the entire envi­ ronment is covered. Therefore, coverage is re­ duced to finding an exhaustive path through a graph which represents the adjacency relation­ ships of the cells in the boustrophedon decom­ position. This approach is provably complete and Experiments on a mobile robot validate this approach. --- paper_title: Morse Decompositions for Coverage Tasks paper_content: Exact cellular decompositions represent a robot's free space by dividing it into regions with simple structure such that the sum of the regions fills the free space. These decompositions have been widely used for path planning between two points, but can be used for mapping and coverage of free spaces. 
In this paper, we define exact cellular decompositions where critical points of Morse functions indicate the location of cell boundaries. Morse functions are those whose critical points are non-degenerate. Between critical points, the structure of a space is effectively the same, so simple control strategies to achieve tasks, such as coverage, are feasible within each cell. This allows us to introduce a general framework for coverage tasks because varying the Morse function has the effect of changing the pattern by which a robot covers its free space. In this paper, we give examples of different Morse functions and comment on their corresponding tasks. In a companion paper, we describe the sensor-based algo... --- paper_title: Robot Motion Planning paper_content: 1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References. --- paper_title: Coverage path planning algorithms for agricultural field machines paper_content: In this article, a coverage path planning problem is discussed in the case of agricultural fields and agricultural machines. Methods and algorithms to solve this problem are developed. These algorithms are applicable to both robots and human-driven machines. The necessary condition is to cover the whole field, and the goal is to find as efficient a route as possible. As yet, there is no universal algorithm or method capable of solving the problem in all cases. Two new approaches to solve the coverage path planning problem in the case of agricultural fields and agricultural machines are presented for consideration. Both of them are greedy algorithms. In the first algorithm the view is from on top of the field, and the goal is to split a single field plot into subfields that are simple to drive or operate. This algorithm utilizes a trapezoidal decomposition algorithm, and a search is developed of the best driving direction and selection of subfields. This article also presents other practical aspects that are taken into account, such as underdrainage and laying headlands. The second algorithm is also an incremental algorithm, but the path is planned on the basis of the machine's current state and the search is on the next swath instead of the next subfield. There are advantages and disadvantages with both algorithms, neither of them solving the problem of coverage path planning problem optimally. Nevertheless, the developed algorithms are remarkable steps toward finding a way to solve the coverage path planning problem with nonomnidirectional vehicles and taking into consideration agricultural aspects. © 2009 Wiley Periodicals, Inc. --- paper_title: Exact cellular decompositions in terms of critical points of Morse functions paper_content: Exact cellular decompositions are structures that globally encode the topology of a robot's free space, while locally describing the free space geometry. These structures have been widely used for path planning between two points, but can be used for mapping and coverage of robot free spaces. In this paper, we define exact cellular decompositions where critical points of Morse functions indicate the location of cell boundaries. 
Morse functions are those whose critical points are non-degenerate. Between critical points, the structure of a space is effectively the same, so simple control strategies to achieve tasks, such as coverage, are feasible within each cell. In this paper, we derive a general framework for defining decompositions in terms of critical points and then give examples, each corresponding to a different task. All of the results in this paper are derived in an m-dimensional Euclidean space, but the examples depicted in the figures are 2D and 3D for ease of presentation. --- paper_title: Constructing roadmaps of semi-algebraic sets I: completeness paper_content: Abstract This paper describes preliminary work on an algorithm for planning collision-free motions for a robot manipulator in the presence of obstacles. The physical obstacles lead to forbidden regions in the robots configuration space, and for collision-free motion we need paths through configuration space which avoid these regions. Our method is to construct a certain one-dimensional subset or “roadmap” of the space of allowable configurations. If S denotes the set of allowable configurations, the roadmap has the property that any connected component of S contains a single connected component of the roadmap. It is also possible, starting from an arbitrary point p ∈ S to rapidly construct a path from p to a point on the roadmap.
Thus given any two points in S we can rapidly determine whether they lie in the same connected component of S, and if they do, we can return a candidate path between them. We do not give a complete description of the algorithm here, but we define the roadmap geometrically, and verify that it has the necessary connectivity. --- paper_title: An opportunistic global path planner paper_content: A robot planning algorithm that constructs a global skeleton of free-space by incremental local methods is described. The curves of the skeleton are the loci of maxima of an artificial potential field that is directly proportional to the distance of the robot from obstacles. The method has the advantage of fast convergence of local methods in uncluttered environments, but it also has a deterministic and efficient method of escaping local extremal points of the potential function. The authors present a general algorithm, for configuration spaces of any dimension, and describe instantiations of the algorithm for robots with two and three degrees of freedom. > --- paper_title: Efficient seabed coverage path planning for ASVs and AUVs paper_content: Coverage path planning is the problem of moving an effector (e.g. a robot, a sensor) over all points in a given region. In marine robotics, a number of applications require to cover a region on the seafloor while navigating above it at a constant depth. This is the case of Autonomous Surface Vehicles, that always navigate at the water surface level, but also of several Autonomous Underwater Vehicle tasks as well. Most existing coverage algorithms sweep the free space in the target region using lawnmower-like back-and-forth motions, and the inter-lap spacing between these back-and-forth laps is determined by the robot's sensor coverage range. However, while covering the seafloor surface by navigating above it at a constant depth, the sensor's field of view varies depending on the seafloor height. Therefore, to ensure full coverage one would need to use the inter-lap spacing determined by the shallowest point on the target surface, resulting in undesired coverage overlapping among the back-and-forth laps. In this work, we propose a novel method to generate a coverage path that completely covers a surface of interest on the seafloor by navigating in a constant-depth plane above it. The proposed method uses environment information to minimize the coverage overlapping by segmenting the target surface in regions of similar depth features and addressing them as individual coverage path planning problems. A cell decomposition coverage method is applied to each region. The surface gradient is used to determine the best sweep orientation in each cell, and the inter-lap spacing in the lawnmower-like paths used to cover each cell is maximized on a lap-by-lap basis, hence obtaining a shorter, more efficient coverage path. The proposal is validated in simulation experiments conducted with a real-world bathymetric dataset that show a significant increase on path efficiency in comparison with a standard boustrophedon coverage path. --- paper_title: Morse Decompositions for Coverage Tasks paper_content: Exact cellular decompositions represent a robot's free space by dividing it into regions with simple structure such that the sum of the regions fills the free space. These decompositions have been widely used for path planning between two points, but can be used for mapping and coverage of free spaces. 
In this paper, we define exact cellular decompositions where critical points of Morse functions indicate the location of cell boundaries. Morse functions are those whose critical points are non-degenerate. Between critical points, the structure of a space is effectively the same, so simple control strategies to achieve tasks, such as coverage, are feasible within each cell. This allows us to introduce a general framework for coverage tasks because varying the Morse function has the effect of changing the pattern by which a robot covers its free space. In this paper, we give examples of different Morse functions and comment on their corresponding tasks. In a companion paper, we describe the sensor-based algo... --- paper_title: Coverage of Known Spaces: The Boustrophedon Cellular Decomposition paper_content: Coverage path planning is the determination of a path that a robot must take in order to pass over each point in an environment. Applications include de-mining, floor scrubbing, and inspection. We developed the boustrophedon cellular decomposition, which is an exact cellular decomposition approach, for the purposes of coverage. Essentially, the boustrophedon decomposition is a generalization of the trapezoidal decomposition that could allow for non-polygonalobstacles, but also has the side effect of having more “efficient” coverage paths than the trapezoidal decomposition. Each cell in the boustrophedon decomposition is covered with simple back and forth motions. Once each cell is covered, then the entire environment is covered. Therefore, coverage is reduced to finding an exhaustive path through a graph which represents the adjacency relationships of the cells in the boustrophedon decomposition. This approach is provably complete and experiments on a mobile robot validate this approach. --- paper_title: Path planning and control for AERCam, a free-flying inspection robot in space paper_content: This paper describes a prototype robot and the necessary path planning and control for space inspection applications.
The robot is the first generation of a free-flying robotic camera that will assist astronauts in constructing and maintaining the Space Station. The robot will provide remote views to astronauts inside the Space Shuttle and future Space Station, and to ground controllers. The paper describes a planar robot prototype autonomously moving about an air bearing table, and introduces a method for determining paths in three-dimensions for efficient fuel use. Finally, the paper describes the software simulation of the path planner with the future Space Station. --- paper_title: Sensor-Based Exploration: Incremental Construction of the Hierarchical Generalized Voronoi Graph paper_content: This paper prescribes an incremental procedure to construct roadmaps of unknown environments. Recall that a roadmap is a geometric structure that a robot uses to plan a path between two points in an environment. If the robot knows the roadmap, then it knows the environment. Likewise, if the robot constructs the roadmap, then it has effectively explored the environment. This paper focuses on the hierarchical generalized Voronoi graph (HGVG), detailed in the companion paper in this issue. The incremental construction procedure of the HGVG requires only local distance sensor measurements, and therefore the method can be used as a basis for sensor-based planning algorithms. Simulations and experiments using a mobile robot with ultrasonic sensors verify this approach. --- paper_title: Sensor-based coverage with extended range detectors paper_content: Coverage path planning determines a path that passes a robot, a detector, or some type of effector over all points in the environment. Prior work in coverage tends to fall into one of two extremes: coverage with an effector the same size of the robot, and coverage with an effector that has infinite range. In this paper, we consider coverage in the middle of this spectrum: coverage with a detector range that goes beyond the robot, and yet is still finite in range. We achieve coverage in two steps: The first step considers vast, open spaces, where the robot can use the full range of its detector; the robot covers these spaces as if it were as big as its detector range. Here we employ previous work in using Morse cell decompositions to cover unknown spaces.
A cell in this decomposition can be covered via simple back-and-forth motions, and coverage of the vast space is then reduced to ensuring that the robot visits each cell in the vast space. The second step considers the narrow or cluttered spaces where obstacles lie within detector range, and thus the detector "fills" the surrounding area. In this case, the robot can cover the cluttered space by simply following the generalized Voronoi diagram (GVD) of that space. In this paper, we introduce a hierarchical decomposition that combines the Morse decompositions and the GVDs to ensure that the robot indeed visits all vast, open, as well as narrow, cluttered, spaces. We show how to construct this decomposition online with sensor data that is accumulated while the robot enters the environment for the first time. --- paper_title: Contact sensor-based coverage of rectilinear environments paper_content: A variety of mobile robot tasks require complete coverage of an initially unknown environment, either as the entire task or as a way to generate a complete map for use during further missions. This is a problem known as sensor-based coverage, in which the robot's sensing is used to plan a path that reaches every point in the environment. A new algorithm, CC/sub R/, is presented here which works for robots with only contact sensing that operate in environments with rectilinear boundaries and obstacles. This algorithm uses a high-level rule-based feedback structure to direct coverage rather than a script in order to facilitate future extensions to a team of independent robots. The outline of a completeness proof of CC/sub R/ is also presented, which shows that it produces coverage of any of a large class of rectilinear environments. Implementation of CC/sub R/ in simulation is discussed, as well as the results of testing in a variety of world geometries and potential extensions to the algorithm. --- paper_title: Smooth coverage path planning and control of mobile robots based on high-resolution grid map representation paper_content: Abstract This paper presents a new approach to a time and energy efficient online complete coverage solution for a mobile robot. While most conventional approaches strive to reduce path overlaps, this work focuses on smoothing the coverage path to reduce accelerations and yet to increase the average velocity for faster coverage. The proposed algorithm adopts a high-resolution grid map representation to reduce directional constraints on path generation. Here, the free space is covered by three independent behaviors: spiral path tracking, wall following control, and virtual wall path tracking. Regarding the covered region as a virtual wall, all the three behaviors adopt a common strategy of following the (physical or virtual) wall or obstacle boundaries for close coverage. Wall following is executed by a sensor-based reactive path planning control process, whereas the spiral (filling) path and virtual wall path are first modeled by their relevant parametric curves and then tracked via dynamic feedback linearization. For complete coverage, these independent behaviors are linked through a new path linking strategy, called a coarse-to-fine constrained inverse distance transform (CFCIDT). CFCIDT reduces the computational cost compared to the conventional constrained inverse distance transform (CIDT), which applies a region growing starting from the current robot position to find the nearest unexplored cell as well as the shortest path to it while constraining the search space. 
As for experimental validation, performance of the proposed algorithm is compared to those of conventional coverage techniques to demonstrate its completeness of coverage, energy and time efficiency, and robustness to the environment shape or the initial robot pose. --- paper_title: Learning metric-topological maps for indoor mobile robot navigation paper_content: Abstract Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. --- paper_title: High resolution maps from wide angle sonar paper_content: We describe the use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot. A sonar range reading provides information concerning empty and occupied volumes in a cone (subtending 30 degrees in our case) in front of the sensor. The reading is modelled as probability profiles projected onto a rasterized map, where somewhere occupied and everywhere empty areas are represented. Range measurements from multiple points of view (taken from multiple sensors on the robot, and from the same sensors after robot moves) are systematically integrated in the map. Overlapping empty volumes re-inforce each other, and serve to condense the range of occupied volumes. The map definition improves as more readings are added. The final map shows regions probably occupied, probably unoccupied, and unknown areas. The method deals effectively with clutter, and can be used for motion planning and for extended landmark recognition. This system has been tested on the Neptune mobile robot at CMU. --- paper_title: Coverage for robotics – A survey of recent results paper_content: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. 
Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms. --- paper_title: Planning Paths of Complete Coverage of an Unstructured Environment by a Mobile Robot paper_content: Abs t rac t Much of the focus of the research effort in path planning for mobile robots has centred on the problem of finding a path from a start location to a goal location, while minimising one or more parameters such as length of path, energy consumption or journey time. A path of complete coverage is a planned path in which a robot sweeps all areas of free space in an environment in a systematic and efficient manner. Possible applications for paths of complete coverage include autonomous vacuum cleaners, lawn mowers, security robots, land mine detectors etc. This paper will present a solution to this problem based upon an extension to the distance transform path planning methodology. The solution has been implemented on the self-contained autonomous mobile robot called the Yamabico. --- paper_title: BSA: A Complete Coverage Algorithm paper_content: The Backtracking Spiral Algorithm (BSA) is a coverage strategy for mobile robots based on the use of spiral filling paths; in order to assure the completeness, unvisited regions are marked and covered by backtracking mechanism. The BSA basic algorithm is designed to work in an environment modeled by a coarse-grain grid. BSA has been extended to cover, not only the free cells, but also the partially occupied ones. In this paper, the concepts and algorithms used to extend BSA are introduced. The ideas used to extend BSA are generic, thus a similar approach can be used to extend most of the grid-based coverage algorithms. Finally, some simulation results that demonstrate that BSA performs a complete coverage are presented. --- paper_title: Spiral-STC: an on-line coverage algorithm of grid environments by a mobile robot paper_content: We describe an on-line sensor based algorithm for covering planar areas by a square-shaped tool attached to a mobile robot. Let D be the tool size. The algorithm, called Spiral-STC, incrementally subdivides the planar work-area into disjoint D-size cells, while following a spanning tree of the resulting grid. The algorithm covers general grid environments using a path whose length is at most (n + m)D, where n is the number of D-size cells and m /spl les/ n is the number of boundary cells, defined as cells that share at least one point with the grid boundary. We also report that any on-line coverage algorithm generates a covering path whose length is at least (2 - /spl epsiv/)l/sub opt/ in the worst case, where l/sub opt/ is the length of the optimal covering path. Since (n + m)D /spl les/ 2l/sub opt/, Spiral-STC is worst-case optimal. Moreover, m << n in practical environments, and the algorithm generates close-to-optimal covering paths in such environments. Simulation results demonstrate the spiral-like covering patterns typical to the algorithm. --- paper_title: Real-Time Planning for Covering an Initially-Unknown Spatial Environment paper_content: We consider the problem of planning, on the fly, a path whereby a robotic vehicle will cover every point in an initially unknown spatial environment. 
We describe four strategies (Iterated WaveFront, Greedy-Scan, Delayed GreedyScan and Closest-First Scan) for generating cost-effective coverage plans in real time for unknown environments. We give theorems showing the correctness of our planning strategies. Our experiments demonstrate that some of these strategies work significantly better than others, and that the best ones work very well; e.g., in environments having an average of 64,000 locations for the robot to cover, the best strategy returned plans with less than 6% redundant coverage, and took only an average of 0.1 milliseconds per action. --- paper_title: Online complete coverage path planning for mobile robots based on linked spiral paths using constrained inverse distance transform paper_content: This paper presents a sensor-based online coverage path planning algorithm guaranteeing a complete coverage of unstructured planar environments by a mobile robot. The proposed complete coverage algorithm abstracts the environment as a union of robot-sized cells and then uses a spiral filling rule. It can be largely classified as an approximate cellular decomposition approach as defined by Choset. In this paper, we first propose a special map coordinate assignment scheme based on active wall-finding using the history of sensor readings, which can drastically reduce the number of turns on the generated coverage path. Next, we develop an efficient path planner to link the simple spiral paths using the constrained inverse distance transform that we introduced the first time. This planner selects the next target cell which is at the minimal path length away from the current cell among the remaining non-contiguous uncovered cells while at the same time, finding the path to this target to save both the memory and time which are important concern in embedded robotics. Experiments on both simulated and real cleaning robots demonstrate the practical efficiency and robustness of the proposed algorithm. --- paper_title: A neural network approach to complete coverage path planning paper_content: Complete coverage path planning requires the robot path to cover every part of the workspace, which is an essential issue in cleaning robots and many other robotic applications such as vacuum robots, painter robots, land mine detectors, lawn mowers, automated harvesters, and window cleaners. In this paper, a novel neural network approach is proposed for complete coverage path planning with obstacle avoidance of cleaning robots in nonstationary environments. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation derived from Hodgkin and Huxley's (1952) membrane equation. There are only local lateral connections among neurons. The robot path is autonomously generated from the dynamic activity landscape of the neural network and the previous robot location. The proposed model algorithm is computationally simple. Simulation results show that the proposed model is capable of planning collision-free complete coverage robot paths. --- paper_title: A quantitative description of membrane current and its application to conduction and excitation in nerve paper_content: This article concludes a series of papers concerned with the flow of electric current through the surface membrane of a giant nerve fibre (Hodgkinet al., 1952,J. Physiol. 116, 424–448; Hodgkin and Huxley, 1952,J. Physiol. 116, 449–566). 
Its general object is to discuss the results of the preceding papers (Section 1), to put them into mathematical form (Section 2) and to show that they will account for conduction and excitation in quantitative terms (Sections 3–6). --- paper_title: A Complete Coverage Path Planning Method for Mobile Robot in Uncertain Environments paper_content: In this paper, a novel complete coverage path planning method based on the biologically inspired neural networks, rolling path planning and heuristic searching approach is presented for mobile robot motion planning with obstacles avoidance. The biologically inspired neural network is used to model the environment and calculate the environment information, while the rolling planning technique and the heuristic searching algorithm are utilized for the path planning,. Simulation studies show that the proposed method is very effective for the dynamic uncertain environments. --- paper_title: A Bioinspired Neural Network for Real-Time Concurrent Map Building and Complete Coverage Robot Navigation in Unknown Environments paper_content: Complete coverage navigation (CCN) requires a special type of robot path planning, where the robots should pass every part of the workspace. CCN is an essential issue for cleaning robots and many other robotic applications. When robots work in unknown environments, map building is required for the robots to effectively cover the complete workspace. Real-time concurrent map building and complete coverage robot navigation are desirable for efficient performance in many applications. In this paper, a novel neural-dynamics-based approach is proposed for real-time map building and CCN of autonomous mobile robots in a completely unknown environment. The proposed model is compared with a triangular-cell-map-based complete coverage path planning method (Oh et al., 2004) that combines distance transform path planning, wall-following algorithm, and template-based technique. The proposed method does not need any templates, even in unknown environments. A local map composed of square or rectangular cells is created through the neural dynamics during the CCN with limited sensory information. From the measured sensory information, a map of the robot's immediate limited surroundings is dynamically built for the robot navigation. In addition, square and rectangular cell map representations are proposed for real-time map building and CCN. Comparison studies of the proposed approach with the triangular-cell-map-based complete coverage path planning approach show that the proposed method is capable of planning more reasonable and shorter collision-free complete coverage paths in unknown environments. --- paper_title: Graph Planning for Environmental Coverage paper_content: Tasks such as street mapping and security surveillance seek a route that traverses a given space to perform a function. These task functions may involve mapping the space for accurate modeling, sensing the space for unusual activity, or searching the space for objects. When these tasks are performed autonomously by robots, the constraints of the environment must be considered in order to generate more feasible paths. Additionally, performing these tasks in the real world presents the challenge of operating in dynamic, changing environments. ::: This thesis addresses the problem of effective graph coverage with environmental constraints and incomplete prior map information. Prior information about the environment is assumed to be given in the form of a graph. 
We seek a solution that effectively covers the graph while accounting for space restrictions and online changes. For real-time applications, we seek a complete but efficient solution that has fast replanning capabilities. ::: For this work, we model the set of coverage problems as arc routing problems. Although these routing problems are generally NP-hard, our approach aims for optimal solutions through the use of low-complexity algorithms in a branch-and-bound framework when time permits and approximations when time restrictions apply. Additionally, we account for environmental constraints by embedding those constraints into the graph. In this thesis, we present algorithms that address the multi-dimensional routing problem and its subproblems and evaluate them on both computer-generated and physical road network data. --- paper_title: Coverage for robotics – A survey of recent results paper_content: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms. --- paper_title: Uniform Coverage of Automotive Surface Patches paper_content: In spray painting applications, it is essential to generate a spray gun trajectory such that the entire surface is completely covered and receives an acceptably uniform layer of paint deposition; we call this the “uniform coverage” problem. The uniform coverage problem is challenging because the atomizer emits a non-trivial paint distribution, thus making the relationships between the spray gun trajectory and the deposition uniformity complex. To understand the key issues involved in uniform coverage, we consider surface patches that are geodesically convex and topologically simple as representative of subsets of realistic automotive surfaces. In addition to ensuring uniform paint deposition on the surface, our goal is to also minimize the associated process cycle time and paint waste. Based on the relationships between the spray gun trajectory and the output characteristics (i.e., uniformity, cycle time and paint waste), our approach decomposes the coverage trajectory generation problem into three subpro... --- paper_title: Coverage path planning on three-dimensional terrain for arable farming paper_content: Field operations should be done in a manner that minimizes time and travels over the field surface and is coordinated with topographic land features. Automated path planning can help to find the best coverage path so that the field operation costs can be minimized. Intelligent algorithms are desired for both two-dimensional (2D) and three-dimensional (3D) terrain field coverage path planning. 
The algorithm of generating an optimized full coverage pattern for a given 2D planar field by using boustrophedon paths has been investigated and reported before. However, a great proportion of farms have rolling terrains, which have a considerable influence on the design of coverage paths. Coverage path planning in 3D space has a great potential to further optimize field operations. This work addressed four critical tasks: terrain modeling and representation, coverage cost analysis, terrain decomposition, and the development of optimized path searching algorithm. The developed algorithms and methods have been successfully implemented and tested using 3D terrain maps of farm fields with various topographic features. Each field was decomposed into subregions based on its terrain features. A recommended “seed curve” based on a customized cost function was searched for each subregion, and parallel coverage paths were generated by offsetting the found “seed curve” toward its two sides until the whole region was completely covered. Compared with the 2D planning results, the experimental results of 3D coverage path planning showed its superiority in reducing both headland turning cost and soil erosion cost. On the tested fields, on average the 3D planning algorithm saved 10.3% on headland turning cost, 24.7% on soil erosion cost, 81.2% on skipped area cost, and 22.0% on the weighted sum of these costs, where their corresponding weights were 1, 1, and 0.5, respectively. © 2011 Wiley Periodicals, Inc. © 2011 Wiley Periodicals, Inc. --- paper_title: Sampling-based coverage path planning for inspection of complex structures paper_content: We present several new contributions in sampling-based coverage path planning, the task of finding feasible paths that give 100% sensor coverage of complex structures in obstacle-filled and visually occluded environments. First, we establish a framework for analyzing the probabilistic completeness of a sampling-based coverage algorithm, and derive results on the completeness and convergence of existing algorithms. Second, we introduce a new algorithm for the iterative improvement of a feasible coverage path; this relies on a sampling-based subroutine that makes asymptotically optimal local improvements to a feasible coverage path based on a strong generalization of the RRT* algorithm. We then apply the algorithm to the real-world task of autonomous in-water ship hull inspection. We use our improvement algorithm in conjunction with redundant roadmap coverage planning algorithm to produce paths that cover complex 3D environments with unprecedented efficiency. --- paper_title: Asymptotically optimal inspection planning using systems with differential constraints paper_content: This paper proposes a new inspection planning algorithm, called Random Inspection Tree Algorithm (RITA). Given a perfect model of a structure, sensor specifications, robot's dynamics, and an initial configuration of a robot, RITA computes the optimal inspection trajectory that observes all points on the structure. Many inspection planning algorithms have been proposed, most of them consist of two sequential steps. In the first step, they compute a small set of observation points such that each point on the structure is visible. In the second step, they compute the shortest trajectory to visit all observation points at least once. The robot's kinematic and dynamic constraints are taken into account only in the second step. 
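The 2D boustrophedon pattern referred to above can be sketched with a few lines of waypoint generation. The rectangular field, the lane spacing equal to one tool width, and the function name below are illustrative assumptions rather than the cited planner.

```python
def boustrophedon_waypoints(width, height, tool_width):
    """Back-and-forth lane waypoints covering a width x height rectangle.

    The field is swept in lanes parallel to the y-axis, spaced one tool
    width apart; the numbers in the demo are illustrative only.
    """
    waypoints = []
    x = tool_width / 2.0
    going_up = True
    while x <= width - tool_width / 2.0 + 1e-9:
        y_start, y_end = (0.0, height) if going_up else (height, 0.0)
        waypoints.append((x, y_start))
        waypoints.append((x, y_end))
        going_up = not going_up          # alternate sweep direction each lane
        x += tool_width
    return waypoints

if __name__ == "__main__":
    for wp in boustrophedon_waypoints(5.0, 10.0, 1.0):
        print(wp)
```

Offsetting a "seed curve" on rolling terrain, as in the 3D planner described above, generalizes this same lane-by-lane idea to non-planar surfaces.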
Thus, when the robot has differential constraints and operates in cluttered environments, the observation points may be difficult or even infeasible to reach. To alleviate this difficulty, RITA computes both observation points and the trajectory to visit the observation points simultaneously. RITA uses sampling-based techniques to find admissible trajectories with decreasing cost. Simulation results for 2-D environments are promising. Furthermore, we present analysis on the probabilistic completeness and asymptotic optimality of our algorithm. --- paper_title: Robot Motion Planning paper_content: 1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References. --- paper_title: Optimal complete terrain coverage using an Unmanned Aerial Vehicle paper_content: We present the adaptation of an optimal terrain coverage algorithm for the aerial robotics domain. The general strategy involves computing a trajectory through a known environment with obstacles that ensures complete coverage of the terrain while minimizing path repetition. We introduce a system that applies and extends this generic algorithm to achieve automated terrain coverage using an aerial vehicle. Extensive experimental results in simulation validate the presented system, along with data from over 100 kilometers of successful coverage flights using a fixed-wing aircraft. --- paper_title: Optimal Line-sweep-based Decompositions for Coverage Algorithms paper_content: Robotic coverage is the problem of moving a sensor or actuator over all points in given region. Ultimately, we want a coverage path that minimizes some cost such as time. We take the approach of decomposing the coverage region into subregions, selecting a sequence of those subregions, and then generating a path that covers each subregion in turn. We focus on generating decompositions based upon the planar line sweep. After a general overview of the coverage problem, we describe how our assumptions lead to the optimality criterion of minimizing the sum of subregion altitudes (which are measured relative to the sweep direction assigned to that subregion). For a line-sweep decomposition, the sweep direction is the same for all subregions. We describe how to find the optimal sweep direction for convex polygonal worlds. We then introduce the minimal sum of altitudes (MSA) decomposition in which we may assign a different sweep direction to each subregion. This decomposition is better for generating an optimal coverage path. We describe a method based on multiple line sweeps and dynamic programming to generate the MSA decomposition. --- paper_title: Sampling-based coverage path planning for inspection of complex structures paper_content: We present several new contributions in sampling-based coverage path planning, the task of finding feasible paths that give 100% sensor coverage of complex structures in obstacle-filled and visually occluded environments. First, we establish a framework for analyzing the probabilistic completeness of a sampling-based coverage algorithm, and derive results on the completeness and convergence of existing algorithms. 
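For the line-sweep decomposition idea described above, the altitude of a convex region with respect to a candidate sweep direction can be evaluated directly from vertex projections. The sketch below brute-forces the edge normals of a convex polygon; the polygon and the exact cost definition are illustrative assumptions, not the cited formulation.

```python
import math

def altitude(vertices, direction):
    """Extent of a convex polygon along a unit direction vector."""
    proj = [vx * direction[0] + vy * direction[1] for vx, vy in vertices]
    return max(proj) - min(proj)

def best_sweep_direction(vertices):
    """Sweep direction minimizing the polygon's altitude.

    Candidates are the edge normals: for a convex polygon the minimum
    width is always attained perpendicular to one of its edges.
    """
    best = None
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        norm = math.hypot(ex, ey)
        if norm == 0:
            continue
        normal = (-ey / norm, ex / norm)          # perpendicular to this edge
        alt = altitude(vertices, normal)
        if best is None or alt < best[0]:
            best = (alt, normal)
    return best  # (minimal altitude, sweep direction)

if __name__ == "__main__":
    convex_field = [(0, 0), (6, 0), (6, 2), (4, 4), (0, 4)]
    print(best_sweep_direction(convex_field))
```

The returned vector is the direction across which the polygon is thinnest, so coverage lanes would be laid out perpendicular to it to minimize the number of turns.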
Second, we introduce a new algorithm for the iterative improvement of a feasible coverage path; this relies on a sampling-based subroutine that makes asymptotically optimal local improvements to a feasible coverage path based on a strong generalization of the RRT* algorithm. We then apply the algorithm to the real-world task of autonomous in-water ship hull inspection. We use our improvement algorithm in conjunction with redundant roadmap coverage planning algorithm to produce paths that cover complex 3D environments with unprecedented efficiency. --- paper_title: Optimal coverage of a known arbitrary environment paper_content: The problem of coverage of known space by a mobile robot has many applications. Of particular interest is providing a solution that guarantees the complete coverage of the free space by traversing an optimal path, in terms of the distance travelled. In this paper we introduce a new algorithm based on the Boustrophedon cellular decomposition. The presented algorithm encodes the areas (cells) to be covered as edges of the Reeb graph. The optimal solution to the Chinese Postman Problem (CPP) is used to calculate an Euler tour, which guarantees complete coverage of the available free space while minimizing the path of the robot. In addition, we extend the classical solution of the CPP to account for the entry point of the robot for cell coverage by changing the weights of the Reeb graph edges. Proof of correctness is provided together with experimental results in different environments. --- paper_title: Optimal area covering using genetic algorithms paper_content: Path planning problems involve computing or finding a collision free path between two positions. A special kind of path planning is complete coverage path planning, where a robot sweeps all area of free space in an environment. There are different methods to cover the complete area; however, they are not designed to optimize the process. This paper proposes a novel method of complete coverage path planning based on genetic algorithms. In order to check the viability of this approach the optimal path is tested in a virtual environment. The simulation results confirm the feasibility of this method. --- paper_title: Robust coverage by a mobile robot of a planar workspace paper_content: In this paper, we suggest a new way to plan coverage paths for a mobile robot whose position and velocity are subject to bounded error. Most prior approaches assume a probabilistic model of uncertainty and maximize the expected value of covered area. We assume a worst-case model of uncertainty and-for a particular choice of coverage path-are still able to guarantee complete coverage. We begin by considering the special case in which the region to be covered is a single point. The machinery we develop to express and solve this problem immediately extends to guarantee coverage of a small subset in the workspace. Finally, we use this subset as a sort of virtual coverage implement, achieving complete coverage of the entire workspace by tiling copies of the subset along boustrophedon paths. --- paper_title: Leap-Frog Path Design for Multi-Robot Cooperative Localization paper_content: We present a “leap-frog” path designed for a team of three robots performing cooperative localization. Two robots act as stationary measurement beacons while the third moves in a path that provides informative measurements. After completing the move, the roles of each robot are switched and the path is repeated. 
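The Chinese Postman step mentioned in the Reeb-graph coverage abstract above can be prototyped with networkx (assumed available): pair the odd-degree nodes with a minimum-cost matching over shortest-path distances, duplicate those paths, and read off an Euler circuit. The toy cell graph and the function name are hypothetical.

```python
import itertools
import networkx as nx

def chinese_postman_tour(G, start):
    """Closed tour visiting every edge of G at least once (undirected CPP).

    Odd-degree nodes are paired via a minimum-cost perfect matching over
    shortest-path distances, the matched paths are duplicated, and an Euler
    circuit of the augmented multigraph is extracted.
    """
    M = nx.MultiGraph(G)
    odd = [v for v, deg in M.degree() if deg % 2 == 1]
    if odd:
        # Complete graph on the odd-degree nodes, weighted by shortest paths.
        K = nx.Graph()
        for u, v in itertools.combinations(odd, 2):
            K.add_edge(u, v, cost=nx.shortest_path_length(G, u, v, weight="weight"))
        # Min-cost perfect matching, obtained as a max-weight matching on
        # inverted costs (weights kept positive).
        big = 1.0 + max(d["cost"] for _, _, d in K.edges(data=True))
        for _, _, d in K.edges(data=True):
            d["inv"] = big - d["cost"]
        for u, v in nx.max_weight_matching(K, maxcardinality=True, weight="inv"):
            path = nx.shortest_path(G, u, v, weight="weight")
            for a, b in zip(path, path[1:]):
                M.add_edge(a, b, weight=G[a][b]["weight"])
    tour = [u for u, _ in nx.eulerian_circuit(M, source=start)]
    return tour + [start]

if __name__ == "__main__":
    cells = nx.Graph()   # toy graph: nodes stand for critical points, edges for cells
    cells.add_weighted_edges_from([
        ("p1", "p2", 3.0), ("p2", "p3", 2.0), ("p3", "p4", 4.0),
        ("p1", "p3", 5.0), ("p2", "p4", 6.0),
    ])
    print(chinese_postman_tour(cells, "p1"))
```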
We demonstrate accurate localization using this path via a coverage experiment in which three robots successfully cover a 20 m × 30 m area. We report an approximate positional drift of 1.1 m per robot over a travel distance of 140 m. To our knowledge, this is one of the largest successful GPS-denied coverage experiments to date. --- paper_title: Robust Area Coverage using Hybrid Control paper_content: Efficient coverage of an area by a mobile vehicle is a common challenge in many applications. Examples include automatic lawn mowers and vacuum cleaning robots. In this paper a vehicle with uncertain heading is studied. Five control strategies based on position measurements available only when the vehicle intersects the boundary of the area are compared. It is shown that the performance depends heavily on the heading error. The results are evaluated through extensive Monte Carlo simulations. An experimental implementation on a mobile robot is also presented. --- paper_title: Coverage for robotics – A survey of recent results paper_content: This paper surveys recent results in coverage path planning, a new path planning approach that determines a path for a robot to pass over all points in its free space. Unlike conventional point-to-point path planning, coverage path planning enables applications such as robotic de-mining, snow removal, lawn mowing, car-body painting, machine milling, etc. This paper will focus on coverage path planning algorithms for mobile robots constrained to operate in the plane. These algorithms can be classified as either heuristic or complete. It is our conjecture that most complete algorithms use an exact cellular decomposition, either explicitly or implicitly, to achieve coverage. Therefore, this paper organizes the coverage algorithms into four categories: heuristic, approximate, partial-approximate and exact cellular decompositions. The final section describes some provably complete multi-robot coverage algorithms. --- paper_title: Active Visual SLAM with Exploration for Autonomous Underwater Navigation paper_content: One of the major challenges in the field of underwater robotics is the opacity of the water medium to radio frequency transmission modes, which precludes the use of a global positioning system (GPS) and high speed radio communication in underwater navigation and mapping applications. One approach to underwater robotics that overcomes this limitation is vision-based simultaneous localization and mapping (SLAM), a framework that enables a robot to localize itself, while simultaneously building a map of an unknown environment. The SLAM algorithm provides a probabilistic map that contains the estimated state of the system, including a map of the environment and the pose of the robot. Because the quality of vision-based navigation varies spatially within the environment, the performance of visual SLAM strongly depends on the path and motion that the robot follows. While traditionally treated as two separate problems, SLAM and path planning are indeed interrelated: the performance of SLAM depends significantly on the environment and motion; however, control of the robot motion fully depends on the information from SLAM. Therefore, an integrated SLAM control scheme is needed: one that can direct motion for better localization and mapping, and thereby provide more accurate state information back to the controller.
This thesis develops perception-driven control, an integrated SLAM and path planning framework that improves the performance of visual SLAM in an informative and efficient way by jointly considering the reward predicted by a candidate camera measurement, along with its likelihood of success based upon visual saliency. The proposed control architecture identifies highly informative candidate locations for SLAM loop-closure that are also visually distinctive, such that a camera-derived pose-constraint is probable. Results are shown for autonomous underwater hull inspection experiments using the Bluefin Robotics Hovering Autonomous Underwater Vehicle (HAUV). --- paper_title: Exploiting critical points to reduce positioning error for sensor-based navigation paper_content: This paper presents a planner that determines a path such that the robot does not have to heavily rely on odometry to reach its goal. The planner determines a sequence of obstacle boundaries that the robot must follow to reach the goal. Since this planner is used in the context of a coverage algorithm already presented by the authors, we assume that the free space is already, completely or partially, represented by a cellular decomposition whose cell boundaries are defined by critical points of Morse functions (isolated points at obstacle boundaries). The topological relationship among the cells is represented by a graph where nodes are the critical points and edges connect the nodes that define a common cell (i.e., the edges correspond to the cells themselves). A search of this graph yields a sequence of cells that directs the robot from a start to a goal. Once a sequence of cells and critical points are determined, a robot traverses each cell by mainly following the boundary of the cell along the obstacle boundaries and minimizes the accumulated dead-reckoning error at the intermediate critical points. This allows the robot to reach the goal robustly even in the presence of dead-reckoning error. --- paper_title: Efficient Boustrophedon Multi-Robot Coverage: an algorithmic approach paper_content: This paper presents algorithmic solutions for the complete coverage path planning problem using a team of mobile robots. Multiple robots decrease the time to complete the coverage, but maximal efficiency is only achieved if the number of regions covered multiple times is minimized. A set of multi-robot coverage algorithms is presented that minimize repeat coverage. The algorithms use the same planar cell-based decomposition as the Boustrophedon single robot coverage algorithm, but provide extensions to handle how robots cover a single cell, and how robots are allocated among cells. Specifically, for the coverage task our choice of multi-robot policy strongly depends on the type of communication that exists between the robots. When the robots operate under the line-of-sight communication restriction, keeping them as a team helps to minimize repeat coverage. When communication between the robots is available without any restrictions, the robots are initially distributed through space, and each one is allocated a virtually-bounded area to cover. A greedy auction mechanism is used for task/cell allocation among the robots. Experimental results from different simulated and real environments that illustrate our approach for different communication conditions are presented. 
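A schematic version of the greedy auction allocation described in the multi-robot Boustrophedon abstract above: each open cell is auctioned, every robot bids its travel cost, and the lowest bid wins. The straight-line bid, the assumption that a winner relocates to the won cell, and all identifiers are illustrative, not the paper's exact mechanism.

```python
import math

def greedy_auction(robot_positions, cell_centroids):
    """Auction each unallocated cell to the robot with the lowest bid.

    Bids are straight-line distances from a robot's current position to a
    cell centroid; after winning, the robot is assumed to relocate there.
    """
    positions = dict(robot_positions)            # robot id -> (x, y)
    assignments = {rid: [] for rid in positions}
    unallocated = dict(cell_centroids)           # cell id -> (x, y)
    while unallocated:
        # Collect one bid per robot for every open cell and take the lowest.
        _, winner, cell = min(
            ((math.dist(positions[rid], xy), rid, cid)
             for rid in positions for cid, xy in unallocated.items()),
            key=lambda t: t[0])
        assignments[winner].append(cell)
        positions[winner] = unallocated.pop(cell)  # winner moves to the cell
    return assignments

if __name__ == "__main__":
    robots = {"r1": (0.0, 0.0), "r2": (9.0, 9.0)}
    cells = {"A": (1.0, 1.0), "B": (8.0, 8.0), "C": (5.0, 2.0), "D": (2.0, 7.0)}
    print(greedy_auction(robots, cells))
```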
--- paper_title: Graph Planning for Environmental Coverage paper_content: Tasks such as street mapping and security surveillance seek a route that traverses a given space to perform a function. These task functions may involve mapping the space for accurate modeling, sensing the space for unusual activity, or searching the space for objects. When these tasks are performed autonomously by robots, the constraints of the environment must be considered in order to generate more feasible paths. Additionally, performing these tasks in the real world presents the challenge of operating in dynamic, changing environments. ::: This thesis addresses the problem of effective graph coverage with environmental constraints and incomplete prior map information. Prior information about the environment is assumed to be given in the form of a graph. We seek a solution that effectively covers the graph while accounting for space restrictions and online changes. For real-time applications, we seek a complete but efficient solution that has fast replanning capabilities. ::: For this work, we model the set of coverage problems as arc routing problems. Although these routing problems are generally NP-hard, our approach aims for optimal solutions through the use of low-complexity algorithms in a branch-and-bound framework when time permits and approximations when time restrictions apply. Additionally, we account for environmental constraints by embedding those constraints into the graph. In this thesis, we present algorithms that address the multi-dimensional routing problem and its subproblems and evaluate them on both computer-generated and physical road network data. --- paper_title: Cooperative Cleaners: A Study in Ant Robotics paper_content: In the world of living creatures, simple-minded animals often cooperate to achieve common goals with amazing performance. One can consider this idea in the context of robotics, and suggest models for programming goal-oriented behavior into the members of a group of simple robots lacking global supervision. This can be done by controlling the local interactions between the robot agents, to have them jointly carry out a given mission. As a test case we analyze the problem of many simple robots cooperating to clean the dirty floor of a non-convex region in Z2, using the dirt on the floor as the main means of inter-robot communication. --- paper_title: A model for terrain coverage inspired by ant's alarm pheromones paper_content: When looking at science and technology today, we find a recurrent problem to many fields: how to cover a search space consistently and uniformly. This problem is encountered in robotics (searching for targets), optimization (searching for solutions), mathematics and computer science (graph traversals), and even in software engineering (the main motivation for this research). In insect societies, and in particular ant colonies, one can find the concept of alarm pheromones used to indicate an important event to the society (e.g. a threat). Alarm pheromones enable the society to have a uniform spread of its individuals, probably as a survival mechanism --- the more uniform the spread the better the changes of survival at the colony level. This paper proposes a model of this ant behavior which can be used to solve the aforementioned problem. The model, called ALARM is inspired primarily by ACO and from observations of ants alarm behavior. We compare the model with a random walk, to demonstrate a significant improvement over this approach. 
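The ant-inspired schemes above (and the evaporating-trace model that follows) share one mechanism: agents mark the cells they visit, the marks decay over time, and each agent prefers the neighbouring cell with the weakest mark. The following sketch implements that shared idea under assumed parameters; it is not a reproduction of any of the cited algorithms.

```python
import random
import numpy as np

def trace_coverage(shape, n_agents=3, evaporation=0.95, steps=400, seed=0):
    """Multi-agent grid coverage guided only by evaporating traces.

    Each agent deposits a unit of 'scent' on its cell, all traces decay by a
    fixed factor every step, and an agent moves to the 4-neighbour with the
    weakest trace (ties broken at random). Parameters are illustrative.
    """
    rng = random.Random(seed)
    trace = np.zeros(shape)
    covered = np.zeros(shape, dtype=bool)
    agents = [(rng.randrange(shape[0]), rng.randrange(shape[1])) for _ in range(n_agents)]
    for _ in range(steps):
        trace *= evaporation                       # evaporation of all traces
        for i, (r, c) in enumerate(agents):
            trace[r, c] += 1.0                     # deposit on the current cell
            covered[r, c] = True
            moves = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < shape[0] and 0 <= c + dc < shape[1]]
            weakest = min(trace[m] for m in moves)
            agents[i] = rng.choice([m for m in moves if trace[m] == weakest])
        if covered.all():
            break
    return covered.mean()                          # fraction of cells visited

if __name__ == "__main__":
    print(f"covered fraction: {trace_coverage((15, 15)):.2f}")
```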
--- paper_title: Distributed covering by ant-robots using evaporating traces paper_content: We investigate the ability of a group of robots that communicate by leaving traces to perform the task of cleaning the floor of an un-mapped building, or any task that requires the traversal of an unknown region. More specifically, we consider robots which leave chemical odour traces that evaporate with time, and are able to evaluate the strength of smell at every point they reach, with some measurement error. Our abstract model is a decentralized multi-agent adaptive system with a shared memory, moving on a graph whose vertices are the floor-tiles. We describe three methods of covering a graph in a distributed fashion, using smell traces that gradually vanish with time, and show that they all result in eventual task completion, two of them in a time polynomial in the number of tiles. Our algorithms can complete the traversal of the graph even if some of the agents die or the graph changes during the execution, as long as the graph stays connected. Another advantage of our agent interaction processes is the ability of agents to use noisy information at the cost of longer cover time. --- paper_title: Aerial remote sensing in agriculture: A practical approach to area coverage and path planning for fleets of mini aerial robots paper_content: In this paper, a system that allows applying precision agriculture techniques is described. The application is based on the deployment of a team of unmanned aerial vehicles that are able to take georeferenced pictures in order to create a full map by applying mosaicking procedures for postprocessing. The main contribution of this work is practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task partitioning manager, which is based on negotiation among the aerial vehicles, considering their state and capabilities. Once the individual tasks are assigned, an optimal path planning algorithm is in charge of determining the best path for each vehicle to follow. Also, a robust flight control based on the use of a control law that improves the maneuverability of the quadrotors has been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiations to final performance. These experiments also allowed testing control robustness under different weather conditions. © 2011 Wiley Periodicals, Inc. --- paper_title: Contact sensor-based coverage of rectilinear environments paper_content: A variety of mobile robot tasks require complete coverage of an initially unknown environment, either as the entire task or as a way to generate a complete map for use during further missions. This is a problem known as sensor-based coverage, in which the robot's sensing is used to plan a path that reaches every point in the environment. A new algorithm, CC_R, is presented here which works for robots with only contact sensing that operate in environments with rectilinear boundaries and obstacles. This algorithm uses a high-level rule-based feedback structure to direct coverage rather than a script in order to facilitate future extensions to a team of independent robots. The outline of a completeness proof of CC_R is also presented, which shows that it produces coverage of any of a large class of rectilinear environments.
Implementation of CC_R in simulation is discussed, as well as the results of testing in a variety of world geometries and potential extensions to the algorithm. --- paper_title: A topological coverage algorithm for mobile robots paper_content: In applications such as vacuum cleaning, painting, demining and foraging, a mobile robot must cover an unknown surface. The efficiency and completeness of coverage is improved via the construction of a map of covered regions while the robot covers the surface. Existing methods generally use grid maps, which are susceptible to odometry error and may require considerable memory and computation. This paper proposes a topological map and presents a coverage algorithm in which natural landmarks are added as nodes in a partial map. The completeness of the algorithm is argued. Simulation tests show over 99% of the surface is covered; 85% for real (Khepera) robot tests. The path length is about 10% worse than optimal in simulation tests, and about 20% worse than optimal for the real robot, which are within theoretical upper bounds for approximate solutions to traveling salesman based coverage problems. The proposed algorithm generates shorter paths and covers a wider variety of environments than topological coverage based on Morse decompositions. --- paper_title: Graph Planning for Environmental Coverage paper_content: Tasks such as street mapping and security surveillance seek a route that traverses a given space to perform a function. These task functions may involve mapping the space for accurate modeling, sensing the space for unusual activity, or searching the space for objects. When these tasks are performed autonomously by robots, the constraints of the environment must be considered in order to generate more feasible paths. Additionally, performing these tasks in the real world presents the challenge of operating in dynamic, changing environments. This thesis addresses the problem of effective graph coverage with environmental constraints and incomplete prior map information. Prior information about the environment is assumed to be given in the form of a graph. We seek a solution that effectively covers the graph while accounting for space restrictions and online changes. For real-time applications, we seek a complete but efficient solution that has fast replanning capabilities. For this work, we model the set of coverage problems as arc routing problems. Although these routing problems are generally NP-hard, our approach aims for optimal solutions through the use of low-complexity algorithms in a branch-and-bound framework when time permits and approximations when time restrictions apply. Additionally, we account for environmental constraints by embedding those constraints into the graph. In this thesis, we present algorithms that address the multi-dimensional routing problem and its subproblems and evaluate them on both computer-generated and physical road network data. --- paper_title: A Bioinspired Neural Network for Real-Time Concurrent Map Building and Complete Coverage Robot Navigation in Unknown Environments paper_content: Complete coverage navigation (CCN) requires a special type of robot path planning, where the robots should pass every part of the workspace. CCN is an essential issue for cleaning robots and many other robotic applications. When robots work in unknown environments, map building is required for the robots to effectively cover the complete workspace.
Real-time concurrent map building and complete coverage robot navigation are desirable for efficient performance in many applications. In this paper, a novel neural-dynamics-based approach is proposed for real-time map building and CCN of autonomous mobile robots in a completely unknown environment. The proposed model is compared with a triangular-cell-map-based complete coverage path planning method (Oh et al., 2004) that combines distance transform path planning, wall-following algorithm, and template-based technique. The proposed method does not need any templates, even in unknown environments. A local map composed of square or rectangular cells is created through the neural dynamics during the CCN with limited sensory information. From the measured sensory information, a map of the robot's immediate limited surroundings is dynamically built for the robot navigation. In addition, square and rectangular cell map representations are proposed for real-time map building and CCN. Comparison studies of the proposed approach with the triangular-cell-map-based complete coverage path planning approach show that the proposed method is capable of planning more reasonable and shorter collision-free complete coverage paths in unknown environments. --- paper_title: Planning Paths of Complete Coverage of an Unstructured Environment by a Mobile Robot paper_content: Much of the focus of the research effort in path planning for mobile robots has centred on the problem of finding a path from a start location to a goal location, while minimising one or more parameters such as length of path, energy consumption or journey time. A path of complete coverage is a planned path in which a robot sweeps all areas of free space in an environment in a systematic and efficient manner. Possible applications for paths of complete coverage include autonomous vacuum cleaners, lawn mowers, security robots, land mine detectors etc. This paper will present a solution to this problem based upon an extension to the distance transform path planning methodology. The solution has been implemented on the self-contained autonomous mobile robot called the Yamabico. --- paper_title: Spiral-STC: an on-line coverage algorithm of grid environments by a mobile robot paper_content: We describe an on-line sensor based algorithm for covering planar areas by a square-shaped tool attached to a mobile robot. Let D be the tool size. The algorithm, called Spiral-STC, incrementally subdivides the planar work-area into disjoint D-size cells, while following a spanning tree of the resulting grid. The algorithm covers general grid environments using a path whose length is at most (n + m)D, where n is the number of D-size cells and m ≤ n is the number of boundary cells, defined as cells that share at least one point with the grid boundary. We also report that any on-line coverage algorithm generates a covering path whose length is at least (2 - ε)l_opt in the worst case, where l_opt is the length of the optimal covering path. Since (n + m)D ≤ 2l_opt, Spiral-STC is worst-case optimal. Moreover, m << n in practical environments, and the algorithm generates close-to-optimal covering paths in such environments. Simulation results demonstrate the spiral-like covering patterns typical to the algorithm.
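A compressed illustration of the distance-transform planning methodology mentioned a few abstracts above: compute a wavefront of grid distances from the goal, then repeatedly step to the unvisited neighbour with the largest transform value so the sweep tends to finish near the goal. The relocation behaviour when the robot gets boxed in is deliberately omitted, so this is a simplified variant rather than the cited planner.

```python
from collections import deque

def distance_transform(grid, goal):
    """Breadth-first wavefront of grid distances to the goal (obstacles stay None)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 0 and dist[rr][cc] is None:
                dist[rr][cc] = dist[r][c] + 1
                queue.append((rr, cc))
    return dist

def coverage_by_distance_transform(grid, start, goal):
    """Visit unvisited neighbours in order of decreasing transform value.

    The robot prefers cells far from the goal so that it tends to end there;
    when boxed in it simply stops (a complete planner would relocate to the
    nearest unvisited cell instead).
    """
    dist = distance_transform(grid, goal)
    pos, visited, path = start, {start}, [start]
    while True:
        moves = [(pos[0] + dr, pos[1] + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cands = [m for m in moves
                 if m not in visited
                 and 0 <= m[0] < len(grid) and 0 <= m[1] < len(grid[0])
                 and dist[m[0]][m[1]] is not None]
        if not cands:
            return path
        pos = max(cands, key=lambda m: dist[m[0]][m[1]])
        visited.add(pos)
        path.append(pos)

if __name__ == "__main__":
    world = [[0] * 8 for _ in range(6)]
    world[2][3] = world[3][3] = 1                  # a two-cell obstacle
    print(coverage_by_distance_transform(world, (0, 0), (5, 7)))
```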
--- paper_title: A neural network approach to complete coverage path planning paper_content: Complete coverage path planning requires the robot path to cover every part of the workspace, which is an essential issue in cleaning robots and many other robotic applications such as vacuum robots, painter robots, land mine detectors, lawn mowers, automated harvesters, and window cleaners. In this paper, a novel neural network approach is proposed for complete coverage path planning with obstacle avoidance of cleaning robots in nonstationary environments. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation derived from Hodgkin and Huxley's (1952) membrane equation. There are only local lateral connections among neurons. The robot path is autonomously generated from the dynamic activity landscape of the neural network and the previous robot location. The proposed model algorithm is computationally simple. Simulation results show that the proposed model is capable of planning collision-free complete coverage robot paths. --- paper_title: Sampling-based coverage path planning for inspection of complex structures paper_content: We present several new contributions in sampling-based coverage path planning, the task of finding feasible paths that give 100% sensor coverage of complex structures in obstacle-filled and visually occluded environments. First, we establish a framework for analyzing the probabilistic completeness of a sampling-based coverage algorithm, and derive results on the completeness and convergence of existing algorithms. Second, we introduce a new algorithm for the iterative improvement of a feasible coverage path; this relies on a sampling-based subroutine that makes asymptotically optimal local improvements to a feasible coverage path based on a strong generalization of the RRT* algorithm. We then apply the algorithm to the real-world task of autonomous in-water ship hull inspection. We use our improvement algorithm in conjunction with redundant roadmap coverage planning algorithm to produce paths that cover complex 3D environments with unprecedented efficiency. --- paper_title: Coverage path planning on three-dimensional terrain for arable farming paper_content: Field operations should be done in a manner that minimizes time and travels over the field surface and is coordinated with topographic land features. Automated path planning can help to find the best coverage path so that the field operation costs can be minimized. Intelligent algorithms are desired for both two-dimensional (2D) and three-dimensional (3D) terrain field coverage path planning. The algorithm of generating an optimized full coverage pattern for a given 2D planar field by using boustrophedon paths has been investigated and reported before. However, a great proportion of farms have rolling terrains, which have a considerable influence on the design of coverage paths. Coverage path planning in 3D space has a great potential to further optimize field operations. This work addressed four critical tasks: terrain modeling and representation, coverage cost analysis, terrain decomposition, and the development of optimized path searching algorithm. The developed algorithms and methods have been successfully implemented and tested using 3D terrain maps of farm fields with various topographic features. Each field was decomposed into subregions based on its terrain features. 
A recommended “seed curve” based on a customized cost function was searched for each subregion, and parallel coverage paths were generated by offsetting the found “seed curve” toward its two sides until the whole region was completely covered. Compared with the 2D planning results, the experimental results of 3D coverage path planning showed its superiority in reducing both headland turning cost and soil erosion cost. On the tested fields, on average the 3D planning algorithm saved 10.3% on headland turning cost, 24.7% on soil erosion cost, 81.2% on skipped area cost, and 22.0% on the weighted sum of these costs, where their corresponding weights were 1, 1, and 0.5, respectively. © 2011 Wiley Periodicals, Inc. © 2011 Wiley Periodicals, Inc. --- paper_title: Asymptotically optimal inspection planning using systems with differential constraints paper_content: This paper proposes a new inspection planning algorithm, called Random Inspection Tree Algorithm (RITA). Given a perfect model of a structure, sensor specifications, robot's dynamics, and an initial configuration of a robot, RITA computes the optimal inspection trajectory that observes all points on the structure. Many inspection planning algorithms have been proposed, most of them consist of two sequential steps. In the first step, they compute a small set of observation points such that each point on the structure is visible. In the second step, they compute the shortest trajectory to visit all observation points at least once. The robot's kinematic and dynamic constraints are taken into account only in the second step. Thus, when the robot has differential constraints and operates in cluttered environments, the observation points may be difficult or even infeasible to reach. To alleviate this difficulty, RITA computes both observation points and the trajectory to visit the observation points simultaneously. RITA uses sampling-based techniques to find admissible trajectories with decreasing cost. Simulation results for 2-D environments are promising. Furthermore, we present analysis on the probabilistic completeness and asymptotic optimality of our algorithm. --- paper_title: Cooperative Cleaners: A Study in Ant Robotics paper_content: In the world of living creatures, simple-minded animals often cooperate to achieve common goals with amazing performance. One can consider this idea in the context of robotics, and suggest models for programming goal-oriented behavior into the members of a group of simple robots lacking global supervision. This can be done by controlling the local interactions between the robot agents, to have them jointly carry out a given mission. As a test case we analyze the problem of many simple robots cooperating to clean the dirty floor of a non-convex region in Z2, using the dirt on the floor as the main means of inter-robot communication. --- paper_title: Efficient Boustrophedon Multi-Robot Coverage: an algorithmic approach paper_content: This paper presents algorithmic solutions for the complete coverage path planning problem using a team of mobile robots. Multiple robots decrease the time to complete the coverage, but maximal efficiency is only achieved if the number of regions covered multiple times is minimized. A set of multi-robot coverage algorithms is presented that minimize repeat coverage. 
The algorithms use the same planar cell-based decomposition as the Boustrophedon single robot coverage algorithm, but provide extensions to handle how robots cover a single cell, and how robots are allocated among cells. Specifically, for the coverage task our choice of multi-robot policy strongly depends on the type of communication that exists between the robots. When the robots operate under the line-of-sight communication restriction, keeping them as a team helps to minimize repeat coverage. When communication between the robots is available without any restrictions, the robots are initially distributed through space, and each one is allocated a virtually-bounded area to cover. A greedy auction mechanism is used for task/cell allocation among the robots. Experimental results from different simulated and real environments that illustrate our approach for different communication conditions are presented. ---
Title: A Survey on Coverage Path Planning for Robotics
Section 1: Introduction
Description 1: This section will provide an overview of Coverage Path Planning (CPP) for robotics, including its importance and applications across various robotic fields.
Section 2: Classical Exact Cellular Decomposition Methods
Description 2: Discusses methods that break the free space into simple, non-overlapping regions called cells and the basics of two notable approaches: Trapezoidal Decomposition and Boustrophedon Decomposition.
Section 3: Morse-based Cellular Decomposition
Description 3: This section covers Morse decomposition, an advanced cellular decomposition technique that includes handling non-polygonal obstacles and the associated on-line method for critical point detection.
Section 4: Landmark-based Topological Coverage
Description 4: Explains a coverage approach for mobile robots based on detecting natural landmarks to create an exact cellular decomposition called slice decomposition.
Section 5: Contact Sensor-based Coverage of Rectilinear Environments
Description 5: Details the CC_R algorithm for robots without range sensing capabilities, focusing on achieving coverage in unknown rectilinear environments.
Section 6: Grid-based Methods
Description 6: Reviews methods utilizing grid representations for the environment, including Wavefront Algorithm, Spiral-STC Algorithm, and neural network-based approaches.
Section 7: Graph-based Coverage
Description 7: Explores coverage algorithms suitable for environments represented as graphs, such as street or road networks, addressing issues like incomplete maps and environmental constraints.
Section 8: 3D Coverage
Description 8: Discusses various methods for 3-dimensional coverage including using planar algorithms in successive horizontal planes, Morse-based decomposition for 3D structures, and specific applications such as bathymetric surfaces.
Section 9: Optimal Coverage
Description 9: Reviews approaches to achieve optimal coverage paths in terms of path length and time to completion, suitable for known or partially known environments.
Section 10: Coverage under Uncertainty
Description 10: Focuses on strategies to reduce the effect of localization error while performing coverage, including SLAM-based approaches.
Section 11: Multi-robot Methods
Description 11: Discusses methods for utilizing multiple robots in CPP tasks, including strategies adapted from single-robot decomposition methods and bio-inspired approaches for multi-robot fleet coordination.
Section 12: Conclusion
Description 12: Summarizes the different CPP methodologies covered, their applications, advantages, and emerging techniques in the field, especially highlighting the potential of probabilistic sampling and SLAM integration in CPP.
Positioning Information Privacy in Intelligent Transportation Systems: An Overview and Future Perspective †
10
--- paper_title: UAV-Enabled Intelligent Transportation Systems for the Smart City: Applications and Challenges paper_content: There could be no smart city without a reliable and efficient transportation system. This necessity makes the ITS a key component of any smart city concept. While legacy ITS technologies are deployed worldwide in smart cities, enabling the next generation of ITS relies on effective integration of connected and autonomous vehicles, the two technologies that are under wide field testing in many cities around the world. Even though these two emerging technologies are crucial in enabling fully automated transportation systems, there is still a significant need to automate other road and transportation components. To this end, due to their mobility, autonomous operation, and communication/processing capabilities, UAVs are envisaged in many ITS application domains. This article describes the possible ITS applications that can use UAVs, and highlights the potential and challenges for UAV-enabled ITS for next-generation smart cities. --- paper_title: LTE evolution for vehicle-to-everything services paper_content: Wireless communication has become a key technology for competitiveness of next generation vehicles. Recently, the 3GPP has initiated standardization activities for LTE-based V2X services composed of vehicle-to-vehicle, vehicle- to-pedestrian, and vehicle-to-infrastructure/network. The goal of these 3GPP activities is to enhance LTE systems to enable vehicles to communicate with other vehicles, pedestrians, and infrastructure in order to exchange messages for aiding in road safety, controlling traffic flow, and providing various traffic notifications. In this article, we provide an overview of the service flow and requirements of the V2X services LTE systems are targeting. This article also discusses the scenarios suitable for operating LTE-based V2X services, and addresses the main challenges of high mobility and densely populated vehicle environments in designing technical solutions to fulfill the requirements of V2X services. Leveraging the spectral-efficient air interface, the cost-effective network deployment, and the versatile nature of supporting different communication types, LTE systems along with proper enhancements can be the key enabler of V2X services. --- paper_title: Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication in a heterogeneous wireless network – Performance evaluation paper_content: Connected Vehicle Technology (CVT) requires wireless data transmission between vehicles (V2V), and vehicle-to-infrastructure (V2I). Evaluating the performance of different network options for V2V and V2I communication that ensure optimal utilization of resources is a prerequisite when designing and developing robust wireless networks for CVT applications. Though dedicated short range communication (DSRC) has been considered as the primary communication option for CVT safety applications, the use of other wireless technologies (e.g., Wi-Fi, LTE, WiMAX) allow longer range communications and throughput requirements that could not be supported by DSRC alone. Further, the use of other wireless technology potentially reduces the need for costly DSRC infrastructure. In this research, the authors evaluated the performance of Het-Net consisting of Wi-Fi, DSRC and LTE technologies for V2V and V2I communications. 
An application layer handoff method was developed to enable Het-Net communication for two CVT applications: traffic data collection, and forward collision warning. The handoff method ensures the optimal utilization of available communication options (i.e., eliminate the need of using multiple communication options at the same time) and corresponding backhaul communication infrastructure depending on the connected vehicle application requirements. Field studies conducted in this research demonstrated that the use of Het-Net broadened the range and coverage of V2V and V2I communications. The use of the application layer handoff technique to maintain seamless connectivity for CVT applications was also successfully demonstrated and can be adopted in future Het-Net supported connected vehicle applications. A long handoff time was observed when the application switches from LTE to Wi-Fi. The delay is largely due to the time required to activate the 802.11 link and the time required for the vehicle to associate with the RSU (i.e., access point). Modifying the application to implement a soft handoff where a new network is seamlessly connected before breaking from the existing network can greatly reduce (or eliminate) the interruption of network service observed by the application. However, the use of a Het-Net did not compromise the performance of the traffic data collection application as this application does not require very low latency, unlike connected vehicle safety applications. Field tests revealed that the handoff between networks in Het-Net required several seconds (i.e., higher than 200 ms required for safety applications). Thus, Het-Net could not be used to support safety applications that require communication latency less than 200 ms. However, Het-Net could provide additional/supplementary connectivity for safety applications to warn vehicles upstream to take proactive actions to avoid problem locations. To validate and establish the findings from field tests that included a limited number of connected vehicles, ns-3 simulation experiments with a larger number of connected vehicles were conducted involving a DSRC and LTE Het-Net scenario. The latency and packet delivery error trend obtained from ns-3 simulation were found to be similar to the field experiment results. --- paper_title: Real-Time Energy Management Strategy Based on Velocity Forecasts Using V2V and V2I Communications paper_content: The performance of energy management in hybrid electric vehicles is highly dependent on the forecasted velocity. To this end, a new velocity-prediction approach utilizing the concept of chaining neural network (CNN) is introduced. This velocity forecasting approach is subsequently used as the basis for an equivalent consumption minimization strategy (ECMS). The CNN is used to predict the velocity over different temporal horizons, exploiting the information provided through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication channels. In addition, a new adaptation law for the so-called equivalent factor (EF) in ECMS is devised to investigate the effects of future velocity on fuel economy and to impose charge sustainability. Compared with traditional adaptation law, this paper considers the impact of predicted velocity on EF. The control objective is to improve the fuel economy relative to the ECMS without considering predicted velocity. 
Finally, simulations are conducted in three cases over different prediction horizons to demonstrate the performance of the proposed velocity-prediction method and ECMS with adaptation law. Simulation results confirm that ECMS with EF adjusted by the proposed adaptation law produces between 0.2% and 5% improvements in fuel economy relative to ECMS with traditional adaptation law. In addition, better charge sustainability is achieved as well. --- paper_title: Latency of Cellular-Based V2X: Perspectives on TTI-Proportional Latency and TTI-Independent Latency paper_content: Vehicle-to-everything (V2X) is a form of wireless communication that is extremely sensitive to latency, because the latency is directly related to driving safety. The V2X systems developed so far have been based on the LTE system. However, the conventional LTE system is not able to support the latency requirements of latency-aware V2X. Fortunately, the state-of-the-art cellular technology standard includes the development of latency reduction schemes, such as shortened transmission time intervals (TTI) and self-contained subframes. This paper verifies and analyzes the latency of cellular-based V2X with shortened TTI, which is one of the most efficient latency reduction schemes. To verify the feasibility of V2X service, we divide the V2X latency into two types of latency, TTI-independent latency and TTI-proportional latency. Moreover, using system-level simulations considering additional overhead from shortened TTI, we evaluate the latency of cellular-based V2X systems. Based on this feasibility verification, we then propose cellular-based V2X system design principles in terms of shortened TTI with only one OFDM symbol and while sustaining radio resource control connection. --- paper_title: MoZo: A Moving Zone Based Routing Protocol Using Pure V2V Communication in VANETs paper_content: Vehicular Ad-hoc Networks (VANETs) are an emerging field, whereby vehicle-to-vehicle communications can enable many new applications such as safety and entertainment services. Most VANET applications are enabled by different routing protocols. The design of such routing protocols, however, is quite challenging due to the dynamic nature of nodes (vehicles) in VANETs. To exploit the unique characteristics of VANET nodes, we design a moving-zone based architecture in which vehicles collaborate with one another to form dynamic moving zones so as to facilitate information dissemination. We propose a novel approach that introduces moving object modeling and indexing techniques from the theory of large moving object databases into the design of VANET routing protocols. The results of extensive simulation studies carried out on real road maps demonstrate the superiority of our approach compared with both clustering and non-clustering based routing protocols. --- paper_title: Realizing the Tactile Internet: Haptic Communications over Next Generation 5G Cellular Networks paper_content: Prior Internet designs encompassed the fixed, mobile, and lately the "things" Internet. In a natural evolution to these, the notion of the Tactile Internet is emerging, which allows one to transmit touch and actuation in real-time. With voice and data communications driving the designs of the current Internets, the Tactile Internet will enable haptic communications, which in turn will be a paradigm shift in how skills and labor are digitally delivered globally. Design efforts for both the Tactile Internet and the underlying haptic communications are in its infancy. 
The aim of this article is thus to review some of the most stringent design challenges, as well as propose first avenues for specific solutions to enable the Tactile Internet revolution. --- paper_title: Modeling of V2V Communications for C-ITS Safety Applications: A CPS Perspective paper_content: Tight coupling between the performance of vehicle-to-vehicle (V2V) communications and the performance of cooperative intelligent transportation system (C-ITS) safety applications is addressed. A cyber-physical system analytical framework is developed which links the characteristics of V2V communications (such as packet loss probability and packet transmission delay) with the physical mobility characteristics of the vehicular system (such as safe intervehicular distance). The study is applied to the Day 1 C-ITS application, emergency electronic brake lights, enabled by the European Telecommunication Standard Institute ITS-G5 and IEEE 802.11p standards. --- paper_title: Accuracy Of The GPS Positioning System In The Context Of Increasing The Number Of Satellites In The Constellation paper_content: Abstract A possibility of utilising the GPS system for navigation and transport are fundamentally dependent on the accuracy in positioning. Two fundamental factors decisive for its value are the values of the User Range Error (URE) and Dilution of Precision (DOP), strictly related to the number of satellites forming the constellation. The nominal constellation of GPS satellites consists of 24 units which gives a possibility of identification of coordinates all over the globe. In the last few years, however, the nominal number of satellites in the constellation was much higher, and the URE value has been constantly increasing. The authors of the paper try to estimate the impact of the changing number of GPS satellites on accuracy of position coordinates with a variable URE value. Mathematical model for estimating geometrical indicators’ value, utilising data derived from the almanac files has been presented. Following a drawn-up algorithm and calculations made with Mathcad software, the authors carried out a comparative analysis of mean daily values of DOP indicators for a variable number of satellites included in the GPS constellation in the years 2001-2013. Then, the authors have established representative values of Two Distance Root Mean Square Error (2drms) 2D and 3D, and calculated a percentage increase of accuracy in the period under discussion. --- paper_title: Improving Positioning Accuracy Using GPS Pseudorange Measurements for Cooperative Vehicular Localization paper_content: Accurate positioning is a key factor for enabling innovative applications in intelligent transportation systems. Cutting-edge communication technologies make cooperative localization a promising approach for accurate vehicle positioning. In this paper, we first propose a ranging technique called weighted least squares double difference (WLS-DD), which is used to detect intervehicle distances based on the sharing of GPS pseudorange measurements and a weighted least squares method. It takes the carrier-to-noise ratio (CNR) of raw pseudorange measurements into consideration for mitigating noises so that it can improve the accuracy of the distance detection. We show the superiority of WLS-DD by conducting a series of field experiments. Based on intervehicle distance detection, we propose a distributed location estimate algorithm (DLEA) to improve the accuracy of vehicle positioning. 
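The DOP indicators discussed in the GPS accuracy abstract above can be computed directly from satellite geometry. The sketch below builds the geometry matrix from line-of-sight unit vectors and reads the DOPs off the covariance factor Q = (G^T G)^-1; the satellite and receiver coordinates are arbitrary illustrative numbers rather than almanac data, and a rigorous HDOP/VDOP would first rotate the vectors into a local east-north-up frame.

```python
import numpy as np

def dilution_of_precision(receiver, satellites):
    """GDOP/PDOP/HDOP/VDOP/TDOP from receiver and satellite coordinates.

    Rows of the geometry matrix are negated unit line-of-sight vectors plus
    a clock column; the DOPs are square roots of sums of diagonal terms of
    Q = (G^T G)^-1. The receiver in the demo sits near the pole so the
    ECEF axes roughly coincide with a local east-north-up frame.
    """
    receiver = np.asarray(receiver, dtype=float)
    G = []
    for sat in satellites:
        los = np.asarray(sat, dtype=float) - receiver
        los /= np.linalg.norm(los)
        G.append([-los[0], -los[1], -los[2], 1.0])
    G = np.array(G)
    Q = np.linalg.inv(G.T @ G)
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    tdop = np.sqrt(Q[3, 3])
    return gdop, pdop, hdop, vdop, tdop

if __name__ == "__main__":
    rx = (0.0, 0.0, 6_371_000.0)                       # roughly on the Earth's surface
    sats = [(15e6, 10e6, 22e6), (-12e6, 18e6, 20e6),
            (20e6, -14e6, 18e6), (-16e6, -11e6, 23e6), (2e6, 25e6, 15e6)]
    print(dilution_of_precision(rx, sats))
```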
The implementation of DLEA only relies on inaccurate GPS pseudorange measurements and the obtained intervehicle distances without using any reference points for positioning correction. Moreover, to evaluate the joint effect of WLS-DD and DLEA, we derive a data fitting model based on the observed distance detection bias from field experiments, which generates parameters in a variety of environments for performance evaluation. Finally, we demonstrate the effectiveness of the proposed solutions via a comprehensive simulation study. --- paper_title: Performance assessment of GPS/GLONASS single point positioning in an urban environment paper_content: In signal-degraded environments such as urban canyons and mountainous area, many GNSS signals are either blocked or strongly degraded by natural and artificial obstacles. In such scenarios standalone GPS is often unable to guarantee a continuous and accurate positioning due to lack (or the poor quality) of signals. The combination of different GNSSs could be a suitable approach to fill this gap, because the multi-constellation system guarantees an improved satellite availability compared to standalone GPS, thus providing enhanced accuracy, continuity and integrity of the positioning. The present GNSSs are GPS, GLONASS, Galileo and Beidou, but the latter two are still in the development phase. In this work GPS/GLONASS systems are combined for single point positioning and their performance are assessed for different configurations. Using GPS/GLONASS multi-constellation implies the addition of an additional unknown, i.e. the intersystem time scale offset, which requires a sacrifice of one measurement. Since the intersystem offset is quasi-constant over a short period, a pseudo-measurement can be introduced to compensate the sacrifice. --- paper_title: Research and progress of Beidou satellite navigation system paper_content: The paper gives a brief review of the development course and strategic planning of Beidou satellite navigation system, and introduces its construction progress, with emphasis on the research progress of some key technologies of Beidou, including navigation constellation design, navigation signal structure and performance, compatibility and interoperability, precise orbit determination, navigation and precise positioning. The main problems and challenges of Beidou are described. --- paper_title: Access point significance measures in WLAN-based location paper_content: This paper focuses on the WLAN-based indoor location by taking into account the contribution of each hearable Access Point (AP) in the location estimation. Typically, in many indoor scenarios of interest for the future location services, such as malls, shopping centers, airports or other transit hubs, the amount of hearable APs is huge and it is important to find out whether some of these APs are redundant for the purpose of location accuracy and may be dropped. Moreover, many APs nowadays are multi-antenna APs or support multiple MAC addresses coming from exactly the same location, thus it is likely that they may bring little or no benefit if keeping all in the positioning stage. The purpose of our paper is to address various significance measures in WLAN-based location and to compare them from the point of view of the accuracy of the location solution. The access point significance is studied both at the training stage and at the estimation stage. Our models are based on real measurement data. 
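To make the notion of access-point significance above concrete, the following toy sketch ranks APs in a WLAN fingerprint database by a simple discrimination score (the variance across locations of each AP's mean RSS). This is only one plausible measure, assumed for illustration; the array shapes, noise levels, and the number of APs kept are likewise assumptions, not values from the cited study.

```python
import numpy as np

def rank_access_points(rss, top_k=3):
    """Rank APs by a simple significance score for WLAN fingerprinting.

    rss: array of shape (n_locations, n_samples, n_aps) holding received
         signal strength samples collected at each training location.
    Returns indices of the top_k most significant APs and all scores.

    The score used here is the variance across locations of the per-location
    mean RSS: an AP whose average signal differs a lot between locations is
    more useful for discriminating positions than one that looks the same
    everywhere. (This is just one plausible measure, not the paper's.)
    """
    per_location_mean = rss.mean(axis=1)          # (n_locations, n_aps)
    significance = per_location_mean.var(axis=0)  # (n_aps,)
    order = np.argsort(significance)[::-1]
    return order[:top_k], significance

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_loc, n_samp, n_aps = 5, 20, 8
    # Synthetic fingerprints: each AP has a location-dependent mean RSS (dBm).
    base = rng.uniform(-90, -40, size=(n_loc, n_aps))
    base[:, 0] = -65.0   # AP 0 is nearly useless: same mean RSS everywhere
    rss = base[:, None, :] + rng.normal(0, 2.0, size=(n_loc, n_samp, n_aps))
    top, score = rank_access_points(rss, top_k=3)
    print("most significant APs:", top, "scores:", np.round(score, 1))
```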
--- paper_title: Recognizing individuals in groups in outdoor environments combining stereo vision, RFID and BLE paper_content: Vision-based people localization systems in outdoor environments can be enhanced by means of radio frequency identification technologies. This combination has the potential to enable a wide range of new applications. When individuals wear a radio frequency tag, they may be both identified and localized. In this way, the technology may interact with individuals in a personalized way. In this paper, two radio frequency identification technologies, UHF Radio Frequency IDentification (RFID) and Bluetooth Low Energy (BLE), are combined with a stereo-based people detection system to recognize individuals in groups in complex outdoor scenarios in medium sized areas up to 20 m. The proposed approach is validated in crosswalks with pedestrians wearing portable RFID passive tags and active BLE beacons. --- paper_title: LTE Positioning Accuracy Performance Evaluation paper_content: In this paper we investigate the positioning accuracy of user equipment (UE) with observed time difference of arrival (OTDoA) technique in Long Term Evolution (LTE) networks using dedicated positioning reference signal (PRS) by means of comprehensive simulation model. System-level model includes typical cellular network layout with spatially distributed eNodeB (eNB) and link-level model simulates PRS transmission and reception with LTE System Toolbox. Matlab “TDOA Positioning Using PRS example” was developed to include known linear least squares (LLS), nonlinear Gauss-Newton (GN) and Levenberg-Marquardt (LM) positioning algorithms and compare it with Cramer-Rao lower bound (CRLB). Resulting estimates clarify known results and reveal that simple LLS achieves close to LM positioning accuracy when the number of eNB is 6, while further increasing the number of eNB degrades positioning algorithms accuracy. --- paper_title: A Positioning Accuracy Enhancement Method Based on Inter-Vehicular Communication and Self-Organizing Map paper_content: Traditional low-cost GPS installed on vehicles and other equipment has a tolerance of tens of meters. With the help of auxiliary devices and/or methodologies such as Differential GPSs (DGPSs), Assisted GPSs (AG-PSs), Real-Time Kinematic (RTK), computer vision and etc., the positioning accuracy of GPS receivers increased a lot, but a certain amount of cost increased at the same time. In this paper, we propose a new cooperative vehicular localization scheme for improving the accuracy of GPS fixes based on Vehicle-to-vehicle (V2V) communication. The proposed scheme first estimates the distances between neighboring vehicles using WLS-DD scheme [1] and uses a machine learning technique called Constrained Self-Organizing Map (C-SOM) with a set of GPS fixes collected over time to generate the final estimates of GPS locations with much lower errors. We present simulation results that demonstrate the superior performance of the proposed Constrained-SOM. --- paper_title: Joint 3D Positioning and Network Synchronization in 5G Ultra-Dense Networks Using UKF and EKF paper_content: It is commonly expected that future fifth generation (5G) networks will be deployed with a high spatial density of access nodes (ANs) in order to meet the envisioned capacity requirements of the upcoming wireless networks. Densification is beneficial not only for communications but it also creates a convenient infrastructure for highly accurate user node (UN) positioning. 
Despite the fact that positioning will play an important role in future networks, thus enabling a huge amount of location-based applications and services, this great opportunity has not been widely explored in the existing literature. Therefore, this paper proposes an unscented Kalman filter (UKF)-based method for estimating directions of arrival (DoAs) and times of arrival (ToA) at ANs as well as performing joint 3D positioning and network synchronization in a network-centric manner. In addition to the proposed UKF-based solution, the existing 2D extended Kalman filter (EKF)-based solution is extended to cover also realistic 3D positioning scenarios. Building on the premises of 5G ultra-dense networks (UDNs), the performance of both methods is evaluated and analysed in terms of DoA and ToA estimation as well as positioning and clock offset estimation accuracy, using the METIS map-based ray-tracing channel model and 3D trajectories for vehicles and unmanned aerial vehicles (UAVs) through the Madrid grid. Based on the comprehensive numerical evaluations, both proposed methods can provide the envisioned one meter 3D positioning accuracy even in the case of unsynchronized 5G network while simultaneously tracking the clock offsets of network elements with a nanosecond-scale accuracy. --- paper_title: User Positioning in mmW 5G Networks Using Beam-RSRP Measurements and Kalman Filtering paper_content: In this paper, we exploit the 3D-beamforming features of multiantenna equipment employed in fifth generation (5G) networks, operating in the millimeter wave (mmW) band, for accurate positioning and tracking of users. We consider sequential estimation of users' positions, and propose a two-stage extended Kalman filter (EKF) that is based on reference signal received power (RSRP) measurements. In particular, beamformed downlink (DL) reference signals (RS) are transmitted by multiple base stations (BSs) and measured by user equipmentn(UE) employing receive beamforming. The so-obtained BRSRP measurements are fed back to the BS where the corresponding direction-of-departure are sequentially estimated by a novel EKF. Such angle estimates from multiple BSs are subsequently fused on a central entity into 3D position estimates of UE by means of an angle-based EKF. The proposed positioning scheme is scalable since the computational burden is shared among different network entities, namely transmission/reception points (TRPs) and 5G-NR Node B (gNB), and may be accomplished with the signalling currently specified for 5G. We assess the performance of the proposed algorithm on a realistic outdoor 5G deployment with a detailed ray tracing propagation model based on the METIS Madrid map. Numerical results with a system operating at 39 GHz show that sub-meter 3D positioning accuracy is achievable in future mmW 5G networks. --- paper_title: Prospective Positioning Architecture and Technologies in 5G Networks paper_content: Accurate and real-time positioning is highly demanded by location-based services in 5G networks, which are currently being standardized and developed to achieve significant performance improvement over existing cellular networks. In 5G networks, many new envisioned technologies, for example, massive Multiple Input Multiple Output (MIMO), millimeter Wave (mmWave) communication, ultra dense network (UDN), and device-to-device (D2D) communication, are introduced to not only enhance communication performance but also offer the possibility to increase positioning accuracy. 
In this article, we provide an extensive overview of positioning architectures in previous generations of cellular networks to show a road map of how positioning technologies have evolved in past decades. With this insight, we then propose a general positioning architecture for 5G networks, by exploiting the new features of emerging technologies wherein. We also investigate positioning technologies that have great potential in achieving sub-meter accuracy in 5G networks, and discuss some of the new challenges and open issues. --- paper_title: 5G NR Testbed 3.5 GHz Coverage Results paper_content: 5G New Radio (NR) has attracted a large amount of interest with its foreseen improvement of user experience and capacity. To become a commercial success coverage is also an important con- sideration. The sub-6 bands are expected to provide good coverage. One such useful band is 3.5 GHz. In this paper the coverage of 3.5 GHz is studied by means of a full-scale trial with a NR testbed. With beamforming the increased propagation losses at 3.5 GHz compared to 2.1 GHz can be compensated and the coverage was demonstrated to be on par with 2.1 GHz LTE fixed antenna coverage both outdoor and indoor. --- paper_title: 5G mm Wave Downlink Vehicular Positioning paper_content: 5G new radio (NR) provides new opportunities for accurate positioning from a single reference station: large bandwidth combined with multiple antennas, at both the base station and user sides, allows for unparalleled angle and delay resolution. Nevertheless, positioning quality is affected by multipath and clock biases. We study, in terms of performance bounds and algorithms, the ability to localize a vehicle in the presence of multipath and unknown user clock bias. We find that when a sufficient number of paths is present, a vehicle can still be localized thanks to redundancy in the geometric constraints. Moreover, the 5G NR signals enable a vehicle to build up a map of the environment. --- paper_title: 5G mmWave Positioning for Vehicular Networks paper_content: 5G technologies present a new paradigm to provide connectivity to vehicles, in support of high data-rate services, complementing existing inter-vehicle communication standards based on IEEE 802.11p. As we argue, the specific signal characteristics of 5G communication turn out to be highly conducive for vehicle positioning. Hence, 5G can work in synergy with existing on-vehicle positioning and mapping systems to provide redundancy for certain applications, in particular automated driving. This article provides an overview of the evolution of cellular positioning and discusses the key properties of 5G as they relate to vehicular positioning. Open research challenges are presented. --- paper_title: Vehicle Localization in Vehicular Networks paper_content: We propose a distributed algorithm that uses inter-vehicle distance estimates, made using a radio-based ranging technology, to localize a vehicle among its neighbours. Given that the inter-vehicle distance estimates contain noise, our algorithm reduces the residuals of the Euclidean distance between the vehicles and their measured distances, allowing it to accurately estimate the position of a vehicle within a cluster. In this paper, we show that our proposed algorithm outperforms previously proposed algorithms and present its performance in a simulated vehicular environment. 
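The cooperative localization entry above reduces the residuals between measured inter-vehicle distances and the Euclidean distances implied by a position estimate. The sketch below illustrates that idea with a few Gauss-Newton iterations refining a noisy GPS fix from ranges to neighbours at (approximately) known positions; the anchor geometry, noise levels, and iteration count are assumptions, and this is not the paper's distributed algorithm.

```python
import numpy as np

def refine_position(neighbors, ranges, initial, iterations=10):
    """Least-squares position refinement from noisy inter-vehicle ranges.

    neighbors: (n, 2) array of neighbour positions (e.g. their GPS fixes).
    ranges:    (n,) array of measured distances to those neighbours.
    initial:   (2,) initial guess (e.g. the vehicle's own GPS fix).
    Minimizes sum_i (||p - neighbor_i|| - range_i)^2 with Gauss-Newton.
    """
    p = np.asarray(initial, dtype=float)
    for _ in range(iterations):
        diff = p - neighbors                      # (n, 2)
        dist = np.linalg.norm(diff, axis=1)       # (n,)
        dist = np.maximum(dist, 1e-9)             # avoid division by zero
        residual = dist - ranges                  # (n,)
        jacobian = diff / dist[:, None]           # d(dist_i)/dp, shape (n, 2)
        # Gauss-Newton update: solve J * step ~= -residual in least squares.
        step, *_ = np.linalg.lstsq(jacobian, -residual, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-6:
            break
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_pos = np.array([12.0, -7.0])
    neighbors = rng.uniform(-50, 50, size=(6, 2))
    ranges = np.linalg.norm(neighbors - true_pos, axis=1) + rng.normal(0, 0.5, 6)
    noisy_gps = true_pos + rng.normal(0, 8.0, 2)   # coarse initial fix
    est = refine_position(neighbors, ranges, noisy_gps)
    print("initial error:", np.linalg.norm(noisy_gps - true_pos))
    print("refined error:", np.linalg.norm(est - true_pos))
```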
--- paper_title: Privacy-Preserved Pseudonym Scheme for Fog Computing Supported Internet of Vehicles paper_content: As a promising branch of Internet of Things, Internet of Vehicles (IoV) is envisioned to serve as an essential data sensing and processing platform for intelligent transportation systems. In this paper, we aim to address location privacy issues in IoV. In traditional pseudonym systems, pseudonym management is carried out in a centralized way, resulting in large latency and high cost. Therefore, we present a new paradigm named Fog computing supported IoV (F-IoV) to exploit resources at the network edge for effective pseudonym management. By utilizing abundant edge resources, a privacy-preserved pseudonym ($P^{3}$) scheme is proposed in F-IoV. The pseudonym management in this scheme is shifted to specialized fogs at the network edge named pseudonym fogs, which are composed of roadside infrastructures and deployed in close proximity of vehicles. The $P^{3}$ scheme has the following advantages: 1) context-aware pseudonym changing; 2) timely pseudonym distribution; and 3) reduced pseudonym management overhead. Moreover, a hierarchical architecture for the $P^{3}$ scheme is introduced in F-IoV. Enabled by the architecture, a context-aware pseudonym changing game and secure pseudonym management communication protocols are proposed. The security analysis shows that the $P^{3}$ scheme provides secure communication and privacy preservation for vehicles. Numerical results indicate that the $P^{3}$ scheme effectively enhances location privacy and reduces communication overhead for the vehicles. --- paper_title: Autonomous land vehicle navigation using millimeter wave radar paper_content: This paper discusses the use of a 77 GHz millimeter wave radar as a guidance sensor for autonomous land vehicle navigation. A test vehicle has been fitted with a radar and encoders that give steer angle and velocity. An extended Kalman filter optimally fuses the radar range and bearing measurements with vehicle control signals to give estimated position and variance as the vehicle moves around a test site. The effectiveness of this data fusion is compared with encoders alone and with a satellite positioning system. Consecutive scans have been combined to give a radar image of the surrounding environment. Data in this format are invaluable for future work on collision detection and map building navigation. --- paper_title: Vehicle Localization using 76GHz Omnidirectional Millimeter-Wave Radar for Winter Automated Driving* paper_content: This paper presents a 76GHz MWR (Millimeter-Wave Radar)-based self-localization method for automated driving during snowfall. Previously, many LIDAR (Light Detection and Ranging)-based localization techniques were proposed for their high measurement accuracy and robustness to changes between day and night. However, they did not provide effective approaches for snow conditions because of sensing noise (i.e., environmental resistance) created by snowfall. Therefore, this paper develops an MWR-based map generation and real-time localization method by modeling the uncertainties of error propagation. Quantitative evaluations are performed on driving data with and without snow conditions, using a LIDAR-based method as the baseline. Experimental results show that a lateral root mean square error of about 0.25 m can be obtained, regardless of the presence or absence of snowfall. This demonstrates the potential performance of radar-based localization.
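The radar-based localization entries above fuse range and bearing measurements of known landmarks into a vehicle state estimate, typically with an extended Kalman filter. The snippet below is a generic EKF measurement update for a single range-bearing observation of a landmark with known coordinates; the state layout, noise matrices, and landmark position are illustrative assumptions rather than the implementations used in those papers.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_range_bearing_update(x, P, z, landmark, R):
    """EKF update with a range-bearing measurement of a known landmark.

    x: state [px, py, vx, vy], P: 4x4 covariance.
    z: measurement [range, bearing] (bearing expressed in the map frame).
    landmark: [lx, ly] known landmark position, R: 2x2 measurement noise.
    """
    px, py = x[0], x[1]
    dx, dy = landmark[0] - px, landmark[1] - py
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    h = np.array([r, np.arctan2(dy, dx)])          # predicted measurement
    # Jacobian of h with respect to the state (velocity terms are zero).
    H = np.array([[-dx / r, -dy / r, 0.0, 0.0],
                  [ dy / q, -dx / q, 0.0, 0.0]])
    y = z - h
    y[1] = wrap_angle(y[1])                        # innovation, with angle wrap
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

if __name__ == "__main__":
    x = np.array([0.0, 0.0, 1.0, 0.0])             # rough prior state
    P = np.diag([4.0, 4.0, 1.0, 1.0])
    landmark = np.array([20.0, 10.0])
    true_pos = np.array([1.5, -0.5])
    d = landmark - true_pos
    z = np.array([np.linalg.norm(d), np.arctan2(d[1], d[0])])
    R = np.diag([0.25, np.deg2rad(1.0) ** 2])
    x_upd, P_upd = ekf_range_bearing_update(x, P, z, landmark, R)
    print("updated position estimate:", x_upd[:2])
```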
--- paper_title: Secure positioning of wireless devices with application to sensor networks paper_content: So far, the problem of positioning in wireless networks has been mainly studied in a non-adversarial setting. In this work, we analyze the resistance of positioning techniques to position and distance spoofing attacks. We propose a mechanism for secure positioning of wireless devices, that we call verifiable multilateration. We then show how this mechanism can be used to secure positioning in sensor networks. We analyze our system through simulations. --- paper_title: Vehicle control algorithms for cooperative driving with automated vehicles and intervehicle communications paper_content: Describes the technologies of cooperative driving with automated vehicles and intervehicle communications in the Demo 2000 cooperative driving. Cooperative driving, aiming at the compatibility of safety and efficiency of road traffic, means that automated vehicles drive by forming a flexible platoon over a couple of lanes with a short intervehicle distance while performing lane changing, merging, and leaving the platoon. The vehicles for the demonstration are equipped with automated lateral and longitudinal control functions with localization data by the differential global positioning system (DGPS) and the intervehicle communication function with 5.8-GHz dedicated short range communication (DSRC) designed for the dedicated use in the demonstration. In order to show the feasibility and potential of the technologies, the demonstration was held in November 2000, on a test track with five automated vehicles. The scenario included stop and go, platooning, merging, and obstacle avoidance. --- paper_title: The Human Element in Autonomous Vehicles paper_content: Autonomous vehicle research has been prevalent for well over a decade but only recently has there been a small amount of research conducted on the human interaction that occurs in autonomous vehicles. Although functional software and sensor technology is essential for safe operation, which has been the main focus of autonomous vehicle research, handling all elements of human interaction is also a very salient aspect of their success. This paper will provide an overview of the importance of human vehicle interaction in autonomous vehicles, while considering relevant related factors that are likely to impact adoption. Particular attention will be given to prior research conducted on germane areas relating to control in the automobile, in addition to the different elements that are expected to affect the likelihood of success for these vehicles initially developed for human operation. This paper will also include a discussion of the limited research conducted to consider interactions with humans and the current state of published functioning software and sensor technology that exists. --- paper_title: An overview of 3GPP device-to-device proximity services paper_content: Device-to-device communication is likely to be added to LTE in 3GPP Release 12. In principle, exploiting direct communication between nearby mobile devices will improve spectrum utilization, overall throughput, and energy consumption, while enabling new peer-to-peer and location-based applications and services. D2D-enabled LTE devices can also become competitive for fallback public safety networks, which must function when cellular networks are not available or fail. 
Introducing D2D poses many challenges and risks to the long-standing cellular architecture, which is centered around the base station. We provide an overview of D2D standardization activities in 3GPP, identify outstanding technical challenges, draw lessons from initial evaluation studies, and summarize "best practices" in the design of a D2D-enabled air interface for LTE-based cellular networks --- paper_title: PrivHab+: A secure geographic routing protocol for DTN paper_content: Abstract We present PrivHab+, a secure geographic routing protocol that learns about the mobility habits of the nodes of the network and uses this information in a secure manner. PrivHab+ is designed to operate in areas that lack of network, using the store-carry-and-forward approach. PrivHab+ compares nodes and chooses the best choice to carry messages towards a known geographical location. To achieve a high performance and low overhead, PrivHab+ uses information about the usual whereabouts of the nodes to make optimal routing decisions. PrivHab+ makes use of cryptographic techniques from secure multi-party computation to preserve nodes’ privacy while taking routing decisions. The overhead introduced by PrivHab+ is evaluated using a proof-of-concept implementation, and its performance is studied under the scope of a realistic application of podcast distribution. PrivHab+ is compared, through simulation, with a set of well-known delay-tolerant routing algorithms in two different scenarios of remote rural areas. --- paper_title: Extended Privacy in Crowdsourced Location-Based Services Using Mobile Cloud Computing paper_content: Crowdsourcing mobile applications are of increasing importance due to their suitability in providing personalized and better matching replies. The competitive edge of crowdsourcing is twofold; the requestors can achieve better and/or cheaper responses while the crowd contributors can achieve extra money by utilizing their free time or resources. Crowdsourcing location-based services inherit the querying mechanism from their legacy predecessors and this is where the threat lies. In this paper, we are going to show that none of the advanced privacy notions found in the literature except for -anonymity is suitable for crowdsourced location-based services. In addition, we are going to prove mathematically, using an attack we developed, that -anonymity does not satisfy the privacy level needed by such services. To respond to this emerging threat, we will propose a new concept, totally different from existing resource consuming privacy notions, to handle user privacy using Mobile Cloud Computing. --- paper_title: Real-Time Positioning Based on Millimeter Wave Device to Device Communications paper_content: Current generation mobile wireless communication networks are not suitable for real-time positioning applications because timing information is not readily available. Fifth generation (5G) cellular networks provide device to device real-time communications which can be used for real-time positioning. Millimeter-wave (mmWave) transmission is regarded as a key technology in 5G networks. In this paper, several 73-GHz mmWave waveforms are investigated. A new threshold selection algorithm for energy detector-based ranging is proposed which employs a dynamic threshold based on an artificial neural network. The positioning performance using this algorithm with mmWave waveforms is investigated. 
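The mmWave D2D positioning entry above estimates range with an energy detector whose detection threshold is chosen by a neural network. The sketch below shows only the underlying energy-detector time-of-arrival idea with a plain fixed threshold derived from a leading noise-only segment; the sampling rate, pulse shape, window length, and threshold factor are assumptions, and the learned dynamic threshold of the paper is not reproduced.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def energy_detector_toa(signal, fs, window=8, threshold_factor=6.0, noise_samples=64):
    """Estimate time of arrival by thresholding windowed signal energy.

    signal: received baseband samples (real-valued here for simplicity).
    fs: sampling rate in Hz. Returns the estimated ToA in seconds, or None.
    The first crossing of the threshold gives a coarse estimate whose
    resolution is limited by the window length.
    """
    energy = np.convolve(signal ** 2, np.ones(window), mode="valid")
    noise_level = energy[:noise_samples].mean()   # leading samples assumed noise-only
    threshold = threshold_factor * noise_level
    above = np.nonzero(energy > threshold)[0]
    if above.size == 0:
        return None
    return above[0] / fs

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fs = 2e9                                      # 2 GSa/s (illustrative)
    true_distance = 15.0                          # metres
    delay = true_distance / C
    n = 4000
    signal = rng.normal(0, 0.05, n)               # noise floor
    start = int(round(delay * fs))
    signal[start:start + 40] += 1.0               # simple rectangular pulse
    toa = energy_detector_toa(signal, fs)
    if toa is not None:
        print("estimated distance: %.2f m (true %.2f m)" % (toa * C, true_distance))
```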
--- paper_title: Evolution of Positioning Techniques in Cellular Networks, from 2G to 4G paper_content: This review paper presents within a common framework the mobile station positioning methods applied in 2G, 3G, and 4G cellular networks, as well as the structure of the related 3GPP technical specifications. The evolution path through the generations is explored in three steps at each level: first, the new network elements supporting localization features are introduced; then, the standard localization methods are described; finally, the protocols providing specific support to mobile station positioning are studied. To allow a better understanding, this paper also brings a brief review of the cellular networks evolution paths. --- paper_title: Supporting Implicit Human-to-Vehicle Interaction: Driver Identification from Sitting Postures paper_content: Mobile internet services have started to pervade into vehicles, approaching a new generation of networked, "smart" cars. With the evolution of in-car services, particularly with the emergence of services that are personalized to an individual driver (like road pricing, maintenance, insurance and entertainment services) the need for reliable, yet easy to handle identification and authentication has arisen. Services that demand unambiguous and unmistakable continuous identification of the driver have recently attracted many research efforts, mostly proposing video-based face/pose recognition, or acoustic analysis. A driver identification system for vehicular services is proposed, that, as opposed to video or audio based techniques, does not suffer from the continuously changing environment while driving, like lighting or noise conditions. A posture recognition technique based on a high resolution pressure sensor integrated invisibly and unobtrusively into the fabric of the driver seat has been developed, taking the pelvic bone distance as a biometric trait. Data coming from two 32x32 pressure sensor arrays (seat and backrest) is classified according to features defined based on the pelvic bone signature, mid and high pressure distribution and body weight. Empirical studies, besides analyzing (quantitative) driver recognition performance, assess the identification technique according to the qualitative attributes universality, collectability, uniqueness, and permanency. The proposed driver identification technique is implicit and thus not reliant on attention, it is continuously in operation while seated, and requires no active person cooperation. These characteristics encourage the universal use of the approach; a whole new modality for person-to-environment interaction seems possible. --- paper_title: Securing vehicles against cyber attacks paper_content: The automobile industry has grown to become an integral part of our everyday life. As vehicles evolve, the primarily mechanical solutions for vehicle control are gradually replaced by electronics and software solutions forming in-vehicle computer networks.
An emerging trend is to introduce wireless technology in the vehicle domain by attaching a wireless gateway to the in-vehicle network. By allowing wireless communication, real-time information exchange between vehicles and between infrastructure and vehicles become reality. This communication allows for road condition reporting, decision making, and remote diagnostics and firmware updates over-the-air. However, allowing external parties wireless access to the in-vehicle network creates a potential entry-point for cyber attacks. In this paper, we investigate the security issues of allowing external wireless communication. We use a defense-in-depth perspective and discuss security challenges for each of the prevention, detection, deflection, countermeasures, and recovery layers. --- paper_title: Vehicle Authentication via Monolithically Certified Public Key and Attributes paper_content: Vehicular networks are used to coordinate actions among vehicles in traffic by the use of wireless transceivers (pairs of transmitters and receivers). Unfortunately, the wireless communication among vehicles is vulnerable to security threats that may lead to very serious safety hazards. In this work, we propose a viable solution for coping with Man-in-the-Middle attacks. Conventionally, Public Key Infrastructure is utilized for a secure communication with the pre-certified public key. However, a secure vehicle-to-vehicle communication requires additional means of verification in order to avoid impersonation attacks. To the best of our knowledge, this is the first work that proposes to certify both the public key and out-of-band sense-able static attributes to enable mutual authentication of the communicating vehicles. Vehicle owners are bound to preprocess (periodically) a certificate for both a public key and a list of fixed unchangeable attributes of the vehicle. Furthermore, the proposed approach is shown to be adaptable with regards to the existing authentication protocols. We illustrate the security verification of the proposed protocol using a detailed proof in Spi calculus. --- paper_title: Role-Based Access Control for Vehicular Adhoc Networks paper_content: VANET, the vehicular ad-hoc network, is a new communication technology that is characterized by a large amount of moving hosts, dynamic topology, and permanently changing data flows. VANET specific features result in poor access control and isolation of sensitive data in intranetwork. Our paper discusses this issue and adapts the role-based access control to VANET with a hierarchy of objects and roles to ensure access control and improve data confidentiality. --- paper_title: RAISE: An Efficient RSU-Aided Message Authentication Scheme in Vehicular Communication Networks paper_content: Addressing security and privacy issues is a prerequisite for a market-ready vehicular communication network.
Although recent related studies have already addressed most of these issues, few of them have taken scalability issues into consideration. When the traffic density becomes larger, a vehicle cannot verify all signatures of the messages sent by its neighbors in a timely manner, which results in message loss. Communication overhead as another issue has also not been well addressed in previously reported studies. To deal with these issues, this paper introduces a novel RSU-aided message authentication scheme, called RAISE. With RAISE, roadside units (RSUs) are responsible for verifying the authenticity of the messages sent from vehicles and for notifying the results back to vehicles. In addition, our scheme adopts the k-anonymity approach to protect user identity privacy, where an adversary cannot associate a message with a particular vehicle. Extensive simulations are conducted to verify the proposed scheme, which demonstrates that RAISE yields much better performance than any of the previously reported counterparts in terms of message loss ratio and delay. --- paper_title: Potential Cyberattacks on Automated Vehicles paper_content: Vehicle automation has been one of the fundamental applications within the field of intelligent transportation systems (ITS) since the start of ITS research in the mid-1980s. For most of this time, it has been generally viewed as a futuristic concept that is not close to being ready for deployment. However, recent development of “self-driving” cars and the announcement by car manufacturers of their deployment by 2020 show that this is becoming a reality. The ITS industry has already been focusing much of its attention on the concepts of “connected vehicles” (United States) or “cooperative ITS” (Europe). These concepts are based on communication of data among vehicles (V2V) and/or between vehicles and the infrastructure (V2I/I2V) to provide the information needed to implement ITS applications. The separate threads of automated vehicles and cooperative ITS have not yet been thoroughly woven together, but this will be a necessary step in the near future because the cooperative exchange of data will provide vital inputs to improve the performance and safety of the automation systems. Thus, it is important to start thinking about the cybersecurity implications of cooperative automated vehicle systems. In this paper, we investigate the potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities. We analyze the threats on autonomous automated vehicles and cooperative automated vehicles. This analysis shows the need for considerably more redundancy than many have been expecting. We also raise awareness to generate discussion about these threats at this early stage in the development of vehicle automation systems. --- paper_title: Generalized geometric triangulation algorithm for mobile robot absolute self-localization paper_content: Triangulation with active beacons is widely used in the absolute localization of mobile robots. The geometric triangulation algorithm allows the self-localization of a robot on a plane. However, the three beacons it uses must be "properly ordered" and the algorithm works consistently only when the robot is inside the triangle formed by these beacons. This paper presents an improved version of the algorithm, which does not require beacon ordering and works over the whole navigation plane except for a few well-determined lines where localization is not possible.
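The triangulation entry above improves a closed-form geometric solution; as a hedged alternative, the sketch below solves the same bearing-only self-localization problem numerically, estimating position and heading from bearings to three known beacons with SciPy's nonlinear least squares. The beacon layout and initial guess are assumptions, and this is not the paper's generalized algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def locate_from_bearings(beacons, bearings, guess=(0.0, 0.0, 0.0)):
    """Estimate robot pose (x, y, heading) from bearings to known beacons.

    bearings[i] is the angle to beacons[i] measured relative to the robot's
    heading. The pose is found by nonlinear least squares on the wrapped
    angle residuals.
    """
    beacons = np.asarray(beacons, dtype=float)
    bearings = np.asarray(bearings, dtype=float)

    def residuals(pose):
        x, y, heading = pose
        predicted = np.arctan2(beacons[:, 1] - y, beacons[:, 0] - x) - heading
        return wrap(predicted - bearings)

    sol = least_squares(residuals, np.asarray(guess, dtype=float))
    x, y, heading = sol.x
    return x, y, wrap(heading)

if __name__ == "__main__":
    beacons = [(0.0, 0.0), (10.0, 0.0), (5.0, 12.0)]
    true_pose = (4.0, 3.0, np.deg2rad(30.0))
    bearings = [wrap(np.arctan2(by - true_pose[1], bx - true_pose[0]) - true_pose[2])
                for bx, by in beacons]
    est = locate_from_bearings(beacons, bearings, guess=(1.0, 1.0, 0.0))
    print("estimated pose (x, y, heading):", np.round(est, 3))
```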
--- paper_title: Survey of Cellular Mobile Radio Localization Methods: From 1G to 5G paper_content: Cellular systems evolved from a dedicated mobile communication system to an almost omnipresent system with unlimited coverage anywhere and anytime for any device. The growing ubiquity of the network stirred expectations to determine the location of the mobile devices themselves. Since the beginning of standardization, each cellular mobile radio generation has been designed for communication services, and satellite navigation systems, such as Global Positioning System (GPS), have provided precise localization as an add-on service to the mobile terminal. Self-contained localization services relying on the mobile network elements have offered only rough position estimates. Moreover, satellite-based technologies suffer a severe degradation of their localization performance in indoors and urban areas. Therefore, only in subsequent cellular standard releases, more accurate cellular-based location methods have been considered to accommodate more challenging localization services. This survey provides an overview of the evolution of the various localization methods that were standardized from the first to the fourth generation of cellular mobile radio, and looks over what can be expected with the new radio and network aspects for the upcoming generation of fifth generation. --- paper_title: A Novel Approach for Improved Vehicular Positioning Using Cooperative Map Matching and Dynamic Base Station DGPS Concept paper_content: In this paper, a novel approach for improving vehicular positioning is presented. This method is based on the cooperation of the vehicles by communicating their measured information about their position. This method consists of two steps. In the first step, we introduce our cooperative map matching method. This map matching method uses the V2V communication in a vehicular ad hoc network (VANET) to exchange global positioning system (GPS) information between vehicles. Having a precise road map, vehicles can apply the road constraints of other vehicles in their own map matching process and acquire a significant improvement in their positioning. After that, we have proposed the concept of a dynamic base station DGPS (DDGPS), which is used by vehicles in the second step to generate and broadcast the GPS pseudorange corrections that can be used by newly arrived vehicles to improve their positioning. The DDGPS is a decentralized cooperative method that aims to improve the GPS positioning by estimating and compensating the common error in GPS pseudorange measurements. It can be seen as an extension of DGPS where the base stations are not necessarily static with an exact known position. In the DDGPS method, the pseudorange corrections are estimated based on the receiver's belief on its positioning and its uncertainty and then broadcasted to other GPS receivers. The performance of the proposed algorithm has been verified with simulations in several realistic scenarios. 
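The DDGPS concept above extends the classic differential-GPS correction step, in which a reference receiver with a believed-known position compares its measured pseudoranges against computed geometric ranges and broadcasts the differences for nearby receivers to subtract. The sketch below shows only that correction step with made-up satellite coordinates and error terms; the dynamic-base-station estimation and uncertainty weighting of the paper are not reproduced.

```python
import numpy as np

def pseudorange_corrections(sat_positions, measured_pr, base_position):
    """Per-satellite corrections computed by a (possibly dynamic) base station.

    correction_i = measured pseudorange_i - geometric range from the base
    station to satellite i. Common-mode errors (satellite clock, ionosphere,
    troposphere) end up in the correction and cancel for nearby receivers.
    """
    geometric = np.linalg.norm(sat_positions - base_position, axis=1)
    return measured_pr - geometric

def apply_corrections(rover_measured_pr, corrections):
    """A nearby vehicle subtracts the broadcast corrections from its pseudoranges."""
    return rover_measured_pr - corrections

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    sats = rng.uniform(-2.0e7, 2.0e7, size=(5, 3)) + np.array([0.0, 0.0, 2.0e7])
    base = np.array([1000.0, 2000.0, 0.0])
    rover = base + np.array([150.0, -80.0, 0.0])     # nearby vehicle
    common_error = rng.normal(0, 10.0, 5)            # errors shared by both receivers
    base_pr = np.linalg.norm(sats - base, axis=1) + common_error
    rover_pr = np.linalg.norm(sats - rover, axis=1) + common_error + rng.normal(0, 0.5, 5)
    corr = pseudorange_corrections(sats, base_pr, base)
    corrected = apply_corrections(rover_pr, corr)
    residual = corrected - np.linalg.norm(sats - rover, axis=1)
    print("remaining pseudorange error after correction (m):", np.round(residual, 2))
```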
--- paper_title: Design and Implementation of Urban Vehicle Positioning System Based on RFID, GPS and LBS paper_content: In order to monitor mobile vehicles efficiently and judge the congestion status under complex road and traffic conditions, Radio Frequency Identification (RFID) technology is used to realize dynamic identification and information exchange of vehicles, while Global Positioning System (GPS) and Location Based Service (LBS) technologies are used to obtain the precise position of a vehicle. LBS technology is used to display the congestion status under complex road and traffic conditions, and the status of the vehicle can be shown to users on an electronic map on a website. The vehicle monitoring model is designed and the system is implemented and tested on the basis of a combination of RFID, GPS and LBS, which is of great significance for building traffic detection information systems. --- paper_title: Secure Information Exchange in Defining the Location of the Vehicle paper_content: With the advent of the electric vehicle market, the problem of locating a vehicle is becoming more and more important. Smart roads are being created, where the car control system can work without a person by communicating with elements on the road. The standard technologies, such as GPS, cannot always accurately determine the location, and not all vehicles have a GPS module. It is very important to build an effective secure communication protocol between the vehicle and the base stations on the road. In this paper we consider different methods of location determination and propose an improved communication protocol between the vehicle and the base station. --- paper_title: A comparative study of Message Digest 5(MD5) and SHA256 algorithm paper_content: A document is a collection of written or printed data containing information. As technology advances rapidly, the integrity of a document must be preserved. Because an open document can be read and modified by many parties, the integrity of the information it contains is not guaranteed. To maintain data integrity, a mechanism called a digital signature is needed. A digital signature is a specific code generated by a signature-producing function. One class of algorithms used to create digital signatures is the hash function. There are many hash functions; two of them are Message Digest 5 (MD5) and SHA256. Both algorithms have their own advantages and disadvantages. The purpose of this research is to determine which algorithm is better. The parameters used to compare the two algorithms are running time and complexity.
The results show that the complexity of the MD5 and SHA256 algorithms is the same, i.e., linear in the input length N, but in terms of speed MD5 performs better than SHA256. --- paper_title: A Guide to Fully Homomorphic Encryption paper_content: Fully homomorphic encryption (FHE) has been dubbed the holy grail of cryptography, an elusive goal which could solve the IT world’s problems of security and trust. Research in the area exploded after 2009 when Craig Gentry showed that FHE can be realised in principle. Since that time considerable progress has been made in finding more practical and more efficient solutions. Whilst research quickly developed, terminology and concepts became diverse and confusing so that today it can be difficult to understand what the achievements of different works actually are. The purpose of this paper is to address three fundamental questions: What is FHE? What can FHE be used for? What is the state of FHE today? As well as surveying the field, we clarify different terminology in use and prove connections between different FHE notions. --- paper_title: PrOLoc: resilient localization with private observers using partial homomorphic encryption paper_content: Aided by advances in sensors and algorithms, systems for localizing and tracking target objects or events have become ubiquitous in recent years. Most of these systems operate on the principle of fusing measurements of distance and/or direction to the target made by a set of spatially distributed observers using sensors that measure signals such as RF, acoustic, or optical. The computation of the target's location is done using multilateration and multiangulation algorithms, typically running at an aggregation node that, in addition to the distance/direction measurements, also needs to know the observers' locations. This presents a privacy risk for an observer that does not trust the aggregation node or other observers and could in turn lead to lack of participation. For example, consider a crowd-sourced sensing system where citizens are required to report security threats, or a smart car, stranded with a malfunctioning GPS, sending out localization requests to neighboring cars -- in both cases, observer (i.e., citizens and cars respectively) participation can be increased by keeping their location private. This paper presents PrOLoc, a localization system that combines partially homomorphic encryption with a new way of structuring the localization problem to enable efficient and accurate computation of a target's location without requiring observers to make public their locations or measurements. Moreover, and unlike previously proposed perturbation based techniques, PrOLoc is also resilient to malicious active false data injection attacks. We present two realizations of our approach, provide rigorous theoretical guarantees, and also compare the performance of each against traditional methods.
Our experiments on real hardware demonstrate that PrOLoc yields location estimates that are accurate while being at least 500 times faster than state-of-the-art secure function evaluation techniques. --- paper_title: Efficient Fully Homomorphic Encryption from (Standard) LWE paper_content: We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of "short vector problems" on arbitrary lattices. Our construction improves on previous works in two aspects. First, we show that "somewhat homomorphic" encryption can be based on LWE, using a new re-linearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. Second, we deviate from the "squashing paradigm" used in all previous works. We introduce a new dimension-modulus reduction technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. Our scheme has very short ciphertexts and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query, where $k$ is a security parameter and $|DB|$ is the database size. --- paper_title: Securing Road Traffic Congestion Detection by Incorporating V2I Communications paper_content: In this paper, we address the security properties of automated road congestion detection systems. SCATS, SCOOT and InSync are three examples of Adaptive Traffic Control Systems (ATCSs) widely deployed today. ATCSs minimize the unused green time and reduce traffic congestion in urban areas using different methods such as induction loops and camcorders installed at intersections. The main drawback of these systems is that they cannot capture incidents outside the range of these camcorders or induction loops. To overcome this hurdle, theoretical concepts for automated road congestion alarm systems including the system architecture, communication protocol, and algorithms are proposed. These concepts incorporate secure wireless vehicle-to-infrastructure (V2I) communications. The security properties of this new system are presented and then analyzed using the ProVerif protocol verification tool. --- paper_title: Vehicular Ad Hoc Networks (VANETS): Status, Results, and Challenges paper_content: Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work has focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas.
We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespead adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services. --- paper_title: Survey on security attacks in Vehicular Ad hoc Networks (VANETs) paper_content: Vehicular Ad hoc Networks (VANETs) are emerging mobile ad hoc network technologies incorporating mobile routing protocols for inter-vehicle data communications to support intelligent transportation systems. Among others security and privacy are major research concerns in VANETs due to the frequent vehicles movement, time critical response and hybrid architecture of VANETs that make them different than other Ad hoc networks. Thus, designing security mechanisms to authenticate and validate transmitted message among vehicles and remove adversaries from the network are significantly important in VANETs. This paper presents several existing security attacks and approaches to defend against them, and discusses possible future security attacks with critical analysis and future research possibilities. --- paper_title: GSIS: A Secure and Privacy-Preserving Protocol for Vehicular Communications paper_content: In this paper, we first identify some unique design requirements in the aspects of security and privacy preservation for communications between different communication devices in vehicular ad hoc networks. We then propose a secure and privacy-preserving protocol based on group signature and identity (ID)-based signature techniques. We demonstrate that the proposed protocol cannot only guarantee the requirements of security and privacy but can also provide the desired traceability of each vehicle in the case where the ID of the message sender has to be revealed by the authority for any dispute event. Extensive simulation is conducted to verify the efficiency, effectiveness, and applicability of the proposed protocol in various application scenarios under different road systems. --- paper_title: Security and Privacy in VANET to reduce Authentication Overhead for Rapid Roaming Networks paper_content: Since the last few years VANET have received increased attention as the potential technology to enhance active and preventive safety on the road, as well as travel comfort. Security and privacy are indispensable in vehicular communications for successful acceptance and deployment of such a technology. Generally, attacks cause anomalies to the network functionality. A secure VANET system, while exchanging information should protect the system against unauthorized message injection, message alteration, eavesdropping. In this paper, various security and privacy issues and challenges are discussed. The various authentication schemes in wireless LAN, VANETS are discussed. Out of various authentication schemes that are used to reduce the overhead in authentication, when roaming proxy reencryption scheme and new proxy re encryption scheme is reviewed in detail. A comparison between the two schemes is done, which shows that the privacy can be maintained better by using new proxy re encryption. 
--- paper_title: Survey on Security Issues in Vehicular Ad Hoc Networks paper_content: Abstract Vehicular Ad hoc NETworks are special case of ad hoc networks that, besides lacking infrastructure, communicating entities move with various accelerations. Accordingly, this impedes establishing reliable end-to-end communication paths and having efficient data transfer. Thus, VANETs have different network concerns and security challenges to get the availability of ubiquitous connectivity, secure communications, and reputation management systems which affect the trust in cooperation and negotiation between mobile networking entities. In this survey, we discuss the security features, challenges, and attacks of VANETs, and we classify the security attacks of VANETs due to the different network layers. --- paper_title: Temporal Outlier Detection in Vehicle Traffic Data paper_content: Outlier detection in vehicle traffic data is a practical problem that has gained traction lately due to an increasing capability to track moving vehicles in city roads. In contrast to other applications, this particular domain includes a very dynamic dimension: time. Many existing algorithms have studied the problem of outlier detection at a single instant in time. This study proposes a method for detecting temporal outliers with an emphasis on historical similarity trends between data points. Outliers are calculated from drastic changes in the trends. Experiments with real world traffic data show that this approach is effective and efficient. --- paper_title: Secure positioning in wireless networks paper_content: So far, the problem of positioning in wireless networks has been studied mainly in a nonadversarial setting. In this paper, we analyze the resistance of positioning techniques to position and distance spoofing attacks. We propose a mechanism for secure positioning of wireless devices, that we call verifiable multilateration. We then show how this mechanism can be used to secure positioning in sensor networks. We analyze our system through simulations. --- paper_title: Detecting Sybil attacks in VANETs paper_content: Sybil attacks have been regarded as a serious security threat to Ad hoc Networks and Sensor Networks. They may also impair the potential applications in Vehicular Ad hoc Networks (VANETs) by creating an illusion of traffic congestion. In this paper, we make various attempts to explore the feasibility of detecting Sybil attacks by analyzing signal strength distribution. First, we propose a cooperative method to verify the positions of potential Sybil nodes. We use a Random Sample Consensus (RANSAC)-based algorithm to make this cooperative method more robust against outlier data fabricated by Sybil nodes. However, several inherent drawbacks of this cooperative method prompt us to explore additional approaches. We introduce a statistical method and design a system which is able to verify where a vehicle comes from. The system is termed the Presence Evidence System (PES). With PES, we are able to enhance the detection accuracy using statistical analysis over an observation period. Finally, based on realistic US maps and traffic models, we conducted simulations to evaluate the feasibility and efficiency of our methods. Our scheme proves to be an economical approach to suppressing Sybil attacks without extra support from specific positioning hardware. 
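A simple building block behind signal-strength-based position verification, such as the Sybil detection entry above, is a plausibility test: convert received signal strength into a coarse distance with a path-loss model and compare it with the distance implied by the claimed position. The sketch below uses a log-distance model whose reference power, path-loss exponent, and tolerance factor are assumed values; a real system would calibrate them and combine many observations, for example with RANSAC as in the cited work.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.7):
    """Invert a log-distance path-loss model: rssi = P0 - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def position_claim_plausible(claimed_pos, observer_pos, rssi_dbm,
                             tolerance_factor=2.0):
    """Flag a claimed position as implausible if the distance it implies
    disagrees too strongly with the RSSI-derived distance estimate."""
    claimed_dist = math.dist(claimed_pos, observer_pos)
    rssi_dist = rssi_to_distance(rssi_dbm)
    ratio = max(claimed_dist, 1e-3) / max(rssi_dist, 1e-3)
    return (1.0 / tolerance_factor) <= ratio <= tolerance_factor

if __name__ == "__main__":
    observer = (0.0, 0.0)
    honest_claim = (80.0, 60.0)        # really ~100 m away
    sybil_claim = (400.0, 300.0)       # claims to be ~500 m away
    # RSSI actually observed for a transmitter ~100 m from the observer.
    rssi = -40.0 - 10 * 2.7 * math.log10(100.0)
    print("honest claim plausible:", position_claim_plausible(honest_claim, observer, rssi))
    print("sybil claim plausible: ", position_claim_plausible(sybil_claim, observer, rssi))
```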
--- paper_title: A review and classification of various VANET Intrusion Detection Systems paper_content: The use of wireless links renders a vehicular ad-hoc network (VANET) vulnerable to malicious attacks such as Denial of Service, blackhole attack, Sybil attack, selective forwarding and altering routing information. In wired networks the attacker needs to gain access to the physical media to make an attack. In wireless networks the scenario is much different: there are no firewalls and gateways in place, hence attacks can take place from any location within the radio coverage area. Each mobile node in an ad-hoc network is an autonomous unit, free to move independently. This means a node without adequate physical protection is highly susceptible to being compromised. It is difficult to track down a single compromised node in a large network, and attacks stemming from a compromised node are far more detrimental and much harder to detect. --- paper_title: An optimized adaptive algorithm for authentication of safety critical messages in VANET paper_content: Authentication is one of the essential frameworks to ensure safe and secure message dissemination in Vehicular Adhoc Networks (VANETs). But an optimized authentication algorithm with reduced computational overhead is still a challenge. In this paper, we propose a novel classification of safety critical messages and provide an adaptive algorithm for authentication in VANETs using the concept of Merkle tree and Elliptic Curve Digital Signature Algorithm (ECDSA). Here, the Merkle tree is constructed to store the hashed values of public keys at the leaf nodes. This algorithm addresses Denial of Service (DoS) attack, man in the middle attack and phishing attack. Experimental results show that the algorithm reduces the computational delay by 20 percent compared to existing schemes. --- paper_title: An Analytical Study of Routing Attacks in Vehicular Ad-hoc Networks (VANETs) paper_content: Vehicular Ad-hoc Network (VANET) is an emerging and challenging research area that provides Intelligent Transportation System (ITS) services to end users. The implementation of routing protocols in VANETs is a demanding task because of their high mobility and frequently disrupted topology. VANETs are used to provide various infotainment services to end users; these services in turn help provide an efficient driving environment.
At present, to provide efficient communication in vehicular networks several routing protocols have been designed, but the networks are vulnerable to several threats in the presence of malicious nodes. Today, security is the major concern for various VANET applications where a wrong message may directly or indirectly affect the human lives. In this paper, we investigate the several security issues on network layer in VANET. In this, we also examine routing attacks such as Sybil & Illusion attacks, as well as available solutions for such attacks in existing VANET protocols. --- paper_title: Mitigating the Effects of Position-Based Routing Attacks in Vehicular Ad Hoc Networks paper_content: In this paper, we investigate the effects of routing loop, sinkhole, and wormhole attacks on the position-based routing (PBR) in vehicular ad hoc networks (VANETs). We also introduce a new attack termed wormhole-aided sybil attack on PBR. Our study shows that the wormhole-aided sybil attack has the worst impact on the packet delivery in PBR. To ensure the reliability of PBR in VANETs, we propose a set of plausibility checks that can mitigate the impact of these PBR attacks. The proposed plausibility checks do not require adding extra hardware to the vehicles. In addition, they can adapt to different road characteristics and traffic conditions. Simulation results are given to demonstrate that the proposed plausibility checks are able to efficiently mitigate the impact of the previously mentioned PBR attacks. --- paper_title: VANet security challenges and solutions: A survey paper_content: Abstract VANET is an emergent technology with promising future as well as great challenges especially in its security. In this paper, we focus on VANET security frameworks presented in three parts. The first presents an extensive overview of VANET security characteristics and challenges as well as requirements. These requirements should be taken into consideration to enable the implementation of secure VANET infrastructure with efficient communication between parties. We give the details of the recent security architectures and the well-known security standards protocols. The second focuses on a novel classification of the different attacks known in the VANET literature and their related solutions. The third is a comparison between some of these solutions based on well-known security criteria in VANET. Then we draw attention to different open issues and technical challenges related to VANET security, which can help researchers for future use. --- paper_title: The scrambler attack: A robust physical layer attack on location privacy in vehicular networks paper_content: Vehicular networks provide the basis for a wide range of both safety and non-safety applications. One of the key challenges for wide acceptance is to which degree the drivers' privacy can be protected. The main technical privacy protection mechanism is the use of changing identifiers (from MAC to application layer), so called pseudonyms. The effectiveness of this approach, however, is clearly reduced if specific characteristics of the physical layer (e.g., in the transmitted signal) reveal the link between two messages with different pseudonyms. In this paper, we present such a fingerprinting technique: the scrambler attack. 
In contrast to other physical layer fingerprinting methods, it does not rely on potentially fragile features of the channel or the hardware, but exploits the transmitted scrambler state that each receiver has to derive in order to decode a packet, making this attack extremely robust. We show how the scrambler attack bypasses the privacy protection mechanism of state-of-the-art approaches and quantify the degradation of drivers' location privacy with an extensive simulation study. Based on our results, we identify additional technological requirements in order to enable privacy protection mechanisms on a large scale. --- paper_title: Protecting Location Privacy with Personalized k-Anonymity: Architecture and Algorithms paper_content: Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty. --- paper_title: Vehicular Ad Hoc Networks paper_content: Vehicular Ad Hoc Networks (VANETs) are computer networks of moving nodes attached to vehicles. They also include stationary infrastructure elements. This use of wireless local area network technology is becoming a necessity and will grow rapidly in the near future. There is a wide range of VANET and associated Dedicated Short Range Communications (DSRC) applications. It starts from simple cases like active safety systems for driving, traffic information and collision warning, to more comprehensive Intelligent Transport Systems (ITS). A step up from simple driver assistance will be the complete takeover of vehicle control. Finally, VANETs will support auto driver algorithms and applications for unmanned ground vehicles (UGVs).
The aim of this paper is to present an overview of the technology, standards and problems, with some proposals for the quick start with the most necessary applications and for the future research. --- paper_title: Software defined security for vehicular ad hoc networks paper_content: Vehicular Ad hoc NETwork (VANET) is one of the intensively growing technologies which shape new social and engineering opportunities such as smart traffic control, effective road safety, optimal rescue maintenance, enhanced customer services. However, common VANET as a large-scale wireless environment with extremely dynamic topology lacks in information security. Specific nature of VANET prevents traditional solutions to be applied ‘as is’ for security purposes. The paper reviews software defined security (SDS) suggested to be employed to provide the programmable and flexible access control to VANET. --- paper_title: A Pseudonym Management System to Achieve Anonymity in Vehicular Ad Hoc Networks paper_content: In this paper we propose a framework for providing anonymity to communicating cars in VANETs. The anonymity is accomplished based on a system of pseudonym generation, distribution, and replenishing. The road side units (RSUs) play a key role in the framework by receiving the originally generated pseudonyms from the trusted authority, and then distributing pseudonym sets to cars while shuffling the sets amongst themselves to maximize anonymity. The pseudonym distribution process among the RSUs and to the vehicles is highly adaptive to accommodate the needs of the vehicles. We develop a distributed optimization algorithm for the shuffling process and a novel mechanism for cars to change their pseudonyms. Experimental evaluations based on ns3 simulations demonstrate the effectiveness of the framework through showing relatively high values of the used metric, namely the anonymity set. --- paper_title: Privacy in VANETs using Changing Pseudonyms - Ideal and Real paper_content: Vehicular ad hoc networks (VANETs) and vehicular communications are considered a milestone in improving the safety, efficiency and convenience in transportation. Vehicular ad hoc networks and many vehicular applications rely on periodic broadcast of the vehicles' location. For example, the location of vehicles can be used for detecting and avoiding collisions or geographical routing of data to disseminate warning messages. At the same time, this information can be used to track the users' whereabouts. Protecting the location privacy of the users of VANETs is important, because lack of privacy may hinder the broad acceptance of this technology. Frequently changing pseudonyms are commonly accepted as a solution to protect the privacy in VANETs. In this paper, we discuss their effectiveness and different methods to change pseudonyms. We introduce the context mix model that can be used to describe pseudonym change algorithms. Further, we asses in which situations, i.e. mix contexts, a pseudonym change is most effective and improves the privacy in vehicular environments. ---
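The pseudonym-based privacy schemes summarised in the last two references above amount to rotating identifiers only when a vehicle is inside a favourable mix context, so that the resulting anonymity set is large. A minimal, hedged sketch of that decision rule follows; the neighbour model, the thresholds, and every name in it are illustrative assumptions, not the mechanisms proposed in those papers.

```python
# Hedged sketch of a "mix context" pseudonym-change rule: rotate the identifier only
# when enough similar neighbours are nearby, so the anonymity set is large.
# Thresholds, the neighbour model, and all names are illustrative assumptions.
import math
import secrets
from dataclasses import dataclass

@dataclass
class Neighbour:
    x: float      # position [m]
    y: float      # position [m]
    speed: float  # [m/s]

def anonymity_set(me, neighbours, radius_m=100.0, speed_tol=5.0):
    """Vehicles close enough and moving similarly enough to be confused with 'me'."""
    return [n for n in neighbours
            if math.hypot(n.x - me.x, n.y - me.y) <= radius_m
            and abs(n.speed - me.speed) <= speed_tol]

def maybe_change_pseudonym(current_pseudonym, me, neighbours, k_min=5):
    """Rotate the pseudonym only inside a sufficiently large mix context."""
    if len(anonymity_set(me, neighbours)) + 1 >= k_min:
        return "PSN-" + secrets.token_hex(4)   # fresh random identifier (illustrative)
    return current_pseudonym                   # changing now would gain little privacy

# Example: six nearby vehicles at similar speed -> the pseudonym is rotated.
me = Neighbour(0.0, 0.0, 13.9)
others = [Neighbour(10.0 * i, 3.5, 13.0 + 0.2 * i) for i in range(1, 7)]
print(maybe_change_pseudonym("PSN-initial", me, others))
```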
Title: Positioning Information Privacy in Intelligent Transportation Systems: An Overview and Future Perspective Section 1: Introduction Description 1: Introduce the penetration of technology in modern systems, with emphasis on Intelligent Transportation Systems (ITS) and their main components. Outline the key contributions and structure of the paper. Section 2: Solutions for Spotting of Vehicles on the Road Description 2: Present various approaches for vehicle localization, detailing their benefits and drawbacks. Section 3: Node-centric Localization Description 3: Discuss localization methods where vehicles determine their locations using data from neighboring vehicles and other direct communications. Section 4: Human-centric Localization Description 4: Explore localization methods focused on human-carried gadgets and their interaction with vehicles. Section 5: Vehicle Location Protocols Using Additional Information Description 5: Describe protocols for vehicle location determination involving external data, explaining the technical details and security aspects. Section 6: Distance Determination without Anonymity Description 6: Review distance-bounding protocols that do not ensure vehicle anonymity, discussing their operation and limitations. Section 7: Location Determination with Mutual Base Station Authentication Description 7: Present enhanced protocols that guarantee mutual authentication between vehicles and base stations while preserving location privacy. Section 8: Related Security and Privacy Threats Description 8: Identify various security and privacy threats related to vehicle localization and data exchange in ITS, detailing conventional, infrastructure-related, and distributed attacks. Section 9: Discussion and Future Perspectives Description 9: Provide an overview of future development for privacy strategies in the context of positioning, specifically considering EU regulations and technological advancements such as 5G. Section 10: Acknowledgments and Conflicts of Interest Description 10: Acknowledge the contributions and support for the research and disclose any potential conflicts of interest.
Ammunition Reliability Against the Harsh Environments During the Launch of an Electromagnetic Gun: A Review
9
--- paper_title: A long-range naval railgun paper_content: The U.S. Navy is considering developing an electromagnetic railgun for use on future ships for long-range shore bombardment missions. The goals are to provide support for ground forces in a timely fashion, increase the ship-to-shore standoff distance, and improve ship survivability in combat situations. This paper describes the parameters of a notional railgun design that may be capable of supporting the Navy's needs. The Naval Surface Fire Support mission requires a railgun capable of firing high-energy projectiles for ranges of 300-500 km with a firing rate of up to 12 rounds per minute. The notional system described here is intended to meet these requirements while providing the ability to take advantage of the integrated electric drive architecture to be used on the next generation destroyer. Several important technology issues will need to be addressed before the feasibility of such a system can be demonstrated. These issues are identified and discussed. --- paper_title: Review of Inductive Pulsed Power Generators for Railguns paper_content: This paper addresses inductive pulsed power generators and their major components. Different inductive storage designs, such as solenoids, toroids, and force-balanced coils, are briefly presented and their advantages and disadvantages are mentioned. Special emphasis is given to inductive circuit topologies, which have been investigated in railgun research, such as the XRAM, meat grinder, or pulse transformer topologies. One section deals with opening switches as they are indispensable for inductive storages and another one deals briefly with superconducting magnetic energy storage for pulsed power applications. Finally, the most relevant inductor systems, which have been realized with respect to railgun research, are summarized in Table I , together with their main characteristics. --- paper_title: Development of the high-velocity gas-dynamics gun paper_content: Abstract The subjects of this paper are the historical overview and development of the high-velocity gas-dynamics gun. These are guns that derive their energy from a reservoir of compressed gas. Other 3uns derive their energy from electricity or from high explosive. Their historical overviews and developments are covered in papers by Mr. William Weldon and Mr. Alex Wenzel. The gas dynamics gun is viewed first from the standpoint of modern technology. An idealized configuration, the “Reference Gun”, is analysed in order to quantify the effects of gun diameter and length, projectile mass, and propellant gas pressure and composition. The analysis assumes that the propellant is an ideal gas, and formulae are derived for the base pressure and velocity of the projectile as functions of the size and loading parameters of the gun. The analysis demonstrates that the prime requirements for high velocity are a high gas pressure, a low molecular weight gas, a light projectile, and a long gun. The history of guns is reviewed briefly from 14th century black-powder muzzle-loaders to 20th century, nitrocellulose -propellant, breech-loaded guns. The velocity limit of the modern gun is shown to be around 3 km/s, if the gun is loaded with nitrocellulose propellant and is very long (200 calibers). However, if the gun is loaded with hydrogen and the length doubled, it is shown that the velocity limit can be increased to 7 km/s, thus approaching current needs. 
The problem of using hydrogen has been solved by the invention of the piston-compression light-gas gun (PCLGG). However, the limited strength of the fragile, sabot-model projectiles of experimental research has capped the maximum acceleration and has placed a demand on the gun's operating cycle to generate a constant pressure at the base of the projectile for the launching run. This second problem has been partially solved by the invention of a modification to the PCLGG known as the piston-compression, accelerated-reservoir, light-gas gun (PCARLGG). Both the PCLGG and the PCARLGG are described. The performance of the PCARLGG has been analyzed by a hydrocode developed for this purpose, and the results of the calculations are presented and compared with experiment. The concept of a frictionless, adiabatic “Ideal Gun” is introduced in order to simplify the analysis of performance. It is shown that the performance of any ideal gun is given by a simple equation involving two dimensionless parameters that relate the projectile's velocity to its mass, its average base pressure, and the diameter and length of the gun. Based on the ideal gun equation, the maximum operating velocity of the gas-dynamics gun is estimated to be about 12 km/s. --- paper_title: Naval Railguns paper_content: Compared with propellant guns, railguns can fire at higher velocities and do not require gun propellant but use ships' fuel. These features lead to important advantages, including shorter time of flight (important for ship defense), higher lethality on target (important for direct fire), and very extensive range capability (important for support of troops on shore). Such extended range capability also supports the sea-basing concept in which a forward-deployed battle group is able to operate far enough off shore to be safe while providing a long reach for distant targets. In this paper, the characteristics of the railgun systems needed for these applications are identified and discussed, leading to a definition of the most important science and technology objectives for near-term research programs --- paper_title: Miniature mechanical safety and arming device with runaway escapement arming delay mechanism for artillery fuze paper_content: Abstract In this research, a miniature mechanical SAD (Safety and Arming Device) with arming delay was developed for actual munitions application. Reliable arming delay performance was achieved by applying a runaway escapement system that operates by a rack-and-pinion motion. The miniature mechanical SAD was fabricated using a stainless steel wet etching process that provided not only miniaturization but also a high processing yield. The miniature mechanical SAD performed successfully under the desired safety and arming conditions in lab tests and showed fine agreement with the finite element method simulation results. Field tests were performed with a grenade launcher to validate its performance under the actual firing conditions. One hundred samples that were shot 23.6 m (safety distance) and 200 m (arming distance), and every specific test criterion was met successfully. The new SAD was also found to be appropriate for safe use in artillery fuzes by conducting environmental tests under a variety of temperature, vibration, and impact conditions. 
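The "Ideal Gun" scaling referred to in the gas-dynamics-gun reference above can be illustrated with elementary work-energy reasoning, assuming a constant base pressure and no losses; this is a textbook approximation, not the paper's exact two-parameter dimensionless equation, and the numbers below are assumed purely for illustration.

```latex
% Constant base pressure p on bore area A over barrel length L_b, projectile mass m:
\tfrac{1}{2} m v^2 = p\,A\,L_b
\qquad\Longrightarrow\qquad
v = \sqrt{\frac{2\,p\,A\,L_b}{m}}
% Assumed example: p = 300\;\mathrm{MPa},\ A = \pi(0.05\,\mathrm{m})^2/4 \approx 1.96\times10^{-3}\,\mathrm{m^2},
% L_b = 5\;\mathrm{m},\ m = 1\;\mathrm{kg}\ \Rightarrow\ v \approx 2.4\;\mathrm{km/s},
% consistent with the roughly 3 km/s ceiling quoted for conventional propellant guns.
```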
--- paper_title: Measure variation of magnetic field waveforms above the rails of rail-gun during the launching period paper_content: The rail-gun has two rails with an armature sliding along the rails, and the armature is accelerated by the transient current. In order to analyze the variation of the magnetic field waveforms above the rails of the rail-gun during the launching period for EMI design, the magnetic fields are measured by loop probes, and the current waveforms are measured by calibrated probes. The magnetic field waveform beside the static rails has the same rise time and pulse width as the main current. According to the armature movement states and the measurement positions, the magnetic waveforms can be distinguished from each other. The experimental results show that the magnetic field waveforms above the rails change with position. The rise time and duration become narrower at higher velocities. The truncated waveforms introduce new spectral components around the rails. The waveforms at different positions are compared with each other, which reflects the magnetic field distribution along the rails while the armature is being launched. The results are useful for analyzing the interference of the transient environment of the rail launcher and for the EMI design of rail-gun systems. --- paper_title: The 100-kJ Modular Pulsed Power Units for Railgun paper_content: A 3.2-MJ pulsed power supply system for railgun has been built. The power supply system consists of 32 pulse forming units (PFUs) of 100 kJ each. Each PFU can be triggered and controlled independently by the central computer via fiber optic transmitters. The maximum total current of the 3.2-MJ supply system reaches 500 kA. The new PFU is smaller than the former one. The PFU is composed of two capacitors, a set of semiconducting switches (thyristor and crowbar diode), an inductor, a safety device, a local controller and data acquisition circuit, a current coil, and coaxial cables. The modular PFU is designed for a maximum charging voltage of 10 kV and a peak current of up to 50 kA/module. In order to reduce the influence of the electromagnetic interference and the electromagnetic force and to provide high flexibility and reliability, the arrangement of components and the configuration of the pulsed power module have been improved. A local data acquisition circuit in each PFU can record and save the discharging current and the voltage of the capacitor itself and upload test data to the computer by fiber optics. This paper describes the stack configuration, the test facility, and the prototype arrangement of 32 modular pulsed power units. --- paper_title: A Four-Stage XRAM Generator as Inductive Pulsed Power Supply for a Small-Caliber Railgun paper_content: A four-stage XRAM generator is built and then applied to a railgun having a caliber of 15 × 15 mm. The XRAM generator is based on 1-mH Brooks coils and on standard high-power thyristors which are employed as opening switches according to the ICCOS countercurrent commutation principle. The coils of the generator are charged with a current of 10 kA, which corresponds to an energy of 200 kJ. The current is successfully commutated and a pulse with a maximum current of 40 kA is applied to the railgun. Consequently, a projectile with a total weight of 33 g is accelerated to 157 m/s within the railgun length of 2.1 m.
The semiconductor switches are exposed to a steep voltage rise due to the muzzle flash when the projectile leaves the railgun, without their critical limit being exceeded. Thus, the successful operation of the XRAM generator using semiconductor switches during railgun experiments is demonstrated. Furthermore, we are able to show in a further experiment that the muzzle flash of the railgun can be significantly reduced by switching the XRAM back into charging mode when the projectile leaves the bore. This method also allows the conservation of the energy that remains in the coils after the exit of the projectile from the railgun. --- paper_title: History and the Status of Electric Ship Propulsion, Integrated Power Systems, and Future Trends in the U.S. Navy paper_content: While electric propulsion for warships has existed for nearly a century, it has only been since the end of the Cold War that modern integrated power systems have been developed and implemented on U.S. Navy warships. The principal enablers have been the products of research and development for rotating machines (generators and propulsion motors), power electronics (power conversion and motor drives), energy storage, and controls. The U.S. Navy has implemented this advanced technology incrementally. Notably, DDG 1000 with its integrated propulsion system and CVN 78 with its electromagnetic aircraft launch system will soon join the fleet and mark another important advance toward the electric warship. In the future, the integration of electric weapons such as railguns, high power radars, and lasers will result in the final achievement of the electric warship. --- paper_title: The image processing and target identification of laser imaging fuze paper_content: Imaging detection can capture the geometric shape and exterior features of a target at very close distance and supply the information needed for target recognition. However, because of the large amount of information and the real-time requirement, the image of the target is always distorted and incomplete. To solve this problem, the image processing and target identification method of a laser imaging fuze is introduced. First, a thinning algorithm for binary images is proposed to obtain the target skeleton. Second, the features of the target are extracted; this kind of feature is very helpful for the hardware design of the system. Finally, the FMW (Feature Mapping Wave) network is developed from the M-P network and ART theory to recognize the target status. This image processing system includes plane target identification and identification of the missile-plane engagement state. The validity of the system has been verified. --- paper_title: Design and fabrication of large- and small-bore railguns paper_content: A joint program between the Lawrence Livermore National Laboratory and the Los Alamos National Laboratory was conducted to establish whether railguns could be operated at megampere currents, to set operating limits, and to provide data to validate the modeling of railgun technology. This paper discusses the 12.7- and 50.0-mm-bore railguns designed and fabricated for this program. The design criteria, the materials and fabrication methods, and the success of the designs are discussed in detail. --- paper_title: Thermal Analysis of High-Energy Railgun Tests paper_content: This paper describes temperature measurements made on the high-energy medium-caliber launcher at the Institute for Advanced Technology.
Simulations performed in Maxwell 3-D and E-Physics showed that Joule heating from current diffusing into the rails accounts for most of the temperature rise in the conductors. Temporal skin effects increase thermal dissipation significantly over what would be expected by the ohmic losses under fully diffused conditions. Based on this analysis, Joule heating is the overwhelmingly dominant source of heating in low-speed tests. As the velocity of the armature increases, Joule heating remains the dominant source of heat; however, additional mechanisms-which may include frictional heating, arcing energy, aluminum deposition, and temperature-dependent properties-are required to more satisfactorily explain the temperature profile obtained. --- paper_title: Electromagnetic launch: a review of the U.S. National Program paper_content: The United States Program to use electric energy rather than chemical energy to propel materials to high velocity is now focused almost entirely on fundamental research efforts to provide an understanding of the critical research issues for selected military applications, specifically on applications of direct interest to the U.S. Army. Almost all of the applications envisioned since the inception of the program in the late 1970's still appear to be viable. But, the military interest to propel projectiles to higher and higher velocities for direct and indirect fire applications dominates the funding for research and consequently determines the current directions of the science and technology. --- paper_title: A 10 MJ cryogenic inductor paper_content: A high current, high efficiency inductor for a 10 MJ pulsed power supply for powering experimental electromagnetic launcher systems has been designed and is in fabrication. This inductor employs liquid nitrogen cooled copper windings which are compensated for transient effects and high strength aluminum structure to react magnetic forces. A toroidal geometry minimizes external fields, and is capable of being connected in either a 20 µH or 5 µH configuration with peak currents of 1 and 2 MA, respectively. The details of the concept selection and design are presented. --- paper_title: Early electric gun research paper_content: There have been sporadic efforts to use electricity to power guns and launch projectiles at high velocity for more than a century. Each generation seems to have "reinvented" the idea and has built upon prior efforts, leading to successively greater progress. With the present interest in electric guns for military, and other, applications, it is instructive to review previous developments and examine their relevance to present research. Although there are isolated reports of early efforts in the 19th century, the major developments have been in the 20th century. They include efforts in Norway in 1901, French developments in World War I and research in Germany and Japan during World War II. The primary focus of this paper is to review the work of Andre Louis Octave Fauchon-Villeplee in France and Joachim Hansler in Germany, although the early Japanese and Russian interests are also mentioned. The various novel ideas suggested are highlighted, especially where they are relevant to ideas that are being studied or considered today. --- paper_title: The ISL Rapid Fire Railgun Project RAFIRA Part I: Technical Aspects and Design Considerations paper_content: This paper reports on the progress made within the rapid fire railgun (RAFIRA) project recently launched at the ISL. 
The goal of this project is to investigate the multishot capacity of the armature technology developed at ISL. This technology is characterized by the use of multiple metal fiber brush armatures. The project bases on results mainly obtained with the railgun EMA3 at ISL (muzzle velocity v0 les 2 km/s, applied energy per shot Eprim < 1MJ, 1 = 3 m, regular cal. = 15 mm x 30 mm). These results are characterized by good contact performance with respect to friction, rail erosion and contact transition in the range up to 2 km/s. The paper summarizes the main results in single shot mode and introduces preliminary experiments with respect to multishot applications. Subsequently technical aspects about loading and switching technologies for a multishot system are discussed and last but not least concrete design considerations for the medium caliber system RAFIRA are presented. --- paper_title: Magnetic Diffusion Inside the Rails of an Electromagnetic Launcher: Experimental and Numerical Studies paper_content: The topic of this paper is the distribution of magnetic fields inside the rails of the electromagnetic railgun RAFIRA located at the ISL. The magnetic field pulse characteristics are measured using colossal magnetoresistance-B-scalar sensors placed at different depths inside the rails of the accelerator. During launch the muzzle velocity reached up to 1.4 km/s, the electrical shot energy is about 1.2 MJ and the projectile mass was 140 g. The obtained results are analyzed using two models based on analytic solutions of Maxwell's equations. The first model considers the 1-D magnetic field diffusion in the direction perpendicular to the rails. The second model includes convection and simulates the 2-D behavior of the magnetic field distribution in three regions: the armature, the contact zone between rail and armature and the rail behind the armature. Additionally, 2-D and 3-D quasistationary finite element models are developed using Comsol Multiphysics. Excellent agreement is found between the 3-D simulation results and the measurements of magnetic diffusion. --- paper_title: Thermal and electromagnetic analysis of an electromagnetic launcher paper_content: An advanced high-power electromagnetic launcher (EML) improves performance by as much as 30% over conventional launchers. Electrical energy is the main driving source for the electromagnetic launcher. In the new EML, thermal energy, generated by the extraordinarily high current that goes through the rail and the armature, changes the electrical, thermal, and mechanical specifications of the structure. This paper reports on a study of the thermal and magnetic induction distribution in the rail and the armature at different locations. In our formulation of governing nonlinear differential equations, because of the electrical conductivity and ohmic heating of the rail and the armature, Maxwell equations are coupled with energy equations. The friction force that causes heat between the armature and the rail is considered in the equations, as is the melting latent heat effect. To solve the nonlinear governing differential equations, we utilize an unstructured, moving-mesh-generation, control-volume-based finite-difference code for the rail and the armature. In this method of solution, unlike most others, the rail stays stationary and the armature moves in the forward direction. Results obtained for the rail and the armature show that the maximum temperature occurs at the trailing edge of the armature. 
In this region, the temperature reaches about 600 K. However, the temperature of 1 m of rail stays around 360 K. --- paper_title: Pulsed magnetic field measurement system based on colossal magnetoresistance-B-scalar sensors for railgun investigation paper_content: A high pulsed magnetic field measurement system based on the use of CMR-B-scalar sensors was developed for the investigations of the electrodynamic processes in electromagnetic launchers. The system consists of four independent modules (channels) which are controlled by a personal computer. Each channel is equipped with a CMR-B-scalar sensor connected to the measurement device—B-scalar meter. The system is able to measure the magnitude of pulsed magnetic fields from 0.3 T to 20 T in the range from DC up to 20 kHz independently of the magnetic field direction. The measurement equipment circuit is electrically separated from the ground and shielded against low and high frequency electromagnetic noise. The B-scalar meters can be operated in the presence of ambient pulsed magnetic fields with amplitudes up to 0.2 T and frequencies higher than 1 kHz. The recorded signals can be transmitted to a personal computer in a distance of 25 m by means of a fiber optic link. The system was tested using the electromagnet... --- paper_title: Measurement of Solid Armature’s In-Bore Velocity Using B-Dot Probes in a Series-Augmented Railguns paper_content: The waveform of solid armature displacement and velocity can be obtained by arranging B-dot probe arrays along the barrel of a series augmented railgun. The moment of armature arrival can be detected by an evident change in the differential signal from the B-dot probe while armature passes by. However, it is difficult or even impossible in a railgun to directly pick up the arrival moment, because both armature movement and rail current are responsible for the change of differential signal, especially in the series-augmented railgun where currents in both outer rails and connecting conductors are probably sensed by the B-dot probe. A ratio function (RF) has been introduced to eliminate the influence of current change, which is the ratio of the integral of differential signal to the current. The RF of the armature probe aimed at the armature current will reach a maximum at the moment of armature arrival, while the one at the rail current will reach the median. Three shots with identical initial conditions were conducted in a series-augmented railgun, each of which had adopted the armature probe, rail probe, and Velocity Interferometer System for Any Reflector, respectively. The good agreement between the displacement and velocity waveforms derived from B-dot probes indicated that the method and measurement were valid. The accuracy of the B-dot method is mainly affected by probe dimension and position, railgun current distribution, and electromagnetic noise. --- paper_title: Application of W-band, Doppler Radar to Railgun Velocity Measurements paper_content: Abstract This paper describes a W-Band Doppler radar system in use at the electromagnetic launch facility at the Naval Surface Warfare Center in Dahgren, VA. Experimental results for a medium caliber railgun launch at 2000 m/s are presented. A discussion of the radar signal characteristics is given and the Doppler velocity profile during launch is computed. The in-bore time history of the launch package derived from the radar system is found to have good agreement with that derived from traditional magnetic field flux sensors (B-dot sensors). 
Given the increased temporal resolution of the radar system over B-dot measurements, time resolved friction coefficients can be estimated based on launch parameters. Due to the capabilities of the Dahlgren test range, the radar is able to track the projectile for a portion of flight just after the projectile exits the launcher. Drag coefficients for the projectile in hypervelocity free-flight are also presented.
--- paper_title: Thermal runaway mechanism of lithium ion battery for electric vehicles: A review paper_content: Abstract The safety concern is the main obstacle that hinders the large-scale applications of lithium ion batteries in electric vehicles. With continuous improvement of lithium ion batteries in energy density, enhancing their safety is becoming increasingly urgent for the electric vehicle development. Thermal runaway is the key scientific problem in battery safety research. Therefore, this paper provides a comprehensive review on the thermal runaway mechanism of the commercial lithium ion battery for electric vehicles. Learning from typical accidents, the abuse conditions that may lead to thermal runaway have been summarized. The abuse conditions include mechanical abuse, electrical abuse, and thermal abuse. Internal short circuit is the most common feature for all the abuse conditions. The thermal runaway follows a mechanism of chain reactions, during which the decomposition reaction of the battery component materials occurs one after another. A novel energy release diagram, which can quantify the reaction kinetics for all the battery component materials, is proposed to interpret the mechanisms of the chain reactions during thermal runaway.
The relationship between the internal short circuit and the thermal runaway is further clarified using the energy release diagram with two cases. Finally, a three-level protection concept is proposed to help reduce the thermal runaway hazard. The three-level protection can be fulfilled by providing passive defense and early warning before the occurrence of thermal runaway, by enhancing the intrinsic thermal stability of the materials, and by reducing the secondary hazard like thermal runaway propagation. --- paper_title: Synergy of Melt-Wave and Electromagnetic Force on the Transition Mechanism in Electromagnetic Launch paper_content: Synergy of melting wave and electromagnetic force on the transition mechanism is studied in this paper. The 3-D electromagnetic field, temperature field, and structure field coupling model are established to simulate the velocity skin effect and melting wave phenomenon. It is found that the melting wave and electromagnetic force together lead to the occurrence of transition. And the two are inextricably linked. The melting wave will enlarge the separating electromagnetic force on the armature trailing arm. The separating electromagnetic force may result in the reduction of contact area, which would accelerate the melting wave. It is getting worse and worse, eventually lead to the transition. And the analytical results are consistent with the experimental phenomena. The transition prediction method will be more accurate when combining the melt-wave and the electromagnetic force. --- paper_title: 3D numerical simulation and analysis of railgun gouging mechanism paper_content: A gouging phenomenon with a hypervelocity sliding electrical contact in railgun not only shortens the rail lifetime but also affects the interior ballistic performance. In this paper, a 3-D numerical model was introduced to simulate and analyze the generation mechanism and evolution of the rail gouging phenomenon. The results show that a rail surface bulge is an important factor to induce gouging. High density and high pressure material flow on the contact surface, obliquely extruded into the rail when accelerating the armature to a high velocity, can produce gouging. Both controlling the bulge size to a certain range and selecting suitable materials for rail surface coating will suppress the formation of gouging. The numerical simulation had a good agreement with experiments, which validated the computing model and methodology are reliable. --- paper_title: Experiments with the Green Farm electric gun facility paper_content: The design and construction of a major electric gun facility at Green Farm in San Diego is described. The facility is driven by an 11 kV 32 MJ capacitor bank, arranged in 4 MJ modules that can be independently triggered to provide a choice of pulse shapes. The bank has powered a variety of EM and ET guns, including an 8-meter Single Shot Rail gun and a modified 5-inch Naval gun. World record high energy and high velocity shots have been achieved. The highest rail gun muzzle energy was 8.6 MJ and the highest muzzle velocity was 4.3 km/s with a plasma driven projectile of 0.64 kg, corresponding to a muzzle energy of 6 MJ. Intact projectile launch and flight was confirmed by high speed (8000 frames/sec.) photography and flash X-rays. 
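As a quick, hedged consistency check on the Green Farm figures quoted in the reference above (plain kinetic-energy arithmetic, not taken from the paper itself):

```latex
E = \tfrac{1}{2} m v^2
  = \tfrac{1}{2}\,(0.64\;\mathrm{kg})\,(4.3\times10^{3}\;\mathrm{m/s})^2
  \approx 5.9\;\mathrm{MJ}
% which is consistent with the quoted "muzzle energy of 6 MJ" for the 0.64 kg, 4.3 km/s launch.
```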
--- paper_title: Analysis of in-bore magnetic field in C-shaped armature railguns paper_content: Abstract In order to analyze the distribution characteristics of the in-bore magnetic field of a C-shaped armature electromagnetic railgun, a computational model considering dynamic contact pressure is established. By solving the dynamic equation, the in-bore motion characteristics of the armature are obtained. The distribution of current in the rail and armature is analyzed based on the magnetic diffusion equation and Ampere's law. On this basis, three simulation models are proposed, which correspond to the static state, the motion state and the motion state considering the velocity skin effect. The magnetic fields at the investigated points along the central axis of the armature front end are obtained. The results show that, in the static state, the peak magnetic flux density at each investigated point is greater than in the other two states. The velocity skin effect leads to a decrease in peak magnetic flux density. The change of motion state has little influence on the peak magnetic flux density at the investigated points that are far away from the armature. The calculated results can be used in the electromagnetic shielding design of intelligent ammunition. --- paper_title: Research on Thermal Stress by Current Skin Effect in a Railgun paper_content: In this paper, research on the material damage caused by thermal stress in electromagnetic launch is presented. A 3-D finite-element program is established to calculate the transient electromagnetic fields and temperature fields, including a moving armature. The focus is mainly on the high temperature and large thermal stress damage to the rail. Thermal stress is produced by the current density concentration, which is defined as the skin effect. There are two main reasons for the current skin effect. One is due to the rapid changes of the current wave. The other is due to the moving conductors, known as the velocity skin effect. The simulation results illustrate that, under the influence of the current skin effect, the thermal stress concentrates in a small area. This area may cause grooving damage at startup and gouging damage at high velocity. Finally, the results are in agreement with the experiments. --- paper_title: Numerical simulation of interior ballistic process of railgun based on the multi-field coupled model paper_content: Railgun launcher design relies on appropriate models. A multi-field coupled model of the railgun launcher was presented in this paper. The 3D transient multi-field was composed of the electromagnetic field, the thermal field and the structural field. The magnetic diffusion equations were solved by a finite-element boundary-element coupling method. The thermal diffusion equations and structural equations were solved by a finite element method. A coupled calculation was achieved by transferring data from the electromagnetic field to the thermal and structural fields. Some characteristics of railgun shots, such as the velocity skin effect, melt-wave erosion and magnetic sawing, which are generated under the condition of large-current and high-speed sliding electrical contact, were demonstrated by numerical simulation.
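The magnetic-diffusion part of the coupled models discussed in the references above reduces, in one dimension, to ∂B/∂t = (1/μ0σ) ∂²B/∂x². The following hedged sketch integrates that equation with an explicit finite-difference scheme; the copper conductivity, rail depth, step-field drive and all names are assumptions for illustration, not the boundary conditions or solvers used in the cited work.

```python
# Hedged sketch: explicit 1-D magnetic diffusion into a rail,
#   dB/dt = eta * d^2B/dx^2,  eta = 1/(mu0*sigma).
# Geometry, material values and the step-field drive are illustrative assumptions.
import math

MU0 = 4e-7 * math.pi
SIGMA_CU = 5.8e7                 # copper conductivity [S/m] (assumed rail material)
ETA = 1.0 / (MU0 * SIGMA_CU)     # magnetic diffusivity [m^2/s], roughly 0.014 for copper

def diffuse_step_field(b_surface=5.0, depth_m=0.02, nx=81, t_end=2e-3):
    """Return B(x) after t_end seconds for a step field b_surface applied at x = 0."""
    dx = depth_m / (nx - 1)
    dt = 0.4 * dx * dx / ETA               # explicit FTCS stability: dt <= dx^2/(2*eta)
    steps = int(t_end / dt)
    b = [0.0] * nx
    b[0] = b_surface                        # field suddenly applied at the bore-side surface
    for _ in range(steps):
        new_b = b[:]
        for i in range(1, nx - 1):
            new_b[i] = b[i] + ETA * dt / (dx * dx) * (b[i + 1] - 2.0 * b[i] + b[i - 1])
        new_b[-1] = new_b[-2]               # zero-gradient condition at the far face
        b = new_b
    return b

profile = diffuse_step_field()
# After 2 ms the field has penetrated only a few millimetres into the rail,
# illustrating why skin effects dominate the current and heating distribution.
print(["%.2f" % v for v in profile[:10]])
```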
The magnetic field pulse characteristics are measured using colossal magnetoresistance-B-scalar sensors placed at different depths inside the rails of the accelerator. During launch the muzzle velocity reached up to 1.4 km/s, the electrical shot energy is about 1.2 MJ and the projectile mass was 140 g. The obtained results are analyzed using two models based on analytic solutions of Maxwell's equations. The first model considers the 1-D magnetic field diffusion in the direction perpendicular to the rails. The second model includes convection and simulates the 2-D behavior of the magnetic field distribution in three regions: the armature, the contact zone between rail and armature and the rail behind the armature. Additionally, 2-D and 3-D quasistationary finite element models are developed using Comsol Multiphysics. Excellent agreement is found between the 3-D simulation results and the measurements of magnetic diffusion. --- paper_title: The Use of Electronic Components in Railgun Projectiles paper_content: This paper deals with experiments and calculations performed in order to investigate the influence of the electromagnetic hardening of payloads in a railgun. This is a complex task: besides the large amplitudes of the in-bore magnetic fields due to the pulsed current, the exit of the projectile from the muzzle and the consequences of plasma arcs have to be considered. At the muzzle the magnetic induction can drop from several Teslas to zero within some microseconds, leading to very high induced voltages and electric fields in the metallic parts of the projectile. On the other hand, the electric contact established by solid armatures tends to develop into electric arcs at high velocities during the launch. These plasma arcs as well as the closing switch transients of the railgun circuit are a source of electromagnetic radiation in a broad spectral range. Some electronic devices were selected and tested with static setups corresponding to the previous conditions. In a first phase a series of static railgun experiments (no projectile movement) was performed. In a second phase, static experiments simulating the muzzle exit conditions were carried out. Finally, the influence of electromagnetic waves emitted during railgun experiments on electronic devices was investigated, using a static setup with a conventional spark gap. --- paper_title: Microstructure, electromagnetic shielding effectiveness and mechanical properties of Mg–Zn–Y–Zr alloys paper_content: Abstract The microstructure, electromagnetic interference (EMI) shielding effectiveness (SE) and mechanical properties of Mg–Zn–Y–Zr alloys with 0–3.91 wt.% Y were investigated systematically in this work.
The results indicated that addition of Y brought about the formation of I-phase (Mg 3 Zn 6 Y) and W-phase (Mg 3 Zn 3 Y 2 ) and refined grains in as-cast state. After hot extrusion, there was more and more broken particles dispersing in matrix when Y content ranged from 0 to 3.19 wt.%. With increasing Y content, EMI SE was enhanced significantly in extruded state. The alloy with 1.9 wt.% Y exhibited the optimal EMI shielding capacity with the SE value of 79–118 dB. It was found that good mechanical properties could be achieved by adding very low Y content. The extruded alloy with 0.5 wt.% Y presented higher yield strength (268 MPa), ultimate tensile strength (334 MPa) and good elongation ( δ = 12.3%) compared with other extruded alloys. A subsequent aging treatment on the extruded alloy with 1.29 wt.% Y exhibiting outstanding comprehensive EMI SE and mechanical properties resulted in precipitation of W, β 1 ′ and β 2 ′ phases, which led to further improvement in EMI SE. The peak-aged sample showed the superior mechanical properties. Based on the microstructure observation, the changes of EMI shielding capacity and mechanical properties have been discussed. --- paper_title: Microstructure, electromagnetic shielding effectiveness and mechanical properties of Mg–Zn–Cu–Zr alloys paper_content: Abstract The microstructure, electromagnetic interference (EMI) shielding effectiveness (SE) and mechanical properties of Mg–Zn– x Cu–Zr alloys ( x = 0–2.32 wt.%) were investigated in this study. The results indicated that the addition of Cu led to the formation of MgZnCu phase with a face-center cubic structure, and resulted in grain refinement. EMI SE increased significantly with increasing Cu content in extruded state. The alloy with 2.32 wt.% Cu exhibited optimal EMI shielding capacity with SE value of 84–117 dB. Meanwhile, it was found that good mechanical properties could be achieved by adding low Cu content. The extruded alloy with 0.37 wt.% Cu presented higher yield strength (276 MPa), ultimate tensile strength (346 MPa) and elongation ( δ = 11.4%) compared with other extruded alloys. However, a higher Cu content would substantially deteriorate tensile properties of the alloys. Based on microstructure observation, the variation of EMI shielding capacity and mechanical properties have been discussed. --- paper_title: Electromagnetic shielding properties of soft magnetic powder–polymer composite films for the application to suppress noise in the radio frequency range paper_content: Abstract Electromagnetic absorption characteristics in the near- and the far-field regime were evaluated from measurements of power loss by the coaxial transmission and reflection method and the microstrip line method, respectively, for high-density soft magnetic Fe–Al–Si alloy–polymer composite films that were highly effective in the radio frequency (RF) range. The electromagnetic absorption in the near- and the far-field regime for the soft magnetic metal–polymer composite films was greatly dependent on the film density. The electromagnetic absorption in the RF range significantly increased with increasing film density, which was caused by the increase of the magnetic permeability and the electrical conductivity. As a result, the high-density soft magnetic film showed excellent electromagnetic absorption for the near- and the far-field electromagnetic shielding and was applicable as an electromagnetic absorber for high-frequency devices operated over 0.1 GHz. 
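The shielding-effectiveness values quoted in the preceding abstracts (e.g. 79–118 dB and 84–117 dB) follow the standard logarithmic definition used throughout the EMI literature rather than any formulation specific to those papers; as a minimal reference statement of that convention:
\mathrm{SE}\,[\mathrm{dB}] = 10\log_{10}\!\left(\frac{P_i}{P_t}\right) = 20\log_{10}\!\left(\frac{E_i}{E_t}\right), \qquad \mathrm{SE} \approx \mathrm{SE}_R + \mathrm{SE}_A + \mathrm{SE}_M
where P_i, E_i are the incident power and field, P_t, E_t the transmitted ones, and the total is commonly approximated as the sum of the reflection, absorption and multiple-reflection contributions discussed above.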
--- paper_title: Electromagnetic shielding mechanisms using soft magnetic stainless steel fiber enabled polyester textiles paper_content: Abstract This work studied the effects of conductivity, magnetic loss, and complex permittivity when using blended textiles (SSF/PET) of polyester fibers (PET) with stainless steel fibers (SSF) on electromagnetic wave shielding mechanisms at electromagnetic wave frequencies ranging from 30 MHz to 1500 MHz. The 316L stainless steel fiber used in this study had 38 vol% γ austenite and 62 vol% α′ martensite crystalline phases, which was characterized by an x-ray diffractometer. Due to the magnetic and dielectric loss of soft metallic magnetic stainless steel fiber enabled polyester textiles, the relationship between the reflection/absorption/transmission behaviors of the electromagnetic wave and the electrical/magnetic/dielectric properties of the SSF and SSF/PET fabrics was analyzed. Our results showed that the electromagnetic interference shielding of the SSF/PET textiles show an absorption-dominant mechanism, which attributed to the dielectric loss and the magnetic loss at a lower frequency and attributed to the magnetic loss at a higher frequency, respectively. --- paper_title: Magnetic and conductive graphene papers toward thin layers of effective electromagnetic shielding paper_content: Graphene-based hybrids, specifically free-standing graphene-based hybrid papers, have recently attracted increasing attention in many communities for their great potential applications. As the most commonly used precursors for the preparation of graphene-based hybrids, electrically-insulating graphene oxides (GO) generally must be further chemically reduced or thermally annealed back to reduced GO (RGO) if high electrical conductivity is needed. However, various concerns are generated if the hybrid structures are sensitive to the treatments used to produce RGO. In this work, we develop a highly facile strategy to fabricate free-standing magnetic and conductive graphene-based hybrid papers. Electrically conductive graphene nanosheets (GNs) are used directly to grow Fe3O4 magnetic nanoparticles without additional chemical reduction or thermal annealing, thus completely avoiding the concerns in the utilisation of GO. The free-standing Fe3O4/GN papers are magnetic, electrically conductive and present sufficient magnetic shielding (>20 dB), making them promising for applications in the conductive magnetically-controlled switches. The shielding results suggest that the Fe3O4/GN papers of very small thickness (<0.3 mm) and light weight (∼0.78 g cm−3) exhibit comparable shielding effectiveness to polymeric graphene-based composites of much larger thickness. Fundamental mechanisms for shielding performance and associated opportunities are discussed. --- paper_title: Electromagnetic shielding effectiveness of aluminum alloy-fly ash composites paper_content: Abstract The cenosphere and precipitator fly ash particulates were used to produce two kinds of aluminum matrix composites with the density of 1.4–1.6 g cm −3 and 2.2–2.4 g cm −3 separately. The electromagnetic interference shielding effectiveness (EMSE) properties of the composites were measured in the frequency range of 30.0 kHz–1.5 GHz. The results indicated the EMSE properties of the two types of composites were nearly the same. 
By using the fly ash particles, the shielding effectiveness properties of the matrix aluminum have been improved in the frequency ranges 30.0 kHz–600.0 MHz and the increment varied with increasing frequency. The EMSE properties of 2024Al are in the range −36.1 ± 0.2 to −46.3 ± 0.3 dB while the composites are in the range −40.0 ± 0.8 to −102.5 ± 0.1 dB in the frequency range 1.0–600.0 MHz. At higher frequency, the EMSE properties of the composites are similar to that of the matrix. The tensile strength of the matrix aluminum has been decreased by addition of the fly ash particulate and the tensile strength of the composites were 110.2 MPa and 180.6 MPa separately. The fractography showed that one composite fractured brittly and the other fractured in a microductile manner. --- paper_title: A novel structure of Ferro-Aluminum based sandwich composite for magnetic and electromagnetic interference shielding paper_content: Abstract A novel Ferro-Aluminum based sandwich composite for magnetic and electromagnetic interference shielding was designed and fabricated by hot pressing and subsequent diffusion treatment. The microstructure evolution of sandwich composite was characterized. Magnetic and electromagnetic interference shielding properties and mechanisms of the composites were also investigated. Sandwich composite is obtained with pure iron/Fe–Al alloy layer/pure iron structure and the Fe–Al/Fe interface shows good bonding. Al elemental content in reaction layer presents gradient distribution and the Al-riched brittle phase turns into ductile phase with diffusion time increasing. The electromagnetic shielding effectiveness of sandwich composite is higher than that of pure iron plate and increases with diffusion time extension, reaching 70 – 80 dB at the frequency of 30 KHz – 1.5 GHz. The multiple reflection loss in Fe–Al gradient layer is the primary contribution to the shielding effectiveness improvement of sandwich composite. The magnetic shielding effectiveness of sandwich composite can amount to 10 dB, about 2.5 times of that of pure iron plate. Fe–Al intermetallic layer, as non-magnetic spacer, is added between two iron plates and the permeable layer in sandwich composite can shunt magnetic field twice to improve shielding effectiveness. --- paper_title: Modeling and reliability characterization of area-array electronics subjected to high-g mechanical shock up to 50,000g paper_content: Electronics in aerospace applications may be subjected to very high g-loads during normal operation. A novel micro-coil array interconnect has been studied for increased reliability during extended duration aerospace missions in presence of high-g loads. Ceramic area-array components have been populated with micro-coil interconnects. The micro-coil spring (MCS) is fabricated using a beryllium copper wire post plated with 100 μin of Sn63Pb37, 50 mils in height with a diameter of 20 mils. Board assemblies have been subjected to high g-loads in the 0°, horizontal orientation. The board assemblies are daisy chained. Damage initiation and progression in interconnects has been measured using in-situ monitoring with high speed data-acquisition systems. Transient deformation of the board assemblies has been measured using high-speed cameras with digital image correlation. Multiple board assemblies have been subjected to shock tests till failure. Peak shock pulse magnitude ranges from 1,500g typical of JEDEC standard, to very high g-levels of 50,000g. 
The MCS interconnects are daisy chained and failures are measured using electrical continuity. A finite element model using explicit global to local models has been used to study interconnect reliability under shock loads. Models have been correlated with experimental data. The reliability performance of micro-coil interconnects has been compared to column interconnects. Results have shown that the micro-coil spring array has a higher reliability than the ceramic column grid array (CCGA). Failure modes have been determined. --- paper_title: Experimental study on the package of high-g accelerometer paper_content: Abstract In this paper, the effect of the package die adhesive and package shell on the performances of silicon based MEMS high-g accelerometers was reported. Using Raman spectroscopy, the residual stress caused by different package die adhesive thickness and different package shell material was characterized. It can be concluded from the testing results that: with thicker die adhesive, the residual stress increment was much smaller; the piezoresistance variation caused by this residual stress was much smaller; and the temperature shift of the output voltage was much smaller. Comparing with the ceramic package, the stainless steel package has bigger sensitivity and bigger anti-overload ability. --- paper_title: Out-of-plane micro triple-hot-wire anemometer based on Pyrex bubble for airflow sensing paper_content: The paper reports novel design and fabrication of out-of-plane micro airflow sensors based on the hot-wire sensing principle, i.e. gas cooling of electrically-heated wires. Three micro Ti/Au/Cu hot-wire resistors have been fabricated on an out-of-plane Pyrex bubble with height of 300 μm. They are arranged 120 degrees apart on the sidewall of a bubble at height of 100 μm for airflow velocity and direction detection. Air velocity around the out-of-plane bubble structure has been investigated for square and circular package designs. The sensor with axisymmetric circular package has demonstrated the ability to detect velocity (<;10 m/s) and to determine flow direction with an error less than ±8° when the velocity is 10m/s. The sensitivity could be further improved in a new design with increased bubble height (1 mm) and elevated hot-wire resistor position (500 μm), according to the modeling results. --- paper_title: Carbon-MEMS-Based Alternating Stacked MoS2 @rGO-CNT Micro-Supercapacitor with High Capacitance and Energy Density paper_content: A novel process to fabricate a carbon-microelectromechanical-system-based alternating stacked MoS2@rGO–carbon-nanotube (CNT) micro-supercapacitor (MSC) is reported. The MSC is fabricated by successively repeated spin-coating of MoS2@rGO/photoresist and CNT/photoresist composites twice, followed by photoetching, developing, and pyrolysis. MoS2@rGO and CNTs are embedded in the carbon microelectrodes, which cooperatively enhance the performance of the MSC. The fabricated MSC exhibits a high areal capacitance of 13.7 mF cm−2 and an energy density of 1.9 µWh cm−2 (5.6 mWh cm−3), which exceed many reported carbon- and MoS2-based MSCs. The MSC also retains 68% of capacitance at a current density of 2 mA cm−2 (5.9 A cm−3) and an outstanding cycling performance (96.6% after 10 000 cycles, at a scan rate of 1 V s−1). Compared with other MSCs, the MSC in this study is fabricated by a low-cost and facile process, and it achieves an excellent and stable electrochemical performance. 
This approach could be highly promising for applications in integration of micro/nanostructures into microdevices/systems. --- paper_title: Fabrication of a symmetric micro supercapacitor based on tubular ruthenium oxide on silicon 3D microstructures paper_content: Abstract A micro-supercapacitor with a three-dimensional configuration has been fabricated using an ICP etching technique. Hydrous ruthenium oxide with a tubular morphology is successfully synthesized using a cathodic deposition technique with a Si micro prominence as a template. The desired tubular RuO 2 · x H 2 O architecture facilitates electrolyte penetration and proton exchange/diffusion. A single MEMS electrode is studied using cyclic voltammetry, and a specific capacitance of 99.3 mF cm −2 and 70 F g −1 is presented at 5 mV s −1 in neutral Na 2 SO 4 solution. The accelerated cycle life is tested at 80 mV s −1 , and satisfactory cyclability is observed. When placed on a chip, the symmetric cell exhibits good supercapacitor properties, and a specific capacitance as high as 23 mF cm −2 is achieved at 10 mA cm −2 . Therefore, 3D MEMS microelectrode arrays with electrochemically deposited ruthenium oxide films are promising candidates for on-chip electrochemical micro-capacitor applications. --- paper_title: Impact experiment analysis of MEMS ultra-high G piezoresistive shock accelerometer paper_content: A novel ultra-high g shock micro accelerometer with four self-supporting piezoresistive micro beams was proposed to simultaneously enhance the sensor performance of sensitivity and frequency response. The finite element method (FEM) simulations indicated that the pure axial deformations could occur on self-supporting piezoresistive micro beams via optimizing structure dimensions. And the average stress distribution in piezoresistive beams was 81.5 MPa, the natural frequency of sensor was about 505.00 kHz. To verify the sensitivity and natural frequency of fabricated shock accelerometer, impact tests were carried out using Hopkinson pressure bar under impact loads of 100,000 g. The results were obtained in the form of response curves of a shock signal. Then, the average sensitivity was calculated as 1.586 μV/g/3V, and the natural frequency was obtained as 445 kHz by fast Fourier transform. The experimental results agreed well with the FEM simulations except for a slight mismatch in natural frequency which was probably resulted from the errors of device fabrication and package processes. The experiment results reliably demonstrate that the proposed shock accelerometer, with high natural frequency and high sensitivity, is capable of measuring ultra-high g loading shock. --- paper_title: Silicon micromachined high-shock accelerometers with a curved-surface-application structure for over-range stop protection and free-mode-resonance depression paper_content: In this paper we present a high-shock accelerometer that is in high demand in many engineering applications. The device, formed by using advanced silicon bulk micromachining technology, including deep-reactive ionic etching, uses a single-chip structure that facilitates packaging and contributes to low-cost mass-production. A novel curved-surface-application stop structure is designed and micromachining formed for the improved protection of over-range shock and the depression of free-mode resonance. 
By using a dropping-hammer testing system, characterizations of the sensors show a sensitivity of 3 μV g−1 for a 13 700g shock-range and satisfied wave fidelity of response to shock. --- paper_title: Simulation, fabrication and characterization of an all-metal contact-enhanced triaxial inertial microswitch with low axial disturbance paper_content: Abstract An all-metal inertial microswitch that is sensitive to three axial accelerations (+ x , + y and + z ) is fabricated by low-temperature photoresist modeled metal-electroplating technology. The inertial switch consists of four main parts: a quartz wafer with anti-stiction strips as the substrate; a proof mass suspended by conjoined serpentine springs as the movable electrode; two L -shaped flexible cantilevers and a multi-hole crossbeam as horizontal and vertical fixed electrodes, respectively; two anchors located in the middle of proof mass as limit blocks. ANSYS software is used to simulate the dynamic contact process in the microswitch, and the simulation results reveal that the flexible fixed electrode can prolong the contact time and eliminate the rebound during the contact process. The axial disturbance among different sensitive directions has been discussed by dynamic simulation. The modal analysis, crosstalk between horizontal and vertical directions, cross-axis sensitivity, and the disturbance under overload shock along the reverse sensitive direction are also simulated and discussed. The suspension and gap in the device structure can be precisely controlled utilizing the photoresist modeled metal-electroplating technology to reduce the axial disturbance effectively. Finally, the prototype is fabricated successfully and tested by dropping hammer system. It is shown that the test threshold acceleration is 255–260 g in horizontal directions (+ x and + y ), ∼75 g in vertical direction. The contact time of the switch with elastic contact point is ∼60 μs in horizontal direction and ∼80 μs in vertical direction. The crosstalk between horizontal and vertical direction, cross-axis and overload disturbance have been also demonstrated by test results, which indicate the axial disturbance is low in the present inertial switch. --- paper_title: Symmetric redox supercapacitor based on micro-fabrication with three-dimensional polypyrrole electrodes paper_content: Abstract To achieve higher energy density and power density, we have designed and fabricated a symmetric redox supercapacitor based on microelectromechanical system (MEMS) technologies. The supercapacitor consists of a three-dimensional (3D) microstructure on silicon substrate micromachined by high-aspect-ratio deep reactive ion etching (DRIE) method, two sputtered Ti current collectors and two electrochemical polymerized polypyrrole (PPy) films as electrodes. Electrochemical tests, including cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS) and galvanostatical charge/discharge methods have been carried out on the single PPy electrodes and the symmetric supercapacitor in different electrolytes. The specific capacitance (capacitance per unit footprint area) and specific power (power per unit footprint area) of the PPy electrodes and symmetric supercapacitor can be calculated from the electrochemical test data. It is found that NaCl solution is a good electrolyte for the polymerized PPy electrodes. In NaCl electrolyte, single PPy electrodes exhibit 0.128 F cm −2 specific capacitance and 1.28 mW cm −2 specific power at 20 mV s −1 scan rate. 
The symmetric supercapacitor presents 0.056 F cm −2 specific capacitance and 0.56 mW cm −2 specific power at 20 mV s −1 scan rate. --- paper_title: Trends and challenges in modern MEMS sensor packages paper_content: Modern MEMS sensors for automotive as well as consumer electronics face a continuous pressure for size reduction. This can be met not only by a consistent shrink of sensing elements and ASICs resulting in a smaller package but also through the integration of several sensors into one system. The trends and challenges of this steady shrink will be explained and several examples of current automotive and consumer MEMS sensors will be shown. --- paper_title: Flexible transparent tribotronic transistor for active modulation of conventional electronics paper_content: Abstract Flexible and transparent electronics have attracted wide attention for electronic skin, wearable sensors and man-machine interactive interfacing. In this paper, a novel flexible transparent tribotronic transistor (FTT) is developed by coupling an organic thin film transistor (OTFT) and a triboelectric nanogenerator (TENG) in free-standing sliding mode. The carrier transport between drain and source can be modulated by the sliding-induced electrostatic potential of the TENG instead of the conventional gate voltage. With the sliding distance increases from 0 to 7 mm, the reverse drain current is almost linearly increased from 2 to 22 μA. The FTT has excellent performances in stability and durability in different bending modes and radius. The optical transmittance of the device is about 71.6% in the visible wavelength range from 400 to 800 nm. Moreover, the FTT is used for active modulation of conventional electronics, in which the luminance, magnetism, sound and micro-motion can be modulated by sliding a finger. This work has provided a new way to actively modulate conventional electronics, and demonstrated the practicability of tribotronics for human-machine interaction. --- paper_title: Discharge voltage behavior of electric double-layer capacitors during high-g impact and their application to autonomously sensing high-g accelerometers paper_content: In this study, the discharge voltage behavior of electric double-layer capacitors (EDLCs) during high-g impact is studied both theoretically and experimentally. A micro-scale dynamic mechanism is proposed to describe the physical basis of the increase in the discharge voltage during a high-g impact. Based on this dynamic mechanism, a multi-field model is established, and the simulation and experimental studies of the discharge voltage increase phenomenon are conducted. From the simulation and experimental data, the relationship between the increased voltage and the high-g acceleration is revealed. An acceleration detection range of up to 10,000g is verified. The design of the device is optimized by studying the influences of the parameters, such as the electrode thickness and discharge current, on the outputs. This work opens up new avenues for the development of autonomous sensor systems based on energy storage devices and is significant for many practical applications such as in collision testing and automobile safety. --- paper_title: Triboelectric nanogenerators as self-powered acceleration sensor under high-g impact paper_content: Abstract In the field of automobiles and many other industries, there is an urgent demand for the sensing of high-g acceleration. 
In this paper, a self-powered high-g acceleration sensor based on a triboelectric nanogenerator is proposed for the first time. It is micro-fabricated with a total volume of 14 × 14 × 8 mm³, and its sensing ability is confirmed via a Machete hammer experiment, with a measurement range of up to 1.8 × 10⁴ g, a sensitivity of 1.8 mV/g, and a Pearson correlation coefficient of 0.99959. In addition, the output signal of this novel acceleration sensor contains little clutter, which is beneficial for recognition and subsequent signal processing. The effects of the aluminum-electrode thickness on the sensitivity and linearity of the sensor are investigated via modeling, theoretical analysis, and experiment, providing a reliable basis for the parameter optimization of the structural design. Experiment results indicate that this novel acceleration sensor covers a wide measurement range and meets the urgent needs of monitoring various high-g impacts for military equipment and automobiles. --- paper_title: Pressure Sensitivity Enhancement of Porous Carbon Electrode and Its Application in Self-Powered Mechanical Sensors paper_content: Microsystems with limited power supplies, such as electronic skin and smart fuzes, have a strong demand for self-powered pressure and impact sensors. In recent years, new self-powered mechanical sensors based on the piezoresistive characteristics of porous electrodes have been rapidly developed, and have unique advantages compared to conventional piezoelectric sensors. In this paper, in order to optimize the mechanical sensitivity of porous electrodes, a material preparation process that can enhance the piezoresistive characteristics is proposed. A flexible porous electrode with superior piezoresistive characteristics and elasticity was prepared by modifying the microstructure of the porous electrode material and adding an elastic rubber component. Furthermore, based on the porous electrode, a self-powered pressure sensor and an impact sensor were fabricated. The experimental results show that the response signals of the sensors present a voltage peak under such mechanical effects and that the sensitive signal has little clutter, making it easy to identify the features of the mechanical effects. --- paper_title: Self-Powered Magnetic Sensor Based on a Triboelectric Nanogenerator paper_content: Magnetic sensors are usually based on the Hall effect or a magnetoresistive sensing mechanism. Here we demonstrate that a nanogenerator can serve as a sensor for detecting the variation of the time-dependent magnetic field. The output voltage of the sensor was found to exponentially increase with increasing magnetic field. The detection sensitivities for the change and the changing rate of the magnetic field are about 0.0363 ± 0.0004 ln(mV)/G and 0.0497 ± 0.0006 ln(mV)/(G/s), respectively. The response time and reset time of the sensor are about 0.13 and 0.34 s, respectively. The fabricated sensor has a detection resolution of about 3 G and can work under low frequencies (<0.4 Hz). --- paper_title: Detection and measurement of impacts in composite structures using a self-powered triboelectric sensor paper_content: Abstract Composite structures such as aircraft, wind turbines or racing cars are frequently subjected to numerous impacts. For example, aircraft may collide with birds during take-off and landing or get damaged due to the impact of hailstones.
These impacts harm the integrity of the composite laminates used in their structures, resulting in delamination and other failures that are usually very difficult to detect by visual inspection. Hence, the detection and quantification of impacts are of vital importance for monitoring the health state of composite structures. Recently, triboelectric sensors have been demonstrated to detect touches, pressures, vibrations and other mechanical motions with the advantages of being self-powered, maintenance-free and easy to fabricate. However, there is no research focusing on the potential of triboelectric sensors to detect impacts in a wide energy range. In this paper, a self-powered triboelectric sensor is developed to measure impacts at high energy in structures made of composite materials. This could be particularly beneficial for the detection of bird strikes, hailstones and other high energy impacts in aircraft composite structures. For that purpose, composite plates are subjected to various energy impacts using a drop weight impact machine and the electric responses provided by the developed triboelectric sensor are measured in terms of voltage and current. The idea is to evaluate the sensitivity of the electrical signals provided by the sensor to changes in the impact energy. The results prove that the generated electric responses are affected by the energy of the impact and their amplitude increases linearly with the impact energy. The voltage and current sensor responses demonstrate a very good impact sensitivity of 160 mV/J and a strong linear relationship to the impact energy (R = 0.999) in a wide energy range from 2 to 30 J. This work suggests a novel approach to measure the magnitude of the impacts in composite structures using the newly developed triboelectric sensor. The findings of this work demonstrate that the developed triboelectric sensor meets the urgent needs for monitoring high energy impacts for aeronautic and civil composite structures. --- paper_title: A 137 dB Dynamic Range and 0.32 V Self-Powered CMOS Imager With Energy Harvesting Pixels paper_content: This work presents a 0.32 V self-powered high dynamic range (DR) CMOS imager in a standard 0.18 μm CMOS technology with dual-mode operation: imaging (IMG) and optical energy harvesting (OEH) modes. In IMG mode, a dual-exposure extended-counting (DEEC) scheme is proposed and implemented with a 5-bit programmable current-controlled threshold (PCCT) generator. By combining the DR of a short-exposure (88 dB) conversion and an extended long-exposure (49 dB) conversion, the DEEC achieves a high DR of 137 dB. The chip consumes 10.6 μW at 6.5 fps with 0.32 V operation and 32.1 μW at 16.5 fps with 0.4 V operation, which results in an iFoM of 8.1 f and 9.8 fJ/pixel·code. In OEH mode, the sensing pixels turn into energy harvesting pixels with an additional global micro solar cell and a corresponding mode control circuit, which generates 455 mV and 14 μW at 60 klux (sunny day) and supports a self-powered imaging operation at 4.1 fps. --- paper_title: Theoretical study and applications of self-sensing supercapacitors under extreme mechanical effects paper_content: Abstract Supercapacitors exhibit a self-sensing phenomenon with voltage sensitivity under mechanical effects. This paper proposes a dynamic model that reveals the mechanism of this sensitivity phenomenon.
During a mechanical process such as an extreme impact or a finger press, transient changes of the coupled micro-deformation, porosity and output voltage of the supercapacitors are simulated, and the relationship between change in voltage and strength of the mechanical impacts is obtained. In particular, the experimental phenomenon that the output signal of the supercapacitors under extreme effects exhibits few clutters is theoretically explained and simulated. Finally, for the application of electronic skin and wearable electronics, the self-sensing characteristics of the supercapacitors under finger pressing are experimentally studied, and the ability to perceive the magnitude and duration of pressing is verified. This work provides a strong theoretical foundation for designing self-sensing supercapacitors with superior performance and expands their applications under various mechanical effects, from electronic skin to car crash protection. ---
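Several of the railgun abstracts above analyze the in-bore field through a magnetic diffusion equation with a convection term; as a minimal 1-D sketch of the governing equation assumed by such velocity-skin-effect models (a textbook form, not reproduced from any specific paper above):
\frac{\partial B}{\partial t} = \frac{1}{\mu_0 \sigma}\,\frac{\partial^2 B}{\partial x^2} - v\,\frac{\partial B}{\partial x}
where \sigma is the rail conductivity, \mu_0 the vacuum permeability and v the sliding velocity of the armature. The convection term is what concentrates current near the trailing edge of the armature-rail contact and gives rise to the velocity skin effect described above.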
Title: Ammunition Reliability Against the Harsh Environments During the Launch of an Electromagnetic Gun: A Review Section 1: INTRODUCTION Description 1: Describe the background, importance, and advancements in electromagnetic launch technology, including key historical milestones and research objectives. Section 2: HARSH ENVIRONMENT IN THE LAUNCH PROCESS Description 2: Discuss the extreme conditions experienced during the electromagnetic railgun launch process, including magnetic fields, high-g impacts, and high temperatures, and the need for quantitative research. Section 3: STRONG MAGNETIC FIELD Description 3: Present experimental results and simulations on the strong and varying magnetic fields during the launch process, including challenges in measurement and modeling techniques. Section 4: HIGH-G IMPACT Description 4: Explain the extreme acceleration and its effects during the launch, presenting experimental results and adequacy of existing models and simulations to predict high-g impacts. Section 5: HIGH TEMPERATURE Description 5: Analyze the generation of high temperatures due to friction and Joule heating, discussing experimental measurements and modeling approaches to understand temperature distributions and effects. Section 6: LIMITATIONS OF MODELING ON SINGLE FIELD Description 6: Critique the limitations of modeling individual extreme factors and the necessity to consider multiphysics coupling for accurate simulations. Section 7: THE 'CHAIN REACTION': THE DECISIVE ROLE OF MULTIPHYSICS COUPLING ON EXTREME ENVIRONMENTS Description 7: Examine the interdependent and exacerbatory effects of multiphysics coupling on the harsh environments, highlighting the importance of integrated modeling approaches. Section 8: PROTECTION OF THE KEY COMPONENTS OF FUZES IN HARSH ENVIRONMENTS Description 8: Discuss the potential failure mechanisms of fuze components under extreme conditions and propose protective measures including electromagnetic shielding and anti-high-g impact technologies. Section 9: CONCLUSION Description 9: Summarize the key findings regarding extreme environments during electromagnetic railgun launches, emphasize the importance of advanced protection methods for fuzes, and suggest future research directions.
Learning to Detect Deceptive Opinion Spam: A Survey
5
--- paper_title: Estimating the Prevalence of Deception in Online Review Communities paper_content: Consumers' purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam---fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. But while this practice has received considerable public attention and concern, relatively little is known about the actual prevalence, or rate, of deception in online review communities, and less still about the factors that influence it. We propose a generative model of deception which, in conjunction with a deception classifier, we use to explore the prevalence of deception in six popular online review communities: Expedia, Hotels.com, Orbitz, Priceline, TripAdvisor, and Yelp. We additionally propose a theoretical model of online reviews based on economic signaling theory, in which consumer reviews diminish the inherent information asymmetry between consumers and producers, by acting as a signal to a product's true, unknown quality. We find that deceptive opinion spam is a growing problem overall, but with different growth rates across communities. These rates, we argue, are driven by the different signaling costs associated with deception for each review community, e.g., posting requirements. When measures are taken to increase signaling cost, e.g., filtering reviews written by first-time reviewers, deception prevalence is effectively reduced. --- paper_title: Cost-sensitive linguistic fuzzy rule based classification systems under the MapReduce framework for imbalanced big data paper_content: Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. --- paper_title: Exploiting Burstiness in Reviews for Review Spammer Detection. ICWSM paper_content: Online product reviews have become an important source of user opinions. 
Due to profit or fame, imposters have been writing deceptive or fake reviews to promote and/or to demote some target products or services. Such imposters are called review spammers. In the past few years, several approaches have been proposed to deal with the problem. In this work, we take a different approach, which exploits the burstiness nature of reviews to identify review spammers. Bursts of reviews can be either due to sudden popularity of products or spam attacks. Reviewers and reviews appearing in a burst are often related in the sense that spammers tend to work with other spammers and genuine reviewers tend to appear together with other genuine reviewers. This paves the way for us to build a network of reviewers appearing in different bursts. We then model reviewers and their cooccurrence in bursts as a Markov Random Field (MRF), and employ the Loopy Belief Propagation (LBP) method to infer whether a reviewer is a spammer or not in the graph. We also propose several features and employ feature induced message passing in the LBP framework for network inference. We further propose a novel evaluation method to evaluate the detected spammers automatically using supervised classification of their reviews. Additionally, we employ domain experts to perform a human evaluation of the identified spammers and non-spammers. Both the classification result and human evaluation result show that the proposed method outperforms strong baselines, which demonstrate the effectiveness of the method. --- paper_title: Survey of review spam detection using machine learning techniques paper_content: Online reviews are often the primary factor in a customer’s decision to purchase a product or service, and are a valuable source of information that can be used to determine public opinion on these products or services. Because of their impact, manufacturers and retailers are highly concerned with customer feedback and reviews. Reliance on online reviews gives rise to the potential concern that wrongdoers may create false reviews to artificially promote or devalue products and services. This practice is known as Opinion (Review) Spam, where spammers manipulate and poison reviews (i.e., making fake, untruthful, or deceptive reviews) for profit or gain. Since not all online reviews are truthful and trustworthy, it is important to develop techniques for detecting review spam. By extracting meaningful features from the text using Natural Language Processing (NLP), it is possible to conduct review spam detection using various machine learning techniques. Additionally, reviewer information, apart from the text itself, can be used to aid in this process. In this paper, we survey the prominent machine learning techniques that have been proposed to solve the problem of review spam detection and the performance of different approaches for classification and detection of review spam. The majority of current research has focused on supervised learning methods, which require labeled data, a scarcity when it comes to online review spam. Research on methods for Big Data are of interest, since there are millions of online reviews, with many more being generated daily. To date, we have not found any papers that study the effects of Big Data analytics for review spam detection. The primary goal of this paper is to provide a strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise methodology for conducting further investigation. 
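As a concrete illustration of the supervised text-classification baseline that much of the work surveyed above starts from, the following minimal sketch assumes scikit-learn and a small labeled corpus of reviews; the example texts, labels and the choice of tf-idf n-grams with a logistic-regression classifier are illustrative assumptions rather than the pipeline of any specific cited paper.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled corpus (invented): 1 = deceptive review, 0 = truthful review.
    reviews = [
        "the room was clean and the staff were friendly enough",
        "absolutely amazing experience, best hotel ever, everything was perfect",
        "check-in took a while but the bed was comfortable",
        "this hotel is a true luxury paradise, my family loved every second",
    ]
    labels = [0, 1, 0, 1]

    # Unigram + bigram tf-idf features feeding a linear classifier: the usual
    # baseline against which richer feature sets in this literature are compared.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LogisticRegression(max_iter=1000),
    )
    model.fit(reviews, labels)

    # On a realistically sized corpus one would report cross-validated accuracy
    # (e.g., via sklearn.model_selection.cross_val_score) rather than fit on 4 texts.
    print(model.predict(["the location was convenient and breakfast was decent"]))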
--- paper_title: Text mining and probabilistic language modeling for online review spam detection paper_content: In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet. --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: Analyzing and Detecting Review Spam paper_content: Mining of opinions from product reviews, forum posts and blogs is an important research topic with many applications. However, existing research has been focused on extraction, classification and summarization of opinions from these sources. An important issue that has not been studied so far is the opinion spam or the trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews. To our knowledge, there is still no published study on this topic, although Web page spam and email spam have been investigated extensively. We will see that review spam is quite different from Web page spam and email spam, and thus requires different detection techniques. 
Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that review spam is widespread. In this paper, we first present a categorization of spam reviews and then propose several techniques to detect them. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: Automatic Detection of Verbal Deception paper_content: A growing field in computer applications is the use of algorithms to spot the lie. The most promising area within this field is the analysis of the language of the liar since speakers effectively control only the meaning they wish to convey, but not the linguistic style of the communication. With the advent of computational means to analyze language, we now have the ability to recognize differences in the way speakers phrase their lies as opposed to their truths. The main goal of this book is to cover the advances of the last 10 years in automatically discriminating truths from lies. To give the reader a grounding in deception studies, it describes a range of behaviors (physiological, gestural as well as verbal) that have been proposed as indicators of deception. An overview of the primary psychological and cognitive theories that have been offered as explanations of deceptive behaviors gives context for the description of specific behaviors. The book also addresses the differences between data collected in a laboratory and real-world data with respect to the emotional and cognitive state of the liar. It discusses sources of real-world data and problematic issues in its collection and identifies the primary areas in which applied studies based on real-world data are critical, including police, security, border crossing, customs, and asylum interviews; congressional hearings; financial reporting; legal depositions; human resource evaluation; predatory communications that include Internet scams, identity theft, fraud, and false product reviews. Having established the background, the book concentrates on computational analyses of deceptive verbal behavior, which have enabled the field of deception studies to move from individual cues to overall differences in behavior. 
The book concludes with a set of open questions that the computational work has generated. --- paper_title: Opinion mining and sentiment analysis paper_content: An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. ::: ::: This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. 1 --- paper_title: Finding unusual review patterns using unexpected rules paper_content: In recent years, opinion mining attracted a great deal of research attention. However, limited work has been done on detecting opinion spam (or fake reviews). The problem is analogous to spam in Web search [1, 9 11]. However, review spam is harder to detect because it is very hard, if not impossible, to recognize fake reviews by manually reading them [2]. This paper deals with a restricted problem, i.e., identifying unusual review patterns which can represent suspicious behaviors of reviewers. We formulate the problem as finding unexpected rules. The technique is domain independent. Using the technique, we analyzed an Amazon.com review dataset and found many unexpected rules and rule groups which indicate spam activities. 
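To make the notion of an unexpected reviewer pattern concrete, the toy sketch below scores how far a reviewer's rating behavior toward one brand deviates from the population baseline; the records, the positivity cut-off and the deviation threshold are invented for illustration, and the sketch captures only the intuition behind, not the rule formalism of, the unexpected-rules approach above.

    from collections import defaultdict

    # Invented (reviewer, brand, rating) records.
    records = [
        ("r1", "brandA", 5), ("r1", "brandA", 5), ("r1", "brandA", 5),
        ("r2", "brandA", 2), ("r3", "brandA", 1), ("r4", "brandA", 2),
    ]

    def is_positive(rating):
        return rating >= 4  # assumed cut-off for a "positive" review

    # Baseline confidence: P(positive | brand) over all reviewers, and
    # rule confidence: P(positive | reviewer, brand) for each reviewer-brand pair.
    brand_total, brand_pos = defaultdict(int), defaultdict(int)
    pair_total, pair_pos = defaultdict(int), defaultdict(int)
    for reviewer, brand, rating in records:
        brand_total[brand] += 1
        brand_pos[brand] += is_positive(rating)
        pair_total[(reviewer, brand)] += 1
        pair_pos[(reviewer, brand)] += is_positive(rating)

    # Flag reviewer-brand rules whose confidence deviates strongly from the baseline.
    for (reviewer, brand), total in sorted(pair_total.items()):
        confidence = pair_pos[(reviewer, brand)] / total
        baseline = brand_pos[brand] / brand_total[brand]
        if total >= 3 and abs(confidence - baseline) > 0.4:  # invented thresholds
            print(reviewer, brand, "confidence", round(confidence, 2),
                  "vs baseline", round(baseline, 2))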
--- paper_title: Syntactic Stylometry for Deception Detection paper_content: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: Distortion as a validation criterion in the identification of suspicious reviews paper_content: Assessing the trustworthiness of reviews is a key issue for the maintainers of opinion sites such as TripAdvisor. In this paper we propose a distortion criterion for assessing the impact of methods for uncovering suspicious hotel reviews in TripAdvisor. The principle is that dishonest reviews will distort the overall popularity ranking for a collection of hotels. 
Thus a mechanism that deletes dishonest reviews will distort the popularity ranking significantly, when compared with the removal of a similar set of reviews at random. This distortion can be quantified by comparing popularity rankings before and after deletion, using rank correlation. We present an evaluation of this strategy in the assessment of shill detection mechanisms on a dataset of hotel reviews collected from TripAdvisor. --- paper_title: Spotting Fake Reviewer Groups in Consumer Reviews paper_content: Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or demote some target products. For reviews to reflect genuine user experiences and opinions, such spam reviews should be detected. Prior works on opinion spam focused on detecting fake reviews and individual fake reviewers. However, a fake reviewer group (a group of reviewers who work collaboratively to write fake reviews) is even more damaging as they can take total control of the sentiment on the target product due to its size. This paper studies spam detection in the collaborative setting, i.e., to discover fake reviewer groups. The proposed method first uses a frequent itemset mining method to find a set of candidate groups. It then uses several behavioral models derived from the collusion phenomenon among fake reviewers and relation models based on the relationships among groups, individual reviewers, and products they reviewed to detect fake reviewer groups. Additionally, we also built a labeled dataset of fake reviewer groups. Although labeling individual fake reviews and reviewers is very hard, to our surprise labeling fake reviewer groups is much easier. We also note that the proposed technique departs from the traditional supervised learning approach for spam detection because of the inherent nature of our problem which makes the classic supervised learning approach less effective. Experimental results show that the proposed method outperforms multiple strong baselines including the state-of-the-art supervised classification, regression, and learning to rank algorithms. --- paper_title: Identifying Multiple Userids of the Same Author paper_content: This paper studies the problem of identifying users who use multiple userids to post in social media. Since multiple userids may belong to the same author, it is hard to directly apply supervised learning to solve the problem. This paper proposes a new method, which still uses supervised learning but does not require training documents from the involved userids. Instead, it uses documents from other userids for classifier building. The classifier can be applied to documents of the involved userids. This is possible because we transform the document space to a similarity space and learning is performed in this new space. Our evaluation is done in the online review domain. The experimental results using a large number of userids and their reviews show that the proposed method is highly effective. --- paper_title: Syntactic Stylometry for Deception Detection paper_content: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. 
Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. --- paper_title: Review Graph Based Online Store Review Spammer Detection paper_content: Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results. --- paper_title: Identify Online Store Review Spammers via Social Review Graph paper_content: Online shopping reviews provide valuable information for customers to compare the quality of products, store services, and many other aspects of future purchases. However, spammers are joining this community trying to mislead consumers by writing fake or unfair reviews to confuse the consumers. Previous attempts have used reviewers’ behaviors such as text similarity and rating patterns, to detect spammers. These studies are able to identify certain types of spammers, for instance, those who post many similar reviews about one target. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like normal reviewers, and thus cannot be detected by the available techniques. In this article, we propose a novel concept of review graph to capture the relationships among all reviewers, reviews and stores that the reviewers have reviewed as a heterogeneous graph. We explore how interactions between nodes in this graph could reveal the cause of spam and propose an iterative computation model to identify suspicious reviewers. In the review graph, we have three kinds of nodes, namely, reviewer, review, and store. 
We capture their relationships by introducing three fundamental concepts, the trustiness of reviewers, the honesty of reviews, and the reliability of stores, and identifying their interrelationships: a reviewer is more trustworthy if the person has written more honest reviews; a store is more reliable if it has more positive reviews from trustworthy reviewers; and a review is more honest if many other honest reviews support it. This is the first time such intricate relationships have been identified for spam detection and captured in a graph model. We further develop an effective computation method based on the proposed graph model. Different from any existing approaches, we do not use any review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results. --- paper_title: Detecting Deceptive Groups Using Conversations and Network Analysis paper_content: Deception detection has been formulated as a supervised binary classification problem on single documents. However, in daily life, millions of fraud cases involve detailed conversations between deceivers and victims. Deceivers may dynamically adjust their deceptive statements according to the reactions of victims. In addition, people may form groups and collaborate to deceive others. In this paper, we seek to identify deceptive groups from their conversations. We propose a novel subgroup detection method that combines linguistic signals and signed network analysis for dynamic clustering. A social-elimination game called Killer Game is introduced as a case study. Experimental results demonstrate that our approach significantly outperforms human voting and state-of-the-art subgroup detection methods at dynamically differentiating the deceptive groups from truth-tellers. --- paper_title: Review spam detection via temporal pattern discovery paper_content: Online reviews play a crucial role in today's electronic commerce. It is desirable for a customer to read reviews of products or stores before making the decision of what or from where to buy. Due to the pervasive spam reviews, customers can be misled to buy low-quality products, while decent stores can be defamed by malicious reviews. We observe that, in reality, a great portion (> 90% in the data we study) of the reviewers write only one review (singleton review). These reviews are so enormous in number that they can almost determine a store's rating and impression. However, existing methods did not examine this larger part of the reviews. Are most of these singleton reviews truthful ones? If not, how to detect spam reviews in singleton reviews? We call this problem singleton review spam detection. To address this problem, we observe that the normal reviewers' arrival pattern is stable and uncorrelated to their rating pattern temporally. In contrast, spam attacks are usually bursty and either positively or negatively correlated to the rating. Thus, we propose to detect such attacks via unusually correlated temporal patterns. We identify and construct multidimensional time series based on aggregate statistics, in order to depict and mine such correlations. In this way, the singleton review spam detection problem is mapped to an abnormally correlated pattern detection problem. We propose a hierarchical algorithm to robustly detect the time windows where such attacks are likely to have happened.
The algorithm also pinpoints such windows in different time resolutions to facilitate faster human inspection. Experimental results show that the proposed method is effective in detecting singleton review attacks. We discover that singleton review is a significant source of spam reviews and largely affects the ratings of online stores. --- paper_title: Understanding deja reviewers paper_content: People who review products on the web invest considerable time and energy in what they write. So why would someone write a review that restates earlier reviews? Our work looks to answer this question. In this paper, we present a mixed-method study of deja reviewers, latecomers who echo what other people said. We analyze nearly 100,000 Amazon.com reviews for signs of repetition and find that roughly 10-15% of reviews substantially resemble previous ones. Using these algorithmically-identified reviews as centerpieces for discussion, we interviewed reviewers to understand their motives. An overwhelming number of reviews partially explains deja reviews, but deeper factors revolving around an individual's status in the community are also at work. The paper concludes by introducing a new idea inspired by our findings: a self-aware community that nudges members toward community-wide goals. --- paper_title: Identifying fake Amazon reviews as learning from crowds paper_content: Customers who buy products such as books online often rely on other customers' reviews more than on reviews found on specialist magazines. Unfortunately the confidence in such reviews is often misplaced due to the explosion of so-called sock puppetry: authors writing glowing reviews of their own books. Identifying such deceptive reviews is not easy. The first contribution of our work is the creation of a collection including a number of genuinely deceptive Amazon book reviews in collaboration with crime writer Jeremy Duns, who has devoted a great deal of effort in unmasking sock puppeting among his colleagues. But there can be no certainty concerning the other reviews in the collection: All we have is a number of cues, also developed in collaboration with Duns, suggesting that a review may be genuine or deceptive. Thus this corpus is an example of a collection where it is not possible to acquire the actual label for all instances, and where clues of deception were treated as annotators who assign them heuristic labels. A number of approaches have been proposed for such cases; we adopt here the 'learning from crowds' approach proposed by Raykar et al. (2010). Thanks to Duns' certainly fake reviews, the second contribution of this work consists in the evaluation of the effectiveness of different methods of annotation, according to the performance of models trained to detect deceptive reviews. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers.
In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: Negative Deceptive Opinion Spam paper_content: The rising influence of user-generated online reviews (Cone, 2011) has led to growing incentive for businesses to solicit and manufacture DECEPTIVE OPINION SPAM—fictitious reviews that have been deliberately written to sound authentic and deceive the reader. Recently, Ott et al. (2011) have introduced an opinion spam dataset containing gold standard deceptive positive hotel reviews. However, the complementary problem of negative deceptive opinion spam, intended to slander competitive offerings, remains largely unstudied. Following an approach similar to Ott et al. (2011), in this work we create and study the first dataset of deceptive opinion spam with negative sentiment reviews. Based on this dataset, we find that standard n-gram text categorization techniques can detect negative deceptive opinion spam with performance far surpassing that of human judges. Finally, in conjunction with the aforementioned positive review dataset, we consider the possible interactions between sentiment and deception, and present initial results that encourage further exploration of this relationship. --- paper_title: Learning to Identify Review Spam paper_content: In the past few years, sentiment analysis and opinion mining have become popular and important tasks. These studies all assume that their opinion resources are real and trustworthy. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review sites, people may write faked reviews, called review spam, to promote their products, or defame their competitors' products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. In this paper, we exploit machine learning methods to identify review spam. Toward this end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is a spammer. Based on this observation, we provide a two-view semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines. --- paper_title: Spotting Fake Reviews via Collective Positive-Unlabeled Learning paper_content: Online reviews have become an increasingly important resource for decision making and product designing.
But review systems are often targeted by opinion spamming. Although fake review detection has been studied by researchers for years using supervised learning, ground truth for large-scale datasets is still unavailable and most existing approaches of supervised learning are based on pseudo fake reviews rather than real fake reviews. Working with Dianping, the largest Chinese review hosting site, we present the first reported work on fake review detection in Chinese with filtered reviews from Dianping's fake review detection system. Dianping's algorithm has a very high precision, but the recall is hard to know. This means that all fake reviews detected by the system are almost certainly fake but the remaining reviews (unknown set) may not be all genuine. Since the unknown set may contain many fake reviews, it is more appropriate to treat it as an unlabeled set. This calls for the model of learning from positive and unlabeled examples (PU learning). By leveraging the intricate dependencies among reviews, users and IP addresses, we first propose a collective classification algorithm called Multi-typed Heterogeneous Collective Classification (MHCC) and then extend it to Collective Positive and Unlabeled learning (CPU). Our experiments are conducted on real-life reviews of 500 restaurants in Shanghai, China. Results show that our proposed models can markedly improve the F1 scores of strong baselines in both PU and non-PU learning settings. Since our models only use language independent features, they can be easily generalized to other languages. --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors.
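The supervised n-gram baseline referred to above can be sketched in a few lines. This is an illustrative sketch only, not Yelp's filter or the exact configuration of the cited studies; the review texts and labels below are hypothetical stand-ins for a labeled corpus:

```python
# Minimal sketch of a linguistic n-gram classifier for deceptive reviews.
# Toy texts and labels; a real experiment would use a labeled review corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "My husband and I stayed for our anniversary and it was absolutely amazing!",
    "The room was clean, the staff were helpful, and checkout was quick.",
    "Best hotel in the city, luxurious rooms, we will definitely be back!!!",
    "Decent location, but the walls are thin and breakfast felt overpriced.",
]
labels = [1, 0, 1, 0]  # 1 = deceptive, 0 = truthful (hypothetical labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),  # unigram + bigram features
    LinearSVC(),                                           # linear SVM classifier
)
scores = cross_val_score(model, texts, labels, cv=2)       # 2-fold CV on the toy set
print("mean cross-validated accuracy:", scores.mean())
```

Behavioral signals (activity bursts, rating deviation, per-reviewer counts) would be added as extra feature columns alongside the n-gram matrix, which is broadly the comparison the study above reports.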
--- paper_title: Collective Opinion Spam Detection: Bridging Review Networks and Metadata paper_content: Online reviews capture the testimonials of "real" people and help shape the decisions of other consumers. Due to the financial gains associated with positive reviews, however, opinion spam has become a widespread problem, with often paid spam reviewers writing fake reviews to unjustly promote or demote certain products or businesses. Existing approaches to opinion spam have successfully but separately utilized linguistic clues of deception, behavioral footprints, or relational ties between agents in a review system. In this work, we propose a new holistic approach called SPEAGLE that utilizes clues from all metadata (text, timestamp, rating) as well as relational data (network), and harness them collectively under a unified framework to spot suspicious users and reviews, as well as products targeted by spam. Moreover, our method can efficiently and seamlessly integrate semi-supervision, i.e., a (small) set of labels if available, without requiring any training or changes in its underlying algorithm. We demonstrate the effectiveness and scalability of SPEAGLE on three real-world review datasets from Yelp.com with filtered (spam) and recommended (non-spam) reviews, where it significantly outperforms several baselines and state-of-the-art methods. To the best of our knowledge, this is the largest scale quantitative evaluation performed to date for the opinion spam problem. --- paper_title: Negative Deceptive Opinion Spam paper_content: The rising influence of user-generated online reviews (Cone, 2011) has led to growing incentive for businesses to solicit and manufacture DECEPTIVE OPINION SPAM—fictitious reviews that have been deliberately written to sound authentic and deceive the reader. Recently, Ott et al. (2011) have introduced an opinion spam dataset containing gold standard deceptive positive hotel reviews. However, the complementary problem of negative deceptive opinion spam, intended to slander competitive offerings, remains largely unstudied. Following an approach similar to Ott et al. (2011), in this work we create and study the first dataset of deceptive opinion spam with negative sentiment reviews. Based on this dataset, we find that standard n-gram text categorization techniques can detect negative deceptive opinion spam with performance far surpassing that of human judges. Finally, in conjunction with the aforementioned positive review dataset, we consider the possible interactions between sentiment and deception, and present initial results that encourage further exploration of this relationship. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. 
Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. 1 --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: Negative Deceptive Opinion Spam paper_content: The rising influence of user-generated online reviews (Cone, 2011) has led to growing incentive for businesses to solicit and manufacture DECEPTIVE OPINION SPAM—fictitious reviews that have been deliberately written to sound authentic and deceive the reader. Recently, Ott et al. (2011) have introduced an opinion spam dataset containing gold standard deceptive positive hotel reviews. However, the complementary problem of negative deceptive opinion spam, intended to slander competitive offerings, remains largely unstudied. Following an approach similar to Ott et al. (2011), in this work we create and study the first dataset of deceptive opinion spam with negative sentiment reviews. Based on this dataset, we find that standard n-gram text categorization techniques can detect negative deceptive opinion spam with performance far surpassing that of human judges. 
Finally, in conjunction with the aforementioned positive review dataset, we consider the possible interactions between sentiment and deception, and present initial results that encourage further exploration of this relationship. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. --- paper_title: Twitter Sarcasm Detection Exploiting a Context-Based Model paper_content: Automatically detecting sarcasm in twitter is a challenging task because sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. Previous work focuses on feature modeling of the single tweet, which limits the performance of the task. These methods did not leverage contextual information regarding the author or the tweet to improve the performance of sarcasm detection. However, tweets are filtered through streams of posts, so that a wider context, e.g. a conversation or topic, is always available. In this paper, we compared sarcastic utterances in twitter to utterances that express positive or negative attitudes without sarcasm. The sarcasm detection problem is modeled as a sequential classification task over a tweet and its contextual information. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVM^hmm algorithm has been employed to assign the category label to the entire sequence. Experimental results show that sequential classification effectively embodied evidence about the context information and is able to reach a relative increment in detection performance. --- paper_title: Long short-term memory RNN for biomedical named entity recognition paper_content: Background: Biomedical named entity recognition (BNER) is a crucial initial step of information extraction in the biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. Results: We present a recurrent neural network (RNN) framework based on word embeddings and character representation. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, which is useful for this task, can be well modeled by bidirectional variation and long short-term memory (LSTM) unit, respectively.
Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance — 86.55% F1 on the BioCreative II gene mention (GM) corpus and 73.79% F1 on the JNLPBA 2004 corpus. Conclusions: Our neural network architecture can be successfully used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representation can improve the performance of the LSTM-RNN models. On the GM corpus, we achieve comparable performance compared with other systems using complex hand-crafted features. Considering the JNLPBA corpus, our model achieves the best results, outperforming the previously top performing systems. The source code of our method is freely available under GPL at https://github.com/lvchen1989/BNER. --- paper_title: Learning to Identify Review Spam paper_content: In the past few years, sentiment analysis and opinion mining have become popular and important tasks. These studies all assume that their opinion resources are real and trustworthy. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review sites, people may write faked reviews, called review spam, to promote their products, or defame their competitors' products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. In this paper, we exploit machine learning methods to identify review spam. Toward this end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is a spammer. Based on this observation, we provide a two-view semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines. --- paper_title: Finding unusual review patterns using unexpected rules paper_content: In recent years, opinion mining attracted a great deal of research attention. However, limited work has been done on detecting opinion spam (or fake reviews). The problem is analogous to spam in Web search [1, 9, 11]. However, review spam is harder to detect because it is very hard, if not impossible, to recognize fake reviews by manually reading them [2]. This paper deals with a restricted problem, i.e., identifying unusual review patterns which can represent suspicious behaviors of reviewers. We formulate the problem as finding unexpected rules. The technique is domain independent. Using the technique, we analyzed an Amazon.com review dataset and found many unexpected rules and rule groups which indicate spam activities. --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam.
While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. 
To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. 1 --- paper_title: Grammatical word class variation within the British National Corpus sampler paper_content: This paper examines the relationship between part-of-speech frequencies and text typology in the British National Corpus Sampler. Four pairwise comparisons of part-of-speech frequencies were made: written language vs. spoken language; informative writing vs. imaginative writing; conversational speech vs. ‘task-oriented’ speech; and imaginative writing vs. ‘task-oriented’ speech. The following variation gradient was hypothesized: conversation – task-oriented speech – imaginative writing – informative writing; however, the actual progression was: conversation – imaginative writing – task-oriented speech – informative writing. It thus seems that genre and medium interact in a more complex way than originally hypothesized. However, this conclusion has been made on the basis of broad, pre-existing text types within the BNC, and, in future, the internal structure of these text types may need to be addressed. --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. 
Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: Detecting deceptive reviews using lexical and syntactic features paper_content: Deceptive opinion classification has attracted a lot of research interest due to the rapid growth of social media users. Despite the availability of a vast number of opinion features and classification techniques, review classification still remains a challenging task. In this work we applied stylometric features, i.e. lexical and syntactic, using supervised machine learning classifiers, i.e. Support Vector Machine (SVM) with Sequential Minimal Optimization (SMO) and Naive Bayes, to detect deceptive opinion. Detecting deceptive opinion by a human reader is a difficult task because spammers try to write wise reviews, therefore it causes changes in writing style and verbal usage. Hence, considering the stylometric features helps to distinguish the spammer's writing style and find deceptive reviews. Experiments on an existing hotel review corpus suggest that using stylometric features is a promising approach for detecting deceptive opinions. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. --- paper_title: Cues to deception and ability to detect lies as a function of police interview styles. paper_content: In Experiment 1, we examined whether three interview styles used by the police, accusatory, information-gathering and behaviour analysis, reveal verbal cues to deceit, measured with the Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM) methods. A total of 120 mock suspects told the truth or lied about a staged event and were interviewed by a police officer employing one of these three interview styles. The results showed that accusatory interviews, which typically result in suspects making short denials, contained the fewest verbal cues to deceit. Moreover, RM distinguished between truth tellers and liars better than CBCA. Finally, manual RM coding resulted in more verbal cues to deception than automatic coding of the RM criteria utilising the Linguistic Inquiry and Word Count (LIWC) software programme. In Experiment 2, we examined the effects of the three police interview styles on the ability to detect deception. Sixty-eight police officers watched some of the videotaped interviews of Experiment 1 and made veracity and confidence judgements.
Accuracy scores did not differ between the three interview styles; however, watching accusatory interviews resulted in more false accusations (accusing truth tellers of lying) than watching information-gathering interviews. Furthermore, only in accusatory interviews, judgements of mendacity were associated with higher confidence. We discuss the possible danger of conducting accusatory interviews. --- paper_title: Detecting fake websites: the contribution of statistical learning theory paper_content: Fake websites have become increasingly pervasive, generating billions of dollars in fraudulent revenue at the expense of unsuspecting Internet users. The design and appearance of these websites makes it difficult for users to manually identify them as fake. Automated detection systems have emerged as a mechanism for combating fake websites, however most are fairly simplistic in terms of their fraud cues and detection methods employed. Consequently, existing systems are susceptible to the myriad of obfuscation tactics used by fraudsters, resulting in highly ineffective fake website detection performance. In light of these deficiencies, we propose the development of a new class of fake website detection systems that are based on statistical learning theory (SLT). Using a design science approach, a prototype system was developed to demonstrate the potential utility of this class of systems. We conducted a series of experiments, comparing the proposed system against several existing fake website detection systems on a test bed encompassing 900 websites. The results indicate that systems grounded in SLT can more accurately detect various categories of fake websites by utilizing richer sets of fraud cues in combination with problem-specific knowledge. Given the hefty cost exacted by fake websites, the results have important implications for e-commerce and online security. --- paper_title: Using linguistic cues for the automatic recognition of personality in conversation and text paper_content: It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker's personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. 
In addition, recognition models trained on observed personality perform better than models trained using self-reports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers. --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: Text mining and probabilistic language modeling for online review spam detection paper_content: In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet. --- paper_title: Learning Document Representation for Deceptive Opinion Spam Detection paper_content: Deceptive opinion spam in reviews of products or services is very harmful to customers in decision making. Existing approaches to detecting deceptive spam are concerned with feature design. Hand-crafted features can show some linguistic phenomena, but they are time-consuming to engineer and cannot reveal the connotative semantic meaning of the review. We present a neural network to learn document-level representation.
In our model, we not only learn to represent each sentence but also represent the whole document of the review. We apply a traditional convolutional neural network to represent the semantic meaning of sentences. We present two variant convolutional neural-network models to learn the document representation. The model taking sentence importance into consideration shows better performance in deceptive spam detection, enhancing the F1 value by 5%. --- paper_title: Deceptive Opinion Spam Detection Using Deep Level Linguistic Features paper_content: This paper focuses on improving a specific opinion spam detection task, deceptive spam. In addition to traditional word form and other shallow syntactic features, we introduce two types of deep level linguistic features. The first type of features are derived from a shallow discourse parser trained on the Penn Discourse Treebank (PDTB), which can capture inter-sentence information. The second type is based on the relationship between sentiment analysis and spam detection. The experimental results over the benchmark dataset demonstrate that both of the proposed deep features achieve improved performance over the baseline. --- paper_title: Learning to Identify Review Spam paper_content: In the past few years, sentiment analysis and opinion mining have become popular and important tasks. These studies all assume that their opinion resources are real and trustworthy. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review sites, people may write faked reviews, called review spam, to promote their products, or defame their competitors' products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. In this paper, we exploit machine learning methods to identify review spam. Toward this end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is a spammer. Based on this observation, we provide a two-view semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines. --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features.
In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Spotting Fake Reviewer Groups in Consumer Reviews paper_content: Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or demote some target products. For reviews to reflect genuine user experiences and opinions, such spam reviews should be detected. Prior works on opinion spam focused on detecting fake reviews and individual fake reviewers. However, a fake reviewer group (a group of reviewers who work collaboratively to write fake reviews) is even more damaging as they can take total control of the sentiment on the target product due to its size. This paper studies spam detection in the collaborative setting, i.e., to discover fake reviewer groups. The proposed method first uses a frequent itemset mining method to find a set of candidate groups. It then uses several behavioral models derived from the collusion phenomenon among fake reviewers and relation models based on the relationships among groups, individual reviewers, and products they reviewed to detect fake reviewer groups. Additionally, we also built a labeled dataset of fake reviewer groups. Although labeling individual fake reviews and reviewers is very hard, to our surprise labeling fake reviewer groups is much easier. We also note that the proposed technique departs from the traditional supervised learning approach for spam detection because of the inherent nature of our problem which makes the classic supervised learning approach less effective. Experimental results show that the proposed method outperforms multiple strong baselines including the state-of-the-art supervised classification, regression, and learning to rank algorithms. --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. 
The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Spotting Fake Reviewer Groups in Consumer Reviews paper_content: Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or demote some target products. For reviews to reflect genuine user experiences and opinions, such spam reviews should be detected. Prior works on opinion spam focused on detecting fake reviews and individual fake reviewers. However, a fake reviewer group (a group of reviewers who work collaboratively to write fake reviews) is even more damaging as they can take total control of the sentiment on the target product due to its size. This paper studies spam detection in the collaborative setting, i.e., to discover fake reviewer groups. The proposed method first uses a frequent itemset mining method to find a set of candidate groups. It then uses several behavioral models derived from the collusion phenomenon among fake reviewers and relation models based on the relationships among groups, individual reviewers, and products they reviewed to detect fake reviewer groups. Additionally, we also built a labeled dataset of fake reviewer groups. Although labeling individual fake reviews and reviewers is very hard, to our surprise labeling fake reviewer groups is much easier. We also note that the proposed technique departs from the traditional supervised learning approach for spam detection because of the inherent nature of our problem which makes the classic supervised learning approach less effective. Experimental results show that the proposed method outperforms multiple strong baselines including the state-of-the-art supervised classification, regression, and learning to rank algorithms. --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. 
However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Detecting product review spammers using rating behaviors paper_content: This paper aims to detect users generating spam reviews or review spammers. We identify several characteristic behaviors of review spammers and model these behaviors so as to detect the spammers. In particular, we seek to model the following behaviors. First, spammers may target specific products or product groups in order to maximize their impact. Second, they tend to deviate from the other reviewers in their ratings of products. We propose scoring methods to measure the degree of spam for each reviewer and apply them on an Amazon review dataset. We then select a subset of highly suspicious reviewers for further scrutiny by our user evaluators with the help of a web based spammer evaluation software specially developed for user evaluation experiments. Our results show that our proposed ranking and supervised methods are effective in discovering spammers and outperform other baseline method based on helpfulness votes alone. We finally show that the detected spammers have more significant impact on ratings compared with the unhelpful reviewers. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. 
In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Detecting product review spammers using rating behaviors paper_content: This paper aims to detect users generating spam reviews or review spammers. We identify several characteristic behaviors of review spammers and model these behaviors so as to detect the spammers. In particular, we seek to model the following behaviors. First, spammers may target specific products or product groups in order to maximize their impact. Second, they tend to deviate from the other reviewers in their ratings of products. We propose scoring methods to measure the degree of spam for each reviewer and apply them on an Amazon review dataset. We then select a subset of highly suspicious reviewers for further scrutiny by our user evaluators with the help of a web based spammer evaluation software specially developed for user evaluation experiments. 
Our results show that our proposed ranking and supervised methods are effective in discovering spammers and outperform another baseline method based on helpfulness votes alone. We finally show that the detected spammers have a more significant impact on ratings compared with the unhelpful reviewers. --- paper_title: Support-vector networks paper_content: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition. --- paper_title: Detecting deceptive reviews using lexical and syntactic features paper_content: Deceptive opinion classification has attracted a lot of research interest due to the rapid growth of social media users. Despite the availability of a vast number of opinion features and classification techniques, review classification still remains a challenging task. In this work we applied stylometric features, i.e. lexical and syntactic, using supervised machine learning classifiers, i.e. Support Vector Machine (SVM) with Sequential Minimal Optimization (SMO) and Naive Bayes, to detect deceptive opinions. Detecting deceptive opinions is a difficult task for a human reader because spammers try to write wise reviews, which causes changes in writing style and verbal usage. Hence, considering stylometric features helps to distinguish the spammer's writing style and find deceptive reviews. Experiments on an existing hotel review corpus suggest that using stylometric features is a promising approach for detecting deceptive opinions. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites.
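Several of the supervised studies collected above (the n-gram classifiers of Ott et al., the SVM/Naive Bayes stylometric work, and the Yelp replication) share the same baseline recipe: turn each review into sparse word and bigram features and train a linear classifier. The sketch below illustrates that recipe with scikit-learn; the tiny inline corpus and its labels are invented placeholders, not data from any of the cited papers.

    # Illustrative sketch (not from any cited paper): the recurring supervised
    # baseline -- unigram/bigram TF-IDF features plus a linear SVM.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # Toy corpus; real experiments use hundreds of labelled reviews.
    reviews = [
        "The room was clean and the staff were friendly.",
        "Absolutely perfect stay, best hotel experience of my life!!!",
        "Check-in took twenty minutes and the wifi kept dropping.",
        "My family and I loved every single second, truly amazing.",
    ] * 10                      # repeat so cross-validation has enough samples
    labels = [0, 1, 0, 1] * 10  # 1 = deceptive, 0 = truthful (toy labels)

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
        LinearSVC(C=1.0),
    )
    scores = cross_val_score(model, reviews, labels, cv=5)
    print("5-fold accuracy: %.3f" % scores.mean())

In the cited work the same pipeline is what behavioral features are compared against, so it is a useful reference point even when it is ultimately outperformed.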
--- paper_title: Syntactic Stylometry for Deception Detection paper_content: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction. --- paper_title: Finding Deceptive Opinion Spam by Any Stretch of the Imagination paper_content: Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting.
This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Finding Deceptive Opinion Spam by Correcting the Mislabeled Instances paper_content: Assessing the trustworthiness of reviews is a key in natural language processing and computational linguistics. Previous work mainly focuses on some heuristic strategies or simple supervised learning methods, which limit the performance of this task. This paper presents a new approach, from the viewpoint of correcting the mislabeled instances, to find deceptive opinion spam. Partition a dataset into several subsets, construct a classifier set for each subset and select the best one to evaluate the whole dataset. Error variables are defined to compute the probability that the instances have been mislabeled. The mislabeled instances are corrected based on two threshold schemes, majority and non-objection. The results display significant improvements in our method in contrast to the existing baselines. --- paper_title: Finding unusual review patterns using unexpected rules paper_content: In recent years, opinion mining attracted a great deal of research attention. However, limited work has been done on detecting opinion spam (or fake reviews). The problem is analogous to spam in Web search [1, 9 11]. However, review spam is harder to detect because it is very hard, if not impossible, to recognize fake reviews by manually reading them [2]. This paper deals with a restricted problem, i.e., identifying unusual review patterns which can represent suspicious behaviors of reviewers. We formulate the problem as finding unexpected rules. The technique is domain independent. Using the technique, we analyzed an Amazon.com review dataset and found many unexpected rules and rule groups which indicate spam activities. --- paper_title: Opinion spam and analysis paper_content: Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. 
Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. 1 --- paper_title: Latent Dirichlet Allocation paper_content: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. --- paper_title: Sparse Additive Generative Models of Text paper_content: Generative models of text typically associate a multinomial with every class label or topic. Even in simple models this requires the estimation of thousands of parameters; in multi-faceted latent variable models, standard approaches require additional latent "switching" variables for every token, complicating inference. In this paper, we propose an alternative generative model for text. The central idea is that each class label or latent topic is endowed with a model of the deviation in log-frequency from a constant background distribution. This approach has two key advantages: we can enforce sparsity to prevent overfitting, and we can combine generative facets through simple addition in log space, avoiding the need for latent switching variables. We demonstrate the applicability of this idea to a range of scenarios: classification, topic modeling, and more complex multifaceted generative models. --- paper_title: Learning to Identify Review Spam paper_content: In the past few years, sentiment analysis and opinion mining becomes a popular and important task. These studies all assume that their opinion resources are real and trustful. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review site, people may write faked reviews, called review spam, to promote their products, or defame their competitors' products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. 
In this paper, we exploit machine learning methods to identify review spam. To this end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that review spammers consistently write spam. This provides us another view to identify review spam: we can identify if the author of the review is a spammer. Based on this observation, we provide a two-view semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experimental results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines. --- paper_title: Positive Unlabeled Learning for Deceptive Reviews Detection paper_content: Deceptive review detection has attracted significant attention from both business and research communities. However, due to the difficulty of the human labeling needed for supervised learning, the problem remains highly challenging. This paper proposes a novel angle on the problem by modeling PU (positive unlabeled) learning. A semi-supervised model, called mixing population and individual property PU learning (MPIPUL), is proposed. Firstly, some reliable negative examples are identified from the unlabeled dataset. Secondly, some representative positive examples and negative examples are generated based on LDA (Latent Dirichlet Allocation). Thirdly, for the remaining unlabeled examples (we call them spy examples), which cannot be explicitly identified as positive or negative, two similarity weights are assigned, by which the probabilities of a spy example belonging to the positive class and the negative class are expressed. Finally, spy examples and their similarity weights are incorporated into an SVM (Support Vector Machine) to build an accurate classifier. Experiments on a gold-standard dataset demonstrate the effectiveness of MPIPUL, which outperforms the state-of-the-art baselines. --- paper_title: Revisiting Semi-Supervised Learning for Online Deceptive Review Detection paper_content: With more consumers using online opinion reviews to inform their service decision making, opinion reviews have an economic impact on the bottom line of businesses. Unsurprisingly, opportunistic individuals or groups have attempted to abuse or manipulate online opinion reviews (e.g., spam reviews) to make profits and so on, so detecting deceptive and fake opinion reviews is a topic of ongoing research interest. In this paper, we explain how semi-supervised learning methods can be used to detect spam reviews, prior to demonstrating their utility using a data set of hotel reviews. --- paper_title: Deceptive review detection using labeled and unlabeled data paper_content: The availability of millions of products and services on e-commerce sites makes it difficult to find the most suitable product for one's requirements because of the existence of many alternatives. To cope with this, the most popular and useful approach is to follow the reviews of others in opinionated social media who have already tried them. Almost all e-commerce sites provide a facility for users to share their views on and experience of the products and services they have used. Customers' reviews are increasingly used by individuals, manufacturers and retailers for purchase and business decisions. As there is no scrutiny over the reviews received, anybody can write anything anonymously, which ultimately leads to review spam.
Moreover, driven by the desire for profit and/or publicity, spammers produce synthesized reviews to promote some products/brands and demote competitors' products/brands. Deceptive review spam has seen considerable growth over time. In this work, we have applied supervised as well as unsupervised techniques to identify review spam. The most effective feature sets have been assembled for model building. Sentiment analysis has also been incorporated in the detection process. In order to get the best performance, some well-known classifiers were applied to the labeled dataset. Further, for the unlabeled data, clustering is used after the desired attributes were computed for spam detection. Additionally, there is a high chance that spam reviewers may also be held responsible for content pollution in multimedia social networks, because nowadays many users give reviews using their social network logins. Finally, the work can be extended to find suspicious accounts responsible for posting fake multimedia content to the respective social networks. --- paper_title: Spotting Fake Reviews via Collective Positive-Unlabeled Learning paper_content: Online reviews have become an increasingly important resource for decision making and product design. But review systems are often targeted by opinion spamming. Although fake review detection has been studied by researchers for years using supervised learning, ground truth for large-scale datasets is still unavailable, and most existing approaches based on supervised learning rely on pseudo fake reviews rather than real fake reviews. Working with Dianping, the largest Chinese review hosting site, we present the first reported work on fake review detection in Chinese with filtered reviews from Dianping's fake review detection system. Dianping's algorithm has a very high precision, but the recall is hard to know. This means that all fake reviews detected by the system are almost certainly fake but the remaining reviews (unknown set) may not be all genuine. Since the unknown set may contain many fake reviews, it is more appropriate to treat it as an unlabeled set. This calls for the model of learning from positive and unlabeled examples (PU learning). By leveraging the intricate dependencies among reviews, users and IP addresses, we first propose a collective classification algorithm called Multi-typed Heterogeneous Collective Classification (MHCC) and then extend it to Collective Positive and Unlabeled learning (CPU). Our experiments are conducted on real-life reviews of 500 restaurants in Shanghai, China. Results show that our proposed models can markedly improve the F1 scores of strong baselines in both PU and non-PU learning settings. Since our models only use language-independent features, they can be easily generalized to other languages. --- paper_title: Partially Supervised Classification of Text Documents paper_content: We investigate the following problem: Given a set of documents of a particular topic or class P, and a large set M of mixed documents that contains documents from class P and other types of documents, identify the documents from class P in M. The key feature of this problem is that there is no labeled non-P document, which makes traditional machine learning techniques inapplicable, as they all need labeled documents of both classes. We call this problem partially supervised classification.
In this paper, we show that this problem can be posed as a constrained optimization problem and that under appropriate conditions, solutions to the constrained optimization problem will give good solutions to the partially supervised classification problem. We present a novel technique to solve the problem and demonstrate the effectiveness of the technique through extensive experimentation. --- paper_title: Similarity-Based Approach for Positive and Unlabeled Learning paper_content: Positive and unlabelled learning (PU learning) has been investigated to deal with the situation where only the positive examples and the unlabelled examples are available. Most of the previous works focus on identifying some negative examples from the unlabelled data, so that the supervised learning methods can be applied to build a classifier. However, for the remaining unlabelled data, which can not be explicitly identified as positive or negative (we call them ambiguous examples), they either exclude them from the training phase or simply enforce them to either class. Consequently, their performance may be constrained. This paper proposes a novel approach, called similarity-based PU learning (SPUL) method, by associating the ambiguous examples with two similarity weights, which indicate the similarity of an ambiguous example towards the positive class and the negative class, respectively. The local similarity-based and global similarity-based mechanisms are proposed to generate the similarity weights. The ambiguous examples and their similarity-weights are thereafter incorporated into an SVM-based learning phase to build a more accurate classifier. Extensive experiments on real-world datasets have shown that SPUL outperforms state-of-the-art PU learning methods. --- paper_title: Learning classifiers from only positive and unlabeled data paper_content: The input to an algorithm that learns a binary classifier normally consists of two sets of examples, where one set consists of positive examples of the concept to be learned, and the other set consists of negative examples. However, it is often the case that the available training data are an incomplete set of positive examples, and a set of unlabeled examples, some of which are positive and some of which are negative. The problem solved in this paper is how to learn a standard binary classifier given a nontraditional training set of this nature. Under the assumption that the labeled examples are selected randomly from the positive examples, we show that a classifier trained on positive and unlabeled examples predicts probabilities that differ by only a constant factor from the true conditional probabilities of being positive. We show how to use this result in two different ways to learn a classifier from a nontraditional training set. We then apply these two new methods to solve a real-world problem: identifying protein records that should be included in an incomplete specialized molecular biology database. Our experiments in this domain show that models trained using the new methods perform better than the current state-of-the-art biased SVM method for learning from positive and unlabeled examples. --- paper_title: Building text classifiers using positive and unlabeled examples paper_content: We study the problem of building text classifiers using positive and unlabeled examples. The key feature of this problem is that there is no negative example for learning. Recently, a few techniques for solving this problem were proposed in the literature. 
These techniques are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. We first introduce some new methods for the two steps, and perform a comprehensive evaluation of all possible combinations of methods of the two steps. We then propose a more principled approach to solving the problem based on a biased formulation of SVM, and show experimentally that it is more accurate than the existing techniques. --- paper_title: RLOSD: Representation learning based opinion spam detection paper_content: Nowadays, with the vast increase in online reviews, the harmful influence of spam reviews on decision making causes irrecoverable outcomes for both customers and organizations. Existing methods search for ways to discriminate between spam and non-spam reviews. Most algorithms focus on feature engineering approaches to arrive at a suitable data representation. In this paper we propose a decision tree-based method to distinguish deceptive reviews from trustworthy ones. We use unsupervised representation learning along with traditional feature selection methods to extract appropriate features and evaluate them with a decision tree. Our model takes data correlation into consideration to select suitable features. The results show better performance in detecting opinion spam compared with the most common methods in this area. --- paper_title: Text mining and probabilistic language modeling for online review spam detection paper_content: In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet. --- paper_title: Integrating Feature Selection and Feature Extraction Methods With Deep Learning to Predict Clinical Outcome of Breast Cancer paper_content: In many microarray studies, classifiers have been constructed based on gene signatures to predict clinical outcomes for various cancer sufferers.
However, signatures originating from different studies often suffer from poor robustness when used in the classification of data sets independent of those from which they were generated. In this paper, we present an unsupervised feature learning framework by integrating a principal component analysis algorithm and an autoencoder neural network to identify different characteristics from gene expression profiles. As the foundation for the obtained features, an ensemble classifier based on the AdaBoost algorithm (PCA-AE-Ada) was constructed to predict clinical outcomes in breast cancer. During the experiments, we established an additional classifier with the same classifier learning strategy (PCA-Ada) in order to serve as a baseline for the proposed method, where the only difference is the training inputs. The area under the receiver operating characteristic curve index, Matthews correlation coefficient index, accuracy, and other evaluation parameters of the proposed method were tested on several independent breast cancer data sets and compared with representative gene signature-based algorithms including the baseline method. Experimental results demonstrate that the proposed method using deep learning techniques performs better than others. --- paper_title: Natural Language Processing (almost) from Scratch paper_content: We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements. --- paper_title: DRI-RCNN: An approach to deceptive review identification using recurrent convolutional neural network paper_content: With the widespread presence of deceptive opinions on the Internet, how to identify online deceptive reviews automatically has become an attractive topic in the research field. Traditional methods concentrate on extracting different features from online reviews and training machine learning classifiers to produce models to decide whether an incoming review is deceptive or not. This paper proposes an approach called DRI-RCNN (Deceptive Review Identification by Recurrent Convolutional Neural Network) to identify deceptive reviews by using word contexts and deep learning. The basic idea is that since deceptive reviews and truthful reviews are written by writers without and with real experience respectively, the writers of the reviews should have different contextual knowledge on their target objectives under description. In order to differentiate the deceptive and truthful contextual knowledge embodied in the online reviews, we represent each word in a review with six components as a recurrent convolutional vector. The first and second components are two numerical word vectors derived from training deceptive and truthful reviews, respectively. The third and fourth components are left neighboring deceptive and truthful context vectors derived by training a recurrent convolutional neural network on context vectors and word vectors of left words.
The fifth and sixth components are right neighboring deceptive and truthful context vectors of right words. Further, we employ a max-pooling and ReLU (Rectified Linear Unit) filter to transfer the recurrent convolutional vectors of the words in a review to a review vector by extracting the positive maximum feature elements in the recurrent convolutional vectors of the words in the review. Experimental results on the spam dataset and the deception dataset demonstrate that the proposed DRI-RCNN approach outperforms the state-of-the-art techniques in deceptive review identification. --- paper_title: Towards Accurate Deceptive Opinion Spam Detection based on Word Order-preserving CNN paper_content: As a major product of organized online spamming activity, deceptive opinion spam is highly harmful. The identification of deceptive opinion spam is of great importance because of the rapid and dramatic development of the Internet. Effectively distinguishing between genuine and deceptive opinions plays an important role in maintaining and improving the Internet environment. Deceptive opinion spam is typically very short and varied in type and content. In order to identify deceptive opinions effectively, beyond the textual semantics and emotional polarity that have been widely used in text analysis, we need to further summarize the deep features of deceptive opinions so as to characterize them effectively. In this paper, we use a traditional convolutional neural network and improve it from the standpoint of word order by using a method called word order-preserving k-max pooling, which makes the convolutional neural network more suitable for text classification. Experiments show that this yields better deceptive opinion spam detection. --- paper_title: Handling Cold-Start Problem in Review Spam Detection by Jointly Embedding Texts and Behaviors paper_content: Solving the cold-start problem in review spam detection is an urgent and significant task. It can help the on-line review websites to relieve the damage of spammers in time, but has never been investigated by previous work. This paper proposes a novel neural network model to detect review spam for the cold-start problem, by learning to represent the new reviewers' reviews with jointly embedded textual and behavioral information. Experimental results prove that the proposed model achieves effective performance and possesses preferable domain adaptability. It is also applicable to a large-scale dataset in an unsupervised way. --- paper_title: Learning Document Representation for Deceptive Opinion Spam Detection paper_content: Deceptive opinion spam in reviews of products or services is very harmful to customers' decision making. Existing approaches to detecting deceptive spam concentrate on feature design. Hand-crafted features can capture some linguistic phenomena, but they are time-consuming to design and cannot reveal the connotative semantic meaning of a review. We present a neural network to learn document-level representations. In our model, we not only learn to represent each sentence but also represent the whole document of the review. We apply a traditional convolutional neural network to represent the semantic meaning of sentences. We present two variant convolutional neural-network models to learn the document representation. The model that takes sentence importance into consideration shows the better performance in deceptive spam detection, improving F1 by 5%.
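The convolution-based models described above (sentence-level CNNs with pooling over word positions) share a common skeleton: embed tokens, convolve over the sequence, pool, and classify. The PyTorch sketch below shows only that skeleton; the vocabulary size, filter count and the random batch are arbitrary placeholders and do not reproduce any of the cited architectures.

    # Minimal convolution-plus-max-pooling text encoder; hyperparameters and
    # the random input are placeholders, not the authors' settings.
    import torch
    import torch.nn as nn

    class ConvSentenceEncoder(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=100, n_filters=64, kernel=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=kernel, padding=1)
            self.classify = nn.Linear(n_filters, 2)   # deceptive vs. truthful logits

        def forward(self, token_ids):                 # token_ids: (batch, seq_len)
            x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
            x = x.transpose(1, 2)                     # Conv1d expects (batch, channels, seq)
            x = torch.relu(self.conv(x))              # (batch, n_filters, seq_len)
            x = x.max(dim=2).values                   # max-over-time pooling
            return self.classify(x)                   # (batch, 2)

    logits = ConvSentenceEncoder()(torch.randint(0, 5000, (8, 40)))
    print(logits.shape)   # torch.Size([8, 2])

The document-level and word-order-preserving variants cited above differ mainly in how the pooling step is arranged, not in this overall shape.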
--- paper_title: Detecting Deceptive Review Spam via Attention-Based Neural Networks paper_content: In recent years, the influence of deceptive review spam has further strengthened in purchasing decisions, election choices and product design. Detecting deceptive review spam has attracted more and more researchers. Existing work makes utmost efforts to explore effective linguistic and behavioral features, and utilizes off-the-shelf classification algorithms to detect spam. But such models usually represent compromised results trained on whole datasets. They fail to distinguish whether a review is linguistically suspicious, behaviorally suspicious, or both. In this paper, we propose an attention-based neural network to detect deceptive review spam by using linguistic and behavioral features in a differentiated way. Experimental results on real commercial public datasets show the effectiveness of our model over the state-of-the-art methods. --- paper_title: Detecting Deceptive Reviews Using Generative Adversarial Networks paper_content: In the past few years, consumer review sites have become the main target of deceptive opinion spam, where fictitious opinions or reviews are deliberately written to sound authentic. Most of the existing work to detect deceptive reviews focuses on building supervised classifiers based on syntactic and lexical patterns of an opinion. With the successful use of Neural Networks on various classification applications, in this paper, we propose FakeGAN, a system that for the first time augments and adopts Generative Adversarial Networks (GANs) for a text classification task, in particular, detecting deceptive reviews. Unlike standard GAN models which have a single Generator and Discriminator model, FakeGAN uses two discriminator models and one generative model. The generator is modeled as a stochastic policy agent in reinforcement learning (RL), and the discriminators use a Monte Carlo search algorithm to estimate and pass the intermediate action-value as the RL reward to the generator. Providing the generator model with two discriminator models avoids the mode collapse issue by learning from both distributions of truthful and deceptive reviews. Indeed, our experiments show that using two discriminators provides FakeGAN with high stability, addressing a known issue for GAN architectures. While FakeGAN is built upon a semi-supervised classifier, known for lower accuracy, our evaluation results on a dataset of TripAdvisor hotel reviews show the same performance in terms of accuracy as the state-of-the-art approaches that apply supervised machine learning. These results indicate that GANs can be effective for text classification tasks. Specifically, FakeGAN is effective at detecting deceptive reviews. --- paper_title: Syntactic Stylometry for Deception Detection paper_content: Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (Ott et al., 2011) reaching 91.2% accuracy with 14% error reduction.
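The syntactic-stylometry result above rests on a simple feature idea: rewrite each parse tree as the multiset of CFG production rules it contains and feed those counts to a classifier. The sketch below shows only the feature-extraction step using NLTK; the bracketed parse is a hand-made toy example, since in practice the trees would come from an external constituency parser.

    # Turn a constituency parse into CFG production-rule count features.
    from collections import Counter
    from nltk import Tree

    # Toy bracketed parse; real pipelines obtain these from a parser.
    parse = "(S (NP (PRP We)) (VP (VBD loved) (NP (DT the) (NN hotel))) (. .))"
    tree = Tree.fromstring(parse)

    # Keep only non-lexical rules so the features capture syntax, not vocabulary.
    rules = Counter(
        str(prod) for prod in tree.productions() if prod.is_nonlexical()
    )
    for rule, count in rules.items():
        print(count, rule)

These rule counts can then be appended to the n-gram feature vector, which is roughly how the combined models in the cited work are assembled.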
--- paper_title: Detecting Deceptive Opinions with Profile Compatibility paper_content: We propose using profile compatibility to differentiate genuine and fake product reviews. For each product, a collective profile is derived from a separate collection of reviews. Such a profile contains a number of aspects of the product, together with their descriptions. For a given unseen review about the same product, we build a test profile using the same approach. We then perform a bidirectional alignment between the test and the collective profile, to compute a list of aspect-wise compatible features. We adopt Ott et al. (2011)’s op spam v1.3 dataset for identifying truthful vs. deceptive reviews. We extend the recently proposed N-GRAM+SYN model of Feng et al. (2012a) by incorporating profile compatibility features, showing such an addition significantly improves upon their state-of-the-art classification performance. --- paper_title: Detecting spamming reviews using long short-term memory recurrent neural network framework paper_content: Some unethical companies may hire workers (fake review spammers) to write reviews to influence consumers' purchasing decisions. However, it is not easy for consumers to distinguish real reviews posted by ordinary users from fake reviews posted by fake review spammers. In the current study, we attempt to use a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) framework to detect spammers. We used a real case of fake reviews in Taiwan, and compared the analytical results of the current study with the results of previous literature. We found that the LSTM method was more effective than a Support Vector Machine (SVM) for detecting fake reviews. We concluded that deep learning could be used to detect fake reviews. --- paper_title: What Yelp Fake Review Filter Might Be Doing ? paper_content: Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting.
This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. --- paper_title: Towards a General Rule for Identifying Deceptive Opinion Spam paper_content: Consumers’ purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam— fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites. --- paper_title: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation paper_content: In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. --- paper_title: Context-augmented convolutional neural networks for twitter sarcasm detection paper_content: Sarcasm detection on Twitter has received increasing research attention in recent years. However, existing work has two limitations. First, existing work mainly uses discrete models, requiring a large number of manual features, which can be expensive to obtain. Second, most existing work focuses on feature engineering according to the tweet itself, and does not utilize contextual information regarding the target tweet. However, contextual information (e.g. a conversation or the history tweets of the target tweet author) may be available for the target tweet. To address the above two issues, we explore neural network models for Twitter sarcasm detection. Based on convolutional neural networks, we propose two different context-augmented neural models for this task. Results on the dataset show that neural models can achieve better performance compared to state-of-the-art discrete models. Meanwhile, the proposed context-augmented neural models can effectively decode sarcastic clues from contextual information, and give a relative improvement in the detection performance. ---
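Several references in the list above (the MPIPUL model, the collective positive-unlabeled work on Dianping, and the classic results on learning classifiers from only positive and unlabeled data) revolve around the same correction: train an ordinary probabilistic classifier to separate labelled positives from the unlabelled pool, estimate how often true positives actually get labelled, and rescale the scores. The sketch below illustrates that correction on synthetic numeric data; every quantity is invented for the example and no review corpus is involved.

    # Elkan-and-Noto style PU correction on synthetic data (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    pos = rng.normal(loc=+1.0, size=(300, 5))    # hidden positives (spam)
    neg = rng.normal(loc=-1.0, size=(700, 5))    # hidden negatives (ham)
    X = np.vstack([pos, neg])
    labeled = np.zeros(len(X), dtype=int)
    labeled[rng.choice(300, size=120, replace=False)] = 1   # only some positives are labelled

    X_tr, X_hold, s_tr, s_hold = train_test_split(X, labeled, test_size=0.3, random_state=0)
    g = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)   # labelled vs. unlabelled

    # c = E[g(x) | x is a labelled positive], estimated on held-out labelled data
    c = g.predict_proba(X_hold[s_hold == 1])[:, 1].mean()
    p_spam = np.minimum(g.predict_proba(X)[:, 1] / c, 1.0)  # corrected P(positive | x)
    print("estimated labelling frequency c = %.2f" % c)

The appeal of this correction in the fake-review setting is that a site's high-precision filter supplies the labelled positives, while everything it leaves alone can honestly be treated as unlabelled rather than genuine.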
Title: Learning to Detect Deceptive Opinion Spam: A Survey
Section 1: INTRODUCTION
Description 1: This section provides background information on deceptive opinion spam, its economic implications, types of opinion spam, and the necessity of detecting deceptive opinion spam. It also outlines the objectives of this survey.
Section 2: TASK DEFINITION
Description 2: This section defines the task of deceptive opinion spam detection, detailing the sub-tasks involved, including the detection of deceptive opinion spam and the detection of deceptive opinion spammers.
Section 3: DATASETS
Description 3: This section summarizes various datasets used for detecting deceptive opinion spam, categorizing them based on their construction methods such as rule-based, human-based, filtering algorithms, and AMT-based.
Section 4: METHODS
Description 4: This section provides an overview of the methods used for detecting deceptive opinion spam, including traditional statistical models, semi-supervised learning, unsupervised learning, and neural network models. It includes descriptions of feature engineering and various algorithms.
Section 5: DISCUSSIONS AND FUTURE DIRECTIONS
Description 5: This section discusses the current state of research in deceptive opinion spam detection, highlighting the challenges and limitations of existing methods. It also suggests potential future research directions to improve the detection of deceptive opinion spam.
Short Literature Review for a General Player Model Based on Behavlets
8
--- paper_title: Behavlets: a method for practical player modelling using psychology-based player traits and domain specific features paper_content: As player demographics broaden it has become important to understand variation in player types. Improved player models can help game designers create games that accommodate a range of playing styles, and may also facilitate the design of systems that detect the currently-expressed player type and adapt dynamically in real-time. Existing approaches can model players, but most focus on tracking and classifying behaviour based on simple functional metrics such as deaths, specific choices, player avatar attributes, and completion times. We describe a novel approach which seeks to leverage expert domain knowledge using a theoretical framework linking behaviour and game design patterns. The aim is to derive features of play from sequences of actions which are intrinsically informative about behaviour--which, because they are directly interpretable with respect to psychological theory of behaviour, we name `Behavlets'. We present the theoretical underpinning of this approach from research areas including psychology, temperament theory, player modelling, and game composition. The Behavlet creation process is described in detail; illustrated using a clone of the well-known game Pac-Man, with data gathered from 100 participants. A workshop-based evaluation study is also presented, where nine game design expert participants were briefed on the Behavlet concepts and requisite models, and then attempted to apply the method to games of the well-known first/third-person shooter genres, exemplified by `Gears of War', (Microsoft). The participants found 139 Behavlet concepts mapping from behavioural preferences of the temperament types, to design patterns of the shooter genre games. We conclude that the Behavlet approach has significant promise, is complementary to existing methods and can improve theoretical validity of player models. --- paper_title: Does an Individual’s Myers-Briggs Type Indicator Preference Influence Task-Oriented Technology Use? paper_content: Technology innovators face the challenge of finding representative groups of users to participate in design activities. In some cases, software applications will target an audience of millions, and the characteristics of the vast number of potential users are unclear to the design team. In other cases, a technology is so new that the target market of potential users is not known. The Myers-Briggs Type Indicator (MBTI) measures individual personality preferences on four dimensions and is used by psychologists to explain certain differences in human behavior. The definitions of the MBTI dimensions suggest they could be a factor explaining why individuals take different approaches to using software applications. This study explores whether MBTI preferences affect behavior when individuals perform tasks using three different software applications. We find a person’s MBTI type influences how they organize email and the informational features they rely on when using a decision support system. --- paper_title: What video games have to teach us about learning and literacy paper_content: Good computer and video games like System Shock 2, Deus Ex, Pikmin, Rise of Nations, Neverwinter Nights, and Xenosaga: Episode 1 are learning machines. They get themselves learned and learned well, so that they get played long and hard by a great many people. 
This is how they and their designers survive and perpetuate themselves. If a game cannot be learned and even mastered at a certain level, it won't get played by enough people, and the company that makes it will go broke. Good learning in games is a capitalist-driven Darwinian process of selection of the fittest. Of course, game designers could have solved their learning problems by making games shorter and easier, by dumbing them down, so to speak. But most gamers don't want short and easy games. Thus, designers face and largely solve an intriguing educational dilemma, one also faced by schools and workplaces: how to get people, often young people, to learn and master something that is long and challenging--and enjoy it, to boot. --- paper_title: Reinterpreting the Myers-Briggs Type Indicator From the Perspective of the Five-Factor Model of Personality paper_content: The Myers-Briggs Type Indicator (MBTI, Myers & McCaulley, 1985) was evaluated from the perspectives of Jung's theory of psychological types and the five-factor model of personality as measured by self-reports and peer ratings on the NEO Personality Inventory (NEO-PI, Costa & McCrae, 1985b). Data were provided by 267 men and 201 women ages 19 to 93. Consistent with earlier research and evaluations, there was no support for the view that the MBTI measures truly dichotomous preferences or qualitatively distinct types; instead, the instrument measures four relatively independent dimensions. The interpretation of the Judging-Perceiving index was also called into question. The data suggest that Jung's theory is either incorrect or inadequately operationalized by the MBTI and cannot provide a sound basis for interpreting it. However, correlational analyses showed that the four MBTI indices did measure aspects of four of the five major dimensions of normal personality. The five-factor model provides an alternative basis for interpreting MBTI findings within a broader, more commonly shared conceptual framework. --- paper_title: Player Typology in Theory and Practice paper_content: Player satisfaction modeling depends in part upon quantitative or qualitative typologies of playing preferences, although such approaches require scrutiny. Examination of psychometric typologies reveals that type theories have—except in rare cases—proven inadequate and have made way for alternative trait theories. This suggests any future player typology that will be sufficiently robust will need foundations in the form of a trait theory of playing preferences. This paper tracks the development of a sequence of player typologies developing from psychometric type theory roots towards an independently validated trait theory of play, albeit one yet to be fully developed. Statistical analysis of the results of one survey in this lineage is presented, along with a discussion of theoretical and practical ways in which the surveys and their implied typological instruments have evolved. --- paper_title: People Efficiently Explore the Solution Space of the Computationally Intractable Traveling Salesman Problem to Find Near-Optimal Tours paper_content: Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal.
Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current better solutions in such a way that edges belonging to the optimal solution ("good" edges) were significantly more likely to stay than other edges ("bad" edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants "ran out of ideas." In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics. --- paper_title: An Introduction to the Five-Factor Model and Its Applications paper_content: ABSTRACT The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists. --- paper_title: Challenges for Game Designers paper_content: Welcome to a book written to challenge you, improve your brainstorming abilities, and sharpen your game design skills! Challenges for Game Designers: Non-Digital Exercises for Video Game Designers is filled with enjoyable, interesting, and challenging exercises to help you become a better video game designer, whether you are a professional or aspire to be. Each chapter covers a different topic important to game designers, and was taken from actual industry experience. After a brief overview of the topic, there are five challenges that each take less than two hours and allow you to apply the material, explore the topic, and expand your knowledge in that area. Each chapter also includes 10 non-digital shorts to further hone your skills. None of the challenges in the book require any programming or a computer, but many of the topics feature challenges that can be made into fully functioning games. 
The book is useful for professional designers, aspiring designers, and instructors who teach game design courses, and the challenges are great for both practice and homework assignments. The book can be worked through chapter by chapter, or you can skip around and do only the challenges that interest you. As with anything else, making great games takes practice and Challenges for Game Designers provides you with a collection of fun, thought-provoking, and of course, challenging activities that will help you hone vital skills and become the best game designer you can be. --- paper_title: Towards an Ontological Language for Game Analysis paper_content: The Game Ontology Project (GOP) is creating a framework for describing, analyzing and studying games, by defining a hierarchy of concepts abstracted from an analysis of many specific games. GOP borrows concepts and methods from prototype theory as well as grounded theory to achieve a framework that is always growing and changing as new games are analyzed or particular research questions are explored. The top level of the ontology (interface, rules, goals, entities, and entity manipulation) is described as well as a particular ontological entry. Finally, by engaging in three short discussions centered on relevant games studies research questions, the ontology's utility is demonstrated. --- paper_title: Uncertainty in Games paper_content: In life, uncertainty surrounds us. Things that we thought were good for us turn out to be bad for us (and vice versa); people we thought we knew well behave in mysterious ways; the stock market takes a nosedive. Thanks to an inexplicable optimism, most of the time we are fairly cheerful about it all. But we do devote much effort to managing and ameliorating uncertainty. Is it any wonder, then, asks Greg Costikyan, that we have taken this aspect of our lives and transformed it culturally, making a series of elaborate constructs that subject us to uncertainty but in a fictive and nonthreatening way? That is: we create games. In this concise and entertaining book, Costikyan, an award-winning game designer, argues that games require uncertainty to hold our interest, and that the struggle to master uncertainty is central to their appeal. Game designers, he suggests, can harness the idea of uncertainty to guide their work. Costikyan explores the many sources of uncertainty in many sorts of games -- from Super Mario Bros. to Rock/Paper/Scissors, from Monopoly to CityVille, from FPS Deathmatch play to Chess. He describes types of uncertainty, including performative uncertainty, analytic complexity, and narrative anticipation. And he suggests ways that game designers who want to craft novel game experiences can use an understanding of game uncertainty in its many forms to improve their designs. --- paper_title: Formal Models And Game Design paper_content: A baseband switching device for switching between first and second redundant channels is provided with the capability of adjusting the phase and polarity of the data signals in each channel to be equal so that data will continue without interruption when switching from one channel to another. --- paper_title: Rules of Play: Game Design Fundamentals paper_content: This text offers an introduction to game design and a unified model for looking at all kinds of games, from board games and sports to computer and video games. Also included are concepts, strategies, and methodologies for creating and understanding games.
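The Game Ontology Project entry above describes a hierarchy whose top level is interface, rules, goals, entities, and entity manipulation. As a rough illustration of how such an ontology can be represented and queried programmatically (the structure and the sample lower-level entry below are assumptions, not the GOP's actual data), a small sketch:

```python
# Toy representation of an ontology of game concepts with parent links,
# loosely inspired by the Game Ontology Project's top-level categories.
from dataclasses import dataclass, field

@dataclass
class OntologyEntry:
    name: str
    parent: "OntologyEntry | None" = None
    description: str = ""
    children: list = field(default_factory=list)

    def add_child(self, name, description=""):
        child = OntologyEntry(name, parent=self, description=description)
        self.children.append(child)
        return child

    def ancestors(self):
        node, path = self.parent, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return path

root = OntologyEntry("game")
top_level = {n: root.add_child(n) for n in
             ["interface", "rules", "goals", "entities", "entity manipulation"]}

# A hypothetical lower-level entry, just to show how entries hang off the tree.
to_own = top_level["entity manipulation"].add_child(
    "to own", "an entity takes possession of another entity")

print(to_own.ancestors())  # ['entity manipulation', 'game']
```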
--- paper_title: Patterns In Game Design paper_content: PART I BACKGROUND 1 Introduction 2 An Activity-Based Framework for Describing Games 3 Game Design Patterns PART II THE PATTERN 4 Using Design Patterns 5 Game Design Patterns for Game Elements 6 Game Design Patterns for Resource and Resource Management 7 Game Design Patterns for Information, Communication, and Presentation 8 Actions and Events Patterns 9 Game Design Patterns for Narrative Structures, Predictability, and Immersion Patterns 10 Game Design Patterns for Social Interaction 11 Game Design Patterns for Goals 12 Game Design Patterns for Goal Structures 13 Game Design Patterns for Game Sessions 14 Game Design Patterns for Game Mastery and Balancing 15 Game Design Patterns for Meta Games, Replayability, and Learning Curves --- paper_title: Intelligent Modeling of the User in Interactive Entertainment paper_content: A theme of the symposium is to explore ways to employ AI to make games more appealing to people who do not enjoy current genres, and to expand the market for interactive entertainment beyond the traditional niche of young male players. We suggest that AI techniques employed in the world of intelligent tutoring to model the user and adjust instruction, help and content could be fruitfully adapted to interactive entertainment. In computer-based educational tutoring, adaptivity to user behaviors and characteristics such as gender and cognitive developmental level have been shown to increase learner motivation, engagement and achievement in the area of mathematics learning. Similarly, utilizing data regarding player behaviors such as latency and errors to construct a model of the player would allow for more adaptive game play, which in turn would increase the appeal of computer games to a wider audience. --- paper_title: Integrating Domain Experts in Educational Game Authoring: A Case Study paper_content: Authoring educational games introduces difficult problems because it is the product of multidisciplinary work, integrating very different experts with different backgrounds that use different terminology. In this paper we discuses how a team composed of computer science experts, an education expert and two medical experts successfully tacked the problem of designing and implementing an educational video game. An approach consisting of different tools and strategies was used to ensure educational value, correctness and completeness of the knowledge represented in the game. The game's goal is to teach basic medical first response procedures to young students (12-15 year old) by using photo realistic representations of the situations and videos with correct realization of the procedures. The game was successfully completed and is currently available online and being tested with real students. --- paper_title: An Investigation of Gamification Typologies for Enhancing Learner Motivation paper_content: In this paper we present a new gamified learning system called Reflex which builds on our previous research, placing greater emphasis on variation in learner motivation and associated behaviour, having a particular focus on gamification typologies. Reflex comprises a browser based 3D virtual world that embeds both learning content and learner feedback. In this way the topography of the virtual world plays an important part in the presentation and access to learning material and learner feedback. 
Reflex presents information to learners based on their curriculum learning objectives and tracks their movement and interactions within the world. A core aspect of Reflex is its gamification design, with our engagement elements and processes based on Marczewski's eight gamification types [1]. We describe his model and its relationship to Bartle's player types [2] as well as the RAMP intrinsic motivation model [3]. We go on to present an analysis of experiments using Reflex with students on two 2nd year Computing modules. Our data mining and cluster analysis on the results of a gamification typology questionnaire expose variation in learner motivation. The results from a comprehensive tracking of the interactions of learners within Reflex are discussed and the acquired tracking data is discussed in context of gamification typologies and metacognitive tendencies of the learners. We discuss correlations in actual learner behaviour to that predicted by gamified learner profile. Our results illustrate the importance of taking variation in learner motivation into account when designing gamified learning systems. --- paper_title: User-system-experience model for user centered design in computer games paper_content: This paper details the central ideas to date, from a PhD entitled ‘Player Profiling for Adaptive Artificial Intelligence in Computer and Video Games’. Computer and videogames differ from other web and productivity software in that games are much more highly interactive and immersive experiences. Whereas usability and user modelling for other software may be based on productivity alone, games require an additional factor that takes account of the quality of the user experience in playing a game. In order to describe that experience we describe a model of User, System and Experience (USE) in which the primary construct for evaluation of a player’s experience will be the Experience Fluctuation Model (EFM), taken from Flow theory. We illustrate with a straightforward example how this system may be automated in real-time within a commercial game. --- paper_title: Compositional abstractions of hybrid control systems paper_content: Abstraction is a natural way to hierarchically decompose the analysis and design of hybrid systems. Given a hybrid control system and some desired properties, one extracts an abstracted system while preserving the properties of interest. Abstractions of purely discrete systems is a mature area, whereas abstractions of continuous systems is a recent activity. We present a framework for abstraction that applies to abstract control systems capturing discrete, continuous, and hybrid systems. Parallel composition is presented in a categorical framework and an algorithm is proposed to construct abstractions of hybrid control systems. Finally, we show that our abstractions of hybrid systems are compositional. --- paper_title: Theory of Games and Economic Behavior paper_content: This is the classic work upon which modern-day game theory is based. What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published "Theory of Games and Economic Behavior." In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. 
Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences. --- paper_title: Formal Models And Game Design paper_content: A baseband switching device for switching between first and second redundant channels is provided with the capability of adjusting the phase and polarity of the data signals in each channel to be equal so that data will continue without interruption when switching from one channel to another. --- paper_title: Machine learning in digital games: a survey paper_content: Artificial intelligence for digital games constitutes the implementation of a set of algorithms and techniques from both traditional and modern artificial intelligence in order to provide solutions to a range of game dependent problems. However, the majority of current approaches lead to predefined, static and predictable game agent responses, with no ability to adjust during game-play to the behaviour or playing style of the player. Machine learning techniques provide a way to improve the behavioural dynamics of computer controlled game agents by facilitating the automated generation and selection of behaviours, thus enhancing the capabilities of digital game artificial intelligence and providing the opportunity to create more engaging and entertaining game-play experiences. This paper provides a survey of the current state of academic machine learning research for digital game environments, with respect to the use of techniques from neural networks, evolutionary computation and reinforcement learning for game agent control. --- paper_title: Learning principles and interaction design for ‘Green My Place’: A massively multiplayer serious game paper_content: The usual approach to serious game design is to construct a single game intended to address the specific domain problem being addressed. This paper describes a novel alternative approach, focussed on embedding smaller game elements into a comprehensive framework, which provides stronger motive for play and thus greater chance of effect. This serious game design methodology was developed for an EU project to teach energy efficient knowledge and behaviour to users of public buildings around Europe. The successful implementation of this game is also described. The cutting-edge educational principles that formed the basis for the design are drawn from recent research in serious games and energy efficiency, and include the Behavlet, a novel behaviour-transformation concept developed by the authors. The game design framework presented illustrates a clear approach for serious games dealing with topics applicable at societal scales. --- paper_title: Player modeling using self-organization in Tomb Raider: Underworld paper_content: We present a study focused on constructing models of players for the major commercial title Tomb Raider: Underworld (TRU). Emergent self-organizing maps are trained on high-level playing behavior data obtained from 1365 players that completed the TRU game. The unsupervised learning approach utilized reveals four types of players which are analyzed within the context of the game. 
The proposed approach automates, in part, the traditional user and play testing procedures followed in the game industry since it can inform game developers, in detail, if the players play the game as intended by the game design. Subsequently, player models can assist the tailoring of game mechanics in real-time for the needs of the player type identified. --- paper_title: Human-level control through deep reinforcement learning paper_content: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. --- paper_title: Evolving personas for player decision modeling paper_content: This paper explores how evolved game playing agents can be used to represent a priori defined archetypical ways of playing a test-bed game, as procedural personas. The end goal of such procedural personas is substituting players when authoring game content manually, procedurally, or both (in a mixed-initiative setting). Building on previous work, we compare the performance of newly evolved agents to agents trained via Q-learning as well as a number of baseline agents. Comparisons are performed on the grounds of game playing ability, generalizability, and conformity among agents. Finally, all agents' decision making styles are matched to the decision making styles of human players in order to investigate whether the different methods can yield agents who mimic or differ from human decision making in similar ways. 
The experiments performed in this paper conclude that agents developed from a priori defined objectives can express human decision making styles and that they are more generalizable and versatile than Q-learning and hand-crafted agents. --- paper_title: Some studies in machine learning using the game of checkers paper_content: Two machine-learning procedures have been investigated in some detail using the game of checkers. Enough work has been done to verify the fact that a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program. Furthermore, it can learn to do this in a remarkably short period of time (8 or 10 hours of machine-playing time) when given only the rules of the game, a sense of direction, and a redundant and incomplete list of parameters which are thought to have something to do with the game, but whose correct signs and relative weights are unknown and unspecified. The principles of machine learning verified by these experiments are, of course, applicable to many other situations. --- paper_title: Mastering the game of Go with deep neural networks and tree search paper_content: The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away. --- paper_title: Player-Centred Game Design: Player Modelling and Adaptive Digital Games paper_content: We describe an approach to player-centred game design through adaptive game technologies [9]. The work presented is the result of on-going collaborative research between Media and Computing groups at the University of Ulster, and so we begin with a review of related literature from both areas before presenting our new ideas. In particular we focus on three areas of related research: understanding players, modelling players, and adaptive game technology. We argue that player modelling and adaptive technologies may be used alongside existing approaches to facilitate improved player-centred game design in order to provide a more appropriate level of challenge, smooth the learning curve, and enhance the gameplay experience for individual players regardless of gender, age and experience. However, adaptive game behaviour is a controversial topic within game research and development and so while we outline the potential of such technologies, we also address the most significant concerns.
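The entries above on checkers, deep Q-networks, and Q-learning-trained personas all rest on the same value-learning idea: an agent improves an estimate of action values from the rewards the game returns. A bare tabular Q-learning update is sketched below; the environment interface is a placeholder and is not taken from any of the cited systems.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning sketch. `env` is assumed to expose reset(),
    step(action) -> (next_state, reward, done) and a list env.actions."""
    q = defaultdict(float)  # (state, action) -> estimated value

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # one-step temporal-difference update
            best_next = max(q[(next_state, a)] for a in env.actions)
            target = reward + (0.0 if done else gamma * best_next)
            q[(state, action)] += alpha * (target - q[(state, action)])

            state = next_state
    return q
```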
--- paper_title: Real-time rule-based classification of player types in computer games paper_content: The power of using machine learning to improve or investigate the experience of play is only beginning to be realised. For instance, the experience of play is a psychological phenomenon, yet common psychological concepts such as the typology of temperaments have not been widely utilised in game design or research. An effective player typology provides a model by which we can analyse player behaviour. We present a real-time classifier of player type, implemented in the test-bed game Pac-Man. Decision Tree algorithms CART and C5.0 were trained on labels from the DGD player typology (Bateman and Boon, 21st century game design, vol. 1, 2005). The classifier is then built by selecting rules from the Decision Trees using a rule- performance metric, and experimentally validated. We achieve ~70% accuracy in this validation testing. We further analyse the concept descriptions learned by the Decision Trees. The algorithm output is examined with respect to a set of hypotheses on player behaviour. A set of open questions is then posed against the test data obtained from validation testing, to illustrate the further insights possible from extended analysis. --- paper_title: Behavlets: a method for practical player modelling using psychology-based player traits and domain specific features paper_content: As player demographics broaden it has become important to understand variation in player types. Improved player models can help game designers create games that accommodate a range of playing styles, and may also facilitate the design of systems that detect the currently-expressed player type and adapt dynamically in real-time. Existing approaches can model players, but most focus on tracking and classifying behaviour based on simple functional metrics such as deaths, specific choices, player avatar attributes, and completion times. We describe a novel approach which seeks to leverage expert domain knowledge using a theoretical framework linking behaviour and game design patterns. The aim is to derive features of play from sequences of actions which are intrinsically informative about behaviour--which, because they are directly interpretable with respect to psychological theory of behaviour, we name `Behavlets'. We present the theoretical underpinning of this approach from research areas including psychology, temperament theory, player modelling, and game composition. The Behavlet creation process is described in detail; illustrated using a clone of the well-known game Pac-Man, with data gathered from 100 participants. A workshop-based evaluation study is also presented, where nine game design expert participants were briefed on the Behavlet concepts and requisite models, and then attempted to apply the method to games of the well-known first/third-person shooter genres, exemplified by `Gears of War', (Microsoft). The participants found 139 Behavlet concepts mapping from behavioural preferences of the temperament types, to design patterns of the shooter genre games. We conclude that the Behavlet approach has significant promise, is complementary to existing methods and can improve theoretical validity of player models. --- paper_title: Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty paper_content: This paper proposes to maintain player's engagement by adapting game difficulty according to player's emotions assessed from physiological signals. 
The validity of this approach was first tested by analyzing the questionnaire responses, electroencephalogram (EEG) signals, and peripheral signals of the players playing a Tetris game at three difficulty levels. This analysis confirms that the different difficulty levels correspond to distinguishable emotions, and that, playing several times at the same difficulty level gives rise to boredom. The next step was to train several classifiers to automatically detect the three emotional classes from EEG and peripheral signals in a player-independent framework. By using either type of signals, the emotional classes were successfully recovered, with EEG having a better accuracy than peripheral signals on short periods of time. After the fusion of the two signal categories, the accuracy raised up to 63%. --- paper_title: Experience Assessment and Design in the Analysis of Gameplay paper_content: We report research on player modeling using psychophysiology and machine learning, conducted through interdisciplinary collaboration between researchers of computer science, psychology, and game design at Aalto University, Helsinki. First, we propose the Play Patterns And eXperience (PPAX) framework to connect three levels of game experience that previously had remained largely unconnected: game design patterns, the interplay of game context with player personality or tendencies, and state-of-the-art measures of experience (both subjective and non-subjective). Second, we describe our methodology for using machine learning to categorize game events to reveal corresponding patterns, culminating in an example experiment. We discuss the relation between automatically detected event clusters and game design patterns, and provide indications on how to incorporate personality profiles of players in the analysis. This novel interdisciplinary collaboration combines basic psychophysiology research with game design patterns and machine learning, and generates new knowledge about the interplay between game experience and design. ---
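Taken together, the Behavlet papers and the real-time rule-based classifier above describe the same pipeline: derive behaviour features from logged game actions, then train an interpretable classifier (CART/C5.0 in the cited study) on labelled player types. The sketch below is a generic reconstruction of that pipeline with invented event names, features, and labels, not the published feature set.

```python
# Illustrative Behavlet-style pipeline: count behaviour patterns in action
# logs, then fit a decision tree over the resulting features. The event
# vocabulary, feature names, and player-type labels are all assumptions.
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_features(action_log):
    """action_log is a list of event strings from one play session."""
    return [
        action_log.count("chase_ghost"),             # aggression-like pattern
        action_log.count("retreat"),                 # caution-like pattern
        sum(1 for a in action_log if a == "collect_pill"),
        len(action_log),                             # overall activity
    ]

sessions = [
    ["chase_ghost", "chase_ghost", "collect_pill", "chase_ghost"],
    ["retreat", "collect_pill", "retreat", "collect_pill", "retreat"],
    ["chase_ghost", "collect_pill", "chase_ghost", "retreat"],
    ["collect_pill", "collect_pill", "retreat", "retreat"],
]
labels = ["aggressive", "cautious", "aggressive", "cautious"]

X = [extract_features(s) for s in sessions]
clf = DecisionTreeClassifier(max_depth=3).fit(X, labels)

# The learned rules stay human-readable, which is the appeal of using
# decision trees for player typing in the cited work.
print(export_text(clf, feature_names=["chases", "retreats", "pills", "events"]))
```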
Title: Short Literature Review for a General Player Model Based on Behavlets
Section 1: INTRODUCTION
Description 1: This section should provide an overview of the importance of understanding and modelling players, define the term "player model," and discuss the foundational areas (psychology of behaviour, language of general game design, and model of actions) as well as the research questions addressed in the review.
Section 2: LITERATURE REVIEW
Description 2: This section should summarize previous work relevant to player modelling, including game decomposition, player psychology, combined approaches for play experience frameworks, and recent player models.
Section 3: Personality and Play
Description 3: This subsection should focus on the influence of player personality on gameplay and discuss various models and theories such as temperament theory, Myers-Briggs Type Indicator, and Demographic Game Design (DGD) player typologies.
Section 4: Game Structure
Description 4: This subsection should describe methods for decomposing games into constituent parts, including various perspectives (mechanistic, dynamic, aesthetic) and standards (e.g., MDA model, Rules schema, game design patterns).
Section 5: Play Experience Frameworks
Description 5: This subsection should cover frameworks that combine game decomposition and player psychology models, highlighting various approaches and theories proposed for understanding the player experience.
Section 6: Player Modelling
Description 6: This subsection should discuss various computational intelligence methods for player modelling, illustrating the challenges and methodologies used for feature extraction and defining different player attributes.
Section 7: DISCUSSION
Description 7: This section should synthesize the information reviewed and discuss the rationale for a generalized player model. It should also propose future work, including systematic reviews and meta-analyses to validate and expand upon the findings.
Section 8: CONCLUSION
Description 8: This section should summarize the key points from the review, providing a snapshot of the current state of the art in player modelling and outlining the proposed strategy for validating and expanding these models through systematic review and meta-analysis.
Environments and System Types of Virtual Reality Technology in STEM: a Survey
12
--- paper_title: Empirical evidence, evaluation criteria and challenges for the effectiveness of virtual and mixed reality tools for training operators of car service maintenance paper_content: State of the art review of car service training with virtual and augmented reality. Current criteria considered by researchers focus on training effectiveness. Limited assessment of trainees' experience pre- and post-training. This paper reports challenges for the next generation of studies on training technologies. The debate on effectiveness of virtual and mixed reality (VR/MR) tools for training professionals and operators is long-running with prominent contributions arguing that there are several shortfalls of experimental approaches and assessment criteria reported within the literature. In the automotive context, although car-makers were pioneers in the use of VR/MR tools for supporting designers, researchers started only recently to explore the effectiveness of VR/MR systems as a means of driving external operators of service centres to acquire the procedural skills necessary for car maintenance processes. In fact, from 463 journal articles on VR/MR tools for training published in the last thirty years, we identified only eight articles in which researchers experimentally tested the effectiveness of VR/MR tools for training service operators' skills. To survey the current findings and the deficiencies of these eight studies, we use two main drivers: (i) a well-known framework of organizational training programmes, and (ii) a list of eleven evaluation criteria widely applied by researchers of different fields for assessing the effectiveness of training carried out with VR/MR systems. The analysis that we present allows us to: (i) identify a trend among automotive researchers of focusing their analysis only on car service operators' performance in terms of time and errors, by leaving unexplored important pre- and post-training aspects that could affect the effectiveness of VR/MR tools to deliver training contents - e.g., people skills, previous experience, cybersickness, presence and engagement, usability and satisfaction - and (ii) outline the future challenges for designing and assessing VR/MR tools for training car service operators. --- paper_title: On encouraging multiple views for visualization paper_content: Visualization enables 'seeing the unseen', and provides new insight into the underlying data. However, users far too easily believe or rely on a single representation of the data; this view may be a favourite method, the simplest to perform, or a method that has always been used. A single representation may generate a misinterpretation of the information or provide a situation where the user is missing the 'richness' of the data content. By displaying the data in multiple ways a user may understand the information through different perspectives, overcome possible misinterpretations and perform interactive investigative visualization through correlating the information between views. Thus, the use of multiple views of the same information should be encouraged. We believe the visualization system itself should actively encourage the generation of multiple views by providing appropriate tools to aid in this operation. We present and categorise issues for encouraging multiple views and provide a framework for the generation, management and manipulation of such views.
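The multiple-views argument above is easy to demonstrate with two coordinated views of the same data, where a selection in one view is reflected in the other. The snippet below is a generic matplotlib illustration of that idea and is not drawn from the cited paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two views of the same data set: a scatter plot and a histogram. Picking a
# point in the scatter view highlights it and marks its x-value in the
# histogram view, keeping the views correlated.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))

fig, (ax_scatter, ax_hist) = plt.subplots(1, 2, figsize=(9, 4))
ax_scatter.scatter(data[:, 0], data[:, 1], picker=True)
ax_hist.hist(data[:, 0], bins=20)

def on_pick(event):
    idx = event.ind  # indices of the picked points
    ax_scatter.scatter(data[idx, 0], data[idx, 1], color="red")
    for x in data[idx, 0]:
        ax_hist.axvline(x, color="red", alpha=0.6)
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```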
--- paper_title: A Survival Guide for the Social Scientist paper_content: In this article, we provide the nontechnical reader with a fundamental understanding of the components of virtual reality (VR) and a thorough discussion of the role VR has played in social science. First, we provide a brief overview of the hardware and equipment used to create VR and review common elements found within the virtual environment that may be of interest to social scientists, such as virtual humans and interactive, multisensory feedback. Then, we discuss the role of VR in existing social scientific research. Specifically, we review the literature on the study of VR as an object, wherein we discuss the effects of the technology on human users; VR as an application, wherein we consider real-world applications in areas such as medicine and education; and VR as a method, wherein we provide a comprehensive outline of studies in which VR technologies are used to study phenomena that have traditionally been studied in physical settings, such as nonverbal behavior and social interaction. We then present a content analysis of the literature, tracking the trends for this research over the last two decades. Finally, we present some possibilities for future research for interested social scientists. --- paper_title: Design concerns in the engineering of virtual worlds for learning paper_content: The convergence of 3D simulation and social networking into current multi-user virtual environments has opened the door to new forms of interaction for learning in order to complement the face-to-face and Web 2.0-based systems. Yet, despite a growing user community, design knowledge for virtual worlds remains patchy, particularly when it comes to an understanding of the particular nature of design in virtual environments, the relationship between virtual and real-world contexts of design, as well as the engineering issues it raises and the management of any related risks. In this article, we explore such issues based on our experience of socio-technical engineering of a novel learning programme for higher education with a substantial virtual component. The project's significance stems from the large number of stake-holders involved, the relatively large scale of the virtual world development and the strategic significance of such a development within the learning programme. Of particular novelty is our exploration of the relationship between virtual and real-world contexts of design, indicating when they align and differ, showing when tools and techniques translate and when new tools and techniques may be required.
--- paper_title: Computers for Everyone paper_content: This book is the first of a continuing series of publications, composed of 56 articles written by first year students in the Department of Computing and Maths, in the first 6 weeks of their first semester, as part of their assessed work for the Introduction to Computer Science module. These articles all achieved a mark of 60% or greater. --- paper_title: Spatial Augmented Reality: Merging Real and Virtual Worlds paper_content: Like virtual reality, augmented reality is becoming an emerging platform in new application areas for museums, edutainment, home entertainment, research, industry, and the art communities using novel approaches which have taken augmented reality beyond traditional eye-worn or hand-held displays. In this book, the authors discuss spatial augmented reality approaches that exploit optical elements, video projectors, holograms, radio frequency tags, and tracking technology, as well as interactive rendering algorithms and calibration techniques in order to embed synthetic supplements into the real environment or into a live video of the real environment. Special Features: - Comprehensive overview - Detailed mathematical equations - Code fragments - Implementation instructions - Examples of Spatial AR displays. --- paper_title: Virtual environment display system paper_content: A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice and gesture has been developed for use as a multipurpose interface environment.
The system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, application scenarios, and research directions are described. --- paper_title: Optical-flow-driven Gadgets for Gaming User Interface paper_content: We describe how to build a VIDEOPLACE-like vision-driven user interface using “optical-flow” measurements. The optical-flow denotes the estimated movement of an image patch between two consecutive frames from a video sequence. Similar framework is used in a number of commercial vision-driven interactive computer games but the motion of the users is detected by examining the difference between two consecutive frames. The optical-flow presents a natural extension. We show here how the optical-flow can be used to provide much richer interaction. --- paper_title: An object-based 3D walk-through model for interior construction progress monitoring paper_content: The complicated nature of interior construction works makes the detailed progress monitoring challenging. Current interior construction progress monitoring methods involve submission of periodic reports and are constrained by their reliance on manually intensive processes and limited support for recording visual information. Recent advances in image-based visualization techniques enable reporting construction progress using interactive and visual approaches. However, analyzing significant amounts of as-built construction photographs requires sophisticated techniques. To overcome limitations of existing approaches, this research focuses on visualization and computer vision techniques to monitor detailed interior construction progress using an object-based approach. As-planned 3D models from Building Information Modeling (BIM) and as-built photographs are visualized and compared in a walk-through model. Within such an environment, the as-built interior construction objects are decomposed to automatically generate the status of construction progress. This object-based approach introduces an advanced model that enables the user to have a realistic understanding of the interior construction progress. --- paper_title: 3D structural component recognition and modeling method using color and 3D data for construction progress monitoring paper_content: Construction progress monitoring has been recognized as one of the key elements that lead to the success of a construction project.
By performing construction progress monitoring, corrective measures and other appropriate actions can be taken in a timely manner, thereby enabling the actual performance to be as close as possible to the desired outcome even if the construction performance significantly deviates from the original plan. However, current methods of data acquisition and its use in construction progress monitoring have tended to be manual and time consuming. This paper proposes an efficient, automated 3D structural component recognition and modeling method that employs color and 3D data acquired from a stereo vision system for use in construction progress monitoring. An outdoor experiment was performed on an actual construction site to demonstrate the applicability of the method to 3D modeling of such environments, and the results indicate that the proposed method can be beneficial for construction progress monitoring. --- paper_title: E-learning as a Challenge for Widening of Opportunities for Improvement of Students’ Generic Competences paper_content: The rapidly changing economic, financial and social conditions require new knowledge and competences in order to be able to understand them, adapt to the new requirements and remain competitive and successful in the globalised social environment. Widening the access to lifelong learning is one way in which this could be achieved. A special role in this process is given to universities as promoters of lifelong learning. E-learning is a means of promoting the changes in academic studies and providing an opportunity to integrate non-formal and informal learning elements into formal education. Individualisation, learning opportunities flexible in time, as well as the e-environment can facilitate the development of students' competences. This article presents a study conducted during the implementation of an inter-university master's programme, 'Educational Treatment of Diversity' (in Spain, Latvia, Germany and the Czech Republic) in 2008-10. The research question was: which challenges for widening of opportunities were secured in e-learning in order to promote students' generic competences as a learning outcome? --- paper_title: Open Wonderland: An Extensible Virtual World Architecture paper_content: Open Wonderland is a toolkit for building 3D virtual worlds. The system architecture, based entirely on open standards, is highly modular and designed with a focus on extensibility. In this article, the authors articulate design goals related to collaboration, extensibility, and federation and describe the Open Wonderland architecture, including the design of the server, the client, the communications layer, and the extensibility mechanisms. They also discuss the trade-offs made in implementing the architecture. --- paper_title: Augmenting spatial skills with semi-immersive interactive desktop displays: do immersion cues matter? paper_content: 3D stereoscopic displays for desktop use show promise for augmenting users' spatial problem solving tasks. These displays have the capacity for different types of immersion cues including binocular parallax, motion parallax, proprioception, and haptics. Such cues can be powerful tools in increasing the realism of the virtual environment by making interactions in the virtual world more similar to interactions in the real non-digital world [21, 32]. However, little work has been done to understand the effects of such immersive cues on users' understanding of the virtual environment. 
We present a study in which users solve spatial puzzles with a 3D stereoscopic display under different immersive conditions while we measure their brain workload using fNIRS and ask them subjective workload questions. We conclude that 1) stereoscopic display leads to lower task completion time, lower physical effort, and lower frustration; 2) vibrotactile feedback results in increased perceived immersion and in higher cognitive workload; 3) increased immersion (which combines stereo vision with vibrotactile feedback) does not result in reduced cognitive workload. --- paper_title: VRML to WebGL Web-based converter application paper_content: In this paper we propose a method to convert a VRML file into a WebGL rendered scene through an online application that is capable of rendering this uploaded VRML file within the Web browser itself without any plug-ins being installed. The conversion process between the input VRML file and the WebGL using our Web-based converter application is done by analyzing the uploaded VRML file content represented in nodes. These nodes are then converted into scene graph nodes used for the scene creation. A scene tree is then initialized and populated using these nodes containing their info and sent to the proposed dynamic WebGL template. Finally a new HTML5 file is constructed and displayed on the user's Web browser. An extensive set of experiments is conducted to show the performance of the online converter application. --- paper_title: Using different methodologies and technologies to training spatial skill in Engineering Graphic subjects paper_content: Most papers about spatial skills and their components refer to the fact that engineering, architectural and scientific jobs require a good level of spatial ability. Spatial ability has an impact on every scientific and technical field, so it is still undergoing strong development when it comes to engineering, technology, art and many other aspects of life. In the academic environment, Graphic Design teachers usually see students who have difficulties solving tasks requiring spatial reasoning and viewing abilities. The main aim of this work is the development of didactic material based on several virtual and augmented reality formats, knowing how students behave while using them, and checking if they are useful materials to improve their spatial abilities. This work presents three different technologies, virtual reality, augmented reality and portable document format, together with suitable methodologies, to find out whether they can improve spatial ability and, from the students' perspective, to gather their opinion of the tools and their motivation to learn more about 3D representation. We present a pilot study that compared the improvement in spatial ability acquired by freshman engineering students, together with a survey of their satisfaction and motivation regarding the methodology and technology used. --- paper_title: Wayfinding and navigation in haptic virtual environments paper_content: Cognitive maps are mental models of the relative locations and attribute phenomena of spatial environments. The ability to form cognitive maps is one of the innate gifts of nature. An absence of this ability can have a crippling effect, for example, on the visually impaired. The sense of touch becomes the primary source of forming cognitive maps for the visually impaired.
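The VRML-to-WebGL converter described above first analyzes the uploaded VRML file into nodes, builds a scene tree, and only then fills a WebGL/HTML5 template. The toy Python sketch below illustrates nothing more than that first node-scanning step, under the assumption of a well-formed file with balanced braces; a real converter would use a full VRML grammar and map every node onto a WebGL scene-graph object.

import re

# Matches a VRML node opener such as "Transform {" or "Shape{".
NODE_RE = re.compile(r'([A-Z][A-Za-z0-9]*)\s*\{')

def scan_vrml(text):
    root = {"type": "Scene", "children": []}
    containers = [root]       # current nesting of scene-tree nodes
    brace_is_node = []        # True if the matching '{' opened a node
    i = 0
    while i < len(text):
        m = NODE_RE.match(text, i)
        if m:
            node = {"type": m.group(1), "children": []}
            containers[-1]["children"].append(node)
            containers.append(node)
            brace_is_node.append(True)
            i = m.end()
        elif text[i] == "{":
            brace_is_node.append(False)   # a non-node brace, e.g. inside field values
            i += 1
        elif text[i] == "}":
            if brace_is_node and brace_is_node.pop():
                containers.pop()
            i += 1
        else:
            i += 1
    return root

example = "Transform { translation 0 1 0 children [ Shape { geometry Box { } } ] }"
print(scan_vrml(example))
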
Once formed, cognitive maps provide precise mapping of the physical world so that a visually impaired individual can successfully navigate with minimal assistance. However, traditional mobility training is time consuming, and it is very difficult for the blind to express or revisit the cognitive maps formed after a training session is over. The proposed haptic environment will allow the visually impaired individual to express cognitive maps as 3D surface maps, with two PHANToM force-feedback devices guiding them. The 3D representation can be finetuned by the care-giver, and then felt again by the visually impaired in order to form precise cognitive maps. In addition to voice commentary, a library of pre-existing shapes familiar to the blind will provide orientation and proprioceptive haptic-cues during navigation. A graphical display of cognitive maps will provide feedback to the care-giver or trainer. As the haptic environment can be easily stored and retrieved, the MoVE system will also encourage navigation by the blind at their own convenience, and with family members. --- paper_title: Virtual/real transfer in a large-scale environment: impact of active navigation as a function of the viewpoint displacement effect and recall tasks paper_content: The purpose of this study was to examine the effect of navigation mode (passive versus active) on the virtual/real transfer of spatial learning, according to viewpoint displacement (ground: 1m 75 versus aerial: 4m) and as a function of the recall tasks used. We hypothesize that active navigation during learning can enhance performances when route strategy is favored by egocentric match between learning (ground-level viewpoint) and recall (egocentric frame-based tasks). Sixty-four subjects (32 men and 32 women) participated in the experiment. Spatial learning consisted of route learning in a virtual district (four conditions: passive/ground, passive/aerial, active/ground, or active/aerial), evaluated by three tasks: wayfinding, sketch-mapping, and picture-sorting. In the wayfinding task, subjects who were assigned the ground-level viewpoint in the virtual environment (VE) performed better than those with the aerial-level viewpoint, especially in combination with active navigation. In the sketch-mapping task, aerial-level learning in the VE resulted in better performance than the ground-level condition, while active navigation was only beneficial in the ground-level condition. The best performance in the picture-sorting task was obtained with the ground-level viewpoint, especially with active navigation. This study confirmed the expected results that the benefit of active navigation was linked with egocentric frame-based situations. --- paper_title: Evaluation of the Cognitive Effects of Travel Technique in Complex Real and Virtual Environments paper_content: We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. 
Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques. --- paper_title: The impact of motion in virtual environments on memorization performance paper_content: Virtual environments are more and more used for educational and training purposes. In order to design virtual environments for these applications in particular, it is very important to get a deep understanding of the relevant design features supporting the user's process of learning and comprehension. Relevance and implementation of these features as well as the benefits of virtual learning environments over traditional educational approaches in general are rarely explored. Focusing on modes of interaction in this work, we examined the effect of different motion types on the knowledge acquisition of users in various virtual environments. For our study we chose a simple memorization task as approximation of low cognitive knowledge acquirement. We hypothesized motion types and immersion levels influence memorization performance in virtual environments. The memorization task was conducted in two virtual environments with different levels of immersion: A high-immersive Cave Automatic Virtual Environment (CAVE) and a low-immersive desktop virtual environment. Two motion types in virtual environments were explored: Physical and virtual walking. In the CAVE physical walking was implemented by using motion capturing and virtual walking was realized using a joystick-like input device. The results indicate neither motion types nor immersion levels in virtual environments affect memorization performance significantly. --- paper_title: An evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments paper_content: Grabbing and manipulating virtual objects is an important user interaction for immersive virtual environments. We present implementations and discussion of six techniques which allow manipulation of remote objects. A user study of these techniques was performed which revealed their characteristics and deficiencies, and led to the development of a new class of techniques. These hybrid techniques provide distinct advantages in terms of ease of use and efficiency because they consider the tasks of grabbing and manipulation separately. CR Categories and Subject Descriptors: 1.3.7 [Computer Graphics] :Three-Dimensional Graphics and Realism - Virtual Reality; 1.3.6 [Computer Graphics]:Methodology and Techniques - Interaction Techniques. ---
Title: Environments and System Types of Virtual Reality Technology in STEM: a Survey
Section 1: INTRODUCTION
Description 1: This section introduces the need for using VR technology in STEM fields and outlines the overall structure of the paper.
Section 2: CONCEPTS OF VIRTUAL REALITY
Description 2: This section provides various definitions and fundamental concepts of Virtual Reality (VR).
Section 3: EMERGENCE OF VR TECHNOLOGY
Description 3: This section discusses the historical development and milestones in VR technology, including key inventions and advancements.
Section 4: APPLICATIONS OF VR
Description 4: This section explores the different applications of VR technology across various fields such as medicine, education, entertainment, engineering, and more.
Section 5: THE VR REQUIREMENTS
Description 5: This section describes the necessary requirements and considerations for developing effective VR environments.
Section 6: ESSENTIAL ELEMENTS OF VR
Description 6: This section highlights the four basic elements essential to VR systems as determined by research.
Section 7: TYPES OF VR SYSTEMS AND HARDWARE
Description 7: This section discusses the different types of VR systems, categorized by their level of immersion, and the hardware used for each type.
Section 8: VR SOFTWARE AND TOOLS
Description 8: This section reviews various software and tools available for creating and utilizing VR environments.
Section 9: NAVIGATION IN VR ENVIRONMENT
Description 9: This section explains the importance of navigation in VR and its components for usability and interaction.
Section 10: BENEFITS AND LIMITATIONS OF VR
Description 10: This section mentions the advantages and limitations of VR technology in different applications.
Section 11: ROADMAP FOR SELECTING AN APPROPRIATE VR SYSTEM ACCORDING TO THE FIELD OF APPLICATIONS
Description 11: This section provides a roadmap for selecting the proper VR system based on the application field and criteria.
Section 12: CONCLUSION AND FUTURE WORKS
Description 12: This section wraps up the survey, summarizing the importance of VR technology and suggesting directions for future research.
Automatic Keyphrase Extraction: A Survey of the State of the Art
10
--- paper_title: Opinion Expression Mining by Exploiting Keyphrase Extraction paper_content: In this paper, we shall introduce a system for extracting the keyphrases for the reason of authors’ opinion from product reviews. The datasets for two fairly different product review domains related to movies and mobile phones were constructed semiautomatically based on the pros and cons entered by the authors. The system illustrates that the classic supervised keyphrase extraction approach – mostly used for scientific genre previously – could be adapted for opinion-related keyphrases. Besides adapting the original framework to this special task through defining novel, taskspecific features, an efficient way of representing keyphrase candidates will be demonstrated as well. The paper also provides a comparison of the effectiveness of the standard keyphrase extraction features and that of the system designed for the special task of opinion expression mining. --- paper_title: Clustering to Find Exemplar Terms for Keyphrase Extraction paper_content: Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms sate-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure. --- paper_title: Learning Algorithms for Keyphrase Extraction paper_content: Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by GenEx suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications. --- paper_title: Automatic Keyphrase Extraction via Topic Decomposition paper_content: Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. 
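Several of the entries above (the C4.5 and GenEx work in particular) cast keyphrase extraction as supervised classification of candidate phrases. The sketch below is a minimal, hypothetical version of that framing with KEA-style features (TF*IDF and relative first occurrence) and a Naive Bayes classifier; the stopword list, the toy documents, and the gold keyphrases are invented for the example, and the feature sets of the actual systems are richer.

import math
import re
from collections import Counter
from sklearn.naive_bayes import GaussianNB

STOP = {"the", "of", "and", "a", "to", "in", "is", "for", "we", "this"}

def candidates(text, max_len=3):
    # Stopword-free word n-grams (n <= 3), a common candidate definition.
    words = re.findall(r"[a-z0-9][a-z0-9-]*", text.lower())
    cands = set()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            if gram[0] not in STOP and gram[-1] not in STOP:
                cands.add(" ".join(gram))
    return words, cands

def features(phrase, words, doc_freq, n_docs):
    text = " ".join(words)
    tf = text.count(phrase) / max(len(words), 1)
    idf = math.log((n_docs + 1) / (doc_freq.get(phrase, 0) + 1))
    first = text.find(phrase) / max(len(text), 1)   # relative position of first occurrence
    return [tf * idf, first]

def make_xy(docs, gold, doc_freq, n_docs):
    # Candidates matching an author-assigned keyphrase are positive examples.
    X, y = [], []
    for doc, keys in zip(docs, gold):
        words, cands = candidates(doc)
        for c in cands:
            X.append(features(c, words, doc_freq, n_docs))
            y.append(1 if c in keys else 0)
    return X, y

docs = ["graph based ranking for keyphrase extraction in documents",
        "supervised learning of keyphrases from scientific text"]
gold = [{"keyphrase extraction"}, {"supervised learning"}]
doc_freq = Counter(c for d in docs for c in candidates(d)[1])
X, y = make_xy(docs, gold, doc_freq, len(docs))
clf = GaussianNB().fit(X, y)

# Rank the candidates of an unseen document by the predicted keyphrase probability.
words, cands = candidates("graph based keyphrase extraction with supervised learning")
ranked = sorted(cands, key=lambda c: clf.predict_proba(
    [features(c, words, doc_freq, len(docs))])[0][1], reverse=True)
print(ranked[:3])
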
Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics. --- paper_title: Keyphrase Extraction from Online News Using Binary Integer Programming paper_content: In recent years, keyphrase extraction has received great attention, and been successfully employed by various applications. Keyphrases extracted from news articles can be used to concisely represent main contents of news events. Keyphrases can help users to speed up browsing and find the desired contents more quickly. In this paper, we first present several criteria of high-quality news keyphrases. After that, in order to integrate those criteria into the keyphrase extraction task, we propose a novel formulation which converts the task to a binary integer programming problem. The formulation cannot only encode the prior knowledge as constraints, but also learn constraints from data. We evaluate the proposed approach on a manually labeled corpus. Experimental results demonstrate that our approach achieves better performances compared with the state-of-the-art methods. --- paper_title: Topical Keyphrase Extraction from Twitter paper_content: Summarizing and analyzing Twitter content is an important and challenging task. In this paper, we propose to extract topical keyphrases as one way to summarize Twitter. We propose a context-sensitive topical PageRank method for keyword ranking and a probabilistic scoring function that considers both relevance and interestingness of keyphrases for keyphrase ranking. We evaluate our proposed methods on a large Twitter data set. Experiments show that these methods are very effective for topical keyphrase extraction. --- paper_title: A Language Model Approach To Keyphrase Extraction paper_content: We present a new approach to extracting keyphrases based on statistical language models. Our approach is to use pointwise KL-divergence between multiple language models for scoring both phraseness and informativeness, which can be unified into a single score to rank extracted phrases. --- paper_title: Finding advertising keywords on web pages paper_content: A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with "relevant" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems. 
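The language-model entry above scores candidates with pointwise KL-divergence for two properties, phraseness (does this word sequence hang together more than its unigrams predict?) and informativeness (is it more prominent in the foreground text than in a background corpus?), and unifies them into one ranking score. The sketch below is a simplified rendering of that idea for bigrams; the add-alpha smoothing, the vocabulary handling, and the toy foreground/background texts are assumptions of this example rather than the paper's exact formulation.

import math
from collections import Counter

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def pointwise_kl(p_w, q_w):
    # Contribution of a single phrase w to KL(p || q): p(w) * log(p(w) / q(w)).
    return p_w * math.log(p_w / q_w)

def score_bigrams(fg_tokens, bg_tokens, alpha=0.5):
    fg_uni, bg_uni = Counter(fg_tokens), Counter(bg_tokens)
    fg_bi, bg_bi = Counter(ngrams(fg_tokens, 2)), Counter(ngrams(bg_tokens, 2))
    v_uni = len(set(fg_uni) | set(bg_uni))
    v_bi = len(set(fg_bi) | set(bg_bi))

    def prob(count, total, vocab):
        return (count + alpha) / (total + alpha * vocab)   # add-alpha smoothing

    scores = {}
    for bi, c in fg_bi.items():
        w1, w2 = bi.split()
        p_fg_bi = prob(c, sum(fg_bi.values()), v_bi)
        p_bg_bi = prob(bg_bi[bi], sum(bg_bi.values()), v_bi)
        # Independence (unigram) estimate of the bigram in the foreground text.
        p_indep = prob(fg_uni[w1], sum(fg_uni.values()), v_uni) * \
                  prob(fg_uni[w2], sum(fg_uni.values()), v_uni)
        phraseness = pointwise_kl(p_fg_bi, p_indep)
        informativeness = pointwise_kl(p_fg_bi, p_bg_bi)
        scores[bi] = phraseness + informativeness
    return sorted(scores.items(), key=lambda kv: -kv[1])

fg = "keyphrase extraction ranks candidate keyphrases by phraseness and informativeness".split()
bg = "the quick brown fox jumps over the lazy dog again and again".split()
print(score_bigrams(fg, bg)[:3])
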
--- paper_title: Extracting Keywords from Multi-party Live Chats paper_content: Live chats have become a popular form of communication, connecting people all over the globe. We believe that one of the simplest approaches for providing topic information to users joining a chat is keywords. In this paper, we present a method to automatically extract contextually relevant keywords for multi-party live chats. In our work, we identify keywords that are associated with specific dialogue acts as well as the occurrences of keywords across the entire conversation. In this way, we are able to identify distinguishing features of the chat based on structural information derived from live chats and predicted dialogue acts. In evaluation, we find that using structural information and predicted dialogue acts performs well, and that conventional methods do not work well over live chats. --- paper_title: Coherent Keyphrase Extraction via Web Mining paper_content: Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents). --- paper_title: TextRank: Bringing Order Into Texts paper_content: In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. --- paper_title: Human-competitive tagging using automatic keyphrase extraction paper_content: This paper connects two research areas: automatic tagging on the web and statistical keyphrase extraction. First, we analyze the quality of tags in a collaboratively created folksonomy using traditional evaluation techniques. Next, we demonstrate how documents can be tagged automatically with a state-of-the-art keyphrase extraction algorithm, and further improve performance in this new domain using a new algorithm, "Maui", that utilizes semantic information extracted from Wikipedia. Maui outperforms existing approaches and extracts tags that are competitive with those assigned by the best performing human taggers. --- paper_title: Conundrums in Unsupervised Keyphrase Extraction: Making Sense of the State-of-the-Art paper_content: State-of-the-art approaches for unsupervised keyphrase extraction are typically evaluated on a single dataset with a single parameter setting. 
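TextRank, cited above, builds a graph of words linked by co-occurrence within a small window and ranks the vertices with PageRank; top-ranked words that are adjacent in the text are then merged into multi-word keyphrases. The sketch below shows only the core ranking step, with a stopword list standing in for the original part-of-speech filter and the window size chosen arbitrarily; it is a toy, not the reference implementation.

import re
import networkx as nx

def textrank_keywords(text, window=2, top_k=5):
    words = re.findall(r"[A-Za-z][A-Za-z-]+", text.lower())
    stop = {"the", "and", "of", "in", "to", "is", "are", "for", "this", "that", "with"}
    tokens = [w for w in words if w not in stop]
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + 1 + window]:   # undirected co-occurrence edges
            if u != w:
                g.add_edge(w, u)
    ranks = nx.pagerank(g, alpha=0.85)
    return sorted(ranks, key=ranks.get, reverse=True)[:top_k]

doc = ("Graph-based ranking models such as TextRank build a word graph from "
       "co-occurrence relations and rank words with PageRank before merging "
       "adjacent top-ranked words into multi-word keyphrases.")
print(textrank_keywords(doc))
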
Consequently, it is unclear how effective these approaches are on a new dataset from a different domain, and how sensitive they are to changes in parameter settings. To gain a better understanding of state-of-the-art unsupervised keyphrase extraction algorithms, we conduct a systematic evaluation and analysis of these algorithms on a variety of standard evaluation datasets. --- paper_title: Single document keyphrase extraction using neighborhood knowledge paper_content: Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach. --- paper_title: Keyphrase Extraction Using Semantic Networks Structure Analysis paper_content: Keyphrases play a key role in text indexing, summarization and categorization. However, most of the existing keyphrase extraction approaches require human-labeled training sets. In this paper, we propose an automatic keyphrase extraction algorithm, which can be used in both supervised and unsupervised tasks. This algorithm treats each document as a semantic network. Structural dynamics of the network are used to extract keyphrases (key nodes) unsupervised. Experiments demonstrate the proposed algorithm averagely improves 50% in effectiveness and 30% in efficiency in unsupervised tasks and performs comparatively with supervised extractors. Moreover, by applying this algorithm to supervised tasks, we develop a classifier with an overall accuracy up to 80%. --- paper_title: Clustering to Find Exemplar Terms for Keyphrase Extraction paper_content: Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms sate-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure. --- paper_title: Extracting key terms from noisy and multitheme documents paper_content: We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. 
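The exemplar-term entries above first group semantically related candidate terms and then let one representative per group select the keyphrases, so that the extracted set covers the document's topics. The sketch below imitates that pipeline with deliberately crude stand-ins: co-occurrence count vectors instead of the papers' relatedness measures, k-means instead of their clustering choices, and the word nearest each centroid as the exemplar. The cluster count, window size, and sample text are assumptions of the example.

import re
import numpy as np
from sklearn.cluster import KMeans

def exemplar_terms(text, n_clusters=3, window=3):
    words = re.findall(r"[a-z][a-z-]+", text.lower())
    stop = {"the", "and", "of", "in", "to", "is", "are", "for", "with", "a", "so", "that", "an"}
    tokens = [w for w in words if w not in stop]
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    vec = np.zeros((len(vocab), len(vocab)))     # word-by-word co-occurrence counts
    for i, w in enumerate(tokens):
        for u in tokens[max(0, i - window):i + window + 1]:
            if u != w:
                vec[index[w], index[u]] += 1.0
    k = min(n_clusters, len(vocab))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vec)
    exemplars = []
    for c in range(k):
        members = [i for i, lab in enumerate(km.labels_) if lab == c]
        best = min(members, key=lambda i: np.linalg.norm(vec[i] - km.cluster_centers_[c]))
        exemplars.append(vocab[best])
    return exemplars

doc = ("Keyphrases summarize documents; clustering groups semantically related terms "
       "so that exemplar terms cover the main topics of the document, and candidate "
       "phrases containing an exemplar term are selected as keyphrases.")
print(exemplar_terms(doc))
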
We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia. Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages. Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods. --- paper_title: Domain-specific keyphrase extraction paper_content: Document keyphrases provide semantic metadata characterizing documents and producing an overview of the content of a document. They can be used in many text-mining and knowledge management related applications. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human identified domain keyphrases to assign weights to the candidate keyphrases. The logic of our algorithm is: the more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. To obtain prior positive inputs, KIP first populates its glossary database using manually identified keyphrases and keywords. It then checks the composition of all noun phrases of a document, looks up the database and calculates scores for all these noun phrases. The ones having higher scores will be extracted as keyphrases. --- paper_title: Improved Automatic Keyword Extraction Given More Linguistic Knowledge paper_content: In this paper, experiments on automatic extraction of keywords from abstracts using a supervised machine learning algorithm are discussed. The main point of this paper is that by adding linguistic knowledge to the representation (such as syntactic features), rather than relying only on statistics (such as term frequency and n-grams), a better result is obtained as measured by keywords previously assigned by professional indexers. In more detail, extracting NP-chunks gives a better precision than n-grams, and by adding the PoS tag(s) assigned to the term as a feature, a dramatic improvement of the results is obtained, independent of the term selection approach applied. --- paper_title: Unsupervised Approaches for Automatic Keyword Extraction Using Meeting Transcripts paper_content: This paper explores several unsupervised approaches to automatic keyword extraction using meeting transcripts. In the TFIDF (term frequency, inverse document frequency) weighting framework, we incorporated part-of-speech (POS) information, word clustering, and sentence salience score. We also evaluated a graph-based approach that measures the importance of a word based on its connection with other sentences or words. The system performance is evaluated in different ways, including comparison to human annotated keywords using F-measure and a weighted score relative to the oracle system performance, as well as a novel alternative human evaluation. 
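A recurring idea in the entries above (most explicitly in the linguistically informed extractor) is to restrict candidates to noun-phrase chunks identified from part-of-speech tags. The sketch below shows one common way to do this with NLTK; the chunk grammar {<JJ>*<NN.*>+} is a frequently used pattern, not necessarily the exact patterns of any cited system, and the NLTK resource names may differ slightly between library versions.

import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

GRAMMAR = "NP: {<JJ>*<NN.*>+}"     # zero or more adjectives followed by nouns
chunker = nltk.RegexpParser(GRAMMAR)

def np_candidates(text):
    # Noun-phrase chunks as keyphrase candidates.
    cands = []
    for sent in nltk.sent_tokenize(text):
        tagged = nltk.pos_tag(nltk.word_tokenize(sent))
        tree = chunker.parse(tagged)
        for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
            cands.append(" ".join(tok for tok, tag in subtree.leaves()))
    return cands

print(np_candidates("Supervised machine learning with syntactic features improves "
                    "automatic keyword extraction from scientific abstracts."))
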
Our results have shown that the simple unsupervised TFIDF approach performs reasonably well, and the additional information from POS and sentence score helps keyword extraction. However, the graph method is less effective for this domain. Experiments were also performed using speech recognition output and we observed degradation and different patterns compared to human transcripts. --- paper_title: TextRank: Bringing Order Into Texts paper_content: In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. --- paper_title: Using Noun Phrase Heads to Extract Document Keyphrases paper_content: Automatically extracting keyphrases from documents is a task with many applications in information retrieval and natural language processing. Document retrieval can be biased towards documents containing relevant keyphrases; documents can be classified or categorized based on their keyphrases; automatic text summarization may extract sentences with high keyphrase scores. This paper describes a simple system for choosing noun phrases from a document as keyphrases. A noun phrase is chosen based on its length, its frequency and the frequency of its head noun. Noun phrases are extracted from a text using a base noun phrase skimmer and an off-the-shelf online dictionary. Experiments involving human judges reveal several interesting results: the simple noun phrase-based system performs roughly as well as a state-of-the-art, corpus-trained keyphrase extractor; ratings for individual keyphrases do not necessarily correlate with ratings for sets of keyphrases for a document; agreement among unbiased judges on the keyphrase rating task is poor. --- paper_title: An Ontology-Based Approach for Key Phrase Extraction paper_content: Automatic key phrase extraction is fundamental to the success of many recent digital library applications and semantic information retrieval techniques and a difficult and essential problem in Vietnamese natural language processing (NLP). In this work, we propose a novel method for key phrase extracting of Vietnamese text that exploits the Vietnamese Wikipedia as an ontology and exploits specific characteristics of the Vietnamese language for the key phrase selection stage. We also explore NLP techniques that we propose for the analysis of Vietnamese texts, focusing on the advanced candidate phrases recognition phase as well as part-of-speech (POS) tagging. Finally, we review the results of several experiments that have examined the impacts of strategies chosen for Vietnamese key phrase extracting. --- paper_title: Bayesian Text Segmentation for Index Term Identification and Keyphrase Extraction paper_content: Automatically extracting terminology and index terms from scientific literature is useful for a variety of digital library, indexing and search applications. This task is non-trivial, complicated by domain-specific terminology and a steady introduction of new terminology. Correctly identifying nested terminology further adds to the challenge. We present a Dirichlet Process (DP) model of word segmentation where multiword segments are either retrieved from a cache or newly generated. We show how this DP-Segmentation model can be used to successfully extract nested terminology, outperforming previous methods for solving this problem.
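The meeting-transcript study above finds that a plain TF*IDF weighting is already a reasonable unsupervised baseline. The snippet below shows that baseline in its simplest form, scoring the terms of one target document against a small background collection; the background texts and the unigram/bigram range are placeholders chosen for the example.

from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_keywords(target_doc, background_docs, top_k=5):
    corpus = [target_doc] + list(background_docs)
    vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    tfidf = vec.fit_transform(corpus)
    terms = vec.get_feature_names_out()
    row = tfidf[0].toarray().ravel()             # TF*IDF scores of the target document
    order = row.argsort()[::-1]
    return [(terms[i], float(row[i])) for i in order[:top_k] if row[i] > 0]

background = ["the meeting discussed budget planning and travel approval",
              "action items were assigned to the project team members"]
print(tfidf_keywords("speech recognition errors degrade keyword extraction "
                     "from meeting transcripts", background))
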
--- paper_title: Human-competitive tagging using automatic keyphrase extraction paper_content: This paper connects two research areas: automatic tagging on the web and statistical keyphrase extraction. First, we analyze the quality of tags in a collaboratively created folksonomy using traditional evaluation techniques. Next, we demonstrate how documents can be tagged automatically with a state-of-the-art keyphrase extraction algorithm, and further improve performance in this new domain using a new algorithm, "Maui", that utilizes semantic information extracted from Wikipedia. Maui outperforms existing approaches and extracts tags that are competitive with those assigned by the best performing human taggers. --- paper_title: KP-Miner: A keyphrase extraction system for English and Arabic documents paper_content: Automatic keyphrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents the KP-Miner system, and demonstrates through experimentation and comparison with widely used systems that it is effective and efficient in extracting keyphrases from both English and Arabic documents of varied length. Unlike other existing keyphrase extraction systems, the KP-Miner system does not need to be trained on a particular document set in order to achieve its task. It also has the advantage of being configurable as the rules and heuristics adopted by the system are related to the general nature of documents and keyphrases. This implies that the users of this system can use their understanding of the document(s) being input into the system to fine-tune it to their particular needs. --- paper_title: Automatic keyphrase extraction from scientific documents using N-gram filtration technique paper_content: In this paper we present an automatic Keyphrase extraction technique for English documents of scientific domain. The devised algorithm uses n-gram filtration technique, which filters sophisticated n-grams (1 ≤ n ≤ 4) along with their weight from the words of input document. To develop n-gram filtration technique, we have used (1) LZ78 data compression based technique, (2) a simple refinement step, (3) A simple Pattern Filtration algorithm and, (4) a term weighting scheme. In term weighting scheme, we have introduced the importance of position of sentence (where given phrase occurs first) in document and position of phrase in sentence for documents of scientific domain (which is literally more organized than other domains). The entire system is based upon statistical observations, simple grammatical facts, heuristics, and lexical information of English language. We remark that the devised system does not require a learning phase. Our experimental results with publically available text dataset, shows that the devised system is comparable with other known algorithms. --- paper_title: Automatic Keyphrase Extraction with a Refined Candidate Set paper_content: In this paper, we develop and evaluate an automatic keyphrase extraction technique for scientific documents. A new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of candidate set by about 75% without increasing the computational complexity. Then in the step of feature calculation, when a phrase and its sub-phrases coexist as candidates, an inverse document frequency related feature is introduced for selecting the proper granularity.
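Several systems above (KP-Miner most explicitly) generate candidates as word sequences that are not interrupted by punctuation or stopwords, before any weighting is applied. The sketch below reproduces only that candidate-generation rule with an illustrative stopword list and length limit; KP-Miner's real heuristics (least-allowable-seen frequency, cutoff constants, boosting factors) are not modeled here.

import re

STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "on", "for",
             "with", "is", "are", "this", "that", "we", "it", "by", "from", "not"}

def runs_to_ngrams(run, counts, max_words):
    for n in range(1, max_words + 1):
        for i in range(len(run) - n + 1):
            p = " ".join(run[i:i + n])
            counts[p] = counts.get(p, 0) + 1

def phrase_candidates(text, max_words=4, min_count=1):
    counts = {}
    for fragment in re.split(r"[^\w\s-]+", text.lower()):   # punctuation breaks a phrase
        run = []
        for word in fragment.split():
            if word in STOPWORDS:                            # stopwords break a phrase too
                runs_to_ngrams(run, counts, max_words)
                run = []
            else:
                run.append(word)
        runs_to_ngrams(run, counts, max_words)
    return {p: c for p, c in counts.items() if c >= min_count}

print(phrase_candidates("KP-Miner extracts keyphrases from English and Arabic documents; "
                        "candidate phrases are not split by punctuation or stopwords."))
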
Experimental results show the efficiency and effectiveness of the refined candidate set and demonstrate that the overall performance of our system compares favorably with other known keyphrase extraction systems. --- paper_title: Finding advertising keywords on web pages paper_content: A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with "relevant" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems. --- paper_title: CoRankBayes: bayesian learning to rank under the co-training framework and its application in keyphrase extraction paper_content: Recently, learning to rank algorithms have become a popular and effective tool for ordering objects (e.g. terms) according to their degrees of importance. The contribution of this paper is that we propose a simple and fast learning to rank model RankBayes and embed it in the co-training framework. The detailed proof is given that Naive Bayes algorithm can be used to implement a learning to rank model. To solve the problem of two-model inconsistency, an ingenious approach is put forward to rank all the phrases by making use of the labeled results of two RankBayes models. Experimental results show that the proposed approach is promising in solving ranking problems. --- paper_title: KPSpotter: a flexible information gain-based keyphrase extraction system paper_content: To tackle the issue of information overload, we present an Information Gain-based KeyPhrase Extraction System, called KPSpotter. KPSpotter is a flexible web-enabled keyphrase extraction system, capable of processing various formats of input data, including web data, and generating the extraction model as well as the list of keyphrases in XML. In KPSpotter, the following two features were selected for training and extracting keyphrases: 1) TF*IDF and 2) Distance from First Occurrence. Input training and testing collections were processed in three stages: 1) Data Cleaning, 2) Data Tokenizing, and 3) Data Discretizing. To measure the system performance, the keyphrases extracted by KPSpotter are compared with the ones that the authors assigned. Our experiments show that the performance of KPSpotter was evaluated to be equivalent to KEA, a well-known keyphrase extraction system. KPSpotter, however, is differentiated from other extraction systems in the followings: First, KPSpotter employs a new keyphrase extraction technique that combines the Information Gain data mining measure and several Natural Language Processing techniques such as stemming and case-folding. Second, KPSpotter is able to process various types of input data such as XML, HTML, and unstructured text data and generate XML output. Third, the user can provide input data and execute KPSpotter through the Internet. 
Fourth, for efficiency and performance reason, KPSpotter stores candidate keyphrases and its related information such as frequency and stemmed form into an embedded database management system. --- paper_title: Improved Automatic Keyword Extraction Given More Linguistic Knowledge paper_content: In this paper, experiments on automatic extraction of keywords from abstracts using a supervised machine learning algorithm are discussed. The main point of this paper is that by adding linguistic knowledge to the representation (such as syntactic features), rather than relying only on statistics (such as term frequency and n-grams), a better result is obtained as measured by keywords previously assigned by professional indexers. In more detail, extracting NP-chunks gives a better precision than n-grams, and by adding the PoS tag(s) assigned to the term as a feature, a dramatic improvement of the results is obtained, independent of the term selection approach applied. --- paper_title: Learning Algorithms for Keyphrase Extraction paper_content: Many academic journals ask their authors to provide a list of about five to fifteen keywords, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a wide variety of tasks for which keyphrases are useful, as we discuss in this paper. We approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. We evaluate the performance of nine different configurations of C4.5. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for automatically extracting keyphrases from text. The experimental results support the claim that a custom-designed algorithm (GenEx), incorporating specialized procedural domain knowledge, can generate better keyphrases than a general-purpose algorithm (C4.5). Subjective human evaluation of the keyphrases generated by GenEx suggests that about 80% of the keyphrases are acceptable to human readers. This level of performance should be satisfactory for a wide variety of applications. --- paper_title: HUMB: Automatic Key Term Extraction from Scientific Articles in GROBID paper_content: The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID's facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository. --- paper_title: A ranking approach to keyphrase extraction paper_content: This paper addresses the issue of automatically extracting keyphrases from a document. 
Previously, this problem was formalized as classification and learning methods for classification were utilized. This paper points out that it is more essential to cast the problem as ranking and employ a learning to rank method to perform the task. Specifically, it employs Ranking SVM, a state-of-art method of learning to rank, in keyphrase extraction. Experimental results on three datasets show that Ranking SVM significantly outperforms the baseline methods of SVM and Naive Bayes, indicating that it is better to exploit learning to rank techniques in keyphrase extraction. --- paper_title: Enhancing Linguistically Oriented Automatic Keyword Extraction paper_content: This paper presents experiments on how the performance of automatic keyword extraction can be improved, as measured by keywords previously assigned by professional indexers. The keyword extraction algorithm consists of three prediction models that are combined to decide what words or sequences of words in the documents are suitable as keywords. The models, in turn, are built using different definitions of what constitutes a term in a written document. --- paper_title: Re-examining Automatic Keyphrase Extraction Approaches in Scientific Articles paper_content: We tackle two major issues in automatic keyphrase extraction using scientific articles: candidate selection and feature engineering. To develop an efficient candidate selection method, we analyze the nature and variation of keyphrases and then select candidates using regular expressions. Secondly, we re-examine the existing features broadly used for the supervised approach, exploring different ways to enhance their performance. While most other approaches are supervised, we also study the optimal features for unsupervised keyphrase extraction. Our research has shown that effective candidate selection leads to better performance as evaluation accounts for candidate coverage. Our work also attests that many of existing features are also usable in unsupervised extraction. --- paper_title: Finding advertising keywords on web pages paper_content: A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with "relevant" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems. --- paper_title: Re-examining Automatic Keyphrase Extraction Approaches in Scientific Articles paper_content: We tackle two major issues in automatic keyphrase extraction using scientific articles: candidate selection and feature engineering. To develop an efficient candidate selection method, we analyze the nature and variation of keyphrases and then select candidates using regular expressions. Secondly, we re-examine the existing features broadly used for the supervised approach, exploring different ways to enhance their performance. 
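The entry above argues that keyphrase extraction is better cast as ranking than as classification and uses Ranking SVM. A standard way to approximate pairwise ranking with an off-the-shelf linear SVM is to train on feature differences between (keyphrase, non-keyphrase) candidate pairs, which is what the sketch below does on hypothetical feature vectors; it illustrates the pairwise reduction, not the paper's implementation or its feature set.

import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(X, y):
    # For each (positive, negative) candidate pair, the feature difference gets
    # label +1 and the reversed difference gets label -1.
    Xp, yp = [], []
    pos = [x for x, t in zip(X, y) if t == 1]
    neg = [x for x, t in zip(X, y) if t == 0]
    for p in pos:
        for n in neg:
            Xp.append(np.subtract(p, n)); yp.append(1)
            Xp.append(np.subtract(n, p)); yp.append(-1)
    return np.array(Xp), np.array(yp)

# Hypothetical candidate features: [TF*IDF, relative first occurrence, phrase length].
X = [[0.42, 0.05, 2], [0.10, 0.60, 1], [0.31, 0.12, 3], [0.05, 0.90, 1]]
y = [1, 0, 1, 0]        # 1 = author-assigned keyphrase

Xp, yp = pairwise_transform(X, y)
ranker = LinearSVC(C=1.0).fit(Xp, yp)
scores = np.dot(X, ranker.coef_.ravel())   # higher score means ranked earlier
print(np.argsort(-scores))                 # candidate indices, best first
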
While most other approaches are supervised, we also study the optimal features for unsupervised keyphrase extraction. Our research has shown that effective candidate selection leads to better performance as evaluation accounts for candidate coverage. Our work also attests that many of existing features are also usable in unsupervised extraction. --- paper_title: Finding advertising keywords on web pages paper_content: A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with "relevant" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems. --- paper_title: Coherent Keyphrase Extraction via Web Mining paper_content: Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents). --- paper_title: HUMB: Automatic Key Term Extraction from Scientific Articles in GROBID paper_content: The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID's facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. 
Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository. --- paper_title: Human-competitive tagging using automatic keyphrase extraction paper_content: This paper connects two research areas: automatic tagging on the web and statistical keyphrase extraction. First, we analyze the quality of tags in a collaboratively created folksonomy using traditional evaluation techniques. Next, we demonstrate how documents can be tagged automatically with a state-of-the-art keyphrase extraction algorithm, and further improve performance in this new domain using a new algorithm, "Maui", that utilizes semantic information extracted from Wikipedia. Maui outperforms existing approaches and extracts tags that are competitive with those assigned by the best performing human taggers. --- paper_title: Single document keyphrase extraction using neighborhood knowledge paper_content: Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach. --- paper_title: Extracting key terms from noisy and multitheme documents paper_content: We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia. Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages. Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods. --- paper_title: CollabRank: Towards a Collaborative Approach to Single-Document Keyphrase Extraction paper_content: Previous methods usually conduct the keyphrase extraction task for single documents separately without interactions for each document, under the assumption that the documents are considered independent of each other. 
This paper proposes a novel approach named CollabRank to collaborative single-document keyphrase extraction by making use of mutual influences of multiple documents within a cluster context. CollabRank is implemented by first employing the clustering algorithm to obtain appropriate document clusters, and then using the graph-based ranking algorithm for collaborative single-document keyphrase extraction within each cluster. Experimental results demonstrate the encouraging performance of the proposed approach. Different clustering algorithms have been investigated and we find that the system performance relies positively on the quality of document clusters. --- paper_title: Keyword extraction from a single document using word co-occurrence statistical information paper_content: We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of co-occurrences between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. The co-occurrence distribution shows the importance of a term in the document as follows. If the probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of bias of the distribution is measured by the χ²-measure. Our algorithm shows comparable performance to tfidf without using a corpus. --- paper_title: TextRank: Bringing Order Into Texts paper_content: In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. --- paper_title: The Anatomy of a Large-Scale Hypertextual Web Search Engine paper_content: In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/. To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from 3 years ago. This paper provides an in-depth description of our large-scale web search engine - the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections, where anyone can publish anything they want. --- paper_title: TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction paper_content: Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document.
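The single-document co-occurrence entry above ranks a term by how strongly its co-occurrence with the document's frequent terms deviates from what the frequencies alone would predict, using a chi-square style statistic. The sketch below is a rough, simplified rendering of that idea (sentence-level co-occurrence, the top frequent terms as the reference set, and a naive expected-count model); it is not the paper's exact estimator.

import re
from collections import Counter, defaultdict

def chi_square_keywords(text, n_frequent=10, top_k=5):
    sentences = [re.findall(r"[a-z][a-z-]+", s.lower())
                 for s in re.split(r"[.!?]", text) if s.strip()]
    freq = Counter(w for s in sentences for w in set(s))
    frequent = [w for w, _ in freq.most_common(n_frequent)]

    co = defaultdict(Counter)                      # co-occurrence with frequent terms
    for s in sentences:
        terms = set(s)
        for w in terms:
            for g in terms & set(frequent):
                if g != w:
                    co[w][g] += 1

    total = sum(freq[g] for g in frequent)
    scores = {}
    for w in freq:
        n_w = sum(co[w].values())
        if n_w == 0:
            continue
        chi2 = 0.0
        for g in frequent:
            expected = n_w * freq[g] / total       # naive independence assumption
            if expected > 0:
                chi2 += (co[w][g] - expected) ** 2 / expected
        scores[w] = chi2
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

doc = ("Keyword extraction from a single document can use co-occurrence statistics. "
       "A term whose co-occurrence with the frequent terms is strongly biased is likely a keyword. "
       "The bias is measured with a chi-square statistic over sentence co-occurrences.")
print(chi_square_keywords(doc))
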
In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the top-ranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets. --- paper_title: Clustering to Find Exemplar Terms for Keyphrase Extraction paper_content: Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms state-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure. --- paper_title: Extracting key terms from noisy and multitheme documents paper_content: We present a novel method for key term extraction from text documents. In our method, a document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms, discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia. Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages. Evaluations of the method show that it outperforms existing methods, producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods. --- paper_title: Topical Word Trigger Model for Keyphrase Extraction paper_content: Keyphrase extraction aims to find representative phrases for a document. Keyphrases are expected to cover main themes of a document. Meanwhile, keyphrases do not necessarily occur frequently in the document, which is known as the vocabulary gap between the words in a document and its keyphrases. In this paper, we propose Topical Word Trigger Model (TWTM) for keyphrase extraction. TWTM assumes the content and keyphrases of a document are talking about the same themes but written in different languages.
Under this assumption, keyphrase extraction is modeled as a translation process from document content to keyphrases. Moreover, in order to better cover document themes, TWTM sets trigger probabilities to be topic-specific, and hence the trigger process can be influenced by the document themes. On one hand, TWTM uses latent topics to model document themes and takes the coverage of document themes into consideration; on the other hand, TWTM uses topic-specific word triggers to bridge the vocabulary gap between the words in the document and its keyphrases. Experimental results on a real-world dataset reveal that TWTM outperforms existing state-of-the-art methods under various evaluation metrics. --- paper_title: Improved Automatic Keyword Extraction Given More Linguistic Knowledge paper_content: In this paper, experiments on automatic extraction of keywords from abstracts using a supervised machine learning algorithm are discussed. The main point of this paper is that by adding linguistic knowledge to the representation (such as syntactic features), rather than relying only on statistics (such as term frequency and n-grams), a better result is obtained as measured by keywords previously assigned by professional indexers. In more detail, extracting NP-chunks gives a better precision than n-grams, and by adding the PoS tag(s) assigned to the term as a feature, a dramatic improvement of the results is obtained, independent of the term selection approach applied. --- paper_title: Automatic Keyphrase Extraction via Topic Decomposition paper_content: Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics. --- paper_title: Latent Dirichlet Allocation paper_content: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. --- paper_title: Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering paper_content: A novel method for simultaneous keyphrase extraction and generic text summarization is proposed by modeling text documents as weighted undirected and weighted bipartite graphs.
Spectral graph clustering algorithms are used for partitioning the sentences of the documents into topical groups, with sentence link priors being exploited to enhance clustering quality. Within each topical group, saliency scores for keyphrases and sentences are generated based on a mutual reinforcement principle. The keyphrases and sentences are then ranked according to their saliency scores and selected for inclusion in the top keyphrase list and summaries of the document. The idea of building a hierarchy of summaries for documents capturing different levels of granularity is also briefly discussed. Our method is illustrated using several examples from news articles, news broadcast transcripts and web documents. --- paper_title: Towards an Iterative Reinforcement Approach for Simultaneous Document Summarization and Keyword Extraction paper_content: Though both document summarization and keyword extraction aim to extract concise representations from documents, these two tasks have usually been investigated independently. This paper proposes a novel iterative reinforcement approach to simultaneously extracting summary and keywords from a single document under the assumption that the summary and keywords of a document can be mutually boosted. The approach can naturally make full use of the reinforcement between sentences and keywords by fusing three kinds of relationships between sentences and words, either homogeneous or heterogeneous. Experimental results show the effectiveness of the proposed approach for both tasks. The corpus-based approach is validated to work almost as well as the knowledge-based approach for computing word semantics. --- paper_title: A Language Model Approach To Keyphrase Extraction paper_content: We present a new approach to extracting keyphrases based on statistical language models. Our approach is to use pointwise KL-divergence between multiple language models for scoring both phraseness and informativeness, which can be unified into a single score to rank extracted phrases. --- paper_title: Clustering to Find Exemplar Terms for Keyphrase Extraction paper_content: Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms state-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure. --- paper_title: Approximate Matching for Evaluating Keyphrase Extraction paper_content: We propose a new evaluation strategy for keyphrase extraction based on approximate keyphrase matching. It corresponds well with human judgments and is better suited to assess the performance of keyphrase extraction approaches. Additionally, we propose a generalized framework for comprehensive analysis of keyphrase extraction that subsumes most existing approaches, which allows for fair testing conditions.
For the first time, we compare the results of state-of-the-art unsupervised and supervised keyphrase extraction approaches on three evaluation datasets and show that the relative performance of the approaches heavily depends on the evaluation metric as well as on the properties of the evaluation dataset. --- paper_title: Keyword extraction from a single document using word co-occurrence statistical information paper_content: We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of co-occurrences between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. The co-occurrence distribution shows the importance of a term in the document as follows. If the probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of bias of the distribution is measured by the χ²-measure. Our algorithm shows comparable performance to tf-idf without using a corpus. --- paper_title: TextRank: Bringing Order Into Texts paper_content: In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. --- paper_title: Automatic Keyphrase Extraction via Topic Decomposition paper_content: Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics. --- paper_title: KP-Miner: A keyphrase extraction system for English and Arabic documents paper_content: Automatic keyphrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents the KP-Miner system, and demonstrates through experimentation and comparison with widely used systems that it is effective and efficient in extracting keyphrases from both English and Arabic documents of varied length. Unlike other existing keyphrase extraction systems, the KP-Miner system does not need to be trained on a particular document set in order to achieve its task. It also has the advantage of being configurable as the rules and heuristics adopted by the system are related to the general nature of documents and keyphrases. This implies that the users of this system can use their understanding of the document(s) being input into the system to fine-tune it to their particular needs. --- paper_title: Evaluating N-gram based Evaluation Metrics for Automatic Keyphrase Extraction paper_content: This paper describes a feasibility study of n-gram-based evaluation metrics for automatic keyphrase extraction.
To account for near-misses currently ignored by standard evaluation metrics, we adapt various evaluation metrics developed for machine translation and summarization, and also the R-precision evaluation metric from keyphrase evaluation. In evaluation, the R-precision metric is found to achieve the highest correlation with human annotations. We also provide evidence that the degree of semantic similarity varies with the location of the partially-matching component words. --- paper_title: Single document keyphrase extraction using neighborhood knowledge paper_content: Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach. --- paper_title: Clustering to Find Exemplar Terms for Keyphrase Extraction paper_content: Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms state-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure. --- paper_title: SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles paper_content: This paper describes Task 5 of the Workshop on Semantic Evaluation 2010 (SemEval-2010). Systems are to automatically assign keyphrases or keywords to given scientific articles. The participating systems were evaluated by matching their extracted keyphrases against manually assigned ones. We present the overall ranking of the submitted systems and discuss our findings to suggest future directions for this task. --- paper_title: Improved Automatic Keyword Extraction Given More Linguistic Knowledge paper_content: In this paper, experiments on automatic extraction of keywords from abstracts using a supervised machine learning algorithm are discussed. The main point of this paper is that by adding linguistic knowledge to the representation (such as syntactic features), rather than relying only on statistics (such as term frequency and n-grams), a better result is obtained as measured by keywords previously assigned by professional indexers. In more detail, extracting NP-chunks gives a better precision than n-grams, and by adding the PoS tag(s) assigned to the term as a feature, a dramatic improvement of the results is obtained, independent of the term selection approach applied.
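The linguistically informed candidate selection described in the abstract above (NP chunks plus PoS tags) can be made concrete with a short sketch. This is an illustrative assumption, not code from the cited work: it uses NLTK (the 'punkt' and 'averaged_perceptron_tagger' resources are assumed to be installed) and a deliberately simple noun-phrase grammar.

```python
# Hedged sketch: select candidate keyphrases with a noun-phrase PoS pattern,
# in the spirit of the linguistically informed candidate selection described above.
# The grammar and function names are illustrative assumptions, not the cited system.
import nltk

GRAMMAR = "NP: {<JJ>*<NN.*>+}"          # zero or more adjectives followed by nouns
chunker = nltk.RegexpParser(GRAMMAR)

def candidate_phrases(text):
    tokens = nltk.word_tokenize(text)            # requires the 'punkt' data
    tagged = nltk.pos_tag(tokens)                # requires the perceptron tagger data
    tree = chunker.parse(tagged)
    for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
        yield " ".join(word for word, tag in subtree.leaves())

print(list(candidate_phrases(
    "Graph-based ranking models extract salient keyphrases from scientific articles.")))
```

A supervised extractor in the spirit of the cited approach would then compute features such as frequency, position, and the PoS pattern for each candidate and train a classifier on manually assigned keyphrases.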
--- paper_title: Unsupervised Approaches for Automatic Keyword Extraction Using Meeting Transcripts paper_content: This paper explores several unsupervised approaches to automatic keyword extraction using meeting transcripts. In the TFIDF (term frequency, inverse document frequency) weighting framework, we incorporated part-of-speech (POS) information, word clustering, and sentence salience score. We also evaluated a graph-based approach that measures the importance of a word based on its connection with other sentences or words. The system performance is evaluated in different ways, including comparison to human annotated keywords using F-measure and a weighted score relative to the oracle system performance, as well as a novel alternative human evaluation. Our results have shown that the simple unsupervised TFIDF approach performs reasonably well, and the additional information from POS and sentence score helps keyword extraction. However, the graph method is less effective for this domain. Experiments were also performed using speech recognition output and we observed degradation and different patterns compared to human transcripts. --- paper_title: Human-competitive tagging using automatic keyphrase extraction paper_content: This paper connects two research areas: automatic tagging on the web and statistical keyphrase extraction. First, we analyze the quality of tags in a collaboratively created folksonomy using traditional evaluation techniques. Next, we demonstrate how documents can be tagged automatically with a state-of-the-art keyphrase extraction algorithm, and further improve performance in this new domain using a new algorithm, "Maui", that utilizes semantic information extracted from Wikipedia. Maui outperforms existing approaches and extracts tags that are competitive with those assigned by the best performing human taggers. --- paper_title: Automatic Keyphrase Extraction by Bridging Vocabulary Gap paper_content: Keyphrase extraction aims to select a set of terms from a document as a short summary of the document. Most methods extract keyphrases according to their statistical properties in the given document. Appropriate keyphrases, however, are not always statistically significant or even do not appear in the given document. This makes a large vocabulary gap between a document and its keyphrases. In this paper, we consider that a document and its keyphrases both describe the same object but are written in two different languages. By regarding keyphrase extraction as a problem of translating from the language of documents to the language of keyphrases, we use word alignment models in statistical machine translation to learn translation probabilities between the words in documents and the words in keyphrases. According to the translation model, we suggest keyphrases given a new document. The suggested keyphrases are not necessarily statistically frequent in the document, which indicates that our method is more flexible and reliable. Experiments on news articles demonstrate that our method outperforms existing unsupervised methods on precision, recall and F-measure. --- paper_title: Clustering to Find Exemplar Terms for Keyphrase Extraction paper_content: Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. 
In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms state-of-the-art graph-based ranking methods (TextRank) by 9.5% in F1-measure. --- paper_title: Extracting key terms from noisy and multitheme documents paper_content: We present a novel method for key term extraction from text documents. In our method, a document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms, discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia. Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages. Evaluations of the method show that it outperforms existing methods, producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods. --- paper_title: Freebase: a collaboratively created graph database for structuring human knowledge paper_content: Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications. --- paper_title: Human-competitive tagging using automatic keyphrase extraction paper_content: This paper connects two research areas: automatic tagging on the web and statistical keyphrase extraction. First, we analyze the quality of tags in a collaboratively created folksonomy using traditional evaluation techniques. Next, we demonstrate how documents can be tagged automatically with a state-of-the-art keyphrase extraction algorithm, and further improve performance in this new domain using a new algorithm, "Maui", that utilizes semantic information extracted from Wikipedia. Maui outperforms existing approaches and extracts tags that are competitive with those assigned by the best performing human taggers. ---
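Several of the methods cited above (TextRank, its neighborhood-expanded and topic-decomposed variants) share one core mechanism: build a word co-occurrence graph and score words by random-walk centrality. The sketch below is a minimal illustration of that mechanism using networkx; the crude tokeniser, the window size, and the omission of the final phrase-forming and PoS-filtering steps are simplifying assumptions rather than the published algorithms.

```python
# Hedged sketch of graph-based keyword ranking: words become nodes, co-occurrence
# within a small sliding window becomes an edge, and PageRank centrality scores words.
import re
import networkx as nx

def rank_words(text, window=4, top_k=5):
    words = re.findall(r"[a-z][a-z-]+", text.lower())   # crude tokeniser (assumption)
    graph = nx.Graph()
    for i, w in enumerate(words):
        for other in words[i + 1 : i + window]:          # link words co-occurring in the window
            if other != w:
                graph.add_edge(w, other)
    scores = nx.pagerank(graph)                          # random-walk centrality
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(rank_words("compatibility of systems of linear constraints over the set of "
                 "natural numbers and criteria of compatibility of a system"))
```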
Title: Automatic Keyphrase Extraction: A Survey of the State of the Art Section 1: Introduction Description 1: This section provides an overview and the importance of automatic keyphrase extraction, highlighting its applications and the challenges faced by current systems. Section 2: Corpora Description 2: This section discusses the different types of corpora used for evaluating keyphrase extraction systems, along with factors that affect extraction difficulty. Section 3: Keyphrase Extraction Approaches Description 3: This section outlines the general steps involved in keyphrase extraction systems and categorizes the approaches into supervised and unsupervised methods. Section 4: Selecting Candidate Words and Phrases Description 4: This section details the heuristics and rules used to extract candidate keyphrases from documents. Section 5: Supervised Approaches Description 5: This section covers the task reformulation and feature design in supervised keyphrase extraction systems. Section 6: Unsupervised Approaches Description 6: This section categorizes and describes different unsupervised methods for keyphrase extraction, including graph-based ranking, topic-based clustering, simultaneous learning, and language modeling. Section 7: Evaluation Description 7: This section discusses the metrics and methods used for evaluating keyphrase extraction systems and presents state-of-the-art results on commonly-used datasets. Section 8: Analysis Description 8: This section provides an error analysis of state-of-the-art keyphrase extraction systems, identifying common types of errors and their contributions to overall performance. Section 9: Recommendations Description 9: This section offers recommendations for improving keyphrase extraction by incorporating background knowledge and addressing identified error types. Section 10: Conclusion and Future Directions Description 10: This section summarizes the survey findings and outlines the major challenges and future directions for research in automatic keyphrase extraction.
Natural Language Processing: A Survey
11
--- paper_title: Coping With Syntactic Ambiguity Or How To Put The Block In The Box On The Table paper_content: Sentences are far more ambiguous than one might have thought. There may be hundreds, perhaps thousands, of syntactic parse trees for certain very natural sentences of English. This fact has been a major problem confronting natural language processing, especially when a large percentage of the syntactic parse trees are enumerated during semantic/pragmatic processing. In this paper we propose some methods for dealing with syntactic ambiguity in ways that exploit certain regularities among alternative parse trees. These regularities will be expressed as linear combinations of ATN networks, and also as sums and products of formal power series. We believe that such encoding of ambiguity will enhance processing, whether syntactic and semantic constraints are processed separately in sequence or interleaved together. --- paper_title: An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery paper_content: This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that this algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances. Keywords: Bayesian grammar induction, probability models, minimum description length (MDL), unsupervised learning, cognitive modeling, language acquisition, segmentation --- paper_title: Automatic extraction of semantic classes from syntactic information in online resources paper_content: This paper addresses the issue of word-sense ambiguity in extraction from machine-readable resources for the construction of large-scale knowledge sources. We describe two experiments: one which took word-sense distinctions into account, resulting in 97.9% accuracy for semantic classification of verbs based on (Levin, 1993); and one which ignored word-sense distinctions, resulting in 6.3% accuracy. These experiments were dual purpose: (1) to validate the central thesis of (Levin, 1993), i.e., that verb semantics and syntactic behavior are predictably related; (2) to demonstrate that a 20-fold improvement can be achieved in deriving semantic information from syntactic cues if we first divide the syntactic cues into distinct groupings that correlate with different word senses.
Finally, we show that we can provide effective acquisition techniques for novel word senses using a combination of online sources. --- paper_title: An Ascription-Based Approach To Speech Acts paper_content: The two principal areas of natural language processing research in pragmatics are belief modelling and speech act processing. Belief modelling is the development of techniques to represent the mental attitudes of a dialogue participant. The latter approach, speech act processing, based on speech act theory, involves viewing dialogue in planning terms. Utterances in a dialogue are modelled as steps in a plan where understanding an utterance involves deriving the complete plan a speaker is attempting to achieve. However, previous speech act based approaches have been limited by a reliance upon relatively simplistic belief modelling techniques and their relationship to planning and plan recognition. In particular, such techniques assume precomputed nested belief structures. In this paper, we will present an approach to speech act processing based on novel belief modelling techniques where nested beliefs are propagated on demand. --- paper_title: An investigation of the modified direction feature for cursive character recognition paper_content: This paper describes and analyses the performance of a novel feature extraction technique for the recognition of segmented/cursive characters that may be used in the context of a segmentation-based handwritten word recognition system. The modified direction feature (MDF) extraction technique builds upon the direction feature (DF) technique proposed previously that extracts direction information from the structure of character contours. This principle was extended so that the direction information is integrated with a technique for detecting transitions between background and foreground pixels in the character image. In order to improve on the DF extraction technique, a number of modifications were undertaken. With a view to describing the character contour more effectively, a re-design of the direction number determination technique was performed. Also, an additional global feature was introduced to improve the recognition accuracy for those characters that were most frequently confused with patterns of similar appearance. MDF was tested using a neural network-based classifier and compared to the DF and transition feature (TF) extraction techniques. MDF outperformed both DF and TF techniques using a benchmark dataset and compared favourably with the top results in the literature. A recognition accuracy of above 89% is reported on characters from the CEDAR dataset. ---
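The word-segmentation work cited above recovers word boundaries from unsegmented text by searching for the most probable word sequence under a probabilistic model. The cited algorithm learns its lexicon without supervision; the sketch below only illustrates the dynamic-programming decoding step, assuming a tiny hand-specified unigram lexicon purely for illustration.

```python
# Hedged illustration of recovering word boundaries by dynamic programming.
# The fixed toy lexicon and the unigram scoring are assumptions; the cited model
# induces its lexicon and probabilities from the corpus itself.
import math

LEXICON = {"the": 0.20, "dog": 0.10, "cat": 0.10, "sat": 0.05, "on": 0.15, "mat": 0.05}

def segment(text):
    # best[i] holds (log-probability, word list) of the best parse of text[:i]
    best = [(0.0, [])] + [(-math.inf, [])] * len(text)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - 10), i):            # cap word length at 10 characters
            word = text[j:i]
            if word in LEXICON and best[j][0] > -math.inf:
                score = best[j][0] + math.log(LEXICON[word])
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best[-1][1]

print(segment("thedogsatonthemat"))   # -> ['the', 'dog', 'sat', 'on', 'the', 'mat']
```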
Title: Natural Language Processing: A Survey Section 1: Introduction Description 1: Introduce the main goals and components of Natural Language Processing (NLP). Section 2: History Description 2: Discuss the origins and early development of NLP, including its association with linguistics, cognitive psychology, and computer science. Section 3: Why NLP? Description 3: Explain the importance and benefits of NLP, particularly in terms of user-computer communication. Section 4: Problems Encountered By NLP Description 4: Describe the various challenges faced in NLP, detailing specific issues such as syntactic ambiguity, speech segmentation, word sense disambiguation, speech acts problems, and incorrect input. Section 5: Why Are These Problems So Hard? Description 5: Explore the reasons why solving problems in NLP is particularly difficult, including the complexities of natural language and resource requirements. Section 6: Major Applications of NLP Description 6: Provide an overview of key real-world applications of NLP, such as machine translation, database access, text-to-speech systems, and optical character recognition. Section 7: Machine Translation Description 7: Discuss the specifics of machine translation, highlighting its goals, approaches, and challenges. Section 8: Database Access Description 8: Explore how NLP is used for database access and information retrieval, including the role of semantic webs. Section 9: Text-To-Speech Systems Description 9: Describe text-to-speech systems, their purpose, current capabilities, and ongoing research efforts to improve them. Section 10: Optical Character Recognition Description 10: Explain optical character recognition, its benefits, obstacles, past successes, current research, and everyday use. Section 11: Conclusion Description 11: Summarize the field of NLP, its current state, ongoing challenges, and potential future developments.
Studies and analysis of reference management software: a literature review
7
--- paper_title: Fore‐cite: tactics for evaluating citation management tools paper_content: Purpose – The purpose of this paper is to explore a general set of criteria that can be used by librarians and information professionals for the evaluation of citation management tools. Design/methodology/approach – Collection development practices found in the library world are combined with software selection criteria from the corporate sector and applied to the citation management environment. A discussion of these practices identifies general criteria, or best practices, that can be used in the evaluation of various types of citation management tools. Findings – Eight criteria are discussed. Key questions are raised that can assist librarians and information professionals in the evaluation process. Additional resources that may assist with evaluation efforts are highlighted, where applicable. Originality/value – Existing attempts to evaluate citation management tools have employed an approach centering on the features and functionality of a limited set of tools. While effective, these studies neglect new... --- paper_title: Storage and retrieval of bibliographic references using a microprocessor system paper_content: A program is described for the storage and retrieval of bibliographic references. The program, designed for a dual floppy disk microcomputer system, allows fast access to references, which can be retrieved by keywords, by authors' names or by string matching. Provided a printer is available, the program prints reprint requests, while new references are being stored, and prints hard copies of the references. The program also includes the possibility of creating a new bibliographic file from one or more already existing files. --- paper_title: Managing bibliographic citations using microcomputers paper_content: Programs are now available to construct and retrieve lists of bibliographic citations by microcomputers. Although considerable effort must be expended to learn to use them, once mastered, they can be useful to anyone who must manage large collections of references. There are several ways to use personal computers to store and retrieve bibliographic citations. Word processors can be used to manage relatively small lists of citations. Preprogrammed bibliographic systems are available that are designed specifically for this purpose. A general data base management program can also be adapted for bibliographic purposes and used for other functions as well. This essay has been prepared to provide guidance to those who have a working knowledge of microcomputers and wish to expand this to use a data management system for bibliographic purposes. The dBASE II program is used to illustrate how to set up a bibliographic system. Methods are described on how to prepare citations for storage and retrieval using combinations of key words and Boolean operators, how to prepare selected lists of references and arrange them in alphabetic order or by subject heading, and how to print tailored lists of citations. The system was found to be highly responsive to commands and able to provide rapid retrieval of information. --- paper_title: A Guide to Conducting a Systematic Literature Review of Information Systems Research paper_content: This working paper has been thoroughly revised and superseded by two distinct articles. The first is a revised and peer-reviewed version of the original article: Okoli, Chitu (2015), A Guide to Conducting a Standalone Systematic Literature Review.
Communications of the Association for Information Systems (37:43), November 2015, pp. 879-910. This article presents a methodology for conducting a systematic literature review with many examples from IS research and references to guides with further helpful details. The article is available from Google Scholar or from the author's website. The second extension article focuses on developing theory with literature reviews: Okoli, Chitu (2015), The View from Giants’ Shoulders: Developing Theory with Theory-Mining Systematic Literature Reviews. SSRN Working Paper Series, December 8, 2015. This article identifies theory-mining reviews, which are literature reviews that extract and synthesize theoretical concepts from the source primary studies. The article demonstrates by citation analysis that, in information systems research, this kind of literature review is more highly cited than other kinds of literature review. The article provides detailed guidelines to writing a high-quality theory-mining review. --- paper_title: Reference management software for systematic reviews and meta-analyses: an exploration of usage and usability paper_content: Reference management software programs enable researchers to more easily organize and manage large volumes of references typically identified during the production of systematic reviews. The purpose of this study was to determine the extent to which authors are using reference management software to produce systematic reviews; identify which programs are used most frequently and rate their ease of use; and assess the degree to which software usage is documented in published studies. We reviewed the full text of systematic reviews published in core clinical journals indexed in ACP Journal Club from 2008 to November 2011 to determine the extent to which reference management software usage is reported in published reviews. We surveyed corresponding authors to verify and supplement information in published reports, and gather frequency and ease-of-use data on individual reference management programs. Of the 78 researchers who responded to our survey, 79.5% reported that they had used a reference management software package to prepare their review. Of these, 4.8% reported this usage in their published studies. EndNote, Reference Manager, and RefWorks were the programs of choice for more than 98% of authors who used this software. Comments with respect to ease-of-use issues focused on the integration of this software with other programs and computer interfaces, and the sharing of reference databases among researchers. Despite underreporting of use, reference management software is frequently adopted by authors of systematic reviews. The transparency, reproducibility and quality of systematic reviews may be enhanced through increased reporting of reference management software usage. --- paper_title: Bibliographic Citation Management Software for Web Applications paper_content: SUMMARY The bibliographic citation management software librarians already use to support scholarly research can also be used to deliver databases to the Web. Scott Memorial Library at Thomas Jefferson University uses a combination of Reference Manager and Reference Web Poster to publish indexes to bibliographic literature, and searchable lists of electronic journals and frequently asked questions (FAQs). This approach to serving data is suitable for materials of a bibliographic nature and/or for low budgets. 
To get started, all that is required is a copy of a citation management program, such as Reference Manager, EndNote, ProCite or Biblioscape. Any of these will produce static pages coded in HTML. Optional additional packages (Reference Web Poster or BiblioWeb Server) provide interactive Web-based searching. --- paper_title: Defrosting the Digital Library: Bibliographic Tools for the Next Generation Web paper_content: Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as “thought in cold storage,” and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines. ---
Title: Studies and analysis of reference management software: a literature review Section 1: Introduction and background Description 1: This section provides an overview of the historical context and necessity of reference management software, as well as its development and significance in the field of scientific research. Section 2: Goals and working hypotheses Description 2: This section outlines the main and secondary goals of the article, as well as the working hypothesis regarding the rigour of published assessments of reference management software. Section 3: Methodology Description 3: This section describes the systematic literature review method adopted for the study, including the steps followed to gather, tabulate, order, review, and analyze the data. Section 4: Data collection Description 4: This section details the process of collecting relevant articles from databases and other sources, including the criteria used to select the articles for review. Section 5: Results analysis and discussion Description 5: This section presents the analysis and discussion of the 37 reviewed articles, categorizing the types of reviews, functions evaluated, and the evolution and trends in reference management software. Section 6: General Description 6: This section summarizes common aspects and criteria used in the evaluation of reference management software and highlights significant trends and observations from the reviewed articles. Section 7: Conclusions Description 7: This section provides the final conclusions drawn from the review, discussing the lack of a standardized methodology, the evolution of reference management software, and the contributions of Library Science to this field.
Siamese Learning Visual Tracking: A Survey
12
--- paper_title: Siamese Instance Search for Tracking paper_content: In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot. --- paper_title: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift paper_content: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters. --- paper_title: Struck: Structured output tracking with kernels paper_content: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step.
Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance. --- paper_title: Large-scale image classification: Fast feature extraction and SVM training paper_content: Most research efforts on image classification so far have been focused on medium-scale datasets, which are often defined as datasets that can fit into the memory of a desktop (typically 4G∼48G). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification. This is mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterparts. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper is to show how we address this challenge using ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions being hundreds of thousands) on 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast–typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on the ImageNet 1000-class classification, i.e., 52.9% in classification accuracy and 71.8% in top 5 hit rate. --- paper_title: Understanding Machine Learning: From Theory to Algorithms paper_content: Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering. 
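The batch-normalization abstract above describes normalising each layer's inputs over the mini-batch and then applying a learned scale and shift. A minimal NumPy sketch of that per-feature transform follows; the epsilon value and the omission of running statistics for inference are simplifying assumptions, not the paper's implementation.

```python
# Hedged NumPy sketch of the per-feature batch-normalization transform:
# normalise each feature over the mini-batch, then apply learnable gamma/beta.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalised activations
    return gamma * x_hat + beta              # scale and shift

batch = np.random.randn(32, 8) * 3.0 + 2.0   # 32 samples, 8 features, shifted and scaled
out = batch_norm(batch, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # roughly zero mean, unit std
```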
--- paper_title: A logical calculus of the ideas immanent in nervous activity paper_content: Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed. --- paper_title: A Convolutional Neural Network Hand Tracker paper_content: We describe a system that can track a hand in a sequence of video frames and recognize hand gestures in a user-independent manner. The system locates the hand in each video frame and determines if the hand is open or closed. The tracking system is able to track the hand to within ±10 pixels of its correct location in 99.7% of the frames from a test set containing video sequences from 18 different individuals captured in 18 different room environments. The gesture recognition network correctly determines if the hand being tracked is open or closed in 99.1% of the frames in this test set. The system has been designed to operate in real time with existing hardware. --- paper_title: Accurate Scale Estimation for Robust Visual Tracking. paper_content: Robust scale estimation is a challenging problem in visual object tracking. Most existing methods fail to handle large scale variations in complex image sequences. This paper presents a novel approach for robust scale estimation in a tracking-by-detection framework. The proposed approach works by learning discriminative correlation filters based on a scale pyramid representation. We learn separate filters for translation and scale estimation, and show that this improves the performance compared to an exhaustive scale search. Our scale estimation approach is generic as it can be incorporated into any tracking method with no inherent scale estimation. Experiments are performed on 28 benchmark sequences with significant scale variations. Our results show that the proposed approach significantly improves the performance by 18.8% in median distance precision compared to our baseline. Finally, we provide both quantitative and qualitative comparison of our approach with state-of-the-art trackers in literature. The proposed method is shown to outperform the best existing tracker by 16.6% in median distance precision, while operating at real-time. --- paper_title: Learning a Deep Compact Image Representation for Visual Tracking paper_content: In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations.
This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU). --- paper_title: Challenges in long-term imaging and quantification of single-cell dynamics paper_content: Continuous analysis of single cells, over several cell divisions and for up to weeks at a time, is crucial to deciphering rare, dynamic and heterogeneous cell responses, which would otherwise be missed by population or single-cell snapshot analysis. Although the field of long-term single-cell imaging, tracking and analysis is constantly advancing, several technical challenges continue to hinder wider implementation of this important approach. This is a particular problem for mammalian cells, where in vitro observation usually remains the only possible option for uninterrupted long-term, single-cell observation. Efforts must focus not only on identifying and maintaining culture conditions that support normal cellular behavior while allowing high-resolution imaging over time, but also on developing computational methods that enable semiautomatic analysis of the data. Solutions in microscopy hard- and software, computer vision and specialized theoretical methods for analysis of dynamic single-cell data will enable important discoveries in biology and beyond. --- paper_title: ImageNet Large Scale Visual Recognition Challenge paper_content: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. --- paper_title: Cloud motion and stability estimation for intra-hour solar forecasting paper_content: Abstract Techniques for estimating cloud motion and stability for intra-hour forecasting using a ground-based sky imaging system are presented. A variational optical flow (VOF) technique was used to determine the sub-pixel accuracy of cloud motion for every pixel. Cloud locations up to 15 min ahead were forecasted by inverse mapping of the cloud map. A month of image data captured by a sky imager at UC San Diego was analyzed to compare the accuracy of VOF forecast with cross-correlation method (CCM) and image persistence method. 
The VOF forecast with a fixed smoothness parameter was found to be superior to image persistence forecast for all forecast horizons for almost all days and to outperform the CCM forecast with an average error reduction of 39%, 21%, 19%, and 19% for 0, 5, 10, and 15 min forecasts respectively. Optimum forecasts may be achieved with forecast-horizon-dependent smoothness parameters. In addition, cloud stability and forecast confidence were evaluated by correlating point trajectories with forecast error. Point trajectories were obtained by tracking sub-sampled pixels using the optical flow field. Point trajectory length in minutes was shown to increase with decreasing forecast error and to provide valuable information for cloud forecast confidence at forecast issue time. --- paper_title: Data Association for Multi-Object Visual Tracking paper_content: In the human quest for scientific knowledge, empirical evidence is collected by visual perception. Tracking with computer vision takes on the important role to reveal complex patterns of motion that exist in the world we live in. Multi-object tracking algorithms provide new information on how groups and individual group members move through three-dimensional space. They enable us to study in depth the relationships between individuals in moving groups. These may be interactions of pedestrians on a crowded sidewalk, living cells under a microscope, or bats emerging in large numbers from a cave. Being able to track pedestrians is important for urban planning; analysis of cell interactions supports research on biomaterial design; and the study of bat and bird flight can guide the engineering of aircraft. We were inspired by this multitude of applications to consider the crucial component needed to advance a single-object tracking system to a multi-object tracking system—data association. Data association in the most general sense is the process of matching information about newly observed objects with information that was previously observed about them. This information may be about their identities, positions, or trajectories. Algorithms for data association search for matches that optimize certain match criteria and are subject to physical conditions. They can therefore be formulated as solving a "constrained optimization problem"—the problem of optimizing an objective function of some variables in the presence of constraints on these variables. As such, data association methods have a strong mathematical grounding and are valuable general tools for computer vision researchers. This book serves as a tutorial on data association methods, intended for both students and experts in computer vision. We describe the basic research problems, review the current state of the art, and present some recently developed approaches. The book covers multi-object tracking in two and three dimensions. We consider two imaging scenarios involving either single cameras or multiple cameras with overlapping fields of view, and requiring across-time and across-view data association methods. In addition to methods that match new measurements to already established tracks, we describe methods that match trajectory segments, also called tracklets. The book presents a principled application of data association to solve two interesting tasks: first, analyzing the movements of groups of free-flying animals and second, reconstructing the movements of groups of pedestrians. We conclude by discussing exciting directions for future research.
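The data association problem described above reduces, in its simplest across-time form, to a bipartite assignment between existing tracks and new detections. The sketch below is an illustrative Python example, not code from the cited book: it builds an IoU-based cost matrix and solves the assignment with the Hungarian algorithm via scipy.optimize.linear_sum_assignment; the iou helper and the min_iou gating threshold are assumptions made for the example.

```python
# Minimal across-time data association sketch: match current detections to
# existing tracks by maximising total IoU (equivalently, minimising 1 - IoU),
# then gate matches whose overlap is too small. Illustrative only; not the
# formulation of any specific cited work.
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(tracks, detections, min_iou=0.3):
    """Return matched (track, detection) index pairs plus unmatched indices."""
    if len(tracks) == 0 or len(detections) == 0:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_r]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_c]
    return matches, unmatched_tracks, unmatched_dets


# Usage: two tracks, two detections; only the first detection overlaps a track.
tracks = [(10, 10, 50, 50), (100, 100, 140, 140)]
detections = [(12, 11, 52, 49), (300, 300, 340, 340)]
print(associate(tracks, detections))
```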
--- paper_title: Fundamentals of Object Tracking paper_content: Preface 1. Introduction to object tracking 2. Filtering theory and non-maneuvering object tracking 3. Maneuvering object tracking 4. Single-object tracking in clutter 5. Single- and multiple-object tracking in clutter: object-existence-based approach 6. Multiple-object tracking in clutter: random-set-based approach 7. Bayesian smoothing algorithms for object tracking 8. Object tracking with time-delayed, out-of-sequence measurements 9. Practical object tracking A. Mathematical and statistical preliminaries B. Finite set statistics (FISST) C. Pseudo-functions in object tracking References Index. --- paper_title: Learning Structural Descriptions From Examples paper_content: Massachusetts Institute of Technology. Dept. of Electrical Engineering. Thesis. 1970. Ph.D. --- paper_title: Learning attentional policies for tracking and recognition in video with deep networks paper_content: We propose a novel attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of the human perceptual system, the model consists of two interacting pathways: ventral and dorsal. The ventral pathway models object appearance and classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of retinal images, with decaying resolution toward the periphery of the gaze. The dorsal pathway models the location, orientation, scale and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the dorsal pathway, we encounter an attentional mechanism that learns to control gazes so as to minimize tracking uncertainty. The approach is modular (with each module easily replaceable with more sophisticated algorithms), straightforward to implement, practically efficient, and works well in simple video sequences. --- paper_title: Understanding and Diagnosing Visual Tracking Systems paper_content: Several benchmark datasets for visual tracking research have been created in recent years. Despite their usefulness, whether they are sufficient for understanding and diagnosing the strengths and weaknesses of different trackers remains questionable. To address this issue, we propose a framework by breaking a tracker down into five constituent parts, namely, motion model, feature extractor, observation model, model updater, and ensemble post-processor. We then conduct ablative experiments on each component to study how it affects the overall result. Surprisingly, our findings are discrepant with some common beliefs in the visual tracking research community. We find that the feature extractor plays the most important role in a tracker. On the other hand, although the observation model is the focus of many studies, we find that it often brings no significant improvement. Moreover, the motion model and model updater contain many details that could affect the result. Also, the ensemble post-processor can improve the result substantially when the constituent trackers have high diversity. Based on our findings, we put together some very elementary building blocks to give a basic tracker which is competitive in performance to the state-of-the-art trackers. We believe our framework can provide a solid baseline when conducting controlled experiments for visual tracking research. 
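Both the filtering chapters listed for "Fundamentals of Object Tracking" and the motion-model component of the tracker decomposition above rest on a simple state-prediction step. As a hedged illustration, not taken from either work, the following is a minimal constant-velocity Kalman filter in Python; the time step, the noise levels, and the toy measurement sequence are assumed values chosen only to make the example runnable.

```python
# A minimal constant-velocity Kalman filter in one image coordinate:
# state x = [position, velocity], measurement z = noisy position.
import numpy as np


def make_cv_model(dt=1.0, q=1e-2, r=1.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[r]])                     # measurement noise covariance (assumed)
    return F, H, Q, R


def kalman_step(x, P, z, F, H, Q, R):
    # Predict the next state and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (np.atleast_1d(z) - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new


# Usage: track a point moving at roughly 2 px/frame from noisy positions.
F, H, Q, R = make_cv_model()
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [2.1, 3.9, 6.2, 8.0, 9.8]:
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x)  # estimated [position, velocity]
```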
--- paper_title: The Complementary Brain -- Unifying Brain Dynamics and Modularity paper_content: Abstract How are our brains functionally organized to achieve adaptive behavior in a changing world? This article presents one alternative to the computer analogy that suggests brains are organized into independent modules. Evidence is reviewed that brains are in fact organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact. Examples from perception, learning, cognition and action are described, and theoretical concepts and mechanisms by which complementarity might be accomplished are presented. --- paper_title: Fundamentals of Object Tracking paper_content: Preface 1. Introduction to object tracking 2. Filtering theory and non-maneuvering object tracking 3. Maneuvering object tracking 4. Single-object tracking in clutter 5. Single- and multiple-object tracking in clutter: object-existence-based approach 6. Multiple-object tracking in clutter: random-set-based approach 7. Bayesian smoothing algorithms for object tracking 8. Object tracking with time-delayed, out-of-sequence measurements 9. Practical object tracking A. Mathematical and statistical preliminaries B. Finite set statistics (FISST) C. Pseudo-functions in object tracking References Index. --- paper_title: Recent advances and trends in visual tracking: A review paper_content: The goal of this paper is to review the state-of-the-art progress on visual tracking methods, classify them into different categories, as well as identify future trends. Visual tracking is a fundamental task in many computer vision applications and has been well studied in the last decades. Although numerous approaches have been proposed, robust visual tracking remains a huge challenge. Difficulties in visual tracking can arise due to abrupt object motion, appearance pattern change, non-rigid object structures, occlusion and camera motion. In this paper, we first analyze the state-of-the-art feature descriptors which are used to represent the appearance of tracked objects. Then, we categorize the tracking progresses into three groups, provide detailed descriptions of representative methods in each group, and examine their positive and negative aspects. At last, we outline the future trends for visual tracking research. --- paper_title: Visual Tracking: An Experimental Survey paper_content: There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. 
We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers. --- paper_title: A Review of Visual Tracking paper_content: This report contains a review of visual tracking in monocular video sequences. For the purpose of this review, the majority of the visual trackers in the literature are divided into three tracking categories: discrete feature trackers, contour trackers, and region-based trackers. This categorization was performed based on the features used and the algorithms employed by the various visual trackers. The first class of trackers represents targets as discrete features (e.g. points, sets of points, lines) and performs data association using a distance metric that accommodates the particular feature. Contour trackers provide precise outlines of the target boundaries, meaning that they must not only uncover the position of the target, but its shape as well. Contour trackers often make use of gradient edge information during the tracking process. Region trackers represent the target with area-based descriptors that define its support and attempt to locate the image region in the current frame that best matches an object template. Trackers that are not in agreement with the abovementioned categorization, including those that combine methods from the three defined classes, are also considered in this review. In addition to categorizing and describing the various visual trackers in the literature, this review also provides a commentary on the current state of the field as well as a comparative analysis of the various approaches. The paper concludes with an outline of open problems in visual tracking. --- paper_title: Object tracking: A survey paper_content: The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects. --- paper_title: A survey of appearance models in visual object tracking paper_content: Visual object tracking is a significant computer vision task which can be applied to many domains such as visual surveillance, human computer interaction, and video compression. 
In the literature, researchers have proposed a variety of 2D appearance models. To help readers swiftly learn the recent advances in 2D appearance models for visual object tracking, we contribute this survey, which provides a detailed review of the existing 2D appearance models. In particular, this survey takes a module-based architecture that enables readers to easily grasp the key points of visual object tracking. In this survey, we first decompose the problem of appearance modeling into two different processing stages: visual representation and statistical modeling. Then, different 2D appearance models are categorized and discussed with respect to their composition modules. Finally, we address several issues of interest as well as the remaining challenges for future research on this topic. The contributions of this survey are four-fold. First, we review the literature of visual representations according to their feature-construction mechanisms (i.e., local and global). Second, the existing statistical modeling schemes for tracking-by-detection are reviewed according to their model-construction mechanisms: generative, discriminative, and hybrid generative-discriminative. Third, each type of visual representations or statistical modeling techniques is analyzed and discussed from a theoretical or practical viewpoint. Fourth, the existing benchmark resources (e.g., source code and video datasets) are examined in this survey. --- paper_title: Concrete Problems in AI Safety paper_content: Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI. --- paper_title: On surrogate loss functions and $f$-divergences paper_content: The goal of binary classification is to estimate a discriminant function y from observations of covariate vectors and corresponding binary labels. We consider an elaboration of this problem in which the covariates are not available directly but are transformed by a dimensionality-reducing quantizer Q. We present conditions on loss functions such that empirical risk minimization yields Bayes consistency when both the discriminant function and the quantizer are estimated. These conditions are stated in terms of a general correspondence between loss functions and a class of functionals known as Ali-Silvey or f-divergence functionals. Whereas this correspondence was established by Blackwell [Proc. 2nd Berkeley Symp. Probab. Statist. 1 (1951) 93-102. Univ.
California Press, Berkeley] for the 0-1 loss, we extend the correspondence to the broader class of surrogate loss functions that play a key role in the general theory of Bayes consistency for binary classification. Our result makes it possible to pick out the (strict) subset of surrogate loss functions that yield Bayes consistency for joint estimation of the discriminant function and the quantizer. --- paper_title: Learning a similarity metric discriminatively, with application to face verification paper_content: We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L1 norm in the target space approximates the "semantic" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue/AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves. --- paper_title: FaceNet: A Unified Embedding for Face Recognition and Clustering paper_content: Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. --- paper_title: MatchNet: Unifying feature and metric learning for patch-based matching paper_content: Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed MatchNet, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors.
We make pre-trained MatchNet publicly available. --- paper_title: Learning deep representations for ground-to-aerial geolocalization paper_content: The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations. --- paper_title: DeepFace: Closing the Gap to Human-Level Performance in Face Verification paper_content: In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance. --- paper_title: Large-Scale Video Classification with Convolutional Neural Networks paper_content: Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. 
Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%). --- paper_title: Discriminative Learning of Deep Convolutional Feature Point Descriptors paper_content: Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available. --- paper_title: Neural Networks for Fingerprint Recognition paper_content: After collecting a data base of fingerprint images, we design a neural network algorithm for fingerprint recognition. When presented with a pair of fingerprint images, the algorithm outputs an estimate of the probability that the two images originate from the same finger. In one experiment, the neural network is trained using a few hundred pairs of images and its performance is subsequently tested using several thousand pairs of images originated from a subset of the database corresponding to 20 individuals. The error rate currently achieved is less than 0.5%. Additional results, extensions, and possible applications are also briefly discussed. --- paper_title: Deep Face Recognition. paper_content: The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks. --- paper_title: A logical calculus of the ideas immanent in nervous activity paper_content: Abstract Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. 
It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed. --- paper_title: FlowNet: Learning Optical Flow with Convolutional Networks paper_content: Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps. --- paper_title: Computing the Stereo Matching Cost with a Convolutional Neural Network paper_content: We present a method for extracting depth information from a rectified image pair. We train a convolutional neural network to predict how well two image patches match and use it to compute the stereo matching cost. The cost is refined by cross-based cost aggregation and semiglobal matching, followed by a left-right consistency check to eliminate errors in the occluded regions. Our stereo method achieves an error rate of 2.61% on the KITTI stereo dataset and is currently (August 2014) the top performing method on this dataset. --- paper_title: On surrogate loss functions and $f$-divergences paper_content: The goal of binary classification is to estimate a discriminant function y from observations of covariate vectors and corresponding binary labels. We consider an elaboration of this problem in which the covariates are not available directly but are transformed by a dimensionality-reducing quantizer Q. We present conditions on loss functions such that empirical risk minimization yields Bayes consistency when both the discriminant function and the quantizer are estimated. These conditions are stated in terms of a general correspondence between loss functions and a class of functionals known as Ali-Silvey or f-divergence functionals. Whereas this correspondence was established by Blackwell [Proc. 2nd Berkeley Symp. Probab. Statist. 1 (1951) 93-102. Univ. California Press, Berkeley] for the 0-1 loss, we extend the correspondence to the broader class of surrogate loss functions that play a key role in the general theory of Bayes consistency for binary classification. Our result makes it possible to pick out the (strict) subset of surrogate loss functions that yield Bayes consistency for joint estimation of the discriminant function and the quantizer.
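The surrogate-loss entry above concerns losses that stand in for the 0-1 loss while preserving Bayes consistency. As a generic, hedged illustration of what a surrogate loss is (it does not reproduce that paper's f-divergence correspondence), the snippet below compares the 0-1 loss with the hinge loss and the base-2 logistic loss as functions of the classification margin; the choice of losses and the margin grid are assumptions made for the example.

```python
# The 0-1 loss and two convex surrogates, written as functions of the margin
# m = y * f(x). Hinge and base-2 logistic both upper-bound the 0-1 loss.
import numpy as np


def zero_one(margin):
    return float(margin <= 0)


def hinge(margin):
    return max(0.0, 1.0 - margin)


def logistic2(margin):
    # Logistic loss in bits (log base 2), so logistic2(0) == 1 == zero_one(0).
    return float(np.log1p(np.exp(-margin)) / np.log(2.0))


for m in np.linspace(-2.0, 2.0, 9):
    print(f"m={m:+.2f}  0-1={zero_one(m):.2f}  "
          f"hinge={hinge(m):.2f}  logistic2={logistic2(m):.2f}")
```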
--- paper_title: Siamese Instance Search for Tracking paper_content: In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot. --- paper_title: Fully-Convolutional Siamese Networks for Object Tracking paper_content: The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks. --- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set.
To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: A logical calculus of the ideas immanent in nervous activity paper_content: Abstract Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed. --- paper_title: End-to-End Representation Learning for Correlation Filter Based Tracking paper_content: The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame. Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates. --- paper_title: Once for All: A Two-Flow Convolutional Neural Network for Visual Tracking paper_content: The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independent approach with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (one is an object image patch, the other is a larger searching image patch), then outputs a response map which predicts how likely and where the object would appear in the search patch. Unlike the object-specific approaches, the YCNN is actually trained to measure the similarity between the two image patches. Thus, this model will not be limited to any specific object. Furthermore, the network is end-to-end trained to extract both shallow and deep dedicated convolutional features for visual tracking. And once properly trained, the YCNN can be used to track all kinds of objects without further training and updating. As a result, our algorithm is able to run at a very high speed of 45 frames-per-second. The effectiveness of the proposed algorithm can also be proved by the experiments on two popular data sets: OTB-100 and VOT-2014. 
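The Siamese trackers above (e.g., SiamFC and YCNN) share one core operation: embed an exemplar patch and a larger search patch with the same network, then cross-correlate the two feature maps so that the peak of the resulting response map indicates the most likely target location. The sketch below illustrates only that operation in plain NumPy; the embed placeholder stands in for a learned convolutional encoder and is an assumption, not any cited architecture.

```python
# Exemplar-vs-search cross-correlation: the response-map operation behind
# SiamFC-style trackers. embed() is a placeholder for a learned, shared
# convolutional encoder (an assumption for this sketch).
import numpy as np


def embed(patch):
    # Identity "features" so the example runs without a trained network.
    return patch.astype(float)


def response_map(exemplar_feat, search_feat):
    """Slide the exemplar over the search features, taking dot products."""
    eh, ew = exemplar_feat.shape
    sh, sw = search_feat.shape
    out = np.zeros((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(exemplar_feat * search_feat[i:i + eh, j:j + ew])
    return out


# Usage: the argmax of the response map gives the target's most likely
# displacement inside the (downsampled) search region.
rng = np.random.default_rng(0)
exemplar = rng.random((8, 8))
search = rng.random((32, 32))
resp = response_map(embed(exemplar), embed(search))
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
print(dy, dx)
```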
--- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Very Deep Convolutional Networks for Large-Scale Image Recognition paper_content: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. --- paper_title: Object Tracking Benchmark paper_content: Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often not sufficient or is sometimes biased for certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, and this makes comparisons among the reported quantitative results difficult. In addition, the initial conditions or parameters of the evaluated tracking algorithms are not the same, and thus, the quantitative results reported in literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce the sequence attributes for the performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. 
Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field. --- paper_title: Visual Tracking: An Experimental Survey paper_content: There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers. --- paper_title: Imagenet classification with deep convolutional neural networks paper_content: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. --- paper_title: ImageNet Large Scale Visual Recognition Challenge paper_content: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result.
We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. --- paper_title: Convolutional Features for Correlation Filter Based Visual Tracking paper_content: Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need of task specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets. --- paper_title: Mean shift: A robust approach toward feature space analysis paper_content: A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks - discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance. --- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications.
Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: Siamese Instance Search for Tracking paper_content: In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot. --- paper_title: Convolutional Features for Correlation Filter Based Visual Tracking paper_content: Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need of task specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets. --- paper_title: End-to-End Representation Learning for Correlation Filter Based Tracking paper_content: The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame.
Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates. --- paper_title: Once for All: A Two-Flow Convolutional Neural Network for Visual Tracking paper_content: The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independent approach with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (one is an object image patch, the other is a larger searching image patch), then outputs a response map which predicts how likely and where the object would appear in the search patch. Unlike the object-specific approaches, the YCNN is actually trained to measure the similarity between the two image patches. Thus, this model will not be limited to any specific object. Furthermore, the network is end-to-end trained to extract both shallow and deep dedicated convolutional features for visual tracking. And once properly trained, the YCNN can be used to track all kinds of objects without further training and updating. As a result, our algorithm is able to run at a very high speed of 45 frames-per-second. The effectiveness of the proposed algorithm can also be proved by the experiments on two popular data sets: OTB-100 and VOT-2014. --- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: Once for All: A Two-Flow Convolutional Neural Network for Visual Tracking paper_content: The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. 
Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independent approach with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (one is an object image patch, the other is a larger searching image patch), then outputs a response map which predicts how likely and where the object would appear in the search patch. Unlike the object-specific approaches, the YCNN is actually trained to measure the similarity between the two image patches. Thus, this model will not be limited to any specific object. Furthermore, the network is end-to-end trained to extract both shallow and deep dedicated convolutional features for visual tracking. And once properly trained, the YCNN can be used to track all kinds of objects without further training and updating. As a result, our algorithm is able to run at a very high speed of 45 frames-per-second. The effectiveness of the proposed algorithm can also be proved by the experiments on two popular data sets: OTB-100 and VOT-2014. --- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: Siamese Instance Search for Tracking paper_content: In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets.
It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-theart performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot. --- paper_title: Fully-Convolutional Siamese Networks for Object Tracking paper_content: The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks. --- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: End-to-End Representation Learning for Correlation Filter Based Tracking paper_content: The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame. Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. 
Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates. --- paper_title: Once for All: A Two-Flow Convolutional Neural Network for Visual Tracking paper_content: The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object. As a result, the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independent approach with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (one is an object image patch, the other is a larger searching image patch), then outputs a response map which predicts how likely and where the object would appear in the search patch. Unlike the object-specific approaches, the YCNN is actually trained to measure the similarity between the two image patches. Thus, this model will not be limited to any specific object. Furthermore, the network is end-to-end trained to extract both shallow and deep dedicated convolutional features for visual tracking. And once properly trained, the YCNN can be used to track all kinds of objects without further training and updating. As a result, our algorithm is able to run at a very high speed of 45 frames-per-second. The effectiveness of the proposed algorithm can also be proved by the experiments on two popular data sets: OTB-100 and VOT-2014. --- paper_title: Large-scale image classification: Fast feature extraction and SVM training paper_content: Most research efforts on image classification so far have been focused on medium-scale datasets, which are often defined as datasets that can fit into the memory of a desktop (typically 4G∼48G). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification. This is mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterparts. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper is to show how we address this challenge using ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions being hundreds of thousands) on 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast–typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on the ImageNet 1000-class classification, i.e., 52.9% in classification accuracy and 71.8% in top 5 hit rate. --- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. 
Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: Decentralized detection and classification using kernel methods paper_content: We consider the problem of decentralized detection under constraints on the number of bits that can be transmitted by each sensor. In contrast to most previous work, in which the joint distribution of sensor observations is assumed to be known, we address the problem when only a set of empirical samples is available. We propose a novel algorithm using the framework of empirical risk minimization and marginalized kernels, and analyze its computational and statistical properties both theoretically and empirically. We provide an efficient implementation of the algorithm, and demonstrate its performance on both simulated and real data sets. --- paper_title: Person re-identification by Local Maximal Occurrence representation and metric learning paper_content: Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively. 
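The "End-to-End Representation Learning for Correlation Filter Based Tracking" entry above highlights that the Correlation Filter admits a closed-form ridge-regression solution in the Fourier domain, which is what makes it cheap enough to re-train once per frame. Purely as an illustration of that closed form, and not the paper's actual multi-channel, deep-feature implementation, here is a minimal single-channel NumPy sketch; the Gaussian label shape and the regularization weight are assumed values.

import numpy as np

def gaussian_label(shape, sigma=2.0):
    # Desired correlation response: a Gaussian peak centred on the template.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def train_correlation_filter(template, lam=1e-2):
    # Closed-form ridge regression solved element-wise in the Fourier domain:
    # W = conj(F{x}) * F{y} / (conj(F{x}) * F{x} + lambda)
    x = (template - template.mean()) / (template.std() + 1e-8)
    X = np.fft.fft2(x)
    Y = np.fft.fft2(gaussian_label(x.shape))
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def detect(filter_fft, search_patch):
    # Correlate the learned filter with a same-sized search patch and return
    # the location of the response peak together with the full response map.
    z = (search_patch - search_patch.mean()) / (search_patch.std() + 1e-8)
    response = np.real(np.fft.ifft2(filter_fft * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(response), response.shape), response

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.standard_normal((64, 64))
    W = train_correlation_filter(template)
    peak, _ = detect(W, np.roll(template, (5, 3), axis=(0, 1)))
    print("response peak at", peak)

Running it on a circularly shifted copy of the template moves the response peak by roughly the same shift, which is the property the correlation-filter trackers above exploit for localization.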
--- paper_title: Learning to Track at 100 FPS with Deep Regression Networks paper_content: Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps. --- paper_title: End-to-End Representation Learning for Correlation Filter Based Tracking paper_content: The Correlation Filter is an algorithm that trains a linear template to discriminate between images and their translations. It is well suited to object tracking because its formulation in the Fourier domain provides a fast solution, enabling the detector to be re-trained once per frame. Previous works that use the Correlation Filter, however, have adopted features that were either manually designed or trained for a different task. This work is the first to overcome this limitation by interpreting the Correlation Filter learner, which has a closed-form solution, as a differentiable layer in a deep neural network. This enables learning deep features that are tightly coupled to the Correlation Filter. Experiments illustrate that our method has the important practical benefit of allowing lightweight architectures to achieve state-of-the-art performance at high framerates. --- paper_title: Siamese Instance Search for Tracking paper_content: In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-theart tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-theart performance. 
Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot. --- paper_title: Large-scale image classification: Fast feature extraction and SVM training paper_content: Most research efforts on image classification so far have been focused on medium-scale datasets, which are often defined as datasets that can fit into the memory of a desktop (typically 4G∼48G). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification. This is mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterparts. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper is to show how we address this challenge using ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions being hundreds of thousands) on 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast–typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on the ImageNet 1000-class classification, i.e., 52.9% in classification accuracy and 71.8% in top 5 hit rate. ---
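The SINT, YCNN, and fully-convolutional Siamese entries above all reduce tracking to the same matching step: embed an exemplar patch and a larger search patch with the same network and cross-correlate the embeddings to obtain a response map whose peak locates the target. The sketch below is a deliberately simplified, hypothetical NumPy version of that matching step only; the identity-style "embedding" stands in for the learned convolutional features and is an assumption made for brevity.

import numpy as np

def embed(patch):
    # Stand-in for the learned convolutional embedding: a zero-mean,
    # unit-norm copy of the raw patch (the papers learn this mapping end-to-end).
    z = patch - patch.mean()
    return z / (np.linalg.norm(z) + 1e-8)

def response_map(exemplar, search):
    # Dense cross-correlation of the exemplar embedding over the search
    # embedding, as in a fully-convolutional Siamese tracker.
    ez, es = embed(exemplar), embed(search)
    kh, kw = ez.shape
    H, W = es.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(ez * es[i:i + kh, j:j + kw])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    search = rng.standard_normal((96, 96))
    exemplar = search[40:72, 24:56].copy()   # "target" cut out of the frame
    resp = response_map(exemplar, search)
    print("peak at", np.unravel_index(np.argmax(resp), resp.shape))  # ~ (40, 24)

In the actual trackers the embedding is a shared convolutional network trained offline, and the correlation is evaluated as a single convolution on feature maps, which is what allows frame rates beyond real time.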
Title: Siamese Learning Visual Tracking: A Survey
Section 1: INTRODUCTION
Description 1: Provide an overview of the advancements in machine learning and its impact on visual tracking, briefly introducing the core concepts and the significance of Siamese networks.
Section 2: Tracking Definition
Description 2: Define the concept of visual tracking, including the types of inputs and latent variables commonly used, and provide examples of applications.
Section 3: Tracker Design
Description 3: Discuss the paradigm shift from handcrafted to learning-based tracker designs, detailing the components and pathways of modern trackers.
Section 4: Learning Tracking
Description 4: Explain the role of machine learning in tracking, including the historical background, current approaches, and challenges in learning functional parts of tracking systems.
Section 5: RELATED WORK
Description 5: Summarize significant surveys and research works in the field of tracking, highlighting the main challenges and insights they have provided.
Section 6: GRAND CHALLENGES
Description 6: Identify and explain the major challenges in visual tracking, including complexity, uncertainty, initialization, computability, and comparison of trackers.
Section 7: SIAMESE TRACKING
Description 7: Introduce the concept of Siamese networks in tracking, discussing their advantages, applications, and the state-of-the-art implementations in this domain.
Section 8: Proposed Methods
Description 8: Summarize key proposed approaches in Siamese tracking networks, detailing their architecture, training methodology, and performance.
Section 9: Discussion
Description 9: Compare the proposed methods, their training, inference techniques, and discuss the implications of their differences on tracking performance.
Section 10: Tracker Results
Description 10: Present and analyze the performance metrics and results of various Siamese tracking methods, comparing their efficiency and accuracy.
Section 11: CONCLUSION
Description 11: Conclude the survey with reflections on the current state of Siamese tracking research, potential improvements, and future research directions.
Section 12: ACKNOWLEDGMENTS
Description 12: Acknowledge the contributions of reviewers and funding sources that supported the research.
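Complementing the matching-based sketches, the "Learning to Track at 100 FPS with Deep Regression Networks" entries above describe a purely feed-forward alternative: a network takes crops from the previous and current frames and directly regresses the target's bounding box, with no online updating. The PyTorch snippet below is a toy, hypothetical stand-in for that idea; the layer sizes and the weight-shared branches are assumptions, not the published architecture.

import torch
import torch.nn as nn

class TinyRegressionTracker(nn.Module):
    # Two weight-shared convolutional branches embed the previous-frame crop
    # and the current search crop; fully connected layers regress the target
    # box as (x1, y1, x2, y2).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 32 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, 4),
        )

    def forward(self, prev_crop, search_crop):
        f = torch.cat([self.features(prev_crop), self.features(search_crop)], dim=1)
        return self.regressor(f)

if __name__ == "__main__":
    net = TinyRegressionTracker()
    prev = torch.randn(1, 3, 128, 128)
    curr = torch.randn(1, 3, 128, 128)
    print(net(prev, curr).shape)   # torch.Size([1, 4])

Because inference is a single forward pass with no per-frame optimization, this style of tracker is what makes the high frame rates quoted above plausible.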
A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks
8
--- paper_title: A Survey of Visual Sensor Networks paper_content: Visual sensor networks have emerged as an important class of sensor-based distributed intelligent systems, with unique performance, complexity, and quality of service challenges. Consisting of a large number of low-power camera nodes, visual sensor networks support a great number of novel vision-based applications. The camera nodes provide information from a monitored site, performing distributed and collaborative processing of their collected data. Using multiple cameras in the network provides different views of the scene, which enhances the reliability of the captured events. However, the large amount of image data produced by the cameras combined with the network's resource constraints require exploring new means for data processing, communication, and sensor management. Meeting these challenges of visual sensor networks requires interdisciplinary approaches, utilizing vision processing, communications and networking, and embedded processing. In this paper, we provide an overview of the current state-of-the-art in the field of visual sensor networks, by exploring several relevant research directions. Our goal is to provide a better understanding of current research problems in the different research fields of visual sensor networks, and to show how these different research fields should interact to solve the many challenges of visual sensor networks. --- paper_title: Efficient visual sensor coverage algorithm in Wireless Visual Sensor Networks paper_content: Traditional Wireless Sensor Networks (WSN) transmits the scalar data (e.g., temperature, irradiation) to the sink node. A new Wireless Visual Sensor Network (WVSN) that can transmit images is a more promising solution than the WSN on sensing, detecting and monitoring the environment to enhance awareness of the cyber, physical, and social contexts of our daily activities. Sensor coverage in WVSN is more challenging than in WSN due to besides the sensing range coverage, the Field of View (FoV) should also be considered in deploying the sensors. In this paper, we study the sensor coverage problem in WVSN. We first propose the mathematical model to formulate the sensor coverage problem in WVSN. We devise a heuristic algorithm (FoVIC) algorithm to tackle this sensor coverage problem in WVSN. The basic idea of FoVIC algorithm is to deploy a sensor one at a time that can cover the largest number of uncovered nodes and then the algorithm checks for any sensor deployed in the earlier stage that could be removed. From the computational experiments, they show that larger span angle could help the sensors to cover more nodes in bigger grid size and fewer sensors will be need in smaller grid size when in fixed sensing range and span angle. --- paper_title: Meerkats: A power-aware, self-managing wireless camera network for wide area monitoring paper_content: We introduce Meerkats, a wireless network of battery-operated camera nodes that can be used for monitoring and surveillance of wide areas. One distinguishing feature of Meerkats (when compared, for example, with systems like Cyclops [10]) is that our nodes are equipped with sufficient processing and storage capabilities to be able to run relatively sophisticated vision algorithms (e.g., motion estimation) locally and/or collaboratively.
In previous work [9, 8, 7] we analyzed the energy consumption characteristics of the Meerkats nodes under different duty cycles, involving different power states of the system’s components. In this paper we present an analysis of the performance of the surveillance system as a function of the image acquisition rate and of the synchronization between cameras nodes. Our ultimate goal is to optimally balance the trade-off between application-specific performance requirements (e.g., event miss rate) and network lifetime (as a function of the energy consumption characteristics of each node). --- paper_title: Battery discharge characteristics of wireless sensors in building applications paper_content: Sensor nodes in wireless networks often use batteries as their source of energy, but replacing or recharging exhausted batteries in a deployed network can be difficult and costly. Therefore, prolonging battery life becomes a principal objective in the design of wireless sensor networks (WSNs). There is little published data that quantitatively analyze a sensor node's lifetime under different operating conditions. This paper presents several experiments to quantify the impact of key wireless sensor network design and environmental parameters on battery performance. Our testbed consists of MicaZ motes, commercial alkaline batteries, and a suite of techniques for measuring battery performance. We evaluate known parameters, such as communication distance, working channel and operating power that play key roles in battery performance. Through extensive real battery discharge measurements, we expect our results to serve as a quantitative basis for future research in designing and implementing battery-efficient sensing applications and protocols. --- paper_title: Vision mesh: A novel video sensor networks platform for water conservancy engineering paper_content: Video is an important medium for the observation of a variety of phenomena in the physical world such as for water conservancy engineering. Video sensor networks (VSN) consists of a large number of sensor nodes with cheap CMOS cameras, therefore, it has the ability of large scale visual monitoring. However, a kind of powerful and scalable platform is urgent for simulation and development of VSN. This paper proposed a novel and scalable video sensor networks platform named as Vision Mesh for water conservancy engineering. Vision Mesh is composed of a mass of image or video sensor nodes - Vision Motes, with which multi-view image or video information of FOV (Field of View) can be acquired simply. Vision Mote is built with Atmel ARM9 CPU and operated on Linux operating system, with TI CC2430 Zigbee module taken as wireless transceiver. Furthermore, OpenCV machine vision lib is migrated to Vision Mesh platform so as to improve video processing ability, therefore, Vision Mesh has the ability of image and video processing and is of strong scalability to extend performance. In this paper, we provide an overview of Vision Mesh architecture and provide an insightful platform for VSN. --- paper_title: Cyclops: in situ image sensing and interpretation in wireless sensor networks paper_content: Despite their increasing sophistication, wireless sensor networks still do not exploit the most powerful of the human senses: vision. Indeed, vision provides humans with unmatched capabilities to distinguish objects and identify their importance. 
Our work seeks to provide sensor networks with similar capabilities by exploiting emerging, cheap, low-power and small form factor CMOS imaging technology. In fact, we can go beyond the stereo capabilities of human vision, and exploit the large scale of sensor networks to provide multiple, widely different perspectives of the physical phenomena. To this end, we have developed a small camera device called Cyclops that bridges the gap between the computationally constrained wireless sensor nodes such as Motes, and CMOS imagers which, while low power and inexpensive, are nevertheless designed to mate with resource-rich hosts. Cyclops enables development of new class of vision applications that span across wireless sensor network. We describe our hardware and software architecture, its temporal and power characteristics and present some representative applications. --- paper_title: Sensor networks for emergency response: challenges and opportunities paper_content: Sensor networks, a new class of devices has the potential to revolutionize the capture, processing, and communication of critical data for use by first responders. CodeBlue integrates sensor nodes and other wireless devices into a disaster response setting and provides facilities for ad hoc network formation, resource naming and discovery, security, and in-network aggregation of sensor-produced data. We designed CodeBlue for rapidly changing, critical care environments. To test it, we developed two wireless vital sign monitors and a PDA-based triage application for first responders. Additionally, we developed MoteTrack, a robust radio frequency (RF)-based localization system, which lets rescuers determine their location within a building and track patients. Although much of our work on CodeBlue is preliminary, our initial experience with medical care sensor networks raised many exciting opportunities and challenges. --- paper_title: Battery Lifetime Prediction Model for a WSN Platform paper_content: Wireless Sensor Network devices have by nature limited available energy to perform a wide range of demanding tasks. In order to maximize their operation lifetime, optimal resource management is an important challenge and its success requires methodical modeling of the factors contributing to the overall power consumption. Moreover, the power consumed is not always useful on its own, but it should rather express the expected lifetime concerning the device’s normal operation. To achieve such awareness, this paper contributes with a measuring methodology which involves combining power consumption of platform elementary functionalities with battery discharge characteristics, so that a practical, yet accurate battery lifetime prediction model can be formed. --- paper_title: MeshEye: a hybrid-resolution smart camera mote for applications in distributed intelligent surveillance paper_content: Surveillance is one of the promising applications to which smart camera motes forming a vision-enabled network can add increasing levels of intelligence. We see a high degree of in-node processing in combination with distributed reasoning algorithms as the key enablers for such intelligent surveillance systems. To put these systems into practice still requires a considerable amount of research ranging from mote architectures, pixel-processing algorithms, up to distributed reasoning engines. This paper introduces MeshEye, an energy-efficient smart camera mote architecture that has been designed with intelligent surveillance as the target application in mind. Special attention is given to MeshEye's unique vision system: a low-resolution stereo vision system continuously determines position, range, and size of moving objects entering its field of view. This information triggers a color camera module to acquire a high-resolution image sub-array containing the object, which can be efficiently processed in subsequent stages. It offers reduced complexity, response time, and power consumption over conventional --- paper_title: A survey of visual sensor network platforms paper_content: Recent developments in low-cost CMOS cameras have created the opportunity of bringing imaging capabilities to sensor networks. Various visual sensor platforms have been developed with the aim of integrating visual data to wireless sensor applications. The objective of this article is to survey current visual sensor platforms according to in-network processing and compression/coding techniques together with their targeted applications. Characteristics of these platforms such as level of integration, data processing hardware, energy dissipation, radios and operating systems are also explored and discussed. --- paper_title: Battery discharge characteristics of wireless sensor nodes: an experimental analysis paper_content: Battery life extension is the principal driver for energy-efficient wireless sensor network (WSN) design. However, there is growing awareness that in order to truly maximize the operating life of battery-powered systems such as sensor nodes, it is important to discharge the battery in a manner that maximizes the amount of charge extracted from it. In spite of this, there is little published data that quantitatively analyzes the effectiveness with which modern wireless sensor nodes discharge their batteries, under different operating conditions. In this paper, we report on systematic experiments that we conducted to quantify the impact of key wireless sensor network design and environmental parameters on battery performance. Our testbed consists of MICA2DOT Motes, a commercial lithium-coin battery, and a suite of techniques for measuring battery performance. We evaluate the extent to which known electrochemical phenomena, such as rate-capacity characteristics, charge recovery and thermal effects, can play a role in governing the selection of key WSN design parameters such as power levels, packet sizes, etc.
We demonstrate that battery characteristics significantly alter and complicate otherwise well-understood trade-offs in WSN design. In particular, we analyze the non-trivial implications of battery characteristics on WSN power control strategies, and find that a battery-aware approach to power level selection leads to a 52% increase in battery efficiency. We expect our results to serve as a quantitative basis for future research in designing battery-efficient sensing applications and protocols. --- paper_title: Maximizing Angle Coverage in Visual Sensor Networks paper_content: In this paper, we study the angle coverage problem in visual sensor networks where all sensors are equipped with cameras. An object of interest moves around the network and the sensors near the object are responsible for capturing images of it. The angle coverage problem aims to identify a set of sensors that preserve all the angles of view of the object while fulfilling the image resolution requirement. The user is required to specify the minimum acceptable image resolution in the request. Only the images that fulfill the resolution requirement will be considered. In order to save transmission energy, the number of images to be sent should be minimized. We develop a distributed algorithm to identify the minimum set of sensors such that all these images cover the maximum angle of view of the target. Our simulation results show that our protocol can achieve significant reduction in transmission load while preserving the widest angle of view. --- paper_title: Deployment Optimization Strategy for a Two-Tier Wireless Visual Sensor Network paper_content: Wireless visual sensor network (VSN) can be said to be a special class of wireless sensor network (WSN) with smart-cameras. Due to its visual sensing capability, it has become an effective tool for applications such as large area surveillance, environmental monitoring and objects tracking. Different from a conventional WSN, VSN typically includes relatively expensive camera sensors, enhanced flash memory and a powerful CPU. While energy consumption is dominated primarily by data transmission and reception, VSN consumes extra power on image sensing, processing and storing operations. The well-known energy-hole problem of WSNs has a drastic impact on the lifetime of VSN, because of the additional energy consumption of a VSN. Most prior research on VSN energy issues are primarily focused on a single device or a given specific scenario. In this paper, we propose a novel optimal two-tier deployment strategy for a large scale VSN. Our two-tier VSN architecture includes tier-1 sensing network with visual sensor nodes (VNs) and tier-2 network having only relay nodes (RNs). While sensing network mainly performs image data collection, relay network only forwards image data packets to the central sink node. We use uniform random distribution of VNs to minimize the cost of VSN and RNs are deployed following two dimensional Gaussian distribution so as to avoid energy-hole problem. Algorithms are also introduced that optimizes deployment parameters and are shown to enhance the lifetime of the VSN in a cost effective manner. --- paper_title: Eyes in the Sky: Decentralized Control for the Deployment of Robotic Camera Networks paper_content: This paper presents a decentralized control strategy for positioning and orienting multiple robotic cameras to collectively monitor an environment. The cameras may have various degrees of mobility from six degrees of freedom, to one degree of freedom.
The control strategy is proven to locally minimize a novel metric representing information loss over the environment. It can accommodate groups of cameras with heterogeneous degrees of mobility (e.g., some that only translate and some that only rotate), and is adaptive to robotic cameras being added or deleted from the group, and to changing environmental conditions. The robotic cameras share information for their controllers over a wireless network using a specially designed multihop networking algorithm. The control strategy is demonstrated in repeated experiments with three flying quadrotor robots indoors, and with five flying quadrotor robots outdoors. Simulation results for more complex scenarios are also presented. --- paper_title: Coverage estimation for crowded targets in visual sensor networks paper_content: Coverage estimation is one of the fundamental problems in sensor networks. Coverage estimation in visual sensor networks (VSNs) is more challenging than in conventional 1-D (omnidirectional) scalar sensor networks (SSNs) because of the directional sensing nature of cameras and the existence of visual occlusion in crowded environments. This article represents a first attempt toward a closed-form solution for the visual coverage estimation problem in the presence of occlusions. We investigate a new target detection model, referred to as the certainty-based target detection (as compared to the traditional uncertainty-based target detection) to facilitate the formulation of the visual coverage problem. We then derive the closed-form solution for the estimation of the visual coverage probability based on this new target detection model that takes visual occlusions into account. According to the coverage estimation model, we further propose an estimate of the minimum sensor density that suffices to ensure a visual K-coverage in a crowded sensing field. Simulation is conducted which shows extreme consistency with results from theoretical formulation, especially when the boundary effect is considered. Thus, the closed-form solution for visual coverage estimation is effective when applied to real scenarios, such as efficient sensor deployment and optimal sleep scheduling. --- paper_title: Efficient visual sensor coverage algorithm in Wireless Visual Sensor Networks paper_content: Traditional Wireless Sensor Networks (WSN) transmits the scalar data (e.g., temperature, irradiation) to the sink node. A new Wireless Visual Sensor Network (WVSN) that can transmit images is a more promising solution than the WSN on sensing, detecting and monitoring the environment to enhance awareness of the cyber, physical, and social contexts of our daily activities. Sensor coverage in WVSN is more challenging than in WSN due to besides the sensing range coverage, the Field of View (FoV) should also be considered in deploying the sensors. In this paper, we study the sensor coverage problem in WVSN. We first propose the mathematical model to formulate the sensor coverage problem in WVSN. We devise a heuristic algorithm (FoVIC) algorithm to tackle this sensor coverage problem in WVSN. The basic idea of FoVIC algorithm is to deploy a sensor one at a time that can cover the largest number of uncovered nodes and then the algorithm checks for any sensor deployed in the earlier stage that could be removed. 
From the computational experiments, they show that larger span angle could help the sensors to cover more nodes in bigger grid size and fewer sensors will be need in smaller grid size when in fixed sensing range and span angle. --- paper_title: DISTRIBUTED COVERAGE GAMES FOR ENERGY-AWARE MOBILE SENSOR NETWORKS paper_content: Inspired by current challenges in data-intensive and energy-limited sensor networks, we formulate a coverage optimization problem for mobile sensors as a (constrained) repeated multiplayer game. Each sensor tries to optimize its own coverage while minimizing the processing/energy cost. The sensors are subject to the informational restriction that the environmental distribution function is unknown a priori. We present two distributed learning algorithms where each sensor only remembers its own utility values and actions played during the last plays. These algorithms are proven to be convergent in probability to the set of (constrained) Nash equilibria and global optima of a certain coverage performance metric, respectively. Numerical examples are provided to verify the performance of our proposed algorithms. --- paper_title: Target detection and counting using a progressive certainty map in distributed visual sensor networks paper_content: Visual sensor networks (VSNs) merge computer vision, image processing and wireless sensor network disciplines to solve problems in multi-camera applications by providing valuable information through distributed sensing and collaborative in-network processing. Collaboration in sensor networks is necessary not only to compensate for the processing, sensing, energy, and bandwidth limitations of each sensor node but also to improve the accuracy and robustness of the sensor network. Collaborative processing in VSNs is more challenging than in conventional scalar sensor networks (SSNs) because of two unique features of cameras, including the extremely higher data rate compared to that of scalar sensors and the directional sensing characteristics with limited field of view. In this paper, we study a challenging computer vision problem, target detection and counting in VSN environment. Traditionally, the problem is solved by counting the number of intersections of the backprojected 2D cones of each target. However, the existence of visual occlusion among targets would generate many false alarms. In this work, instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas in the cone and generate the so-called certainty map of non-existence of targets. This way, after fusing inputs from a set of sensor nodes, the unresolved regions on the certainty map would be the location of target. This paper focuses on the design of a light-weight, energy-efficient, and robust solution where not only each camera node transmits a very limited amount of data but that a limited number of camera nodes is used. We propose a dynamic itinerary for certainty map integration where the entire map is progressively clarified from sensor to sensor. When the confidence of the certainty map is satisfied, a geometric counting algorithm is applied to find the estimated number of targets. In the conducted experiments using real data, the results of the proposed distributed and progressive method shows effectiveness in detection accuracy and energy and bandwidth efficiency. 
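The "Efficient visual sensor coverage algorithm" entries above describe the FoVIC heuristic only in words: place one camera at a time, always choosing the candidate whose field of view covers the most still-uncovered nodes, then check whether any earlier camera has become redundant. The following self-contained Python sketch captures that greedy-plus-pruning idea over abstract coverage sets; the candidate cameras and their covered-node sets are invented for illustration and are not data from the paper.

def greedy_fov_cover(candidates, targets):
    """candidates: dict camera_id -> set of target nodes inside that camera's field of view.
    targets: set of target nodes that must be covered.
    Returns a list of selected camera ids (greedy selection followed by redundancy pruning)."""
    uncovered = set(targets)
    selected = []
    # Greedy phase: always take the camera that covers the most uncovered nodes.
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:          # remaining nodes are not coverable by any candidate
            break
        selected.append(best)
        uncovered -= gain
    # Pruning phase: drop a camera whose nodes are already covered by the others.
    for cam in list(selected):
        others = set().union(*[candidates[c] for c in selected if c != cam])
        if (candidates[cam] & set(targets)) <= others:
            selected.remove(cam)
    return selected

if __name__ == "__main__":
    # Hypothetical 6 target nodes and 4 candidate camera placements.
    targets = {1, 2, 3, 4, 5, 6}
    candidates = {
        "camA": {1, 2, 3},
        "camB": {3, 4},
        "camC": {4, 5, 6},
        "camD": {1, 2, 3, 4},
    }
    print(greedy_fov_cover(candidates, targets))   # ['camD', 'camC']

A candidate's covered set would in practice be derived from its position, orientation, angular span, and sensing range, i.e., exactly the FoV parameters the abstract discusses.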
--- paper_title: Distributed target localization using a progressive certainty map in visual sensor networks paper_content: Collaboration in visual sensor networks (VSNs) is essential not only to compensate for the processing, sensing, energy, and bandwidth limitations of each sensor node but also to improve the accuracy and robustness of the network. In this paper, we study target localization in VSNs, a challenging computer vision problem because of two unique features of cameras, including the extremely higher data rate and the directional sensing characteristics with limited field of view. Traditionally, the problem is solved by localizing the targets at the intersections of the back-projected 2D cones of each target. However, the existence of visual occlusion among targets would generate many false alarms. In this work, instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas in the cone and generate the so-called certainty map of non-existence of targets. As a result, after fusing inputs from a set of sensor nodes, the unresolved regions on the certainty map would be the location of targets. This paper focuses on the design of a light-weight, energy-efficient, and robust solution where not only each camera node transmits a very limited amount of data but that a limited number of camera nodes is involved. We propose a dynamic itinerary for certainty map integration where the entire map is progressively clarified from sensor to sensor. When the confidence of the certainty map is satisfied, targets are localized at the remaining unresolved regions in the certainty map. Based on results obtained from both simulation and real experiments, the proposed progressive method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency. --- paper_title: Determining vision graphs for distributed camera networks using feature digests paper_content: We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length "feature digest" that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network. --- paper_title: A Spatial Correlation-Based Image Compression Framework for Wireless Multimedia Sensor Networks paper_content: Data redundancy caused by correlation has motivated the application of collaborative multimedia in-network processing for data filtering and compression in wireless multimedia sensor networks (WMSNs). This paper proposes an information theoretic image compression framework with an objective to maximize the overall compression of the visual information gathered in a WMSN. 
The novelty of this framework relies on its independence of specific image types and coding algorithms, thereby providing a generic mechanism for image compression under different coding solutions. The proposed framework consists of two components. First, an entropy-based divergence measure (EDM) scheme is proposed to predict the compression efficiency of performing joint coding on the images collected by spatially correlated cameras. The EDM only takes camera settings as inputs without requiring statistics of real images. Utilizing the predicted results from EDM, a distributed multi-cluster coding protocol (DMCP) is then proposed to construct a compression-oriented coding hierarchy. The DMCP aims to partition the entire network into a set of coding clusters such that the global coding gain is maximized. Moreover, in order to enhance decoding reliability at data sink, the DMCP also guarantees that each sensor camera is covered by at least two different coding clusters. Experiments on H.264 standards show that the proposed EDM can effectively predict the joint coding efficiency from multiple sources. Further simulations demonstrate that the proposed compression framework can reduce 10%-23% total coding rate compared with the individual coding scheme, i.e., each camera sensor compresses its own image independently. --- paper_title: Distributed metric calibration of ad hoc camera networks paper_content: We discuss how to automatically obtain the metric calibration of an ad hoc network of cameras with no centralized processor. We model the set of uncalibrated cameras as nodes in a communication network, and propose a distributed algorithm in which each camera performs a local, robust bundle adjustment over the camera parameters and scene points of its neighbors in an overlay “vision graph.” We analyze the performance of the algorithm on both simulated and real data, and show that the distributed algorithm results in a fairer allocation of messages per node while achieving comparable calibration accuracy to centralized bundle adjustment. --- paper_title: A Camera Nodes Correlation Model Based on 3D Sensing in Wireless Multimedia Sensor Networks paper_content: In wireless multimedia sensor networks, multiple camera sensor nodes generally are used for gaining enhanced observations of a certain area of interest. This brings on the visual information retrieved from adjacent camera nodes usually exhibits high levels of correlation. In this paper, first, based on the analysis of 3D directional sensing model of camera sensor nodes, a correlation model is proposed by measuring the intersection area of multiple camera nodes’ field of views. In this model, there is a asymmetrical relationship of the correlation between two camera nodes. Then, to farthest eliminate the data redundancy and use the node collaboration characteristic of wireless (multimedia) sensor networks, two kinds of cluster structure, camera sensor nodes cluster, and common sensor nodes cluster are established to cooperate on image processing and transmission tasks. A set of experiments are performed to investigate the proposed correlation coefficient. Further simulations based on a sample of monitoring a crossing by three correlative camera nodes show that the proposed network topology and image fusion and transmission scheme released the pressure of camera node greatly and reduce the network energy consumption of communication of the whole network efficiently. 
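Several entries above (the spatial-correlation compression framework and the 3D-sensing camera correlation model) quantify how strongly two camera nodes are correlated by how much their fields of view intersect. As a rough 2D simplification of that idea, and not the models actually proposed in those papers, the sketch below estimates a pairwise overlap ratio between two sector-shaped fields of view by Monte Carlo sampling; the positions, angular spans, and sensing ranges are made-up example values.

import math
import random

def in_fov(px, py, cam):
    """cam = (x, y, orientation, span, radius): a 2D sector field of view."""
    x, y, theta, span, r = cam
    dx, dy = px - x, py - y
    if dx * dx + dy * dy > r * r:
        return False
    angle = math.atan2(dy, dx)
    diff = (angle - theta + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
    return abs(diff) <= span / 2.0

def overlap_ratio(cam_a, cam_b, area=(0.0, 0.0, 100.0, 100.0), samples=50_000, seed=0):
    """Fraction of cam_a's covered area that is also covered by cam_b."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = area
    in_a = in_both = 0
    for _ in range(samples):
        px, py = rng.uniform(x0, x1), rng.uniform(y0, y1)
        if in_fov(px, py, cam_a):
            in_a += 1
            if in_fov(px, py, cam_b):
                in_both += 1
    return in_both / in_a if in_a else 0.0

if __name__ == "__main__":
    cam_a = (40.0, 50.0, 0.0, math.radians(60), 30.0)
    cam_b = (60.0, 50.0, math.pi, math.radians(60), 30.0)   # facing cam_a
    print(f"overlap ratio ~ {overlap_ratio(cam_a, cam_b):.2f}")

Note the ratio is normalized by the first camera's covered area, so overlap_ratio(a, b) and overlap_ratio(b, a) generally differ, which mirrors the asymmetric correlation relationship mentioned in the 3D-sensing model abstract.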
--- paper_title: Distributed collaborative camera actuation for redundant data elimination in wireless multimedia sensor networks paper_content: Given the high cost of processing and communicating the multimedia data in wireless multimedia sensor networks (WMSNs), it is important to reduce possible data redundancy. Therefore, camera sensors should only be actuated when an event is detected within their vicinity. In the meantime, the coverage of the event should not be compromised. In this paper, we propose a low-cost distributed actuation scheme which strives to turn on the least number of cameras to avoid possible redundancy in the multimedia data while still providing the necessary event coverage. The basic idea of this scheme is the collaboration of camera sensors that have heard from scalar sensors about an occurring event to minimize the possible coverage overlaps. This is done by either counting the number of scalar sensors or determining the event boundaries with scalar sensors. Through simulation, we show how the distributed scheme performs in terms of coverage under several centralized and random deployment schemes. We also compare the performance with the case when all the cameras in the vicinity are actuated and when blockages in the region exist. --- paper_title: A Spatial Correlation Model for Visual Information in Wireless Multimedia Sensor Networks paper_content: Wireless multimedia sensor networks (WMSNs) are interconnected devices that allow retrieving video and audio streams, still images, and scalar data from the environment. In a densely deployed WMSN, there exists correlation among the visual information observed by cameras with overlapped field of views. This paper proposes a novel spatial correlation model for visual information in WMSNs. By studying the sensing model and deployments of cameras, a spatial correlation function is derived to describe the correlation characteristics of visual information observed by cameras with overlapped field of views. The joint effect of multiple correlated cameras is also studied. An entropy-based analytical framework is developed to measure the amount of visual information provided by multiple cameras in the network. Furthermore, according to the proposed correlation function and entropy-based framework, a correlation-based camera selection algorithm is designed. Experimental results show that the proposed spatial correlation function can model the correlation characteristics of visual information in WMSNs through low computation and communication costs. Further simulations show that, given a distortion bound at the sink, the correlation-based camera selection algorithm requires fewer cameras to report to the sink than the random selection algorithm. --- paper_title: Vision Graph Construction in Wireless Multimedia Sensor Networks paper_content: In Wireless multimedia sensor networks (WMSNs), two graphs, communication network graph and vision graph, can be established. The camera nodes connected in the vision graph share overlapped field of views (FOVs) and they depend on the densely deployed relay nodes in the communication network graph to communicate with each other. Given a uniformly deployed camera sensor network with relay nodes, the problem is to find the number of hops for the vision-graph- neighbor-searching messages to construct the vision graph in an energy efficient way. 
In this paper, mathematical models are developed to analyze the FOV overlap of the camera nodes and the multi-hop communications in two dimensional topologies, which are utilized to analyze the optimal hop number. In addition, simulations are conducted to verify our models. --- paper_title: Virtual View Image over Wireless Visual Sensor Network paper_content: In general, visual sensors are applied to build virtual view images. When number of visual sensors increases then quantity and quality of the information improves. However, the view images generation is a challenging task in Wireless Visual Sensor Network environment due to energy restriction, computation complexity, and bandwidth limitation. Hence this paper presents a new method of virtual view images generation from selected cameras on Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experiment results showed that this method could minimize number of transmitted imageries with sufficient information.. --- paper_title: FoV-Clustering as a solution to a novel camera node activation problem in WVSNs paper_content: In Wireless Visual Sensor Networks (WVSNs), node selection and organization to realize applications is the title of some research fields known as network topology models that can be classified into two main categories of single-tier and multi-tier. According to the literature, only multi-tier topology model is able to provide a balance between WVSNs objectives such as low cost, maximum coverage, minimum energy consumption, high functionality, and good reliability. In a general architecture for this model, the lowest tier consists of scalar nodes that can activate camera nodes of upper tier only if necessary. However, regarding the inter-tier communication radio range and camera nodes' covered area, in many cases the activated cameras do not detect any objects. The mentioned problem which wastes energy and processor time for doing unneeded image capturing and object detection operations is introduced for the first time in this paper. Besides, a new clustering method with the aim of determining the scalar nodes located inside the area covered by a camera node is proposed as a solution. As the simulation results indicate, this solution has succeeded in saving camera nodes' energy and prolonging network lifetime. --- paper_title: MAC-Aware and Power-Aware Image Aggregation Scheme in Wireless Visual Sensor Networks paper_content: Traditional wireless sensor networks (WSNs) transmit the scalar data (e.g., temperature and irradiation) to the sink node. A new wireless visual sensor network (WVSN) that can transmit images data is a more promising solution than the WSN on sensing, detecting, and monitoring the environment to enhance awareness of the cyber, physical, and social contexts of our daily activities. However, the size of image data is much bigger than the scalar data that makes image transmission a challenging issue in battery-limited WVSN. In this paper, we study the energy efficient image aggregation scheme in WVSN. Image aggregation is a possible way to eliminate the redundant portions of the image captured by different data source nodes. Hence, transmission power could be reduced via the image aggregation scheme. However, image aggregation requires image processing that incurs node processing power. 
Besides the additional energy consumption from node processing, there is another MAC-aware retransmission energy loss from image aggregation. In this paper, we first propose the mathematical model to capture these three factors (image transmission, image processing, and MAC retransmission) in WVSN. Numerical results based on the mathematical model and real WVSN sensor node (i.e., Meerkats node) are performed to optimize the energy consumption tradeoff between image transmission, image processing, and MAC retransmission. --- paper_title: Modeling and assessing quality of information in multisensor multimedia monitoring systems paper_content: Current sensor-based monitoring systems use multiple sensors in order to identify high-level information based on the events that take place in the monitored environment. This information is obtained through low-level processing of sensory media streams, which are usually noisy and imprecise, leading to many undesired consequences such as false alarms, service interruptions, and often violation of privacy. Therefore, we need a mechanism to compute the quality of sensor-driven information that would help a user or a system in making an informed decision and improve the automated monitoring process. In this article, we propose a model to characterize such quality of information in a multisensor multimedia monitoring system in terms of certainty, accuracy/confidence and timeliness. Our model adopts a multimodal fusion approach to obtain the target information and dynamically compute these attributes based on the observations of the participating sensors. We consider the environment context, the agreement/disagreement among the sensors, and their prior confidence in the fusion process in determining the information of interest. The proposed method is demonstrated by developing and deploying a real-time monitoring system in a simulated smart environment. The effectiveness and suitability of the method has been demonstrated by dynamically assessing the value of the three quality attributes with respect to the detection and identification of human presence in the environment. --- paper_title: A Spatial Correlation-Based Image Compression Framework for Wireless Multimedia Sensor Networks paper_content: Data redundancy caused by correlation has motivated the application of collaborative multimedia in-network processing for data filtering and compression in wireless multimedia sensor networks (WMSNs). This paper proposes an information theoretic image compression framework with an objective to maximize the overall compression of the visual information gathered in a WMSN. The novelty of this framework relies on its independence of specific image types and coding algorithms, thereby providing a generic mechanism for image compression under different coding solutions. The proposed framework consists of two components. First, an entropy-based divergence measure (EDM) scheme is proposed to predict the compression efficiency of performing joint coding on the images collected by spatially correlated cameras. The EDM only takes camera settings as inputs without requiring statistics of real images. Utilizing the predicted results from EDM, a distributed multi-cluster coding protocol (DMCP) is then proposed to construct a compression-oriented coding hierarchy. The DMCP aims to partition the entire network into a set of coding clusters such that the global coding gain is maximized. 
Moreover, in order to enhance decoding reliability at data sink, the DMCP also guarantees that each sensor camera is covered by at least two different coding clusters. Experiments on H.264 standards show that the proposed EDM can effectively predict the joint coding efficiency from multiple sources. Further simulations demonstrate that the proposed compression framework can reduce 10%-23% total coding rate compared with the individual coding scheme, i.e., each camera sensor compresses its own image independently. --- paper_title: Multimodal fusion for multimedia analysis: a survey paper_content: This survey aims at providing multimedia researchers with a state-of-the-art overview of fusion strategies, which are used for combining multiple modalities in order to accomplish various multimedia analysis tasks. The existing literature on multimodal fusion research is presented through several classifications based on the fusion methodology and the level of fusion (feature, decision, and hybrid). The fusion methods are described from the perspective of the basic concept, advantages, weaknesses, and their usage in various analysis tasks as reported in the literature. Moreover, several distinctive issues that influence a multimodal fusion process such as, the use of correlation and independence, confidence level, contextual information, synchronization between different modalities, and the optimal modality selection are also highlighted. Finally, we present the open issues for further research in the area of multimodal fusion. --- paper_title: On the adoption of multiview video coding in wireless multimedia sensor networks paper_content: This article explores the potential performance gains achievable by applying the multiview video coding paradigm in wireless multimedia sensor networks (WMSN). Recent studies have illustrated how significant performance gains (in terms of energy savings and consequently of network lifetime) can be obtained by leveraging the spatial correlation among partially overlapped fields of view of multiple video cameras observing the same scene. A crucial challenge is then how to describe the correlation among different views of the same scene with accurate yet simple metrics. In this article, we first experimentally assess the performance gains of multiview video coding as a function of metrics capturing the correlation among different views. We compare their effectiveness in predicting the correlation among different views and consequently assess the potential performance gains of multiview video coding in WMSNs. In particular, we show that, in addition to geometric information, occlusions and movement need to be considered to fully take advantage of multiview video coding. --- paper_title: Distributed Algorithms for Network Lifetime Maximization in Wireless Visual Sensor Networks paper_content: Network lifetime maximization is a critical issue in wireless sensor networks since each sensor has a limited energy supply. In contrast with conventional sensor networks, video sensor nodes compress the video before transmission. The encoding process demands a high power consumption, and thus raises a great challenge to the maintenance of a long network lifetime. In this paper, we examine a strategy for maximizing the network lifetime in wireless visual sensor networks by jointly optimizing the source rates, the encoding powers, and the routing scheme. Fully distributed algorithms are developed using the Lagrangian duality to solve the lifetime maximization problem. 
We also examine the relationship between the collected video quality and the maximal network lifetime. Through extensive numerical simulations, we demonstrate that the proposed algorithm can achieve a much longer network lifetime compared to the scheme optimized for the conventional wireless sensor networks. --- paper_title: Ant-based routing for wireless multimedia sensor networks using multiple QoS metrics paper_content: In wireless sensor networks, most routing protocols consider energy savings as the main objective and assume data traffic with unconstrained delivery requirements to be a given. However, the introduction of video and imaging sensors unveils additional challenges. The transmission of video and imaging data requires both energy efficiency and QoS assurance (end-to-end delay and packet loss requirements), in order to ensure the efficient use of sensor resources as well as the integrity of the information collected. This paper presents a QoS routing model for Wireless Multimedia Sensor Networks (WMSN). Moreover, based on the traditional ant-based algorithm, an ant-based multi-QoS routing metric (AntSensNet) is proposed. The AntSensNet protocol builds a hierarchical structure on the network before choosing suitable paths to meet various QoS requirements from different kinds of traffic, thus maximizing network utilization, while improving its performance. In addition, AntSensNet is able to use a efficient multi-path video packet scheduling in order to get minimum video distortion transmission. Finally, extensive simulations are conducted to assess the effectiveness of this novel solution and a detailed discussion regarding the effects of different system parameters is provided. Compared to typical routing algorithms in sensor networks and the traditional ant-based algorithm, this new algorithm has better convergence and provides significantly better QoS for multiple types of services in wireless multimedia sensor networks. --- paper_title: Correlation-Aware QoS Routing With Differential Coding for Wireless Video Sensor Networks paper_content: The spatial correlation of visual information retrieved from distributed camera sensors leads to considerable data redundancy in wireless video sensor networks, resulting in significant performance degradation in energy efficiency and quality-of-service (QoS) satisfaction. In this paper, a correlation-aware QoS routing algorithm (CAQR) is proposed to efficiently deliver visual information under QoS constraints by exploiting the correlation of visual information observed by different camera sensors. First, a correlation-aware inter-node differential coding scheme is designed to reduce the amount of traffic in the network. Then, a correlation-aware load balancing scheme is proposed to prevent network congestion by splitting the correlated flows that cannot be reduced to different paths. Finally, the correlation-aware schemes are integrated into an optimization QoS routing framework with an objective to minimize energy consumption subject to delay and reliability constraints. Simulation results demonstrate that the proposed routing algorithm achieves efficient delivery of visual information under QoS constraints in wireless video sensor networks. --- paper_title: Linear network coding paper_content: Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. 
We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node. --- paper_title: Event-driven geographic routing for wireless image sensor networks paper_content: We propose a distributed routing scheme with adjustable priority support for event-driven wireless sensor networks. The network nodes are assumed to generate periodic data packets that are reported to the destination via multihop routing. Nodes may also infrequently detect an event from which a large number of packets are produced and need to be reported. These high-bandwidth event reports may cause packet queues to develop at the routing nodes along paths to the destination. The proposed routing scheme employs a cost function based on the location information as well as the current queue lengths and remaining energies at the neighboring nodes as a basis for next hop selection. Our scheme also implements a set of relative priority levels for the event-based and periodic data packets. Simulation results are presented and indicate improved network lifetime, lower end-to-end average and maximum delays, and significantly reduced buffer size requirements for the network nodes. Handling large bursts of event packets can cause significant local delays along the best path chosen by existing geographic routing schemes. The proposed distributed routing scheme associates a cost to each hop that depends on its existing queue. The remaining energy at the node is also considered as a cost parameter. Cost functions defined for queue size and energy are added to the position-based cost for greedy routing towards the destination. Providing quality of service (QoS) support on an end-to-end basis is infeasible in wireless sensor networks designed to operate under distributed decision making mechanisms. Our work aims to improve end-to-end latency and provide best-effort QoS support in geographic routing by employing two mechanisms in its distributed routing scheme: a cost function based on the queue length at the routing candidates, and a set of relative priority levels for the different packet types. The effect of these priorities is incorporated into the queue length cost function of the receiving node and is used by the transmitting node to order the transmission of packets between the two types. As our routing scheme is based on using local information, the notion of an end-to-end QoS guarantee does not apply. However, as we explain later, such guarantees may not apply to event-driven routing in wireless sensor networks due to the multiplicity of participating source nodes reporting an event, and the reliance of most designs on the incorporation of fault tolerance and multi-path routing schemes in the design of these networks. Performance of the proposed routing scheme is studied via simulation. The effect of different weighting factors for the three routing cost function components on the network performance is analyzed.
In particular, using queue cost significantly lowers the average and maximum delays, and drastically reduces the node buffer size requirements. Furthermore, including energy cost significantly increases the network lifetime. More generally, we see that adding the queue size and energy costs to the cost function of greedy routing and incorporating a packet type prioritization scheme allows various tradeoffs to be made between the different performance factors of the network. While MAC layer prioritization schemes may also provide QoS support, they are not the focus of this paper. We assume a periodic MAC scheme in our simulations. --- paper_title: A routing mechanism based on the sensing relevancies of source nodes for time-critical applications in visual sensor networks paper_content: Wireless sensor networks may be deployed to retrieve visual information from the monitored field, enriching monitoring and control applications. Whenever a set of camera-enabled sensor nodes is deployed for time-critical monitoring, visual information such as still images and video streams may need to reach the sink as soon as possible, requiring differentiated treatment by the network when compared with non-critical visual data. In this way, considering that source nodes may have different sensing relevancies for the application, according to the desired monitoring tasks and the current sensors' poses and fields of view, we propose a delay-aware multihop routing mechanism where more relevant visual data packets are routed through paths with lower end-to-end delay. As sensor nodes are expected to be energy-constrained, transmitting only highly relevant packets through shorter/faster paths may prolong their lifetime and assure longer time-critical delivery, with low impact on the overall monitoring quality. --- paper_title: On ant routing algorithms in ad hoc networks with critical connectivity paper_content: This paper shows a novel self-organizing approach for routing datagrams in ad hoc networks, called Distributed Ant Routing (DAR). This approach belongs to the class of routing algorithms inspired by the behavior of ant colonies in locating and storing food. The effectiveness of the heuristic algorithm is supported by mathematical proofs and demonstrated by a comparison with the well-known Ad hoc On Demand Distance Vector (AODV) algorithm. The differences and the similarities of the two algorithms are highlighted. Results obtained by a theoretical analysis and a simulation campaign show that DAR offers some important advantages that make it a valuable candidate to operate in ad hoc networks, and the same method helps in the selection of the algorithm parameters. Since the approach aims at minimizing complexity in the nodes at the expense of the optimality of the solution, it proves particularly suitable in environments where fast communication establishment and minimum signalling overhead are required. These requirements are typical of ad hoc networks with critical connectivity, as described in the paper. Thus the performance of the proposed algorithm is shown in ad hoc networks with critical connectivity and compared to some existing ad hoc routing algorithms.
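The event-driven geographic routing work summarized above picks the next hop by combining geographic progress toward the sink with the candidates' queue lengths and residual energies, modulated by packet priority. The sketch below illustrates that style of weighted cost function; it is a minimal, hypothetical illustration (the node fields, weights, and priority handling are assumptions for clarity, not the authors' actual formulation).

```python
import math
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    x: float
    y: float
    queue_len: int      # packets currently buffered
    queue_cap: int      # buffer capacity
    energy: float       # residual energy (J)
    energy_init: float  # initial energy (J)

def hop_cost(sender_xy, nbr, sink_xy, pkt_priority,
             w_pos=1.0, w_queue=0.5, w_energy=0.5):
    """Weighted cost of forwarding via `nbr` (lower is better).

    Weights and field names are illustrative defaults, not values from the cited paper.
    - position term: remaining distance to the sink after the hop, normalized by
      the sender's own distance (greedy progress);
    - queue term: buffer occupancy, weighted more heavily for high-priority (event)
      packets so that delay-sensitive traffic steers around congested nodes;
    - energy term: fraction of the node's initial energy already spent.
    """
    d_self = math.dist(sender_xy, sink_xy)
    d_nbr = math.dist((nbr.x, nbr.y), sink_xy)
    pos = d_nbr / max(d_self, 1e-9)
    queue = (nbr.queue_len / nbr.queue_cap) * (1.0 + pkt_priority)  # priority in [0, 1]
    energy = 1.0 - nbr.energy / nbr.energy_init
    return w_pos * pos + w_queue * queue + w_energy * energy

def pick_next_hop(sender_xy, neighbors, sink_xy, pkt_priority=1.0):
    """Greedy choice among neighbors that actually make progress toward the sink."""
    d_self = math.dist(sender_xy, sink_xy)
    candidates = [n for n in neighbors if math.dist((n.x, n.y), sink_xy) < d_self]
    if not candidates:
        return None  # local minimum: a real design would fall back to recovery/multipath
    return min(candidates, key=lambda n: hop_cost(sender_xy, n, sink_xy, pkt_priority))

if __name__ == "__main__":
    nbrs = [Neighbor(1, 40, 52, 12, 50, 0.6, 1.0),
            Neighbor(2, 45, 48, 2, 50, 0.9, 1.0),
            Neighbor(3, 30, 60, 0, 50, 0.2, 1.0)]
    best = pick_next_hop((50, 50), nbrs, sink_xy=(0, 0), pkt_priority=1.0)
    print("forward event packet via node", best.node_id)
```

In this toy run the lightly loaded, energy-rich neighbor wins even though all three make comparable geographic progress, which is the trade-off the cost-function approach is meant to expose.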
--- paper_title: Joint Coding/Routing Optimization for Distributed Video Sources in Wireless Visual Sensor Networks paper_content: This paper studies a joint coding/routing optimization between network lifetime and video distortion by applying information theory to wireless visual sensor networks for correlated sources. Arbitrary coding [distributed video coding and network coding (NC)] from both combinatorial optimization and information theory could make significant progress toward the performance limit and tractable. Also, multipath routing can spread energy utilization across nodes within the entire network to keep a potentially longer lifetime, and solve the wireless contention issues by the splitting traffic. The objective function not only keeps the total energy consumption of encoding power, transmission power, and reception power minimized, but ensures the information received by sink nodes to approximately reconstruct the visual field. Also, a generalized power consumption model for distributed video sources is developed, in which the coding complexity of Key frames and Wyner-Ziv frames is measured by translating specific coding behavior into energy consumption. On the basis of the distributed multiview video coding and NC-based multipath routing, the balance problem between lifetime (costs) and distortion (capacity) is modeled as an optimization formulation with a fully distributed solution. Through a primal decomposition, a two-level optimization is relaxed with Lagrangian dualization and solved by the gradient algorithm. The low-level optimization problem is further decomposed into a secondary master dual problem with four cross-layer subproblems: a rate control problem, a channel contention problem, a distortion control problem, and an energy conservation problem. The implementation of the distributed algorithm is discussed with regard to the communication overhead and dynamic network change. Simulation results validate the convergence and performance of the proposed algorithm. --- paper_title: Joint Coding/Routing Optimization for Correlated Sources in Wireless Visual Sensor Networks paper_content: This paper studies a joint coding/routing optimization between network lifetime and rate-distortion, by applying information theory to wireless visual sensor networks for correlated sources. Arbitrary coding (distributed source coding and network coding) from both combinatorial optimization and information theory could make significant progress towards the performance limit of information networks and tractable. Also, multipath routing can spread energy utilization across nodes within the entire network to keep a potentially longer lifetime, and solve the wireless contention issues by the splitting traffic. The objective function not only keeps a total energy consumption of encoding power, transmission power, and reception power minimized, but ensures the information received by sink nodes to approximately reconstruct the visual field. Based on the localized Slepian-Wolf coding and network coding-based multipath routing, the balance problem between distortion (capacity) and lifetime (costs) is modeled as an optimization formulation with a distributed solution. Through a primal decomposition, a two-level optimization is relaxed with Lagrangian dualization and solved with the gradient algorithm. 
The low-level optimization problem is decomposed into a secondary master dual problem (encoding, energy, and congestion prices update) with four cross-layer subproblems: a rate control problem, a channel contention problem, a distortion control problem, and an energy conservation problem. Numerical results validate the convergence and performance of the proposed algorithm. --- paper_title: Data relevance dynamic routing protocol for Wireless Visual Sensor Networks paper_content: Survivability is crucial in Wireless Visual Sensor Networks (WVSNs) especially when they are used for monitoring and tracking applications with limited available resources. In this paper we are proposing the use of a data relevance dynamic routing protocol that tries to keep a balance between the energy consumption and the packet delay in a WVSN. The proposed dynamic routing protocol follows opportunistic routing approaches. The next node selection criterion can change the routing path dynamically following the network conditions and the channel availability while the energy consumption per node is also considered. Simulation results are presented that show an increase in network lifetime of up to 30% compared with traditional routing while the overall packet delay remains similar. --- paper_title: Distributed Adaptive Sampling, Forwarding, and Routing Algorithms for Wireless Visual Sensor Networks paper_content: The efficient management of the limited energy resources of a wireless visual sensor network is central to its successful operation. Within this context, this paper focuses on the adaptive sampling, forwarding, and routing actions of each node in order to maximise the information value of the data collected. These actions are inter-related in this setting because each node's energy consumption must be optimally allocated between sampling and transmitting its own data, receiving and forwarding the data of other nodes, and routing any data. Thus, we develop two optimal decentralised algorithms to solve this distributed constraint optimization problem. The first assumes that the route by which data is forwarded to the base station is fixed, and then calculates the optimal sampling, transmitting, and forwarding actions that each node should perform. The second assumes flexible routing, and makes optimal decisions regarding both the integration of actions that each node should choose, and also the route by which the data should be forwarded to the base station. The two algorithms represent a trade-off in optimality, communication cost, and processing time. In an empirical evaluation on sensor networks (whose underlying communication networks exhibit loops), we show that the algorithm with flexible routing is able to deliver approximately twice the quantity of information to the base station compared to the algorithm using fixed routing (where an arbitrary choice of route is made). However, this gain comes at a considerable communication and computational cost (increasing both by a factor of 100 times). Thus, while the algorithm with flexible routing is suitable for networks with a small numbers of nodes, it scales poorly, and as the size of the network increases, the algorithm with fixed routing is favoured. --- paper_title: Ant Based Routing Protocol for Visual Sensors paper_content: In routing protocols, sensor nodes tend to route events (images) captured to a particular destination (sink) using the most efficient path. 
The power and bandwidth required to transmit video data from hundreds of cameras to a central location for processing at a high success rate would be enormous. In this work, captured packets were routed from different sensors placed at different locations to the sink using the best path. Since the captured images (packets) need to be routed to the destination (sink) at regular intervals and within a predefined period of time, while consuming little energy and without performance degradation, ant-based routing is adopted; it exploits the behavior of real ants, which find paths to food sources through pheromone deposition, by simulating the behavior of an ant colony. To this end, we present an Improved Energy-Efficient Ant-Based Routing (IEEABR) Algorithm in Visual Sensor Networks. Compared to the state-of-the-art ant-based routing protocols: Basic Ant-Based Routing (BABR) Algorithm, Sensor-driven and Cost-aware ant routing (SC), Flooded Forward ant routing (FF), Flooded Piggybacked ant routing (FP), and Energy-Efficient Ant-Based Routing (EEABR), the proposed IEEABR approach has the advantages of reduced energy usage, delivery of event packets at a high success rate with low latency, increased network lifetime, and reliable completion of its set target without performance degradation. The performance evaluations of the algorithms on a real application are conducted in a well-known MATLAB-based WSN simulator (RMASE) using both static and dynamic scenarios. --- paper_title: SIoT: Giving a Social Structure to the Internet of Things paper_content: The current development of the Internet of Things (IoT) requires that major issues related to things' service discovery and composition be addressed. This paper proposes a possible approach to solve such issues. We introduce a novel paradigm of "social network of intelligent objects", namely the Social Internet of Things (SIoT), based on the notion of social relationships among objects. Following the definition of a possible social structure among objects, a preliminary architecture for the implementation of SIoT is presented. Through the SIoT paradigm, the capability of humans and devices to discover, select, and use objects with their services in the IoT is augmented. In addition, a level of trustworthiness is enabled to steer the interaction among the billions of objects which will crowd the future IoT. --- paper_title: The Social Internet of Things (SIoT) - When social networks meet the Internet of Things: Concept, architecture and network characterization paper_content: Recently there have been quite a number of independent research activities investigating the potential of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named the Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: (i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; (ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; (iii) we analyze the characteristics of the SIoT network structure by means of simulations. ---
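Several of the routing works above (DAR, AntSensNet, EEABR/IEEABR) belong to the ant-colony family, in which forward ants choose next hops probabilistically from per-node pheromone tables and backward ants reinforce the links along good paths. The fragment below is a generic, hypothetical sketch of that mechanism; the update rule, heuristic, and parameter values are illustrative and not taken from any of the cited protocols.

```python
import random
from collections import defaultdict

class AntRoutingTable:
    """Per-node pheromone table: pheromone[dest][next_hop] -> tau.

    alpha, beta, and rho are illustrative parameters, not values from the cited papers.
    """

    def __init__(self, alpha=1.0, beta=2.0, rho=0.1):
        self.alpha = alpha          # pheromone exponent
        self.beta = beta            # heuristic exponent
        self.rho = rho              # evaporation rate
        self.pheromone = defaultdict(lambda: defaultdict(lambda: 1.0))

    def choose_next_hop(self, dest, candidates, heuristic):
        """Pick a neighbor with probability proportional to tau^alpha * eta^beta.

        `heuristic[n]` can encode, e.g., residual energy or inverse distance to the sink.
        """
        weights = [(self.pheromone[dest][n] ** self.alpha) *
                   (heuristic[n] ** self.beta) for n in candidates]
        total = sum(weights)
        r, acc = random.uniform(0.0, total), 0.0
        for n, w in zip(candidates, weights):
            acc += w
            if r <= acc:
                return n
        return candidates[-1]

    def reinforce(self, dest, path, path_quality):
        """Backward-ant update: evaporate existing entries, then deposit on the used links."""
        for nxt in list(self.pheromone[dest]):
            self.pheromone[dest][nxt] *= (1.0 - self.rho)
        for hop in path:
            self.pheromone[dest][hop] += path_quality

if __name__ == "__main__":
    table = AntRoutingTable()
    neighbors = ["B", "C", "D"]
    energy = {"B": 0.9, "C": 0.4, "D": 0.7}       # illustrative heuristic values
    for _ in range(20):                            # launch a few forward ants
        hop = table.choose_next_hop("sink", neighbors, energy)
        # Pretend the backward ant reports a path quality; the energy heuristic
        # is reused here as a stand-in for an end-to-end quality measurement.
        table.reinforce("sink", [hop], path_quality=energy[hop])
    print({n: round(table.pheromone["sink"][n], 2) for n in neighbors})
```

After a few iterations the pheromone mass concentrates on the better neighbors while evaporation keeps the table adaptive, which is the stigmergy effect these protocols rely on.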
Title: A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks
Section 1: Introduction
Description 1: This section introduces the concept of Wireless Visual Sensor Networks (WVSNs), their unique characteristics compared to traditional Wireless Sensor Networks (WSNs), and the main research challenges and opportunities in sensor coverage and visual data handling.
Section 2: Node Hardware Components in WVSNs
Description 2: This section reviews the five major hardware components of visual sensor nodes, discusses the tradeoffs involved, and summarizes the node components for existing WVSN platforms.
Section 3: Sensor Coverage/Deployment in WVSNs
Description 3: This section discusses the sensor coverage problem in WVSNs, including the additional considerations for view angle coverage and occlusion, and reviews existing research on sensor deployment strategies in WVSNs.
Section 4: Visual Data Capture in WVSNs
Description 4: This section studies existing works on visual data capture, including methods to identify the minimum set of sensors needed to cover an object and techniques to handle redundant or supportive image data from overlapping Fields of View (FoVs).
Section 5: Visual Data Processing in WVSNs
Description 5: This section examines existing visual data processing techniques, including image/video aggregation, quality measurement, and collaborative data processing among camera sensors, and discusses challenges due to limited hardware resources.
Section 6: Visual Data Transmission in WVSNs
Description 6: This section surveys existing works on visual data transmission schemes in WVSNs, focusing on routing protocols, network coding, and strategies to balance between energy consumption, bandwidth, and data quality.
Section 7: Social Networking Paradigms in WVSNs
Description 7: This section explores the application of social networking concepts to WVSNs, discussing how cooperation among sensor nodes can be managed using social behavior models and presents the Social Internet of Things (SIoT) as a potential research direction.
Section 8: Conclusions
Description 8: This section concludes the survey by summarizing the unique challenges presented by WVSNs, reviewing the main research findings, and highlighting open issues that still need to be addressed in sensor coverage and visual data handling.
An Overview Of Character Recognition
4
--- paper_title: Optical Character Recognition paper_content: From the Publisher: ::: Optical Character Recognition (OCR) has become an important and widely used technology. Among its many practical applications are the scanners used at store check-out counters, money changing machines, office scanning machines, and the efforts to automate the postal system. Research is particularly active in Japan where one important goal is to develop economical machines that can read Kanji characters. Such machines will be widely used in offices as man-machine interfaces. Dr. Mori from the Ricoh R&D Center in Japan has used his experience in optical character recognition to create a thorough reference to this widely used but still growing technology. --- paper_title: A Syntactic Approach for Handwritten Mathematical Formula Recognition paper_content: Mathematical formulas are good examples of two-dimensional patterns as well as pictures or graphics. The use of syntactic methods is useful for interpreting such complex patterns. In this paper we propose a system for the interpretation of 2-D mathematic formulas based on a syntactic parser. This system is able to recognize a large class of 2-D mathematic formulas written on a graphic tablet. It starts the parsing by localization of the ``principal'' operator in the formula and attempts to partition it into subexpressions which are similarly analyzed by looking for a starting character. The generalized parser used in the system has been developed in our group for continuous speech recognition and picture interpretation. --- paper_title: An omnifont open-vocabulary OCR system for English and Arabic paper_content: We present an omnifont, unlimited-vocabulary OCR system for English and Arabic. The system is based on hidden Markov models (HMM), an approach that has proven to be very successful in the area of automatic speech recognition. We focus on two aspects of the OCR system. First, we address the issue of how to perform OCR on omnifont and multi-style data, such as plain and italic, without the need to have a separate model for each style. The amount of training data from each style, which is used to train a single model, becomes an important issue in the face of the conditional independence assumption inherent in the use of HMMs. We demonstrate mathematically and empirically how to allocate training data among the different styles to alleviate this problem. Second, we show how to use a word-based HMM system to perform character recognition with unlimited vocabulary. The method includes the use of a trigram language model on character sequences. Using all these techniques, we have achieved character error rates of 1.1 percent on data from the University of Washington English Document Image Database and 3.3 percent on data from the DARPA Arabic OCR Corpus. --- paper_title: A high-accuracy syntactic recognition algorithm for handwritten numerals paper_content: A new set of topological features (primitives) for use with a syntactic classifier for high-accuracy recognition of handwritten numerals is proposed. The tree grammar used in this study makes it possible to achieve high-recognition speeds with minimal preprocessing of the test pattern. --- paper_title: Computer recognition of arabic cursive scripts paper_content: Abstract The main objective of this paper is to design an Arabic text recognition system. This system basically includes a segmentation stage in order to recognize typewritten Arabic cursive words. 
The segmentation process is completely based on tracing the outer contour of a given word and calculating the distance between the extreme points of intersection of the contour with a vertical line. At the output of the segmentation stage, the cursive word is presented as a sequence of isolated character contours. The recognition problem thus reduces to that of classifying each character. A set of Fourier descriptors are obtained from the coordinate sequences of the outer contour of each character. A topological classifier is also used to classify the stress mark over or under the character contour. A reject option is introduced by the classifier so that incorrectly segmented characters are detected. The developed system has shown a recognition rate of 99%. This result guarantees the efficiency of the used Fourier descriptors in discriminating between Arabic characters. --- paper_title: Character recognition—a review paper_content: The machine replication of human reading has been the subject of intensive research for more than three decades. A large number of research papers and reports have already been published on this topic. Many commercial establishments have manufactured recognizers of varying capabilities. Handheld, desk-top, medium-size and large systems costing as high as half a million dollars are available, and are in use for various applications. However, the ultimate goal of developing a reading machine having the same reading capabilities of humans still remains unachieved. So, there still is a great gap between human reading and machine reading capabilities, and a great amount of further effort is required to narrow-down this gap, if not bridge it. This review is organized into six major sections covering a general overview (an introduction), applications of character recognition techniques, methodologies in character recognition, research work in character recognition, some practical OCRs and the conclusions. --- paper_title: Historical review of OCR research and development paper_content: Research and development of OCR systems are considered from a historical point of view. The historical development of commercial systems is included. Both template matching and structure analysis approaches to R&D are considered. It is noted that the two approaches are coming closer and tending to merge. Commercial products are divided into three generations, for each of which some representative OCR systems are chosen and described in some detail. Some comments are made on recent techniques applied to OCR, such as expert systems and neural networks, and some open problems are indicated. The authors' views and hopes regarding future trends are presented. > --- paper_title: High accuracy optical character recognition using neural networks with centroid dithering paper_content: Optical character recognition (OCR) refers to a process whereby printed documents are transformed into ASCII files for the purpose of compact storage, editing, fast retrieval, and other file manipulations through the use of a computer. The recognition stage of an OCR process is made difficult by added noise, image distortion, and the various character typefaces, sizes, and fonts that a document may have. In this study a neural network approach is introduced to perform high accuracy recognition on multi-size and multi-font characters; a novel centroid-dithering training process with a low noise-sensitivity normalization procedure is used to achieve high accuracy results. The study consists of two parts. 
The first part focuses on single size and single font characters, and a two-layered neural network is trained to recognize the full set of 94 ASCII character images in 12-pt Courier font. The second part trades accuracy for additional font and size capability, and a larger two-layered neural network is trained to recognize the full set of 94 ASCII character images for all point sizes from 8 to 32 and for 12 commonly used fonts. The performance of these two networks is evaluated based on a database of more than one million character images from the testing data set. > --- paper_title: Survey: omnifont-printed character recognition paper_content: This paper presents an overview of methods for recognition of omnifont printed Roman alphabet characters with various fonts, sizes and formats (plain, bold, etc.) from OCR system perspectives. First, it summarizes the current needs for optical printed character recognition (OPCR) in general, and then describes its importance for conversion between paper and electronic media. Current status of commercially available software and products for OPCR are briefly reviewed. Analysis indicates that the challenge we face in OPCR is far from being solved, and there is still a great gap between human needs and machine reading capabilities. Second, OPCR systems and algorithms are briefly reviewed and compared from the context of digital document processing for the following four stages: preprocessing of images, segmentation, recognition, and post-processing. Finally, possible research directions to improve the performance of OPCR systems are suggested, such as using an approach based on the combination of template matching and varieties of feature-based algorithms to recognize isolated characters, the use of multilayered architectures for OPCR, and parallel processing- based high-performance architectures. --- paper_title: An overview of character recognition methodologies paper_content: Abstract This work presents an overview of character recognition methodologies that have evolved in this century. At first the scanning devices that are used in character recognition will be explained, then some points will be stressed on the major research works that have made a great impact in character recognition. From a methodological point of view we will present the different steps that have been employed in OCR. And finally the most important industrial character recognisers will be covered along with the character data bases that are used in testing the various algorithms. --- paper_title: Pen computing: a technology overview and a vision paper_content: This work gives an overview of a new technology that is attracting growing interest in public as well as in the computer industry itself. The visible difference from other technologies is in the use of a pen or pencil as the primary means of interaction between a user and a machine, picking up the familiar pen and paper interface metaphor. From this follows a set of consequences that will be analyzed and put into context with other emerging technologies and visions.Starting with a short historical background and the technical advances that begin making Pen Computing a reality, the new paradigms created by Pen Computing will be explained and discussed. Handwriting recognition, mobility and global information access are other central topics. This is followed by a categorization and an overview of current and future systems using pens as their primary user interface component. 
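The centroid-dithering training process described a few entries above can be pictured as a simple data-augmentation step: each training character is centred on its centroid and then reused several times with small random shifts, so the classifier becomes tolerant to registration error. The snippet below is a minimal sketch of such a normalization/dithering step using NumPy; the frame size, shift range, and function names are assumptions for illustration, not the published procedure.

```python
import numpy as np

def center_on_centroid(glyph, out_size=32):
    """Place a binary glyph into an out_size x out_size frame, centred on its centroid."""
    ys, xs = np.nonzero(glyph)
    cy, cx = ys.mean(), xs.mean()
    canvas = np.zeros((out_size, out_size), dtype=glyph.dtype)
    dy = int(round(out_size / 2 - cy))
    dx = int(round(out_size / 2 - cx))
    for y, x in zip(ys, xs):
        ny, nx = y + dy, x + dx
        if 0 <= ny < out_size and 0 <= nx < out_size:
            canvas[ny, nx] = glyph[y, x]
    return canvas

def dithered_copies(glyph, n_copies=8, max_shift=2, rng=None):
    """Generate training copies whose centroids are jittered by up to max_shift pixels.

    n_copies and max_shift are illustrative choices, not the values used in the cited work.
    """
    rng = rng or np.random.default_rng(0)
    centred = center_on_centroid(glyph)
    copies = []
    for _ in range(n_copies):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        copies.append(np.roll(np.roll(centred, dy, axis=0), dx, axis=1))
    return copies

if __name__ == "__main__":
    # A crude 'T'-like test glyph on a 16x16 grid.
    g = np.zeros((16, 16), dtype=np.uint8)
    g[3, 4:12] = 1
    g[3:12, 7] = 1
    batch = dithered_copies(g)
    print(len(batch), batch[0].shape)   # 8 jittered 32x32 inputs for the classifier
```

Each glyph thus contributes several slightly shifted training samples, which is the low-noise-sensitivity normalization idea in miniature.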
--- paper_title: Writer independent on-line handwriting recognition using an HMM approach paper_content: In this paper we describe a Hidden Markov Model (HMM) based writer independent handwriting recognition system. A combination of signal normalization preprocessing and the use of invariant features makes the system robust with respect to variability among different writers as well as different writing environments and ink collection mechanisms. A combination of point oriented and stroke oriented features yields improved accuracy. Language modeling constrains the hypothesis space to manageable levels in most cases. In addition a two-pass N-best approach is taken for large vocabularies. We report experimental results for both character and word recognition on several UNIPEN datasets, which are standard datasets of English text collected from around the world. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. --- paper_title: Off-line cursive script word recognition paper_content: Cursive script word recognition is the problem of transforming a word from the iconic form of cursive writing to its symbolic form. Several component processes of a recognition system for isolated offline cursive script words are described. A word image is transformed through a hierarchy of representation levels: points, contours, features, letters, and words. A unique feature representation is generated bottom-up from the image using statistical dependences between letters and features. Ratings for partially formed words are computed using a stack algorithm and a lexicon represented as a trie. Several novel techniques for low- and intermediate-level processing for cursive script are described, including heuristics for reference line finding, letter segmentation based on detecting local minima along the lower contour and areas with low vertical profiles, simultaneous encoding of contours and their topological relationships, extracting features, and finding shape-oriented events. Experiments demonstrating the performance of the system are also described. --- paper_title: Building bilingual microcomputer systems paper_content: In the Arab world the need for bilingual microcomputer systems is ever increasing. In addition to the ability to process the Arabic and English scripts, an ideal system should support the use of existing applications with Arabic data and the access to the system facilities through Arabic interfaces. The Integrated Arabic System (IAS) was developed to study the feasibility of building such systems using existing microcomputers and software solutions. --- paper_title: Research on Machine Recognition of Handprinted Characters paper_content: Machine recognition of handprinted Chinese characters has recently become very active in Japan. Both from the practical and the academic point of view, very encouraging results are reported. The work is described systematically and analyzed in terms of so-called feature matching, which is likely to be the mainstream of the research and development of machine recognition of handprinted Chinese characters. A database, ETL8 (881 Kanji, 71 hiragana, and 160 variations for each category), is explained, on which many experiments were performed. Recognition rates reported using this database can be compared, and so somewhat qualitative evaluation of these methods is described. Based on the comparative study, the merits and demerits of both feature and structural matching are discussed and some future directions are mentioned.
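Two recurring ingredients in the cursive-word recognizers above are a lexicon stored as a trie and a best-first (stack) search that extends partial words with letter hypotheses. The sketch below shows the data structure and the scoring loop in miniature; the per-position letter scores are fabricated and the search is simplified, so treat it as an illustration of the idea rather than any of the cited systems.

```python
import heapq

def build_trie(words):
    """Nested-dict trie; the key '$' marks the end of a valid word."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def best_words(letter_scores, trie, top_n=3):
    """Best-first search over the trie.

    `letter_scores` is a list (one entry per segmented letter position) of dicts
    mapping candidate letters to log-probabilities, e.g. produced by a letter
    classifier. Only letter sequences present in the lexicon are rated.
    """
    heap = [(0.0, 0, "", trie)]          # (-score, position, prefix, trie node)
    results = []
    while heap and len(results) < top_n:
        neg, pos, prefix, node = heapq.heappop(heap)
        if pos == len(letter_scores):
            if "$" in node:              # complete lexicon word
                results.append((prefix, -neg))
            continue
        for ch, logp in letter_scores[pos].items():
            child = node.get(ch)
            if child is not None:        # prune paths that leave the lexicon
                heapq.heappush(heap, (neg - logp, pos + 1, prefix + ch, child))
    return results

if __name__ == "__main__":
    lexicon = build_trie(["cat", "car", "cart", "cab", "can"])
    # Fabricated per-position letter log-probabilities for a 3-letter word image.
    scores = [{"c": -0.1, "e": -2.5},
              {"a": -0.2, "o": -1.8},
              {"t": -0.9, "r": -0.7, "b": -1.5}]
    print(best_words(scores, lexicon))   # [('car', -1.0), ('cat', -1.2), ('cab', -1.8)]
```

The trie prunes hypotheses that cannot complete into a legal word, so the stack search only rates letter sequences the lexicon allows, which is what keeps this style of word recognition tractable.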
--- paper_title: Personal identification based on handwriting paper_content: Abstract Many techniques have been reported for handwriting-based writer identification. The majority of techniques assume that the written text is fixed (e.g., in signature verification). In this paper we attempt to eliminate this assumption by presenting a novel algorithm for automatic text-independent writer identification. Given that the handwriting of different people is often visually distinctive, we take a global approach based on texture analysis, where each writer's handwriting is regarded as a different texture. In principle, this allows us to apply any standard texture recognition algorithm for the task (e.g., the multi-channel Gabor filtering technique). Results of 96.0% accuracy on the classification of 1000 test documents from 40 writers are very promising. The method is shown to be robust to noise and contents. --- paper_title: A Hidden Markov Model approach to online handwritten signature verification paper_content: A method for the automatic verification of online handwritten signatures using both global and local features is described. The global and local features capture various aspects of signature shape and dynamics of signature production. We demonstrate that adding a local feature based on the signature likelihood obtained from Hidden Markov Models (HMM), to the global features of a signature, significantly improves the performance of verification. The current version of the program has 2.5% equal error rate. At the 1% false rejection (FR) point, the addition of the local information to the algorithm with only global features reduced the false acceptance (FA) rate from 13% to 5%. --- paper_title: Integration of hand-written address interpretation technology into the United States Postal Service Remote Computer Reader system paper_content: Hand-written address interpretation (HWAI) technology has been recently incorporated into the processing of letter mail by the US Postal Service. The Remote Bar Coding System, which is an image management system for assigning bar codes to mail that has not been fully processed by postal OCR equipment, has been retrofitted with a Remote Computer Reader (RCR), into which the HWAI technology is integrated. A description of the HWAI technology, including its algorithms for the control structure, recognizers and databases, is provided. Its performance on more than a million hand-written mail-pieces in a field deployment of the integrated RCR-HWAI system is indicated. Future enhancements for a nationwide deployment of the system are indicated. --- paper_title: The A2iA Intercheque System: Courtesy Amount and Legal Amount Recognition for French Checks paper_content: We developed a check reading system, termed INTERCHEQUE, which recognizes both the legal (LAR) and the courtesy amount (CAR) on bank checks. The version presented here is designed for the recognition of French, omni-bank, omni-scriptor, handwritten bank checks, and meets industrial requirements, such as high processing speed, robustness, and extremely low error rates. We give an overview of our recognition system and discuss some of the pattern recognition techniques used. We also describe an installation which processes of the order of 70,000 checks per day. Results on a data base of about 170,000 checks show a recognition rate of about 75% for an error rate of the order of 1/10,000 checks. 
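The texture-based writer identification work above treats each writer's handwriting as a distinct texture and describes it with the responses of a multi-channel Gabor filter bank. The following sketch builds a small bank and summarizes each channel by the mean and standard deviation of the filtered image; the kernel parameters, feature summary, and nearest-neighbor decision are illustrative choices, not the settings used in the cited paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=31):
    """Real (even-symmetric) Gabor kernel at spatial frequency `freq` and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * freq * x_t)
    return envelope * carrier

def writer_features(image, freqs=(0.10, 0.20), n_orient=4):
    """Mean/std of each Gabor channel response, concatenated into one feature vector.

    Frequencies and orientation count are illustrative, not the published filter bank.
    """
    feats = []
    for f in freqs:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            resp = fftconvolve(image, gabor_kernel(f, theta), mode="same")
            feats.extend([resp.mean(), resp.std()])
    return np.asarray(feats)

def nearest_writer(query_feats, reference_feats):
    """Attribute the query to the writer whose reference vector is closest in Euclidean distance."""
    dists = {w: np.linalg.norm(query_feats - v) for w, v in reference_feats.items()}
    return min(dists, key=dists.get)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stand-ins for normalized grey-level handwriting blocks from two writers.
    writer_a = rng.random((128, 128))
    writer_b = rng.random((128, 128)) ** 3        # different grey-level statistics
    refs = {"A": writer_features(writer_a), "B": writer_features(writer_b)}
    query = writer_features(writer_b + 0.01 * rng.random((128, 128)))
    print("query attributed to writer", nearest_writer(query, refs))
```

Because the approach is text-independent, only the statistics of the channel responses matter, not which characters happen to appear in the block.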
--- paper_title: NPen++: a writer independent, large vocabulary on-line cursive handwriting recognition system paper_content: In this paper we describe the NPen++ system for writer independent on-line handwriting recognition. This recognizer needs no training for a particular writer and can recognize any common writing style (cursive, hand-printed, or a mixture of both). The neural network architecture, which was originally proposed for continuous speech recognition tasks, and the preprocessing techniques of NPen++ are designed to make heavy use of the dynamic writing information, i.e. the temporal sequence of data points recorded on an LCD tablet or digitizer. We present results for the writer independent recognition of isolated words. Tested on different dictionary sizes from 1,000 up to 100,000 words, recognition rates range from 98.0% for the 1,000 word dictionary to 91.4% on a 20,000 word dictionary and 82.9% for the 100,000 word dictionary. No language models are used to achieve these results. --- paper_title: Automated forms-processing software and services paper_content: While document-image systems for the management of collections of documents, such as forms, offer significant productivity improvements, the entry of information from documents remains a labor-intensive and costly task for most organizations. In this paper, we describe a software system for the machine reading of forms data from their scanned images. We describe its major components: form recognition and “dropout,” intelligent character recognition (ICR), and contextual checking. Finally, we describe applications for which our automated forms reader has been successfully used. --- paper_title: Content based internet access to paper documents paper_content: When archives of paper documents are to be accessed via the Internet, the implicit hypertext structure of the original documents should be employed. In this paper we study the different hypertext structures one encounters in a document. Methods for analyzing paper documents to find these structures are presented. The structures also form the basis for the presentation of the content of the document to the user. Results are presented. --- paper_title: An omnifont open-vocabulary OCR system for English and Arabic paper_content: We present an omnifont, unlimited-vocabulary OCR system for English and Arabic. The system is based on hidden Markov models (HMM), an approach that has proven to be very successful in the area of automatic speech recognition. We focus on two aspects of the OCR system. First, we address the issue of how to perform OCR on omnifont and multi-style data, such as plain and italic, without the need to have a separate model for each style. The amount of training data from each style, which is used to train a single model, becomes an important issue in the face of the conditional independence assumption inherent in the use of HMMs. We demonstrate mathematically and empirically how to allocate training data among the different styles to alleviate this problem. Second, we show how to use a word-based HMM system to perform character recognition with unlimited vocabulary. The method includes the use of a trigram language model on character sequences. Using all these techniques, we have achieved character error rates of 1.1 percent on data from the University of Washington English Document Image Database and 3.3 percent on data from the DARPA Arabic OCR Corpus.
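The HMM OCR system just described constrains its open-vocabulary decoding with a trigram language model over character sequences. The snippet below shows the core of such a model (maximum-likelihood counts with add-one smoothing); the training text and the smoothing choice are placeholders rather than the corpus or estimator used in that system.

```python
import math
from collections import Counter

class CharTrigramLM:
    """Character trigram model with add-one smoothing, for scoring OCR hypotheses."""

    BOS, EOS = "\x02", "\x03"   # word/sentence boundary markers

    def __init__(self, texts):
        self.trigrams = Counter()
        self.bigrams = Counter()
        vocab = set()
        for t in texts:
            seq = self.BOS * 2 + t + self.EOS
            vocab.update(seq)
            for i in range(2, len(seq)):
                self.trigrams[seq[i - 2:i + 1]] += 1   # count c(h, w)
                self.bigrams[seq[i - 2:i]] += 1        # count c(h) as a history
        self.v = len(vocab)

    def logprob(self, text):
        """Natural-log probability of `text` under the smoothed trigram model."""
        seq = self.BOS * 2 + text + self.EOS
        lp = 0.0
        for i in range(2, len(seq)):
            num = self.trigrams[seq[i - 2:i + 1]] + 1
            den = self.bigrams[seq[i - 2:i]] + self.v
            lp += math.log(num / den)
        return lp

if __name__ == "__main__":
    # Placeholder training text; a real system would use a large character corpus.
    lm = CharTrigramLM(["the quick brown fox", "the lazy dog", "character recognition"])
    # Rerank two OCR hypotheses for the same word image.
    for hyp in ("recognition", "rec0gnitlon"):
        print(hyp, round(lm.logprob(hyp), 2))   # the well-formed string scores higher
```

In a decoder, this score is combined with the HMM's acoustic-style character scores so that hypotheses with implausible character sequences are pushed down the N-best list.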
--- paper_title: Character recognition—a review paper_content: The machine replication of human reading has been the subject of intensive research for more than three decades. A large number of research papers and reports have already been published on this topic. Many commercial establishments have manufactured recognizers of varying capabilities. Handheld, desk-top, medium-size and large systems costing as high as half a million dollars are available, and are in use for various applications. However, the ultimate goal of developing a reading machine having the same reading capabilities of humans still remains unachieved. So, there still is a great gap between human reading and machine reading capabilities, and a great amount of further effort is required to narrow-down this gap, if not bridge it. This review is organized into six major sections covering a general overview (an introduction), applications of character recognition techniques, methodologies in character recognition, research work in character recognition, some practical OCRs and the conclusions. --- paper_title: Off-line Arabic character recognition: the state of the art paper_content: Machine simulation of human reading has been the subject of intensive research for almost three decades. A large number of research papers and reports have already been published on Latin, Chinese and Japanese characters. However, little work has been conducted on the automatic recognition of Arabic characters because of the complexity of printed and handwritten text, and this problem is still an open research field. The main objective of this paper is to present the state of Arabic character recognition research throughout the last two decades. --- paper_title: On-Line and Off-Line Handwriting Recognition : A Comprehensive Survey paper_content: Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the online case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentification, handwriting learning tools are also considered. --- paper_title: High accuracy optical character recognition using neural networks with centroid dithering paper_content: Optical character recognition (OCR) refers to a process whereby printed documents are transformed into ASCII files for the purpose of compact storage, editing, fast retrieval, and other file manipulations through the use of a computer. The recognition stage of an OCR process is made difficult by added noise, image distortion, and the various character typefaces, sizes, and fonts that a document may have. 
In this study a neural network approach is introduced to perform high accuracy recognition on multi-size and multi-font characters; a novel centroid-dithering training process with a low noise-sensitivity normalization procedure is used to achieve high accuracy results. The study consists of two parts. The first part focuses on single size and single font characters, and a two-layered neural network is trained to recognize the full set of 94 ASCII character images in 12-pt Courier font. The second part trades accuracy for additional font and size capability, and a larger two-layered neural network is trained to recognize the full set of 94 ASCII character images for all point sizes from 8 to 32 and for 12 commonly used fonts. The performance of these two networks is evaluated based on a database of more than one million character images from the testing data set. > --- paper_title: Survey: omnifont-printed character recognition paper_content: This paper presents an overview of methods for recognition of omnifont printed Roman alphabet characters with various fonts, sizes and formats (plain, bold, etc.) from OCR system perspectives. First, it summarizes the current needs for optical printed character recognition (OPCR) in general, and then describes its importance for conversion between paper and electronic media. Current status of commercially available software and products for OPCR are briefly reviewed. Analysis indicates that the challenge we face in OPCR is far from being solved, and there is still a great gap between human needs and machine reading capabilities. Second, OPCR systems and algorithms are briefly reviewed and compared from the context of digital document processing for the following four stages: preprocessing of images, segmentation, recognition, and post-processing. Finally, possible research directions to improve the performance of OPCR systems are suggested, such as using an approach based on the combination of template matching and varieties of feature-based algorithms to recognize isolated characters, the use of multilayered architectures for OPCR, and parallel processing- based high-performance architectures. --- paper_title: An overview of character recognition methodologies paper_content: Abstract This work presents an overview of character recognition methodologies that have evolved in this century. At first the scanning devices that are used in character recognition will be explained, then some points will be stressed on the major research works that have made a great impact in character recognition. From a methodological point of view we will present the different steps that have been employed in OCR. And finally the most important industrial character recognisers will be covered along with the character data bases that are used in testing the various algorithms. --- paper_title: Optical Character Recognition paper_content: From the Publisher: ::: Optical Character Recognition (OCR) has become an important and widely used technology. Among its many practical applications are the scanners used at store check-out counters, money changing machines, office scanning machines, and the efforts to automate the postal system. Research is particularly active in Japan where one important goal is to develop economical machines that can read Kanji characters. Such machines will be widely used in offices as man-machine interfaces. Dr. 
Mori from the Ricoh R&D Center in Japan has used his experience in optical character recognition to create a thorough reference to this widely used but still growing technology. --- paper_title: Research on Machine Recognition of Handprinted Characters paper_content: Machine recognition of handprinted Chinese characters has recently become very active in Japan. Both from the practical and the academic point of view, very encouraging results are reported. The work is described systematically and analyzed in terms of so-called feature matching, which is likely to be the mainstream of the research and development of machine recognition of handprinted Chinese characters. A database, ETL8 (881 Kanji, 71 hirakana, and 160 variations for each category), is explained, on which many experiments were performed. Recognition rates reported using this database can be compared, and so somewhat qualitative evaluation of these methods is described. Based on the comparative study, the merits and demerits of both feature and structural matching are discussed and some future directions are mentioned. --- paper_title: Boundary detection using mathematical morphology paper_content: Object boundaries contain important shape information in an image. Mathematical morphology is shape sensitive and can be used in boundary detection. In this paper, we propose dynamic mathematical morphology which only operates on the parts of interest in an image and reacts to certain characteristics of the region. The next position of the structuring element is dynamically selected at each step of the operation. The technique is used to detect object boundaries and has produced encouraging results. --- paper_title: Recognizing off-line cursive handwriting paper_content: We present a system for recognizing off-line, cursive, English text, guided in part by global characteristics (style) of the handwriting. We introduce a new method for segmenting words into letters, based on minimizing a cost function. Segmented letters are normalized with a novel algorithm that scales different parts of a letter separately removing much of the variation in the writing. We use a neural network for letter recognition and use the output of the network as posterior probabilities of letters in the word recognition process. We found that using a hidden Markov Model for word recognition is less successful than assuming an independent process for our small set of test words. In our experiments with several hundred words, written by 7 writers, 96% of the test words were correctly segmented, 52% were correctly recognized, and 70% were in the top three choices. > --- paper_title: A method of detecting the orientation of aligned components paper_content: This report describes a method of detecting the orientation of a set components which are located along parallel lines. the orientation is detected by using the histogram of nearest neighbor directions. Examples are given using three kinds of images: envelope images, Glass patterns of dots, and aerial photographs of houses. --- paper_title: Document image defect models paper_content: A lack of explicit quantitative models of imaging defects due to printing, optics, and digitization has retarded progress in some areas of document image analysis, including syntactic and structural approaches. Establishing the essential properties of such models, such as completeness (expressive power) and calibration (closeness of fit to actual image populations) remain open research problems. 
Work-in-progress towards a parameterized model of local imaging defects is described, together with a variety of motivating theoretical arguments and empirical evidence. A pseudo-random image generator implementing the model has been built. Applications of the generator are described, including a polyfont classifier for ASCII and a single-font classifier for a large alphabet (Tibetan U-Chen), both of which which were constructed with a minimum of manual effort. Image defect models and their associated generators permit a new kind of image database which is explicitly parameterized and indefinitely extensible, alleviating some drawbacks of existing databases. --- paper_title: A robust skew detection algorithm for grayscale document image paper_content: A fast and robust skew detection algorithm for gray-scale images is presented. The MCCSD (modified cross-correlation skew detection) algorithm uses horizontal and vertical cross-correlation simultaneously to deal with vertically laid-out text, which is commonly used in Chinese or Japanese documents. Instead of calculating the correlation for the entire image, we use small randomly selected regions to speed up the process. The region verification stage and further processing of auxiliary peaks make our method robust and reliable. An experiment shows that the proposed method has good results in detecting skew in various kinds of pages. --- paper_title: Adaptive, quadratic preprocessing of document images for binarization paper_content: This paper presents an adaptive algorithm for preprocessing document images prior to binarization in character recognition problems. Our method is similar in its approach to the blind adaptive equalization of binary communication channels. The adaptive filter utilizes a quadratic system model to provide edge enhancement for input images that have been corrupted by noise and other types of distortions during the scanning process. Experimental results demonstrating significant improvement in the quality of the binarized images over both direct binarization and a previously available preprocessing technique are also included. --- paper_title: Repulsive attractive network for baseline extraction on document images paper_content: This paper describes a new framework, called repulsive attractive (RA) network for baseline extraction on document images. The RA network is a self organizing feature detector which interacts with the document text image through the attractive and repulsive forces defined among the network components and the document image. Experimental results indicate that the network can successfully extract the baselines under heavy noise and with overlaps between the ascending and descending portions of the characters of adjacent lines. The proposed method is also applicable to a wide range of image processing applications, such as curve fitting, segmentation and thinning. --- paper_title: Recognition of cursive writing on personal checks paper_content: The system described in this paper applies Hidden Markov technology both to the task of recognizing the cursive legal amount on personal checks and the isolated (numeric) courtesy amount. --- paper_title: Which Hough transform paper_content: Abstract The Hough transform is recognized as being a powerful tool in shape analysis which gives good results even in the presence of noise and occlusion. Major shortcomings of the technique are excessive storage requirements and computational complexity. 
Solutions to these problems form the bulk of contributions to the literature concerning the Hough transform. An excellent comprehensive review of available methods up to and partially including 1988 is given by Illingworth and Kittler (Comput. Vision Graphics Image Process. 44, 1988, 87-116). In the years following this survey much new literature has been published. The present work offers an update on state of the art Hough techniques. This includes comparative studies of existing techniques, new perspectives on the theory, very many novel algorithms, parallel implementations, and additions to the task-specific hardware. Care is taken to distinguish between research that aims to further basic understanding of the technique without necessarily being computationally realistic and research that may be applicable in an industrial context. A new trend in Hough transform work, that of the probabilistic Houghs, is identified and reviewed in some detail. Attempts to link the low level perceptive processing offered by the Hough transform to high level knowledge driven processing are also included, together with the many recent successful applications appearing in the literature. --- paper_title: Morphological filtering: An overview paper_content: This paper is an overview on the concept of morphological filtering. Starting from openings and the associated granulometries, we discuss the notion and construction of morphological filters. Then the major differences between the ‘morphological’ and the ‘linear’ approaches are highlighted. Finally, the problem of optimal morphological filtering is presented. --- paper_title: A new scheme for off-line handwritten connected digit recognition paper_content: A scheme is proposed for off-line handwritten connected digit recognition, which uses a sequence of segmentation and recognition algorithms. First, the connected digits are segmented by employing both the gray scale and binary information. Then, a new set of features is extracted from the segments. The parameters of the feature set are adjusted during the training stage of the hidden Markov model (HMM) where the potential digits are recognized. Finally, in order to confirm the preliminary segmentation and recognition results, a recognition based segmentation method is presented. --- paper_title: Chaincode contour processing for handwritten word recognition paper_content: Contour representations of binary images of handwritten words afford considerable reduction in storage requirements while providing lossless representation. On the other hand, the one-dimensional nature of contours presents interesting challenges for processing images for handwritten word recognition. Our experiments indicate that significant gains are to be realized in both speed and recognition accuracy by using a contour representation in handwriting applications. --- paper_title: Quality assessment and restoration of typewritten document images paper_content: We present a useful method for assessing the quality of a typewritten document image and automatically selecting an optimal restoration method based on that assessment. We use five quality measures that assess the severity of background speckle, touching characters, and broken characters. A linear classifier uses these measures to select a restoration method. On a 139-document corpus, our methodology reduced the corpus OCR character error rate from 20.27% to 12.60%. 
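The chaincode contour processing entry above uses chain-coded contours as a compact, lossless representation of handwritten word images. Purely as an illustration of the classic 8-direction Freeman chain code behind such representations, and not as code from any of the cited systems, the sketch below derives chain codes and a crude direction histogram from an ordered contour; the coordinate convention (y grows downward), the function names and the toy square contour are assumptions of this example.

```python
# Illustrative sketch only: 8-direction Freeman chain codes for an ordered,
# closed, 8-connected contour, plus a normalised direction histogram that can
# serve as a very crude contour feature. The y axis grows downward (image
# coordinates), so code 0 points right and code 2 points up the image.

FREEMAN = {
    (1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
    (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7,
}

def chain_code(contour):
    """Freeman chain code of an ordered, closed contour of (x, y) pixels.

    Consecutive points (and the last back to the first) are assumed to be
    8-connected neighbours; anything else raises a KeyError.
    """
    codes = []
    n = len(contour)
    for i in range(n):
        (x0, y0), (x1, y1) = contour[i], contour[(i + 1) % n]
        codes.append(FREEMAN[(x1 - x0, y1 - y0)])
    return codes

def direction_histogram(codes):
    """Normalised 8-bin histogram of chain-code directions."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

if __name__ == "__main__":
    # A small square traced clockwise on screen, starting at its top-left corner.
    square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
    codes = chain_code(square)
    print(codes)                       # [0, 0, 6, 6, 4, 4, 2, 2]
    print(direction_histogram(codes))  # equal mass in the four axis directions
```

A real system would first trace the contour out of the binary image and usually smooth or resample it before computing such codes, as several of the preprocessing entries in this list discuss.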
--- paper_title: One dimensional representation of two dimensional information for HMM based handwritten recognition paper_content: We introduce a new set of one-dimensional discrete, constant length features to represent two dimensional shape information for HMM (hidden Markov model), based handwritten optical character recognition problem. The proposed feature set embeds the two dimensional information into a sequence of one-dimensional codes, selected from a code book. It provides a consistent normalization among distinct classes of shapes, which is very convenient for HMM based shape recognition schemes. The new feature set is used in a handwritten optical character recognition scheme, where a sequence of segmentation and recognition stages is employed. The normalization parameters, which maximize the recognition rate, are dynamically estimated in the training stage of the HMM. The proposed character recognition system is tested on both a locally generated cursively handwritten data and isolated number digits of NIST database. The experimental results indicate high recognition rates. --- paper_title: Adaptive unsharp masking for contrast enhancement paper_content: A new scheme of unsharp masking for image contrast enhancement is presented. An adaptive algorithm is introduced so that a sharpening action is performed only in locations where the image exhibits significant dynamics. Hence, the amplification of noise in smooth areas is reduced. An adaptive directional filtering is also performed so as to provide suitable emphasis to the different directional characteristics of the detail. Because it is capable of treating high-detail and medium-detail areas differently, this algorithm also avoids unpleasant overshoot artifacts in regions of sharp transitions. Experimental results demonstrating the usefulness of the adaptive operator in an application involving preprocessing of images for enhancement prior to zooming are also included. --- paper_title: Document image preprocessing based on optimal Boolean filters paper_content: Abstract In this paper, optimal Boolean filters are applied to enhance the binary document images corrupted with uniform noise or uniformly distributed distinct graphical patterns in the background. The performance and operation theory of optimal Boolean filters against other competitive techniques are compared. Experimental results show that the Boolean filters outperforms the morphology approach in extracting the text from overlapped text/background images. The feasibility of trained Boolean filters is also confirmed by experimental results in the case where the original image is not available. --- paper_title: NORMALIZING AND RESTORING ON-LINE HANDWRITING paper_content: Abstract Preprocessing and normalization techniques for on-line handwriting analysis are crucial steps that usually compromise the success of recognition algorithms. These steps are often neglected and presented as solved problems, but this is far from the truth. An overview is presented of the principal on-line techniques for handwriting preprocessing and word normalization, covering the major difficulties encountered and the various approaches usually used to resolve these problems. Some measurable definitions for handwriting characteristics are proposed, such as baseline orientation, character slant and handwriting zones. These definitions are used to measure and quantify the performance of the normalization algorithms. 
An approach to enhancing and restoring handwriting text is also presented, and an objective evaluation of all the processing results. --- paper_title: How to extend and bootstrap an existing data set with real-life degraded images paper_content: This paper introduces a methodology for bootstrapping and creating large number of groundtruthed "real-life" degraded images from an existing data set with a fraction of the original cost and time. The real-life degradations include geometric distortions, coffee stains, water or ink marks, and folds and creases. The methodology includes an automatic procedure to generate unlimited "real-life" degraded images (with coffee and ink marks and soil spots) without any cost. A small experiment was conducted to illustrate the effectiveness of our methodology. In the experiment, 22 real-life degraded images and the two original images were tested on a commercial OCR system. The accuracy rates of the OCR for the two original pages are 98.46% and 99.34% while the accuracy rates for the degraded pages are ranging from 57.17% to 98.45%, depending on the severity and the type of degradation applied to the pages. --- paper_title: Off-line handwritten word recognition using a hidden Markov model type stochastic network paper_content: Because of large variations involved in handwritten words, the recognition problem is very difficult. Hidden Markov models (HMM) have been widely and successfully used in speech processing and recognition. Recently HMM has also been used with some success in recognizing handwritten words with presegmented letters. In this paper, a complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model type stochastic network is presented. Our scheme includes a morphology and heuristics based segmentation algorithm, a training algorithm that can adapt itself with the changing dictionary, and a modified Viterbi algorithm which searches for the (l+1)th globally best path based on the previous l best paths. Detailed experiments are carried out and successful recognition results are reported. > --- paper_title: Off-line cursive script word recognition paper_content: Cursive script word recognition is the problem of transforming a word from the iconic form of cursive writing to its symbolic form. Several component processes of a recognition system for isolated offline cursive script words are described. A word image is transformed through a hierarchy of representation levels: points, contours, features, letters, and words. A unique feature representation is generated bottom-up from the image using statistical dependences between letters and features. Ratings for partially formed words are computed using a stack algorithm and a lexicon represented as a trie. Several novel techniques for low- and intermediate-level processing for cursive script are described, including heuristics for reference line finding, letter segmentation based on detecting local minima along the lower contour and areas with low vertical profiles, simultaneous encoding of contours and their topological relationships, extracting features, and finding shape-oriented events. Experiments demonstrating the performance of the system are also described. > --- paper_title: Optimal Local Weighted Averaging Methods in Contour Smoothing paper_content: In several applications where binary contours are used to represent and classify patterns, smoothing must be performed to attenuate noise and quantization error. 
This is often implemented with local weighted averaging of contour point coordinates, because of the simplicity, low-cost and effectiveness of such methods. Invoking the "optimality" of the Gaussian filter, many authors will use Gaussian-derived weights. But generally these filters are not optimal, and there has been little theoretical investigation of local weighted averaging methods per se. This paper focuses on the direct derivation of optimal local weighted averaging methods tailored towards specific computational goals such as the accurate estimation of contour point positions, tangent slopes, or deviation angles. A new and simple digitization noise model is proposed to derive the best set of weights for different window sizes, for each computational task. Estimates of the fraction of the noise actually removed by these optimum weights are also obtained. Finally, the applicability of these findings for arbitrary curvature is verified, by numerically investigating equivalent problems for digital circles of various radii. --- paper_title: A one-pass two-operation process to detect the skeletal pixels on the 4-distance transform paper_content: A skeletonizing procedure is illustrated that is based on the notion of multiple pixels as well as on the use of the 4-distance transform. The set of the skeletal pixels is identified within one sequential raster scan of the picture where the 4-distance transform is stored. Two local conditions, introduced to characterize the multiple pixels are employed. Since the set of the skeletal pixels is at most two pixels wide, the skeleton can be obtained on completion of an additional inspection of the picture, during which time standard removal operations are applied. Besides being correct and computationally convenient, the procedure produces a labeled skeleton, i.e. a skeleton whose adequacy for shape description purposes is generally acknowledged. > --- paper_title: An adaptive logical method for binarization of degraded document images paper_content: Abstract This paper describes a modified logical thresholding method for binarization of seriously degraded and very poor quality gray-scale document images. This method can deal with complex signal-dependent noise, variable background intensity caused by nonuniform illumination, shadow, smear or smudge and very low contrast. The output image has no obvious loss of useful information. Firstly, we analyse the clustering and connection characteristics of the character stroke from the run-length histogram for selected image regions and various inhomogeneous gray-scale backgrounds. Then, we propose a modified logical thresholding method to extract the binary image adaptively from the degraded gray-scale document image with complex and inhomogeneous background. It can adjust the size of the local area and logical thresholding level adaptively according to the local run-length histogram and the local gray-scale inhomogeneity. Our method can threshold various poor quality gray-scale document images automatically without need of any prior knowledge of the document image and manual fine-tuning of parameters. It keeps useful information more accurately without overconnected and broken strokes of the characters, and thus, has a wider range of applications compared with other methods. --- paper_title: Integral ratio: a new class of global thresholding techniques for handwriting images paper_content: We propose a class of histogram based global thresholding techniques called integral ratio. 
They are designed to threshold gray-scale handwriting images and separate the handwriting from the background. The following tight requirements must be met: 1) all the details of the handwriting are to be retained, 2) the writing paper used may contain strong colored and/or patterned background which must be removed, and 3) the handwriting may be written using a wide variety of pens such as a fountain pen, ballpoint pen, or pencil. A specific application area which requires these tight requirements is forensic document examination, where a handwritten document is often considered as legal evidence and the handwriting must not be tampered with or modified in any way. The proposed class of techniques is based on a two stage thresholding approach requiring each pixel of a handwritten image to be placed into one of three classes: foreground, background, and a fuzzy area between them where it is hard to determine whether a pixel belongs to the foreground or the background. Two techniques, native integral ratio (NIR) and quadratic integral ratio (QIR), were created based on this class and tested against two well-known thresholding techniques: Otsu's (1979) technique and the entropy thresholding technique. We found that QIR has superior performance compared to all the other techniques tested. --- paper_title: An introduction to digital image processing paper_content: A new and distinct spur type apple variety which originated as a limb mutation of the standard winter banana apple tree (non-patented) is provided. This new apple variety possesses a vigorous compact and only slightly spreading growth habit and can be distinguished from its parent and the Housden spur type winter banana apple variety (non-patented). More specifically, the new variety forms more fruiting spurs per unit length on two and three year old wood than the standard winter banana apple tree and less spurs per unit length than the Housden spur type winter banana apple tree. Additionally, the new variety has the ability to heavily bear fruit having a whitish-yellow skin color with a sometimes slight scarlet red blush upon maturity which is substantially identical to that of the standard winter banana apple tree and which has substantially less skin russeting than the Housden spur type winter banana apple tree. --- paper_title: Goal-Directed Evaluation of Binarization Methods paper_content: This paper presents a methodology for evaluation of low-level image analysis methods, using binarization (two-level thresholding) as an example. Binarization of scanned gray scale images is the first step in most document image analysis systems. Selection of an appropriate binarization method for an input image domain is a difficult problem. Typically, a human expert evaluates the binarized images according to his/her visual criteria. However, to conduct an objective evaluation, one needs to investigate how well the subsequent image analysis steps will perform on the binarized image. We call this approach goal-directed evaluation, and it can be used to evaluate other low-level image processing methods as well. Our evaluation of binarization methods is in the context of digit recognition, so we define the performance of the character recognition module as the objective measure. Eleven different locally adaptive binarization methods were evaluated, and Niblack's method gave the best performance. --- paper_title: Thinning methodologies−a comprehensive survey paper_content: A comprehensive survey of thinning methodologies is presented. 
A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored. > --- paper_title: Line thinning by line following paper_content: Abstract A line following algorithm is presented as a means to perform line thinning on elongated figures. The method is faster than conventional thinning algorithms and not as sensitive to noise. It can be useful for other applications also. --- paper_title: A pattern adaptive thinning algorithm paper_content: A simple sequential thinning algorithm for peeling off pixels along contours is described. An adaptive algorithm obtained by incorporating shape adaptivity into this sequential process is also given. The distortions in the skeleton at the right-angle and acute-angle corners are minimized in the adaptive algorithm. The asymmetry of the skeleton, which is a characteristic of sequential algorithm, and is due to the presence of T-corners in some of the even-thickness pattern is eliminated. The performance (in terms of time requirements and shape preservation) is compared with that of a modern thinning algorithm. --- paper_title: Skeletonization of Arabic characters using clustering based skeletonization algorithm (CBSA) paper_content: Abstract Character skeletonization is an essential step in many character recognition techniques. In this paper, skeletonization of Arabic characters is addressed. While other techniques employ thinning algorithms, in this paper clustering of Arabic characters is used. The use of clustering technique (an expensive step) is justified by the properties of the generated skeleton which has the advantages of other thinning techniques and is robust. The presented technique may be used in the modeling and training stages to reduce the processing time of the recognition system. --- paper_title: An evaluation of parallel thinning algorithms for character recognition paper_content: Skeletonization algorithms have played an important role in the preprocessing phase of OCR systems. In this paper we report on the performance of 10 parallel thinning algorithms from this perspective by gathering statistics from their performance on large sets of data and examining the effects of the different thinning algorithms on an OCR system. > --- paper_title: Fast fully parallel thinning algorithms paper_content: Abstract Three new fast fully parallel 2-D thinning algorithms using reduction operators with 11-pixel supports are presented and evaluated. These are compared to earlier fully parallel thinning algorithms in tests on artificial and natural images; the new algorithms produce either superior parallel computation time (number of parallel iterations) or thinner medial curve results with comparable parallel computation time. Further, estimates of the best possible parallel computation time are developed which are applied to the specific test sets used. 
The parallel computation times of the new algorithms and one earlier algorithm are shown to approach closely or surpass these estimates and are in this sense near optimally fast. 
--- paper_title: Segmentation methods for character recognition: from segmentation to document structure analysis paper_content: A pattern-oriented segmentation method for optical character recognition that leads to document structure analysis is presented. As a first example, segmentation of handwritten numerals that touch is treated. Connected pattern components are extracted, and spatial interrelations between components are measured and grouped into meaningful character patterns. Stroke shapes are analyzed and a method of finding the touching positions that separates about 95% of connected numerals correctly is described. Ambiguities are handled by multiple hypotheses and verification by recognition. An extended form of pattern-oriented segmentation, tabular form recognition, is considered. Images of tabular forms are analyzed, and frames in the tabular structure are extracted. By identifying semantic relationships between label frames and data frames, information on the form can be properly recognized. 
--- paper_title: Reading handwritten words using hierarchical relaxation paper_content: Abstract Handwritten words are read (recognized) by applying hierarchical relaxation labeling procedures to a hierarchical description of the word. The Handwritten Word Reading System (HWWRS) builds a hierarchical description of the word it reads. HWWRS's first task is to translate the array representation (image) of a word into a geometric graph representation that models the line structures in the word. A stroke graph containing the possible strokes in the word is created by matching structural descriptions of strokes to the geometric graph. The stroke graph is then augmented to a letter-stroke graph that represents the possible letters of the word by matching structural descriptions of letters to the stroke graph. The letter-stroke graph represents the ambiguous segmentations of the word into letters and strokes that possibly overlap one another. It constitutes the hierarchical description of the input word. The hierarchical description is contextually disambiguated and thus simplified by the application of relaxation procedures. This simplified hierarchical description defines one or possibly a few words. HWWRS reads the input word by picking the most likely word from a list of possible words. The system is shown to function correctly for several example words, and experiments involving different relaxation formulas are described. 
--- paper_title: Handwritten word recognition using segmentation-free hidden Markov modeling and segmentation-based dynamic programming techniques paper_content: A lexicon-based, handwritten word recognition system combining segmentation-free and segmentation-based techniques is described. The segmentation-free technique constructs a continuous density hidden Markov model for each lexicon string. The segmentation-based technique uses dynamic programming to match word images and strings. The combination module uses differences in classifier capabilities to achieve significantly better performance. 
--- paper_title: Word recognition in a segmentation-free approach to OCR paper_content: Segmentation is a key step in current OCR systems. It has been estimated that half the errors in character recognition are due to segmentation.
A novel approach that performs OCR without the segmentation step was developed. The approach starts by extracting significant geometric features from the input document image of the page. Each feature then votes for the character that could have generated that feature. Thus, even if some of the features are occluded or lost due to degradation, the remaining features can successfully identify the character. In extreme cases, the degradation may be severe enough to prevent recognition of some of the characters in a word. In such cases, a lexicon-based word recognition technique is used to resolve ambiguity. Inexact matching and probabilistic evaluation used in the technique make it possible to identify the correct word, by detecting a partial set of characters. The authors first present an overview of their segmentation-free OCR system and then focus on the word recognition technique. Preliminary experimental results show that this is a very promising approach. > --- paper_title: Major components of a complete text reading system paper_content: The document image processes used in a recently developed text reading system are described. The system consists of three major components: document analysis, document understanding, and character segmentation/recognition. The document analysis component extracts lines of text from a page for recognition. The document understanding component extracts logical relationships between the document constituents. The character segmentation/recognition component extracts characters from a text line and recognizes them. Experiments on more than a hundred documents have proved that the proposed approaches to document analysis and document understanding are robust even for multicolumned and multiarticle documents containing graphics and photographs, and that the proposed character segmentation/recognition method is robust enough to cope with omnifont characters which frequently touch each other. > --- paper_title: The document spectrum for page layout analysis paper_content: Page layout analysis is a document processing technique used to determine the format of a page. This paper describes the document spectrum (or docstrum), which is a method for structural page layout analysis based on bottom-up, nearest-neighbor clustering of page components. The method yields an accurate measure of skew, within-line, and between-line spacings and locates text lines and text blocks. It is advantageous over many other methods in three main ways: independence from skew angle, independence from different text spacings, and the ability to process local regions of different text orientations within the same image. Results of the method shown for several different page formats and for randomly oriented subpages on the same image illustrate the versatility of the method. We also discuss the differences, advantages, and disadvantages of the docstrum with respect to other lay-out methods. > --- paper_title: Document representation and its application to page decomposition paper_content: Transforming a paper document to its electronic version in a form suitable for efficient storage, retrieval, and interpretation continues to be a challenging problem. An efficient representation scheme for document images is necessary to solve this problem. Document representation involves techniques of thresholding, skew detection, geometric layout analysis, and logical layout analysis. The derived representation can then be used in document storage and retrieval. 
Page segmentation is an important stage in representing document images obtained by scanning journal pages. The performance of a document understanding system greatly depends on the correctness of page segmentation and labeling of different regions such as text, tables, images, drawings, and rulers. We use the traditional bottom-up approach based on the connected component extraction to efficiently implement page segmentation and region identification. A new document model which preserves top-down generation information is proposed based on which a document is logically represented for interactive editing, storage, retrieval, transfer, and logical analysis. Our algorithm has a high accuracy and takes approximately 1.4 seconds on a SGI Indy workstation for model creation, including orientation estimation, segmentation, and labeling (text, table, image, drawing, and ruler) for a 2550/spl times/3300 image of a typical journal page scanned at 300 dpi. This method is applicable to documents from various technical journals and can accommodate moderate amounts of skew and noise. --- paper_title: Structured document segmentation and representation by the modified X-Y tree paper_content: We describe a top-down approach to the segmentation and representation of documents containing tabular structures. Examples of these documents are invoices and technical papers with tables. The segmentation is based on an extension of X-Y trees, where the regions are split by means of cuts along separators (e.g. lines), in addition to cuts along white spaces. The leaves describe regions containing homogeneous information and cutting separators. Adjacency links among leaves of the tree describe local relationships between corresponding regions. --- paper_title: A new scheme for off-line handwritten connected digit recognition paper_content: A scheme is proposed for off-line handwritten connected digit recognition, which uses a sequence of segmentation and recognition algorithms. First, the connected digits are segmented by employing both the gray scale and binary information. Then, a new set of features is extracted from the segments. The parameters of the feature set are adjusted during the training stage of the hidden Markov model (HMM) where the potential digits are recognized. Finally, in order to confirm the preliminary segmentation and recognition results, a recognition based segmentation method is presented. --- paper_title: Off-line cursive word recognition paper_content: The state of the art in handwriting recognition, especially in cursive word recognition, is surveyed, and some basic notions are reviewed in the field of picture recognition, particularly, line image recognition. The usefulness of 'regular' versus 'singular' classes of features is stressed. These notions are applied to obtain a graph, G, representing a line image, and also to find an 'axis' as the regular part of G. The complements to G of the axis are the 'tarsi', singular parts of G, which correspond to informative features of a cursive word. A segmentation of the graph is obtained, giving a symbolic description chain (SDC). Using one or more as robust anchors, possible words in a list of words are selected. Candidate words are examined to see if the other letters fit the rest of the SDC. Good results are obtained for clean images of words written by several persons. > --- paper_title: A survey of methods and strategies in character segmentation paper_content: Character segmentation has long been a critical area of the OCR process. 
The higher recognition rates for isolated characters vs. those obtained for words and connected character strings well illustrate this fact. A good part of recent progress in reading unconstrained printed and written text may be ascribed to more insightful handling of segmentation. This paper provides a review of these advances. The aim is to provide an appreciation for the range of techniques that have been developed, rather than to simply list sources. Segmentation methods are listed under four main headings. What may be termed the "classical" approach consists of methods that partition the input image into subimages, which are then classified. The operation of attempting to decompose the image into classifiable units is called "dissection." The second class of methods avoids dissection, and segments the image either explicitly, by classification of prespecified windows, or implicitly by classification of subsets of spatial features collected from the image as a whole. The third strategy is a hybrid of the first two, employing dissection together with recombination rules to define potential segments, but using classification to select from the range of admissible segmentation possibilities offered by these subimages. Finally, holistic approaches that avoid segmentation by recognizing entire character strings as units are described. --- paper_title: Hidden Markov Models in Handwriting Recognition paper_content: Hidden Markov Models (HMM) have now became the prevalent paradigm in automatic speech recognition. Only recently, several researchers in off-line handwriting recognition have tried to transpose the HMM technology to their field after realizing that word images could be assimilated to sequences of observations. HMM’s form a family of tools for modelling sequential processes in a statistical and generative manner. Their reputation is due to the results attained in speech recognition which derive mostly from the existence of automatic training techniques and the advantages of the probabilistic framework. This article first reviews the basic concepts of HMM’s. The second part is devoted to illustrative applications in the field of off- line handwriting recognition. We describe four different applications of HMM’s in various contexts and review some of the other approaches. --- paper_title: A new approach to document analysis based on modified fractal signature paper_content: This paper presents a new approach to document analysis. The proposed approach is based on modified fractal signature. Instead of the time-consuming traditional approaches (top-down and bottom-up approaches) where iterative operations are necessary to break a document into blocks to extract its geometric (layout) structure, this new approach can divide a document into blocks in only one step. This approach can be used to process documents with high geometrical complexity. Experiments have been conducted to prove the proposed new approach for document processing. --- paper_title: Off-line handwritten word recognition using a hidden Markov model type stochastic network paper_content: Because of large variations involved in handwritten words, the recognition problem is very difficult. Hidden Markov models (HMM) have been widely and successfully used in speech processing and recognition. Recently HMM has also been used with some success in recognizing handwritten words with presegmented letters. 
In this paper, a complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model type stochastic network is presented. Our scheme includes a morphology and heuristics based segmentation algorithm, a training algorithm that can adapt itself with the changing dictionary, and a modified Viterbi algorithm which searches for the (l+1)th globally best path based on the previous l best paths. Detailed experiments are carried out and successful recognition results are reported. > --- paper_title: Segmentation of merged characters by neural networks and shortest path paper_content: Abstract A major problem with a neural network-based approach to printed character recognition is the segmentation of merged characters. A hybrid method is proposed which combines a neural network-based deferred segmentation scheme with conventional immediate segmentation techniques. In the deferred segmentation, a neural network is employed to distinguish single characters from composites. To find a proper vertical cut that separates a composite, a shortest-path algorithm seeking minimal-penalty curved cuts is used. Integrating those components with a multiresolution neural network OCR and an efficient spelling checker, the resulting system significantly improves its ability to read omnifont document text. --- paper_title: Adaptive document segmentation and geometric relation labeling: algorithms and experimental results paper_content: This paper describes a generic document segmentation and geometric relation labeling method with applications to document analysis. Unlike the previous document segmentation methods where text spacing, border lines, and/or a priori layout models based template processing are performed, the present method begins with a hierarchy of partitioned image layers where inhomogeneous higher-level regions are recursively positioned into lower-level rectangular subregions and at the same time lower-level smaller homogeneous regions are merged into larger homogeneous regions. The present method differs from the traditional split-and-merge segmentation method in that it orthogonally splits regions using thresholds adaptively computed from projection profiles. --- paper_title: Page segmentation based on thinning of background paper_content: This paper presents a new method of page segmentation based on the analysis of background (white areas). The proposed method is capable of segmenting pages with non-rectangular layout as well as with various angles of skew. The characteristics of the method are as follows: (1) thinning of the background enables us to represent white areas of any shape as connected thin lines or chains and the robustness for tilted page images is also achieved by the representation; and (2) based on this representation, the task of page segmentation is defined as to find the loops enclosing printed areas. The task is achieved by eliminating unnecessary chains using not only a feature of white areas, but also a feature of black areas divided by a chain. Based on the experimental results and the comparison with previous methods, we discuss the advantages and limitations of the proposed method. --- paper_title: Page segmentation by segment tracing paper_content: A page segmentation method that allows one to cut a document page image into polygonal blocks as well as into classical rectangular blocks is described. The intercolumn and interparagraph gaps are extracted as horizontal and vertical lines. 
The points of intersection between these lines are treated as vertices of polygonal blocks. With the aid of the 4-connected chain codes and an intersection table, simple isothetic polygonal blocks are constructed from these points of intersection. > --- paper_title: Analysis of class separation and combination of class-dependent features for handwriting recognition paper_content: In this paper, we propose a new approach to combine multiple features in handwriting recognition based on two ideas: feature selection-based combination and class dependent features. A nonparametric method is used for feature evaluation, and the first part of this paper is devoted to the evaluation of features in terms of their class separation and recognition capabilities. In the second part, multiple feature vectors are combined to produce a new feature vector. Based on the fact that a feature has different discriminating powers for different classes, a new scheme of selecting and combining class-dependent features is proposed. In this scheme, a class is considered to have its own optimal feature vector for discriminating itself from the other classes. Using an architecture of modular neural networks as the classifier, a series of experiments were conducted on unconstrained handwritten numerals. The results indicate that the selected features are effective in separating pattern classes and the new feature vector derived from a combination of two types of such features further improves the recognition rate. --- paper_title: Preprocessing techniques for cursive script word recognition paper_content: Abstract This paper deals with techniques for improving the recognition rate of a cursive script word recognition system. Closed-loop preprocessing techniques have been designed and implemented to achieve this objective on a limited vocabulary but with no restrictions on handwriting style. This paper discusses the details of such a system and its performance on samples from several authors. Results obtained from this study are promising and suggest that closed-loop verification is a potentially more useful technique than previous open-loop processing approaches. --- paper_title: Handwritten word recognition using segmentation-free hidden Markov modeling and segmentation-based dynamic programming techniques paper_content: A lexicon-based, handwritten word recognition system combining segmentation-free and segmentation-based techniques is described. The segmentation-free technique constructs a continuous density hidden Markov model for each lexicon string. The segmentation-based technique uses dynamic programming to match word images and strings. The combination module uses differences in classifier capabilities to achieve significantly better performance. --- paper_title: A practical pattern recognition system for translation, scale and rotation invariance paper_content: We present a practical pattern recognition system that is invariant with respect to translation, scale and rotation of objects. The system is also insensitive to large variations of the threshold used. As feature vectors, Zernike moments are used and we compare them with Hu's seven moment invariants. For a practical machine vision system, three key issues are discussed: pattern normalization, fast computation of Zernike moments, and classification using k-NN rule. 
As testing results, the system recognizes a set of 62 alphanumeric machine-printed characters with different sizes, at arbitrary orientations, and with different thresholds where the size of the characters varies from 10/spl times/10 to 512/spl times/512 pixels. > --- paper_title: Multi-layer projections for the classification of similar Chinese characters paper_content: An algorithm is presented of extracting features from Chinese characters. These features consist of the Fourier spectrum of projections obtained from multiple-layers of annular partitions. This method takes into consideration the square shape of Chinese characters to that the extracted features contain the significant information of the different parts of the character, and are insensitive to rotation and linear displacement. For the experiments, 97 similar Chinese characters were selected from the most frequently used characters. These characters were divided into 34 groups according to similarity in shape. Three different fonts of Chinese characters (Song, Kai and Bold face) were used. Four additional symbols were also included to study the effects of character symmetry on the proposed algorithm. Experimental results indicate that for any displacement and for rotations in the range of (-180 degrees , +180 degrees ), this method can separate without exception all similar Chinese characters including the complex ones. > --- paper_title: The feature extraction of Chinese character based on contour information paper_content: A new method, called central projection transformation, is proposed in this paper for feature extraction. From our experiments, the new method is found to be efficient in extracting features based on the contours of Chinese characters. Chinese characters have complex structures, and some of them are composed of several separate components, so several contours are embedded in a character. This may obstruct the application of the contour approach in recognizing Chinese characters. Central projection transformation can convert such a multi-contour pattern into a solid, convex pattern whose contour is a unique polygon. Most of the information of this new pattern is still located around its periphery. This approach can greatly simplify the processing of Chinese characters and other multi-contour patterns. It is also a powerful tool for processing Arabic, Japanese and other characters. --- paper_title: Multiresolution recognition of handwritten numerals with wavelet transform and multilayer cluster neural network paper_content: In this paper, we propose a new scheme for multiresolution recognition of totally unconstrained handwritten numerals using wavelet transform and a simple multilayer cluster neural network. The proposed scheme consists of two stages: a feature extraction stage for extracting multiresolution features with wavelet transform, and a classification stage for classifying totally unconstrained handwritten numerals with a simple multilayer cluster neural network. In order to verify the performance of the proposed scheme, experiments with unconstrained handwritten numeral database of Concordia University of Canada, that of Electro-Technical Laboratory of Japan, and that of Electronics and Telecommunications Research Institute of Korea were performed. The error rates were 3.20%, 0.83%, and 0.75%, respectively. These results showed that the proposed scheme is very robust in terms of various writing styles and sizes. 
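The practical pattern recognition entry above compares Zernike moments with Hu's seven moment invariants as translation-, scale- and rotation-invariant character features. As a hedged illustration of how such invariants are built from normalised central moments, and not an implementation of any cited system, the sketch below computes the first two Hu invariants of a toy binary glyph; the array layout, function names and the toy glyph are assumptions of this example.

```python
# Illustrative sketch only: raw moments, central moments, normalised central
# moments, and the first two Hu invariants of a binary glyph image. The Hu
# invariants are (approximately, on a discrete grid) unchanged when the glyph
# is translated, scaled or rotated.
import numpy as np

def raw_moment(img, p, q):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]   # row index = y, column index = x
    return float(np.sum((x ** p) * (y ** q) * img))

def hu_first_two(img):
    img = np.asarray(img, dtype=float)
    m00 = raw_moment(img, 0, 0)                      # total "mass" of the glyph
    xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    def mu(p, q):                                    # central moment about the centroid
        return float(np.sum(((x - xc) ** p) * ((y - yc) ** q) * img))

    def eta(p, q):                                   # normalised central moment
        return mu(p, q) / (m00 ** (1 + (p + q) / 2.0))

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return phi1, phi2

if __name__ == "__main__":
    glyph = np.zeros((8, 8))
    glyph[2:6, 1:7] = 1.0                            # a toy rectangular "character"
    print(hu_first_two(glyph))
```

The remaining five Hu invariants follow the same pattern from third-order normalised moments; Zernike moments, which the entry adopts as its feature vectors, require the dedicated fast-computation schemes that the entry discusses.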
--- paper_title: Character recognition using statistical moments paper_content: Abstract This paper presents a character recognition system that is implemented using a variety of statistical moments as features. These moments include Hu moment invariants, Affine moment invariants and Tsirikolias–Mertzios moments. Euclidean distance measure, cross correlation and discrimination cost were used as the classification techniques. The mean of the intraclass standard deviations of the features was used as a weighting factor during the classification process to improve recognition accuracy. The system was rigorously tested under different conditions, including using different numbers of training sets and documents with different fonts. It was found that Tsirikolias–Mertzios moments with weighted cross correlation classifier provided the best recognition rates. 
--- paper_title: Recognition of handprinted Chinese characters using Gabor features paper_content: A method for handprinted Chinese character recognition based on Gabor filters is proposed. The Gabor approach to character recognition is intuitively appealing because it is inspired by a multi-channel filtering theory for processing visual information in the early stages of the human visual system. The performance of a character recognition system using Gabor features is demonstrated on the ETL-8 character set. Experimental results show that the Gabor features yielded an error rate of 2.4% versus the error rate of 4.4% obtained by using a popular feature extraction method. 
--- paper_title: A comparative study of different classifiers for handprinted character recognition paper_content: In this paper, we present a comparative study of four different classifiers for isolated handprinted character recognition. These four classifiers are i) a nearest template (NT) classifier, ii) an enhanced nearest template (ENT) classifier, iii) a standard feedforward neural network (FNN) classifier, and iv) a hybrid classifier. The NT classifier is a variation of the nearest neighbor classifier which stores a small number of templates (or prototypes) and their statistics generated by a special clustering algorithm. Motivated by radial basis function networks, the ENT classifier is proposed to augment the NT classifier with an optimal transform which maps the distances generated by the NT classifier to character categories.
The FNN classifier is a 3-layer (with one hidden layer) feedforward network trained using the backpropagation algorithm. The hybrid classifier combines results from the FNN and NT classifiers in an efficient way to improve the recognition accuracy with only a slight increase in computation. In this paper, we evaluate the performance of these four classifiers in terms of recognition accuracy, top 3 coverage rate, and recognition speed, using the NIST isolated lower-case alphabet database. Our experiments show that the FNN classifier outperforms the NT and ENT classifiers in all the three evaluation criteria. The hybrid classifier achieves the best recognition accuracy at a cost of little extra computation over the FNN classifier. The ENT classifier can significantly improve the recognition accuracy of the NT classifier when a small number of templates is used. --- paper_title: Invariant pattern recognition by moment fourier descriptor paper_content: Abstract This paper proposes a new shape descriptor, called moment Fourier descriptor, which can describe a complex object composed by a set of closed regions. This descriptor is shown to be independent of object's translation, rotation and scaling. The essential advantage of moment Fourier descriptor is that it can be used to recognize more complex patterns than the traditional Fourier descriptors. Simulations are given to show the high performance of the proposed approach. --- paper_title: Recognition of legal amounts on bank cheques paper_content: This article describes the recognition of legal amounts of a bank cheque processing system developed at CENPARMI. The preprocessing, sentence to word segmentation and word recognition approaches are presented along with some critical reviews. The overall engine is a combination of a global feature scheme with an HMM module. The global features consist of the encoding of the relative position of the ascenders, descenders and loops within a word. The HMM uses one feature set based on the orientation of contour points as well as their distance to the baselines. Our system is fully trainable, reducing to a strict minimum the number of hand-set parameters. The system is also modular and independent of specific languages as we have to deal with at least two languages in Canada, namely English and French. The system can be easily adapted to read other European languages based on the Roman alphabet. The system is continuously tested on data from the local phone company, and we report here the results on a balanced French database of approximately 2000 cheques with specified amounts. --- paper_title: Chaincode contour processing for handwritten word recognition paper_content: Contour representations of binary images of handwritten words afford considerable reduction in storage requirements while providing lossless representation. On the other hand, the one-dimensional nature of contours presents interesting challenges for processing images for handwritten word recognition. Our experiments indicate that significant gains are to be realized in both speed and recognition accuracy by using a contour representation in handwriting applications. --- paper_title: Off-line recognition of cursive script produced by a cooperative writer paper_content: A method for the off-line recognition of cursive handwriting based on hidden Markov models is described. The features used in the HMMs are based on the arcs of skeleton graphs of the words to be recognized. 
An average correct recognition rate of over 98% on the word level has been achieved in experiments with cooperative writers using two dictionaries of 150 words each. --- paper_title: Structural Feature Extraction Using Multiple Bases paper_content: Abstract The prime difficulty in research and development of the handwritten character recognition systems is in the variety of shape deformations. In particular, throughout more than a quarter of a century of research, it is found that some qualitative features such as quasi-topological features (convexity and concavity), directional features, and singular points (branch points and crossings) are effective in coping with variations of shapes. On the basis of this observation, Nishida and Mort ( IEEE Trans. Pattern Anal. Mach. Intell. 14, 1992, 516-533; and Structured Document Image Analysis (H. S. Baird, H. Bunke, and K. Yamamoto, Eds.), pp. 139-187, Springer-Verlag, New York, 1992) proposed a method for structural description of character shapes by few components with rich features. This method is clear and rigorous, can cope with various deformations, and has been shown to be powerful in practice. Furthermore, shape prototypes (structural models) can be constructed automatically from the training data (Nishida and Mori, IEEE Trans. Pattern Anal. Mach. Intell. 15, 1993, 1298-1311). However, in the analysis of directional features, the number of directions is fixed to 4, and more directions such as 8 or 16 cannot be dealt with. For various applications of Nishida and Mori's method, we present a method for structural analysis and description of simple arcs or closed curves based on 2 m -directional features ( m = 2, 3, 4, ...) and convex/concave features. On the other band, software OCR systems without specialized hardware have attracted much attention recently. Based on the proposed method of structural analysis and description, we describe a software implementation of a handwritten character recognition system using multistage strategy. --- paper_title: A method of extracting curvature features and its application to handwritten character recognition paper_content: Proposes an orthodox method of extracting curvature features based on a curve fitting approximation. It enables us to obtain analog values of curvatures of a given curve. This means that a so-called gray zone between two categories can be identified and also that very shallow concavities can be detected. For this purpose, cubic B-splines are obtained using a least squares method with natural conditions at the end-points. The method was tested on synthesized noisy data sets such as 2/spl rarr/Z, 4/spl rarr/9 and 1/spl rarr/3. The results are so good that the method can be used to obtain analog features as intended. Demonstrative experimental results are shown for the data set for 1/spl rarr/3. --- paper_title: HMM-KNN word recognition engine for bank cheque processing paper_content: Describes the mixed HMM-KNN word recognition module of a bank cheque processing system developed at CENPARMI. It uses a combination of 2 segmentation free word recognition schemes. The first scheme uses a set of global features associated to a modified K nearest neighbour classifier; while the second one uses a set of directional contour features as input to an HMM. The system has been designed to be modular and independent of specific languages as in Canada one has to deal with at least 2 languages, namely English and French. 
It can be easily adapted to read other European languages based on the Roman alphabet. The system is continuously tested on data from the local phone company, and we report here the results on a database of approximately 4,500 cheques. --- paper_title: On optimal order in modeling sequence of letters in words of common language as a Markov chain paper_content: Abstract In recognition of words of a language such as English, the letter sequences of the words are often modeled as Markov chains. In this paper the problem of determining the optimal order of such Markov chains is addressed using Tong's minimum Akaike information criterion estimate (MAICE) approach and Hoel's likelihood ratio statistic based hypothesis-testing approach. Simulation results show that the sequence of letters in English words is more likely to be a second order Markov chain than a first order one. --- paper_title: Distance features for neural network-based recognition of handwritten characters paper_content: Features play an important role in OCR systems. In this paper, we propose two new features which are based on distance information. In the first feature (called DT, Distance Transformation), each white pixel has a distance value to the nearest black pixel. The second feature is called DDD (Directional Distance Distribution) which contains rich information encoding both the black/white and directional distance distributions. A new concept of map tiling is introduced and applied to the DDD feature to improve its discriminative power. For an objective evaluation and comparison of the proposed and conventional features, three distinct sets of characters (i.e., numerals, English capital letters, and Hangul initial sounds) have been tested using standard databases. Based on the results, three propositions can be derived to confirm the superiority of both the DDD feature and the map tilings. --- paper_title: On-line handwriting character recognition method with directional features and direction-change features paper_content: We propose a new on-line recognition method to recognize handwritten cursive-style characters correctly. Our method simultaneously uses both directional features, otherwise known as off-line features, and direction-change features, which we designed as on-line features. These features are expressed in the divided meshes of the character area. The directional features express the directions between character's coordinates within the meshes. The direction-change features express where in the mesh and in which direction each direction of the character's coordinates change, and express where the circular parts of the character are in the mesh. These features express both written strokes in the pen-down state and unwritten imaginary strokes in the pen-up state. Our method recognizes an inputted character by comparing the inputted character's features and the features of standard characters. The recognition rate was improved by our method with directional features and direction-change features as opposed to the traditional method with only directional features. Moreover, the recognition rate was also improved by considering imaginary strokes in the pen-up state. --- paper_title: Recognition of handprinted chinese characters via stroke relaxation paper_content: Abstract A new relaxation matching method based on the information of the neighborhood relationship among extracted sub-strokes is proposed to recognize handprinted Chinese characters (HCCs). 
In order to ensure the convergence in the relaxation process, a new iterated scheme is devised. A supporting function is also designed to solve the problem of wide variability among writers and some inevitable defects in the preprocessing procedure. The distance function on which the matching possibilities of sub-strokes are reflected is determined by using the linear programming method to obtain the best result. The experiments are conducted by using the Kanji of the ETL-8 database. From the experimental results, it is shown that the proposed algorithm does improve the recognition rate of HCCs. --- paper_title: Direct extraction of topographic features for gray scale character recognition paper_content: Optical character recognition (OCR) traditionally applies to binary-valued imagery although text is always scanned and stored in gray scale. However, binarization of multivalued image may remove important topological information from characters and introduce noise to character background. In order to avoid this problem, it is indispensable to develop a method which can minimize the information loss due to binarization by extracting features directly from gray scale character images. In this paper, we propose a new method for the direct extraction of topographic features from gray scale character images. By comparing the proposed method with Wang and Pavlidis' method, we realized that the proposed method enhanced the performance of topographic feature extraction by computing the directions of principal curvature efficiently and prevented the extraction of unnecessary features. We also show that the proposed method is very effective for gray scale skeletonization compared to Levi and Montanari's method. > --- paper_title: The HOVER system for rapid holistic verification of off-line handwritten phrases paper_content: The authors describe ongoing research on a system for rapid verification of unconstrained off-line handwritten phrases using perceptual holistic features of the handwritten phrase image. The system is used to verify handwritten street names automatically extracted from live US mail against recognition results of analytical classifiers. The system rejects errors with 98% accuracy at the 30% accept level, while consuming approximately 20 msec per image on the average on a 150 MHz SPARC 10. --- paper_title: Hierarchical attributed graph representation and recognition of handwritten chinese characters paper_content: Abstract This paper presents a new method of recognizing handwritten Chinese characters. A structural representation called hierarchical attributed graph representation (HAGR) is introduced to describe handwritten Chinese characters. The HAGR provides a simple and direct representation of handwritten Chinese characters. With HAGR, the recognition process becomes a simple task of graph matching. A cost function mapping a candidate to a model graph is introduced. This approach can tolerate the variations of HAGR which reflect the instabilities or variabilities of handwritten Chinese characters resulting from different writing styles. Several rules have been introduced to rearrange the order of the vertices of the graphs in order to avoid the combinatorial explosion in graph matching. In addition, the database of the character models is organized in a search-tree structure. For a candidate character, the search process to find a corresponding model character has been divided into a number of simple and local decisions at different levels of the tree. 
This considerably improves the efficiency and accuracy of the matching process. --- paper_title: Performance comparison of several feature selection methods based on node pruning in handwritten character recognition paper_content: The paper presents a performance comparison of several feature selection methods based on neural network node pruning. Assuming the features are extracted and presented as the inputs of a 3 layered perceptron classifier, we apply the five feature selection methods before/during/after neural network training in order to prune only input nodes of the neural network. Four of them are node pruning methods such as node saliency method, node sensitivity method, and two interactive pruning methods using different contribution measures. The last one is a statistical method based on principle component analysis (PCA). The first two of them prune input nodes during training whereas the last three do before/after network training. For gradient and upper down, left right hole concavity features, we perform several experiments of handwritten English alphabet and digit recognition with/without pruning using the five feature selection algorithms, respectively. The experimental results show that node saliency method outperforms the others. --- paper_title: Recognizing components of handwritten characters by attributed relational graphs with stable features paper_content: We present a method for Chinese character component recognition. We use an attributed relational graph model to describe a component. This model allows us to express the knowledge of the component shape and can include stable features of the component in various writing styles and various characters. We then describe a method to extract graph representation of an input character. The graph models are used to recognize components from a whole character by subgraph isomorphism. Our method need not segment component from character and can tolerate links and overlaps between a component and the other part. Experiments for handwritten Chinese characters show its efficiency. --- paper_title: Automatic feature generation for handwritten digit recognition paper_content: An automatic feature generation method for handwritten digit recognition is described. Two different evaluation measures, orthogonality and information, are used to guide the search for features. The features are used in a backpropagation trained neural network. Classification rates compare favorably with results published in a survey of high-performance handwritten digit recognition systems. This classifier is combined with several other high performance classifiers. Recognition rates of around 98% are obtained using two classifiers on a test set with 1000 digits per class. --- paper_title: Statistical Pattern Recognition: A Review paper_content: The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. 
In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field. --- paper_title: Representation and Recognition of Handwritten Digits Using Deformable Templates paper_content: We investigate the application of deformable templates to recognition of handprinted digits. Two characters are matched by deforming the contour of one to fit the edge strengths of the other, and a dissimilarity measure is derived from the amount of deformation needed, the goodness of fit of the edges, and the interior overlap between the deformed shapes. Classification using the minimum dissimilarity results in recognition rates up to 99.25 percent on a 2,000 character subset of NIST Special Database 1. Additional experiments on an independent test data were done to demonstrate the robustness of this method. Multidimensional scaling is also applied to the 2,000/spl times/2,000 proximity matrix, using the dissimilarity measure as a distance, to embed the patterns as points in low-dimensional spaces. A nearest neighbor classifier is applied to the resulting pattern matrices. The classification accuracies obtained in the derived feature space demonstrate that there does exist a good low-dimensional representation space. Methods to reduce the computational requirements, the primary limiting factor of this method, are discussed. --- paper_title: Handwritten word recognition using segmentation-free hidden Markov modeling and segmentation-based dynamic programming techniques paper_content: A lexicon-based, handwritten word recognition system combining segmentation-free and segmentation-based techniques is described. The segmentation-free technique constructs a continuous density hidden Markov model for each lexicon string. The segmentation-based technique uses dynamic programming to match word images and strings. The combination module uses differences in classifier capabilities to achieve significantly better performance. --- paper_title: On machine recognition of hand-printed Chinese characters by feature relaxation paper_content: Abstract A new relaxation matching method based on features is introduced for the recognition of hand-printed Chinese characters. The types of features are selected carefully to reflect the structural information of characters. Matching probabilities between two features, one from the mask and the other from input, are computed by the relaxation method. A new distance measure between two characters based on these matching probabilities is defined. We demonstrate, through examples, the utility of the new approach in the recognition of hand-printed Chinese characters. It is especially powerful in distinguishing similarly-shaped characters within a cluster produced by preclassification. 
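To make the relaxation-matching idea in the feature-relaxation entry above concrete, the following Python sketch iteratively updates feature-to-feature matching probabilities under a compatibility model and then derives a character distance from the converged probabilities. The array shapes, the multiplicative update rule, and every name here are illustrative assumptions, not the cited authors' formulation.

import numpy as np

def relaxation_match(init_prob, compatibility, n_iter=20, eps=1e-9):
    """Iterative relaxation labelling (illustrative sketch).
    init_prob[i, j]           : initial probability that input feature i matches model feature j
    compatibility[i, j, k, l] : support that assignment (i -> j) receives from assignment (k -> l)
    Returns the converged matching-probability matrix."""
    p = init_prob.astype(float).copy()
    for _ in range(n_iter):
        # support for each assignment, weighted by the current probabilities of all other assignments
        q = np.einsum('ijkl,kl->ij', compatibility, p)
        p = p * (1.0 + q)                        # reinforce well-supported assignments
        p /= p.sum(axis=1, keepdims=True) + eps  # renormalise per input feature
    return p

def character_distance(match_prob, feature_cost):
    """Distance between two characters as the expected feature-to-feature cost
    under the converged matching probabilities."""
    return float(np.sum(match_prob * feature_cost))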
--- paper_title: Handprinted character recognition based on spatial topology distance measurement paper_content: In this work we present a self-organization matching approach to accomplish the recognition of handprinted characters drawn with thick strokes. This approach is used to flex the unknown handprinted character toward matching its object characters gradually. The extracted character features used in the self-organization matching are center loci, orientation, and major axes of ellipses which fit the inked area of the patterns. Simulations provide encouraging results using the proposed method. --- paper_title: A note on binary template matching paper_content: Abstract This paper considers some generalizations of binary template matching procedures which enable one to weight matches according to both statistical and spatial information. --- paper_title: A tutorial on hidden Markov models and selected applications in speech recognition paper_content: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. > --- paper_title: Recognition of cursive writing on personal checks paper_content: The system described in this paper applies Hidden Markov technology both to the task of recognizing the cursive legal amount on personal checks and the isolated (numeric) courtesy amount. --- paper_title: A new scheme for off-line handwritten connected digit recognition paper_content: A scheme is proposed for off-line handwritten connected digit recognition, which uses a sequence of segmentation and recognition algorithms. First, the connected digits are segmented by employing both the gray scale and binary information. Then, a new set of features is extracted from the segments. The parameters of the feature set are adjusted during the training stage of the hidden Markov model (HMM) where the potential digits are recognized. Finally, in order to confirm the preliminary segmentation and recognition results, a recognition based segmentation method is presented. --- paper_title: Recognition of handwritten digits using template and model matching paper_content: Abstract A pipeline strategy for handwritten numeral recognition that combines a two-stage template-based technique and a model-based technique is described. The template matcher combines multiple information sources. The second stage of the template matcher was trained on rejects from the first stage. The template matcher classifies 70–80% of the digits with reliability rates over 99%. It also generates class membership hypotheses for the remaining digits which constrain the model-based system. 
Recognition rates of 94.03–96.39% and error rates of 0.54%–1.05% are obtained on test data consisting of over 13,000 well-segmented digits from ZIP codes in the USPS mail. --- paper_title: Noise, histogram and cluster validity for Gaussian-mixtured data paper_content: Abstract In this study, a critique of the clustering methodology is carried out for the definition of a cluster, determination of the number of clusters and evaluation of heuristic partitional clustering algorithms, when the data is a noisy Gaussian Mixture. The effects of noise in determining the number of clusters and the clustering parameters are investigated. Two cluster validity criteria, namely, the likelihood information criterion and the sum of squared error are described. It is concluded that these criteria can be used as a guide in deciding on the number of valid clusters. By using the proposed sum of squared error criterion, an improvement algorithm which reduces the effect of noise on the results of heuristic clustering algorithms is described. --- paper_title: One dimensional representation of two dimensional information for HMM based handwritten recognition paper_content: We introduce a new set of one-dimensional discrete, constant length features to represent two dimensional shape information for HMM (hidden Markov model), based handwritten optical character recognition problem. The proposed feature set embeds the two dimensional information into a sequence of one-dimensional codes, selected from a code book. It provides a consistent normalization among distinct classes of shapes, which is very convenient for HMM based shape recognition schemes. The new feature set is used in a handwritten optical character recognition scheme, where a sequence of segmentation and recognition stages is employed. The normalization parameters, which maximize the recognition rate, are dynamically estimated in the training stage of the HMM. The proposed character recognition system is tested on both a locally generated cursively handwritten data and isolated number digits of NIST database. The experimental results indicate high recognition rates. --- paper_title: Cursive script recognition by elastic matching paper_content: Dynamic programming has been found useful for performing nonlinear time warping for matching patterns in automatic speech recognition. Here, this technique is applied to the problem of recognizing cursive script. The parameters used in the matching are derived from time sequences of x-y coordinate data of words handwritten on an electronic tablet. Chosen for their properties of invariance with respect to size and translation of the writing, these parameters are found particularly suitable for the elastic matching technique. A salient feature of the recognition system is the establishment, in a training procedure, of prototypes by each writer using the system. In this manner, the system is tailored to the user. Processing is performed on a word-by-word basis after the writing is separated into words. Using prototypes for each letter, the matching procedure allows any letter to follow any letter and finds the letter sequence which best fits the unknown word. A major advantage of this procedure is that it combines letter segmentation and recognition in one operation by, in essence, evaluating recognition at all possible segmentations, thus avoiding the usual segmentation-then-recognition philosophy. Results on cursive writing are presented where the alphabet is restricted to the lower-case letters. 
Letter recognition accuracy is over 95 percent for each of three writers. --- paper_title: Handwritten character classification using nearest neighbor in large databases paper_content: Shows that systems built on a simple statistical technique and a large training database can be automatically optimized to produce classification accuracies of 99% in the domain of handwritten digits. It is also shown that the performance of these systems scale consistently with the size of the training database, where the error rate is cut by more than half for every tenfold increase in the size of the training set from 10 to 100,000 examples. Three distance metrics for the standard nearest neighbor classification system are investigated: a simple Hamming distance metric, a pixel distance metric, and a metric based on the extraction of penstroke features. Systems employing these metrics were trained and tested on a standard, publicly available, database of nearly 225,000 digits provided by the National Institute of Standards and Technology. Additionally, a confidence metric is both introduced by the authors and also discovered and optimized by the system. The new confidence measure proves to be superior to the commonly used nearest neighbor distance. > --- paper_title: A Hierarchical Approach to Efficient Curvilinear Object Searching paper_content: Curvilinear object searching is a common problem encountered in pattern recognition and information retrieval. How to improve the efficiency of searching is the major concern, especially when the data set is large. In this paper we propose a hierarchical approach, where high-level, salient shape features of various types are extracted and used to represent curvilinear objects at different levels of abstraction. The searching process is carried out top-down?first at the top level where only numbers of features of the same type are compared, then at the middle level where the geometric constraints among the features are checked, and finally at the bottom level where the parts between the features are considered. The searching space is reduced at each level and finally the most extensive matching operation needs to be applied to only a restricted set of candidates, thus achieving high efficiency. The general scheme has been implemented in two different applications, road image matching and cursive handwriting recognition. Experimental results from both applications are reported. Guidelines for feature selection are also provided to facilitate adaptation of the general scheme to other applications. --- paper_title: Generalized hidden Markov models. II. Application to handwritten word recognition paper_content: For part I see ibid. vol.8, no. 1 (2000). This paper presents an application of the generalized hidden Markov models to handwritten word recognition. The system represents a word image as an ordered list of observation vectors by encoding features computed from each column in the given word image. Word models are formed by concatenating the state chains of the constituent character hidden Markov models. The novel work presented includes the preprocessing, feature extraction, and the application of the generalized hidden Markov models to handwritten word recognition. Methods for training the classical and generalized (fuzzy) models are described. Experiments were performed on a standard data set of handwritten word images obtained from the US Post Office mail stream, which contains real-word samples of different styles and qualities. 
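As a concrete illustration of the word-model construction described in the generalized-HMM entry just above (character HMMs chained into a word model and scored against a column-wise observation sequence), here is a minimal Python sketch for discrete, codebook-quantized observations. The left-to-right linking rule, the data layout, and all names are assumptions made for the example, not the cited implementation.

import numpy as np

def concat_word_model(char_models, word):
    """Chain left-to-right character HMMs into one word model.
    char_models[c] = (A, B): n x n transition matrix and n x K emission matrix over a
    K-symbol feature codebook. Assumes 1 - A[-1, -1] is the exit mass of the final state."""
    blocks = [char_models[c] for c in word]
    n_states = sum(A.shape[0] for A, _ in blocks)
    A_word = np.zeros((n_states, n_states))
    offset = 0
    for A, _ in blocks:
        n = A.shape[0]
        A_word[offset:offset + n, offset:offset + n] = A
        if offset + n < n_states:                                    # link last state of this letter
            A_word[offset + n - 1, offset + n] = 1.0 - A[-1, -1]     # to first state of the next
        offset += n
    B_word = np.vstack([B for _, B in blocks])
    return A_word, B_word

def forward_log_likelihood(A, B, obs):
    """Unscaled forward algorithm (adequate for short sequences); obs holds codebook indices."""
    alpha = np.zeros(A.shape[0])
    alpha[0] = B[0, obs[0]]            # model is assumed to start in its first state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum() + 1e-300))

In a lexicon-driven recognizer, every lexicon entry would be scored this way against the same observation sequence and the highest-scoring word returned.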
--- paper_title: Techniques for automatically correcting words in text paper_content: Research aimed at correcting words in text has focused on three progressively more difficult problems: (1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent word correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text. --- paper_title: Off-line handwritten word recognition using a hidden Markov model type stochastic network paper_content: Because of large variations involved in handwritten words, the recognition problem is very difficult. Hidden Markov models (HMM) have been widely and successfully used in speech processing and recognition. Recently HMM has also been used with some success in recognizing handwritten words with presegmented letters. In this paper, a complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model type stochastic network is presented. Our scheme includes a morphology and heuristics based segmentation algorithm, a training algorithm that can adapt itself with the changing dictionary, and a modified Viterbi algorithm which searches for the (l+1)th globally best path based on the previous l best paths. Detailed experiments are carried out and successful recognition results are reported. --- paper_title: The segmental K-means algorithm for estimating parameters of hidden Markov models paper_content: The authors discuss and document a parameter estimation algorithm for data sequence modeling involving hidden Markov models. The algorithm, called the segmental K-means method, uses the state-optimized joint likelihood for the observation data and the underlying Markovian state sequence as the objective function for estimation. The authors prove the convergence of the algorithm and compare it with the traditional Baum-Welch reestimation method. They also point out the increased flexibility this algorithm offers in the general speech modeling framework. --- paper_title: An HMM-based legal amount field OCR system for checks paper_content: The system described in this paper applies hidden Markov technology to the task of recognizing the handwritten legal amount on personal checks. We argue that the most significant source of error in handwriting recognition is the segmentation process. In traditional handwriting OCR systems, recognition is performed at the character level, using the output of an independent segmentation step. Using a fixed stepsize series of vertical slices from the image, the HMM system described in this paper avoids taking segmentation decisions early in the recognition process.
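The word-correction survey at the head of this group separates nonword detection from isolated-word correction; a minimal version of the latter, ranking lexicon entries by edit distance to the recognizer output, is sketched below. The function names and the toy lexicon are illustrative, and practical systems would weight OCR-specific confusions rather than using unit costs.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance with unit costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (free if characters match)
        prev = cur
    return prev[-1]

def correct(word, lexicon, max_dist=2):
    """Return lexicon entries within max_dist edits of the recognized string, closest first."""
    scored = sorted((edit_distance(word, w), w) for w in lexicon)
    return [w for d, w in scored if d <= max_dist]

print(correct("fourty", ["forty", "four", "fourteen", "thirty"]))   # -> ['forty', 'four']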
--- paper_title: Fuzzy approach to solve the recognition problem of handwritten chinese characters paper_content: Abstract A method based on the concept of fuzzy set for handwritten Chinese character (HCC) recognition is proposed in this paper. Chinese characters can be viewed as a collection of line segments, called strokes. Since the strokes under consideration here are fuzzy in nature, the concept of fuzzy set is utilized in the similarity measure. Two membership functions are defined for the location measure and type measure between two strokes, and a function of fuzzy entropy is used in information measure. Although the recognition problem can be reduced to the assignment problem, some modifications are still necessary. All the similarities between the corresponding strokes can be chosen by solving the assignment problem using the cost function of fuzzy entropy, and then are averaged to derive the score of similarity between two Chinese characters. 881 classes of Chinese characters in ETL-8 (160 variations/class) are used as the test patterns, and the recognition rate is about 96%. In addition, experiments about the effects of the membership function based on the class separability are also discussed in this paper. --- paper_title: A fuzzy approach to hand-written rotation-invariant character recognition paper_content: A novel approach based on fuzzy set theory is developed for recognizing handwritten rotated characters. This fuzzy approach consists of four steps: (1) generating crisp sets for reference characters rotated through different degrees; (2) fuzzifying these crisp sets; (3) determining the degrees of a given character to the fuzzy sets; and (4) classifying the given character based on an average rule or a maximum rule. Simulation results show that the fuzzy approach correctly classified 94% to 100% of a small test set of characters. > --- paper_title: A fuzzy-syntactic approach to allograph modeling for cursive script recognition paper_content: This paper presents an original method for creating allograph models and recognizing them within cursive handwriting. This method concentrates on the morphological aspect of cursive script recognition. It uses fuzzy-shape grammars to define the morphological characteristics of conventional allographs which can be viewed as basic knowledge for developing a writer independent recognition system. The system uses no linguistic knowledge to output character sequences that possibly correspond to an unknown cursive word input. The recognition method is tested using multi-writer cursive random letter sequences. For a test dataset containing a handwritten cursive text 600 characters in length written by ten different writers, average character recognition rates of 84.4% to 91.6% are obtained, depending on whether only the best character sequence output of the system is considered or if the best of the top 10 is accepted. These results are achieved without any writer-dependent tuning. The same dataset is used to evaluate the performance of human readers. An average recognition rate of 96.0% was reached, using ten different readers, presented with randomized samples of each writer. The worst reader-writer performance was 78.3%. Moreover, results show that system performances are highly correlated with human performances. 
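The fuzzy methods in the preceding entries all come down to computing membership degrees and combining them with simple rules. The sketch below loosely follows the rotation-invariant scheme (fuzzified templates at several rotations, a membership degree of the unknown character in each, then an average or maximum rule); the matching measure and all names are assumptions for illustration, not any cited author's exact definitions.

import numpy as np

def membership(image, fuzzy_template):
    """Degree of match in [0, 1] between a binary image and a fuzzy template,
    here taken as the normalised fuzzy intersection (pointwise minimum)."""
    overlap = np.minimum(image, fuzzy_template).sum()
    return overlap / (fuzzy_template.sum() + 1e-9)

def classify(image, fuzzy_sets, rule="average"):
    """fuzzy_sets[label] = list of fuzzy templates, one per rotation angle."""
    scores = {}
    for label, templates in fuzzy_sets.items():
        degrees = [membership(image, t) for t in templates]
        scores[label] = max(degrees) if rule == "max" else sum(degrees) / len(degrees)
    return max(scores, key=scores.get), scores

With the maximum rule the best-matching rotation alone decides; the average rule favours classes that match reasonably well at every considered angle.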
--- paper_title: A fuzzy graph theoretic approach to recognize the totally unconstrained handwritten numerals paper_content: Abstract An automatic off-line character recognition system for totally unconstrained handwritten numerals is presented. The system was trained and tested on the field data collected by the U.S. Postal Services Department from dead letter envelopes. It was trained on 1763 unnormalized samples. The training process produced a feasible set of 105 Fuzzy Constrained Character Graph Models (FCCGMs). FCCGMs tolerate large variability in size, shape and writing style. Characters were recognized by applying a set of rules to match a character tree representation to an FCCGM. A character tree is obtained by first converting the character skeleton into an approximate polygon and then transforming the polygon into a tree structure suitable for recognition purposes. The system was tested on (not including the training set) 1812 unnormalized samples and it proved to be powerful in recognition rate and tolerance to multi-writer, multi-pen, multi-textured paper, and multi-color ink. Reliability, recognition, substitution error, and rejection rates of the system are 97.1%, 90.7%, 2.9%, and 6.4%, respectively. --- paper_title: Dynamic-programming-based handwritten word recognition using the Choquet fuzzy integral as the match function paper_content: The Choquet fuzzy integral is applied to handwritten word recognition. A handwritten word recognition system is described. The word recognition system assigns a recognition confidence value to each string in a lexicon of candidate strings. The system uses a lexicon-driven approach that integrates segmentation and recognition via dynamic programming matching. The dynamic programming matcher finds a segmentation of the word image for each string in the lexicon. The traditional match score between a segmentation and a string is an average. In this paper, fuzzy integrals are used instead of an average. Experimental results demonstrate the utility of this approach. A surprising result is obtained that indicates a simple choice of fuzzy integral works better than a more complex choice. --- paper_title: Alternatives to variable duration HMM in handwriting recognition paper_content: A successful handwritten word recognition (HWR) system using a variable duration hidden Markov model (VDHMM) and the path discriminant-HMM (PD-HMM) strategy is easy to implement. The central theme of the paper is to show that if the duration statistics are computed, they can be utilized to implement a model-discriminant-HMM (MD-HMM) approach for better experimental results. The paper also describes a PD-HMM based HWR system where the duration statistics are not explicitly computed, but results are still comparable to the VDHMM based HWR scheme. --- paper_title: Off-line handwritten word recognition using a hidden Markov model type stochastic network paper_content: Because of large variations involved in handwritten words, the recognition problem is very difficult. Hidden Markov models (HMM) have been widely and successfully used in speech processing and recognition. Recently HMM has also been used with some success in recognizing handwritten words with presegmented letters. In this paper, a complete scheme for totally unconstrained handwritten word recognition based on a single contextual hidden Markov model type stochastic network is presented.
Our scheme includes a morphology and heuristics based segmentation algorithm, a training algorithm that can adapt itself with the changing dictionary, and a modified Viterbi algorithm which searches for the (l+1)th globally best path based on the previous l best paths. Detailed experiments are carried out and successful recognition results are reported. > --- paper_title: Recognition of printed text under realistic conditions paper_content: Abstract Past research in OCR has focused on the shape analysis of binarized images, quite often assuming good quality document and isolated characters. Such assumptions are challenged by the conditions met in practice: binarization is difficult for low contrast documents, characters often touch each other, not only on the sides but also between lines, etc. After a brief review of past work we will describe current efforts to deal with OCR as a signal processing problem where the causes of noise and distortions as well the idealized images (definitions of typefaces) are modeled and subjected to a quantitative analysis. The key idea of the analysis is that while printed text images may be binary in an ideal state, the images seen by the sensors are gray scale because of convolution distortion and other causes. Therefore binarization should be carried out at the same time as feature extraction. --- paper_title: A Syntactic Approach for Handwritten Mathematical Formula Recognition paper_content: Mathematical formulas are good examples of two-dimensional patterns as well as pictures or graphics. The use of syntactic methods is useful for interpreting such complex patterns. In this paper we propose a system for the interpretation of 2-D mathematic formulas based on a syntactic parser. This system is able to recognize a large class of 2-D mathematic formulas written on a graphic tablet. It starts the parsing by localization of the ``principal'' operator in the formula and attempts to partition it into subexpressions which are similarly analyzed by looking for a starting character. The generalized parser used in the system has been developed in our group for continuous speech recognition and picture interpretation. --- paper_title: A high-accuracy syntactic recognition algorithm for handwritten numerals paper_content: A new set of topological features (primitives) for use with a syntactic classifier for high-accuracy recognition of handwritten numerals is proposed. The tree grammar used in this study makes it possible to achieve high-recognition speeds with minimal preprocessing of the test pattern. --- paper_title: Character recognition without segmentation paper_content: A segmentation-free approach to OCR is presented as part of a knowledge-based word interpretation model. It is based on the recognition of subgraphs homeomorphic to previously defined prototypes of characters. Gaps are identified as potential parts of characters by implementing a variant of the notion of relative neighborhood used in computational perception. Each subgraph of strokes that matches a previously defined character prototype is recognized anywhere in the word even if it corresponds to a broken character or to a character touching another one. The characters are detected in the order defined by the matching quality. Each subgraph that is recognized is introduced as a node in a directed net that compiles different alternatives of interpretation of the features in the feature graph. A path in the net represents a consistent succession of characters. 
A final search for the optimal path under certain criteria gives the best interpretation of the word features. Broken characters are recognized by looking for gaps between features that may be interpreted as part of a character. Touching characters are recognized because the matching allows nonmatched adjacent strokes. The recognition results for over 24,000 printed numeral characters belonging to a USPS database and on some hand-printed words confirmed the method's high robustness level. --- paper_title: Recognition of handprinted chinese characters via stroke relaxation paper_content: Abstract A new relaxation matching method based on the information of the neighborhood relationship among extracted sub-strokes is proposed to recognize handprinted Chinese characters (HCCs). In order to ensure the convergence in the relaxation process, a new iterated scheme is devised. A supporting function is also designed to solve the problem of wide variability among writers and some inevitable defects in the preprocessing procedure. The distance function on which the matching possibilities of sub-strokes are reflected is determined by using the linear programming method to obtain the best result. The experiments are conducted by using the Kanji of the ETL-8 database. From the experimental results, it is shown that the proposed algorithm does improve the recognition rate of HCCs. --- paper_title: Attributed Grammar-A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition paper_content: Attributed grammars are defined from the pattern recognition point of view and shown to be useful for descriptions of syntactic structures as well as semantic attributes in primitives, subpatterns, and patterns. A pattern analysis system using attributed grammars is proposed for pattern classification and description. This system extracts primitives and their attributes after preprocessing, performs syntax analysis of the resulting pattern representations, computes and extracts subpattern attributes for syntactically accepted patterns, and finally makes decisions according to the Bayes decision rule. Such a system uses a combination of syntactic and statistical pattern recognition techniques, as is demonstrated by illustrative examples and experimental results. --- paper_title: On machine recognition of hand-printed Chinese characters by feature relaxation paper_content: Abstract A new relaxation matching method based on features is introduced for the recognition of hand-printed Chinese characters. The types of features are selected carefully to reflect the structural information of characters. Matching probabilities between two features, one from the mask and the other from input, are computed by the relaxation method. A new distance measure between two characters based on these matching probabilities is defined. We demonstrate, through examples, the utility of the new approach in the recognition of hand-printed Chinese characters. It is especially powerful in distinguishing similarly-shaped characters within a cluster produced by preclassification. --- paper_title: Hierarchical attributed graph representation and recognition of handwritten chinese characters paper_content: Abstract This paper presents a new method of recognizing handwritten Chinese characters. A structural representation called hierarchical attributed graph representation (HAGR) is introduced to describe handwritten Chinese characters. The HAGR provides a simple and direct representation of handwritten Chinese characters.
With HAGR, the recognition process becomes a simple task of graph matching. A cost function mapping a candidate to a model graph is introduced. This approach can tolerate the variations of HAGR which reflect the instabilities or variabilities of handwritten Chinese characters resulting from different writing styles. Several rules have been introduced to rearrange the order of the vertices of the graphs in order to avoid the combinatorial explosion in graph matching. In addition, the database of the character models is organized in a search-tree structure. For a candidate character, the search process to find a corresponding model character has been divided into a number of simple and local decisions at different levels of the tree. This considerably improves the efficiency and accuracy of the matching process. --- paper_title: Structure recognition methods for various types of documents paper_content: In this paper, we describe experimental methods of recognizing the document structures of various types of documents in the framework of document understanding. Namely, we interpret document structures with individually characterized document knowledge. The document understanding process is divided into three procedures: the first is the recognition of document structures from a two-dimensional point of view; the second is the recognition of item relationships from a one-dimensional point of view; and the third is the recognition of characters from a zero-dimensional point of view. The procedure for recognizing structures plays the most important role in document understanding. This procedure extracts and classifies the logical item blocks from paper-based documents distinctly. --- paper_title: A generalized knowledge-based system for the recognition of unconstrained handwritten numerals paper_content: A method of recognizing unconstrained handwritten numerals using a knowledge base is proposed. Features are collected from a training set and stored in a knowledge base that is used in the recognition stage. Recognition is accomplished by either an inference process or a structural method. The scheme is general, flexible, and applicable to different methods of feature extraction and recognition. By changing the acceptance parameters, a continuous range of performance can be achieved. Encouraging results on nearly 17000 totally unconstrained handwritten numerals are presented. The performance of the system under different recognition-rejection tradeoff ratios is analyzed in detail. > --- paper_title: Off-line cursive word recognition paper_content: The state of the art in handwriting recognition, especially in cursive word recognition, is surveyed, and some basic notions are reviewed in the field of picture recognition, particularly, line image recognition. The usefulness of 'regular' versus 'singular' classes of features is stressed. These notions are applied to obtain a graph, G, representing a line image, and also to find an 'axis' as the regular part of G. The complements to G of the axis are the 'tarsi', singular parts of G, which correspond to informative features of a cursive word. A segmentation of the graph is obtained, giving a symbolic description chain (SDC). Using one or more as robust anchors, possible words in a list of words are selected. Candidate words are examined to see if the other letters fit the rest of the SDC. Good results are obtained for clean images of words written by several persons. 
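The cursive-word entry above reduces the word list with a few robust "anchor" observations before any detailed matching is attempted. The toy sketch below illustrates that kind of holistic lexicon reduction using ascender/descender counts and an estimated length; the letter sets, tolerances, and names are assumptions for the example only.

ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def word_profile(word):
    """Expected holistic profile of a lexicon word: (#ascenders, #descenders, length)."""
    return (sum(c in ASCENDERS for c in word),
            sum(c in DESCENDERS for c in word),
            len(word))

def filter_lexicon(observed, lexicon, tol=(1, 1, 2)):
    """observed = (n_ascenders, n_descenders, estimated_length) measured from the word image."""
    keep = []
    for w in lexicon:
        profile = word_profile(w.lower())
        if all(abs(p - o) <= t for p, o, t in zip(profile, observed, tol)):
            keep.append(w)
    return keep

print(filter_lexicon((2, 1, 7), ["holiday", "morning", "amount", "thirteen"]))
# -> ['holiday', 'amount', 'thirteen']; only the surviving candidates go on to detailed matching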
--- paper_title: Techniques for automatically correcting words in text paper_content: Research aimed at correcting words in text has focused on three progressively more difficult problems: (1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent word correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text. --- paper_title: Building bilingual microcomputer systems paper_content: In the Arab world the need for bilingual microcomputer systems is ever increasing. In addition to the ability to process the Arabic and English scripts, an ideal system should support the use of existing applications with Arabic data and access to the system facilities through Arabic interfaces. The Integrated Arabic System (IAS) was developed to study the feasibility of building such systems using existing microcomputers and software solutions. --- paper_title: Postprocessing of recognized strings using nonstationary Markovian models paper_content: This paper presents nonstationary Markovian models and their application to recognition of strings of tokens. Domain-specific knowledge is brought to bear on the application of recognizing zip codes in the US mailstream by the use of postal directory files. These files provide a wealth of information on the delivery points (mailstops) corresponding to each zip code. This data feeds into the models as n-grams, statistics that are integrated with recognition scores of digit images. An especially interesting facet of the model is its ability to excite and inhibit certain positions in the n-grams, leading to the familiar area of Markov random fields. We empirically illustrate the success of Markovian modeling in postprocessing applications of string recognition. We present the recognition accuracy of the different models on a set of 20000 zip codes. The performance is superior to the present system, which ignores all contextual information and simply relies on the recognition scores of the digit recognizers. --- paper_title: A three-dimensional neural network model for unconstrained handwritten numeral recognition: A new approach paper_content: The paper describes a three-dimensional (3-D) neural network recognition system for conflict resolution in recognition of unconstrained handwritten numerals. This neural network classifier is a combination of a modified self-organizing map (MSOM) and learning vector quantization (LVQ). The 3-D neural network recognition system has many layers of such neural network classifiers and the number of layers forms the third dimension. Experiments are conducted employing SOM, MSOM, SOM and LVQ, and MSOM and LVQ networks.
These experiments on a database of unconstrained handwritten samples show that the combination of MSOM and LVQ performs better than other networks in terms of classification, recognition and training time. The 3-D neural network eliminates the substitution error. --- paper_title: Artificial Neural Networks: A Tutorial paper_content: Artificial neural nets (ANNs) are massively parallel systems with large numbers of interconnected simple processors. The article discusses the motivations behind the development of ANNs and describes the basic biological neuron and the artificial computational model. It outlines network architectures and learning processes, and presents some of the most commonly used ANN models. It concludes with character recognition, a successful ANN application. --- paper_title: Handprinted character recognition based on spatial topology distance measurement paper_content: In this work we present a self-organization matching approach to accomplish the recognition of handprinted characters drawn with thick strokes. This approach is used to flex the unknown handprinted character toward matching its object characters gradually. The extracted character features used in the self-organization matching are center loci, orientation, and major axes of ellipses which fit the inked area of the patterns. Simulations provide encouraging results using the proposed method. --- paper_title: Handwritten digit recognition by adaptive-subspace self-organizing map (ASSOM) paper_content: The adaptive-subspace self-organizing map (ASSOM) proposed by Kohonen is a recent development in self-organizing map (SOM) computation. In this paper, we propose a method to realize ASSOM using a neural learning algorithm in nonlinear autoencoder networks. Our method has the advantage of numerical stability. We have applied our ASSOM model to build a modular classification system for handwritten digit recognition. Ten ASSOM modules are used to capture different features in the ten classes of digits. When a test digit is presented to all the modules, each module provides a reconstructed pattern and the system outputs a class label by comparing the ten reconstruction errors. Our experiments show promising results. For relatively small size modules, the classification accuracy reaches 99.3% on the training set and over 97% on the testing set. --- paper_title: High accuracy optical character recognition using neural networks with centroid dithering paper_content: Optical character recognition (OCR) refers to a process whereby printed documents are transformed into ASCII files for the purpose of compact storage, editing, fast retrieval, and other file manipulations through the use of a computer. The recognition stage of an OCR process is made difficult by added noise, image distortion, and the various character typefaces, sizes, and fonts that a document may have. In this study a neural network approach is introduced to perform high accuracy recognition on multi-size and multi-font characters; a novel centroid-dithering training process with a low noise-sensitivity normalization procedure is used to achieve high accuracy results. The study consists of two parts. The first part focuses on single size and single font characters, and a two-layered neural network is trained to recognize the full set of 94 ASCII character images in 12-pt Courier font. 
The second part trades accuracy for additional font and size capability, and a larger two-layered neural network is trained to recognize the full set of 94 ASCII character images for all point sizes from 8 to 32 and for 12 commonly used fonts. The performance of these two networks is evaluated based on a database of more than one million character images from the testing data set. --- paper_title: Perceptrons An Introduction To Computational Geometry paper_content: Minsky and Papert's classic monograph on the computational capabilities and limitations of single-layer perceptrons, showing, among other results, that predicates such as parity and connectedness cannot be computed by perceptrons of bounded order. --- paper_title: A novel feature recognition neural network and its application to character recognition paper_content: Presents a feature recognition network for pattern recognition that learns the patterns by remembering their different segments. The base algorithm for this network is a Boolean net algorithm that the authors developed during past research. Simulation results show that the network can recognize patterns after significant noise, deformation, translation and even scaling. The network is compared to existing popular networks used for the same purpose, especially the Neocognitron. The network is also analyzed with regard to interconnection complexity and information storage/retrieval. --- paper_title: A two-stage multi-network OCR system with a soft pre-classifier and a network selector paper_content: We propose a generic two-stage multi-network classification scheme and a realization of this generic scheme: a two-stage multi-network OCR system. The generic two-stage multi-network classification scheme decomposes the estimation of a posteriori probabilities into two coarse-to-fine stages. This generic classification scheme is especially suitable for classification tasks which involve a large number of categories. The two-stage multi-network OCR system consists of a bank of specialized networks, each of which is designed to recognize a subset of the whole character set. A soft pre-classifier and a network selector are employed in the two-stage multi-network OCR system for selectively invoking the necessary specialized networks. The network selector makes decisions based on both the prior case information and the outputs of the pre-classifier. Compared with a system which uses either a single network or one-stage multiple networks, the two-stage multi-network OCR system offers advantages in recognition accuracy, confidence measure, speed, and flexibility. --- paper_title: A method of combining multiple experts for the recognition of unconstrained handwritten numerals paper_content: For pattern recognition, when a single classifier cannot provide a decision which is 100 percent correct, multiple classifiers should be able to achieve higher accuracy. This is because group decisions are generally better than any individual's. Based on this concept, a method called the "Behavior-Knowledge Space Method" was developed, which can aggregate the decisions obtained from individual classifiers and derive the best final decisions from the statistical point of view.
Experiments on 46451 samples of unconstrained handwritten numerals have shown that this method achieves very promising performances and outperforms voting, Bayesian, and Dempster-Shafer approaches. > --- paper_title: Adaptive Mixtures of Local Experts paper_content: We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network. --- paper_title: Automatic allograph selection and multiple expert classification for totally unconstrained handwritten character recognition paper_content: We introduce a new method for online character recognition based on the co-operation of two classifiers, respectively operating on static and dynamic character properties. Both classifiers use the nearest neighbour algorithm. References have been selected previously using an unsupervised clustering technique for selecting, in each character class, the most representative allographs. Several co-operation architectures are presented, from the easier (balanced sum of both classifier outputs) types to the most complicated (integrating neural network) one. The recognition improvement varies between 30% and 50% according to the merging technique implemented. We evaluate the performance of each method based on the recognition rate and speed. Results are presented on 62 different character classes, and more than 75000 examples are from the UNIPEN database. --- paper_title: Recognition of handwritten numerals by Quantum Neural Network with fuzzy features paper_content: This paper describes a new kind of neural network – Quantum Neural Network (QNN) – and its application to the recognition of handwritten numerals. QNN combines the advantages of neural modelling and fuzzy theoretic principles. Novel experiments have been designed for in-depth studies of applying the QNN to both real data and confusing images synthesized by morphing. Tests on synthesized data examine QNN's fuzzy decision boundary with the intention to illustrate its mechanism and characteristics, while studies on real data prove its great potential as a handwritten numeral classifier and the special role it plays in multi-expert systems. An effective decision-fusion system is proposed and a high reliability of 99.10% has been achieved. --- paper_title: OCR in a Hierarchical Feature Space paper_content: This paper describes hierarchical OCR, a character recognition methodology that achieves high speed and accuracy by using a multiresolution and hierarchical feature space. Features at different resolutions, from coarse to fine-grained, are implemented by means of a recursive classification scheme. Typically, recognizers have to balance the use of features at many resolutions (which yields a high accuracy), with the burden on computational resources in terms of storage space and processing time. We present in this paper, a method that adaptively determines the degree of resolution necessary in order to classify an input pattern. This leads to optimal use of computational resources. 
The hierarchical OCR dynamically adapts to factors such as the quality of the input pattern, its intrinsic similarities and differences from patterns of other classes it is being compared against, and the processing time available. Furthermore, the finer resolution is accorded to only certain "zones" of the input pattern which are deemed important given the classes that are being discriminated. Experimental results support the methodology presented. When tested on standard NIST data sets, the hierarchical OCR proves to be 300 times faster than a traditional K-nearest-neighbor classification method, and 10 times faster than a neural network method. The comparison uses the same feature set for all methods. A recognition rate of about 96 percent is achieved by the hierarchical OCR. This is on par with the other two traditional methods. --- paper_title: Combining classifiers based on minimization of a Bayes error rate paper_content: In order to raise class discrimination power by combining multiple classifiers, the upper bound of the Bayes error rate, bounded by the conditional entropy of a class variable and decision variables, should be minimized. Wang and Wong (1979) proposed a tree dependence approximation scheme of a high order probability distribution composed of those variables, based on minimizing the upper bound. In addition to that, this paper presents an extended approximation scheme dealing with higher order dependency. Multiple classifiers recognizing unconstrained handwritten numerals were combined by the proposed approximation scheme based on the minimization of the Bayes error rate, and high recognition rates were obtained. --- paper_title: Handwritten numerical recognition based on multiple algorithms paper_content: In this paper, the authors combine two algorithms for application to the recognition of unconstrained isolated handwritten numerals. The first algorithm employs a modified quadratic discriminant function utilizing direction sensitive spatial features of the numeral image. The second algorithm utilizes features derived from the profile of the character in a structural configuration to recognize the numerals. While both algorithms yield very low error rates, the authors combine the two algorithms in different ways to study the best polling strategy and realize very low error rates (0.2% or less) and rejection rates below 4%. --- paper_title: Improving Performance in Neural Networks Using a Boosting Algorithm paper_content: A boosting algorithm converts a learning machine with error rate less than 50% to one with an arbitrarily low error rate. However, the algorithm discussed here depends on having a large supply of independent training samples. We show how to circumvent this problem and generate an ensemble of learning machines whose performance in optical character recognition problems is dramatically improved over that of a single network. We report the effect of boosting on four databases (all handwritten) consisting of 12,000 digits from segmented ZIP codes from the United States Postal Service (USPS) and the following from the National Institute of Standards and Technology (NIST): 220,000 digits, 45,000 upper case alphas, and 45,000 lower case alphas. We use two performance measures: the raw error rate (no rejects) and the reject rate required to achieve a 1% error rate on the patterns not rejected. Boosting improved performance in some cases by a factor of three.
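Several of the preceding references (the Behavior-Knowledge Space method, Bayes-error-based combination, the two-algorithm polling study, and boosting) revolve around the same basic step: merging the decisions of several digit recognizers and rejecting a pattern when the ensemble is too uncertain. The Python sketch below illustrates only that basic step with plurality voting, a confidence-weighted tie-break, and a reject option; it is not the algorithm of any cited paper, and all function and variable names are hypothetical.

```python
from collections import defaultdict

def combine_digit_classifiers(predictions, reject_margin=0.15):
    """Combine (label, confidence) outputs from several digit classifiers.

    predictions: list of (label, confidence) pairs, one per classifier,
                 with confidence values in [0, 1].
    Returns the winning label, or None to signal rejection when the top
    two labels tie on votes and are too close in accumulated confidence.
    """
    votes = defaultdict(int)
    scores = defaultdict(float)
    for label, confidence in predictions:
        votes[label] += 1            # plurality vote
        scores[label] += confidence  # confidence sum used as tie-break

    ranked = sorted(votes, key=lambda lab: (votes[lab], scores[lab]), reverse=True)
    best = ranked[0]
    if len(ranked) > 1:
        second = ranked[1]
        total = scores[best] + scores[second]
        margin = (scores[best] - scores[second]) / total if total > 0 else 0.0
        if votes[best] == votes[second] and margin < reject_margin:
            return None              # reject: the experts disagree too closely
    return best

# Example: three hypothetical classifiers voting on one character image.
print(combine_digit_classifiers([("8", 0.82), ("8", 0.64), ("3", 0.71)]))  # -> 8
```

The cited methods go further, for example by learning the combination function statistically or by re-weighting training data during boosting, but the error-rate versus reject-rate trade-off they report is exercised through exactly this kind of final decision rule.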
--- paper_title: Automated forms-processing software and services paper_content: While document-image systems for the management of collections of documents, such as forms, offer significant productivity improvements, the entry of information from documents remains a labor-intensive and costly task for most organizations. In this paper, we describe a software system for the machine reading of forms data from their scanned images. We describe its major components: form recognition and "dropout," intelligent character recognition (ICR), and contextual checking. Finally, we describe applications for which our automated forms reader has been successfully used. --- paper_title: Off-line cursive script word recognition paper_content: Cursive script word recognition is the problem of transforming a word from the iconic form of cursive writing to its symbolic form. Several component processes of a recognition system for isolated offline cursive script words are described. A word image is transformed through a hierarchy of representation levels: points, contours, features, letters, and words. A unique feature representation is generated bottom-up from the image using statistical dependences between letters and features. Ratings for partially formed words are computed using a stack algorithm and a lexicon represented as a trie. Several novel techniques for low- and intermediate-level processing for cursive script are described, including heuristics for reference line finding, letter segmentation based on detecting local minima along the lower contour and areas with low vertical profiles, simultaneous encoding of contours and their topological relationships, extracting features, and finding shape-oriented events. Experiments demonstrating the performance of the system are also described. --- paper_title: Detection and correction of recognition errors in check reading paper_content: An important subtask in the automated reading of bank checks by an OCR program is the combination of the recognition results of the courtesy and the legal amount. In this paper we present some new techniques to solve this problem. First, a systematic approach for the translation of the legal amount into a digit string is described. The technique is based on syntax-directed translation. Once the result of the legal amount recognizer has been translated into the corresponding digit string, it can be easily compared to the result of the courtesy amount recognizer. Thus, inconsistencies between the legal and the courtesy amount can be detected and precisely located at the level of individual subwords in the legal amount and digits in the courtesy amount. Moreover, recognition errors can be potentially corrected. In experiments with real data from Swiss postal checks, a significant improvement in the acceptance rate could be achieved without increasing the error rate of the overall system. --- paper_title: An efficient algorithm for matching a lexicon with a segmentation graph paper_content: This paper presents an efficient algorithm for lexicon-driven handwritten word recognition. In this algorithm, a word image is represented by a segmentation graph, and the lexicon is represented by a trie. As opposed to the standard lexicon-driven matching approach, where dynamic programming is invoked independently for matching each entry in the lexicon against the segmentation graph, the proposed algorithm matches the trie with the segmentation graph.
Computation is saved by the efficient representation of the lexicon using the trie data structure. The performance of the proposed approach is compared with the standard dynamic programming algorithm. The proposed approach saves about 48.4% (excluding the trie initialization cost) and 15% of computation time from the standard algorithm when a dynamic lexicon is used. Better performance can be expected in static lexicon cases where the trie needs to be constructed only once. --- paper_title: An architecture for handwritten text recognition systems paper_content: This paper presents an end-to-end system for reading handwritten page images. Five functional modules included in the system are introduced in this paper: (i) pre-processing, which concerns introducing an image representation for easy manipulation of large page images and image handling procedures using the image representation; (ii) line separation, concerning text line detection and extracting images of lines of text from a page image; (iii) word segmentation, which concerns locating word gaps and isolating words from a line of text image obtained efficiently and in an intelligent manner; (iv) word recognition, concerning handwritten word recognition algorithms; and (v) linguistic post-processing, which concerns the use of linguistic constraints to intelligently parse and recognize text. Key ideas employed in each functional module, which have been developed for dealing with the diversity of handwriting in its various aspects with a goal of system reliability and robustness, are described in this paper. Preliminary experiments show promising results in terms of speed and accuracy. --- paper_title: Design of a linguistic postprocessor using variable memory length Markov models paper_content: We describe a linguistic postprocessor for character recognizers. The central module of our system is a trainable variable memory length Markov model (VLMM) that predicts the next character given a variable length window of past characters. The overall system is composed of several finite state automata, including the main VLMM and a proper noun VLMM. The best model reported in the literature (Brown et al., 1992) achieves 1.75 bits per character on the Brown corpus. On that same corpus, our model, trained on 10 times less data, reaches 2.19 bits per character and is 200 times smaller (≃160,000 parameters). The model was designed for handwriting recognition applications but could also be used for other OCR problems and speech recognition. --- paper_title: Lexicon-driven handwritten word recognition using optimal linear combinations of order statistics paper_content: In the standard segmentation-based approach to handwritten word recognition, individual character-class confidence scores are combined via averaging to estimate confidences in the hypothesized identities for a word. We describe a methodology for generating optimal linear combinations of order statistics operators for combining character class confidence scores. Experimental results are provided on over 1000 word images. --- paper_title: Techniques for automatically correcting words in text paper_content: Research aimed at correcting words in text has focused on three progressively more difficult problems: (1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent word correction.
In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text. --- paper_title: Off-line Arabic character recognition: the state of the art paper_content: Machine simulation of human reading has been the subject of intensive research for almost three decades. A large number of research papers and reports have already been published on Latin, Chinese and Japanese characters. However, little work has been conducted on the automatic recognition of Arabic characters because of the complexity of printed and handwritten text, and this problem is still an open research field. The main objective of this paper is to present the state of Arabic character recognition research throughout the last two decades. --- paper_title: On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey paper_content: Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the online case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentication, and handwriting learning tools, are also considered. ---
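The error-correction survey above separates nonword error detection (dictionary or n-gram lookup) from isolated-word error correction (choosing the closest valid word). The sketch below is a toy illustration of those two steps using a plain dynamic-programming Levenshtein distance over a tiny hypothetical lexicon; it is not one of the evaluated techniques from the cited papers.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[len(b)]

def correct_word(word, lexicon, max_dist=2):
    """Nonword detection followed by isolated-word correction.

    Returns [word] if it is already in the lexicon; otherwise the lexicon
    entries within max_dist edits, closest first.
    """
    if word in lexicon:
        return [word]
    scored = sorted((edit_distance(word, w), w) for w in lexicon)
    return [w for d, w in scored if d <= max_dist]

# Example with a hypothetical mini-lexicon for cheque legal amounts.
lexicon = {"forty", "four", "hundred", "dollars", "and"}
print(correct_word("fourty", lexicon))  # -> ['forty', 'four']
```

Context-dependent correction, the third and hardest problem in that survey, additionally requires a language model, for instance the character-level variable memory length Markov model used as a linguistic postprocessor in one of the references above.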
Title: An Overview Of Character Recognition
Section 1: INTRODUCTION
Description 1: Provide an introduction to character recognition (CR), its significance, the different types and stages, and the structure of the paper.
Section 2: HISTORY
Description 2: Discuss the historical developments and progress in character recognition, from early efforts to recent advancements.
Section 3: CHARACTER RECOGNITION (CR) SYSTEMS
Description 3: Classify character recognition systems based on data acquisition techniques and text type, and discuss the various methods and applications.
Section 4: METHODOLOGIES OF CR SYSTEMS
Description 4: Review the methodologies and techniques used in CR systems, covering pre-processing, segmentation, representation, training and recognition, and post-processing stages.
Section 5: DISCUSSION
Description 5: Summarize the current state of CR research, highlight existing challenges and limitations, and propose future research directions.
A survey of single and multi-hop link schedulers for mmWave wireless systems
7
--- paper_title: Augmenting data center networks with multi-gigabit wireless links paper_content: The 60 GHz wireless technology that is now emerging has the potential to provide dense and extremely fast connectivity at low cost. In this paper, we explore its use to relieve hotspots in oversubscribed data center (DC) networks. By experimenting with prototype equipment, we show that the DC environment is well suited to a deployment of 60GHz links contrary to concerns about interference and link reliability. Using directional antennas, many wireless links can run concurrently at multi-Gbps rates on top-of-rack (ToR) switches. The wired DC network can be used to sidestep several common wireless problems. By analyzing production traces of DC traffic for four real applications, we show that adding a small amount of network capacity in the form of wireless flyways to the wired DC network can improve performance. However, to be of significant value, we find that one hop indirect routing is needed. Informed by our 60GHz experiments and DC traffic analysis, we present a design that uses DC traffic levels to select and adds flyways to the wired DC network. Trace-driven evaluations show that network-limited DC applications with predictable traffic workloads running on a 1:2 oversubscribed network can be sped up by 45% in 95% of the cases, with just one wireless device per ToR switch. With two devices, in 40% of the cases, the performance is identical to that of a non-oversubscribed network. --- paper_title: Robust Topology Engineering in Multiradio Multichannel Wireless Networks paper_content: Topology engineering concerns with the problem of automatic determination of physical layer parameters to form a network with desired properties. In this paper, we investigate the joint power control, channel assignment, and radio interface selection for robust provisioning of link bandwidth in infrastructure multiradio multichannel wireless networks in presence of channel variability and external interference. To characterize the logical relationship between spatial contention constraints and transmit power, we formulate the joint power control and radio-channel assignment as a generalized disjunctive programming problem. The generalized Benders decomposition technique is applied for decomposing the radio-channel assignment (combinatorial constraints) and network resource allocation (continuous constraints) so that the problem can be solved efficiently. The proposed algorithm is guaranteed to converge to the optimal solution within a finite number of iterations. We have evaluated our scheme using traces collected from two wireless testbeds and simulation studies in Qualnet. Experiments show that the proposed algorithm is superior to existing schemes in providing larger interference margin, and reducing outage and packet loss probabilities. --- paper_title: 60 GHz Wireless: Up Close and Personal paper_content: To meet the needs of next-generation high-data-rate applications, 60 GHz wireless networks must deliver Gb/s data rates and reliability at a low cost. 
In this article, we surveyed several ongoing challenges, including the design of cost-efficient and low-loss on-chip and in-package antennas and antenna arrays, the characterization of CMOS processes at millimeter-wave frequencies, the discovery of efficient modulation techniques that are suitable for the unique hardware impairments and frequency selective channel characteristics at millimeter-wave frequencies, and the creation of MAC protocols that more effectively coordinate 60 GHz networks with directional antennas. Solving these problems not only provides for wireless video streaming and interconnect replacement, but also moves printed and magnetic media such as books and hard drives to a lower cost, higher reliability semiconductor form factor with wireless connectivity between and within devices. --- paper_title: On-Body Propagation at 60 GHz paper_content: The on-body propagation at 60 GHz is studied analytically, numerically and experimentally using a skin-equivalent phantom. First, to provide analytical-based fundamental models of path gain, the theory of propagating waves near a flat phantom is studied by considering vertical and horizontal elementary dipoles. The analytical models are in excellent agreement with full-wave simulations. For a vertically polarized wave, a minimum power decay exponent of 3.5 is found. Then, propagation on the body is investigated experimentally in vertical and horizontal polarizations using two linearly-polarized open-ended waveguides. The analytical models fit very well with the measurements. Furthermore, the effect of polarization on the antenna performance is studied numerically and experimentally. --- paper_title: On 60 GHz wireless link performance in indoor environments paper_content: The multi-Gbps throughput potential of 60 GHz wireless interfaces make them an attractive technology for next-generation gigabit WLANs. For increased coverage, and improved resilience to human-body blockage, beamsteering with high-gain directional antennas is emerging to be an integral part of 60 GHz radios. However, the real-world performance of these state-of-the-art radios in typical indoor environments has not previously been explored well in open literature. ::: ::: To this end, in this paper, we address the following open questions: how do these radios perform in indoor line-of-sight(LOS) and non-line-of-sight (NLOS) locations? how sensitive is performance to factors such as node orientation or placement? how robust is performance to human-body blockage and mobility? Our measurement results from a real office setting, using a first-of-its-kind experimental platform (called Presto), show that, contrary to conventional perception, state-of-the-art 60 GHz radios perform well even in NLOS locations, in the presence of human-body blockage and LOS mobility. While their performance is affected by node (or more precisely, antenna array) orientation, simply using a few more antenna arrays and dynamically selecting amongst them shows potential to address this issue. The implications of these observations is in lowering the barriers to their adoption in next-generation gigabit WLANs. --- paper_title: Robust Topology Engineering in Multiradio Multichannel Wireless Networks paper_content: Topology engineering concerns with the problem of automatic determination of physical layer parameters to form a network with desired properties. 
In this paper, we investigate the joint power control, channel assignment, and radio interface selection for robust provisioning of link bandwidth in infrastructure multiradio multichannel wireless networks in presence of channel variability and external interference. To characterize the logical relationship between spatial contention constraints and transmit power, we formulate the joint power control and radio-channel assignment as a generalized disjunctive programming problem. The generalized Benders decomposition technique is applied for decomposing the radio-channel assignment (combinatorial constraints) and network resource allocation (continuous constraints) so that the problem can be solved efficiently. The proposed algorithm is guaranteed to converge to the optimal solution within a finite number of iterations. We have evaluated our scheme using traces collected from two wireless testbeds and simulation studies in Qualnet. Experiments show that the proposed algorithm is superior to existing schemes in providing larger interference margin, and reducing outage and packet loss probabilities. --- paper_title: Beam switching support to resolve link-blockage problem in 60 GHz WPANs paper_content: In this paper, we propose a solution to resolve link blockage problem in 60 GHz WPANs. Line-of-Sight (LOS) link is easily blocked by a moving person, which is concerned as one of the severe problems in 60 GHz systems. Beamforming is a feasible technique to resolve link blockage by switching the beam path from LOS link to a Non-LOS (NLOS) link. We propose and evaluate two kinds of Beam Switching (BS) mechanisms: instant decision based BS and environment learning based BS. We examine these mechanisms in a typical indoor WPAN scenario. Extensive simulations have been carried out, and our results reveal that combining angle-of-arrival with the received signal to noise ratio could make better decision for beam switching. Our work provides valuable observations for beam switching during point-to-point communication using 60 GHz radio. --- paper_title: Directional MAC Protocol for Millimeter Wave based Wireless Personal Area Networks paper_content: Recently, up to 7 GHz license-free spectrum around 60 GHz has been allocated worldwide for high data rate wireless communications. This enables the deployment of WPANs at 60 GHz for short-range multimedia applications up to gigabits per second. In this paper we propose a new scheme to increase the efficiency of MAC layer protocol for WPANs at 60 GHz when directional antennas are used. Our scheme is based on an adaptation of the current IEEE 15.3 standard MAC protocol for WPANs on two aspects. Firstly, we propose a rate-adaptation based scheme to coordinate the directional and omni-directional transmissions in WPANs. Secondly, we propose a novel channel time allocation algorithm, which enables spatial reuse TDMA. The analytical results reveal that our algorithm significantly increases the system capacity. --- paper_title: Enhanced MAC layer protocol for millimeter wave based WPAN paper_content: The IEEE 802.15.3 standard defines a widely-accepted MAC layer protocol for high data rate WPANs. However, resource allocation schemes are not specified in the IEEE 802.15.3 MAC. In this paper, an enhanced IEEE 802.15.3 MAC is proposed for 60 GHz based WPANs. The enhancement relies on a novel resource management scheme. By exploiting the advanced features of the 60 GHz technology, e.g. 
the usage of directional antennas, our resource management scheme could realize spatial reuse TDMA in 60 GHz based WPANs. We have implemented our protocol in OPNET. Based on this simulation platform we have evaluated the advantages of using our additions to the IEEE 802.15.3 MAC. We show that our enhanced MAC protocol significantly improves the throughput and delay characteristics in 60 GHz based WPANs. --- paper_title: Scalable Heuristic STDMA Scheduling Scheme for Practical Multi-Gbps Millimeter-Wave WPAN and WLAN Systems paper_content: This paper proposes a hybrid SDMA/TDMA scalable heuristic scheduling scheme for throughput enhancement in practical multi-Gbps millimeter-wave systems with directional antennas. A theoretical cross layer design framework is proposed taking into consideration the MAC layer, PHY layer and millimeter-wave propagation channels based on actual measurements. The proposed generalized framework is well suited to two of the most prominent industrial wireless communication standards, namely IEEE 802.15.3c WPAN and IEEE 802.11ad WLAN. Employing the developed theoretical framework, an evaluation of throughput and error performance is conducted and validated with computer simulations. Firstly, it is found that the proposed STDMA scheme is capable of improving system throughput by typically 4 to 8 dB. Secondly, the proposed scheduling scheme offers superior design flexibility through its scalability features, with the optimum heuristic order observed at the 13-th order, a reasonably low value considering the performance enhancement that can be obtained. Lastly, the typical allowable interference in a network due to coexisting communication links sharing the same time slot is recommended to be controlled below -20 dB. Numerical results other than the typical findings are detailed in the paper. --- paper_title: Rex: A randomized EXclusive region based scheduling scheme for mmWave WPANs with directional antenna paper_content: Millimeter-wave (mmWave) transmissions are promising technologies for high data rate (multi-Gbps) Wireless Personal Area Networks (WPANs). In this paper, we first introduce the concept of exclusive region (ER) to allow concurrent transmissions to explore the spatial multiplexing gain of wireless networks. Considering the unique characteristics of mmWave communications and the use of omni-directional or directional antennae, we derive the ER conditions which ensure that concurrent transmissions can always outperform serial TDMA transmissions in a mmWave WPAN. We then propose REX, a randomized ER based scheduling scheme, to decide a set of senders that can transmit simultaneously. In addition, the expected number of flows that can be scheduled for concurrent transmissions is obtained analytically. Extensive simulations are conducted to validate the analysis and demonstrate the effectiveness and efficiency of the proposed REX scheduling scheme. The results should provide important guidelines for future deployment of mmWave based WPANs. --- paper_title: CTAP-Minimized Scheduling Algorithm for Millimeter-Wave-Based Wireless Personal Area Networks paper_content: Beamforming is used in IEEE 802.15.3c networks to avoid high propagation attenuation and path loss and improve the overall system throughput by exploiting spatial channel reuse. In this paper, we introduce design challenges of scheduling in beamforming-enabled IEEE 802.15.3c networks. These challenges include positioning, axis alignment, and interference relation verification.
We then propose a joint design of axis alignment, positioning, and scheduling. The objectives of the proposed joint design are to reduce the consumed channel time, increase the degree of spatial channel reuse, and improve the channel utilization. For positioning, we define and prove a sufficient condition for anchor selection to improve positioning accuracy. The designed channel time allocation period (CTAP)-minimized scheduling algorithm is depicted as a two-layer flow graph, and it consists of the following three phases: 1) layer-1 edge construction; 2) layer-2 edge construction; and 3) scheduling. Through the observation of transmission and reception beams, we define a rule to verify the interference relation of two flows. In addition, given correct topology information, we prove that CTAP-minimized uses the least time to serve all data flows. We evaluate and compare our algorithm with existing approaches through simulations. The observed performance metrics include utilized channel time, system throughput, scheduling efficiency, and spatial channel reuse degree. The results show that CTAP-minimized performs well and achieves its objectives. --- paper_title: Virtual time-slot allocation scheme for throughput enhancement in a millimeter-wave multi-Gbps WPAN system paper_content: This paper proposes a virtual time-slot allocation (VTSA) scheme for throughput enhancement to realize a multi-Gbps time division multiple access (TDMA) wireless personal area network (WPAN) system in a realistic millimeter-wave residential multipath environment. TDMA system without time-slot-reuse mechanism conventionally allocates one TDMA time-slot to only one communication link at a time. In the proposed VTSA scheme, taking advantage on the large path loss in the millimeterwave band, a single TDMA time-slot can be reallocated and reused by multiple communication links simultaneously (hence the name virtual), thus significantly increasing system throughput. On the other hand, allowing multiple communication links to occupy the same time-slot causes the generation of co-channel interference (CCI). The cross layer VTSA scheme is therefore designed to be able to maximize the throughput improvement by adaptively scheduling the sharing of time-slots, and at the same time monitor the potential performance degradation due to CCI. As a result, it is found that the VTSA scheme is capable of improving system throughput as much as 30% in both AWGN and multipath channels (line-of-sight (LOS) and non-line-of-sight (NLOS) environment). Additionally, by coupling with higher-order modulation schemes, the system is able to achieve up to a maximum throughput of 3.8 Gbps. It is also observed that higher-order modulations although have higher maximum achievable throughput in low CCI environment, the tolerance against increasing CCI is considerably lower than that of the lower-order modulations. --- paper_title: An improved ER scheduling algorithm based spatial reuse scheme for mmWave WPANs paper_content: This article focuses on improving the system capacity of 60GHz wireless personal area networks (WPANs) and puts forward an effective spatial reuse scheme which combines the existing exclusive region (ER) based scheduling algorithm with simple power control. In the study, the concept of ER for concurrent transmissions is introduced first. 
Considering the large path loss in the millimeter-wave (mmWave) band and the characteristics of typical indoor multipath channels, we modify the signal propagation model as well as ER radius for 60GHz WPANs. Then we propose an improved ER based scheduling algorithm which adopts power control instead of constant transmitting power. In the algorithm, devices are assumed to receive data at the receiver sensitivity, which refers to the minimum receiving power to ensure the reliability for communications. Computer simulations evaluate an excellent performance of the proposed spatial reuse scheme at channel capacity. In addition, the impact of the receiver sensitivity on the system is discussed. --- paper_title: Directional CSMA/CA Protocol with Spatial Reuse for mmWave Wireless Networks paper_content: In recent years, the millimeter wave (mmWave) technology has gained considerable interest due to the huge unlicensed bandwidth (i.e., up to 7GHz) available in the 60GHz band in most part of world. In this paper, we investigate the problem of medium access control (MAC) in mmWave wireless networks, within which directional antennas are used to combat the high path loss incurred in the 60GHz band. We extend a directional CSMA/CA protocol presented in our prior work by exploiting spatial reuse. The proposed protocol adopts virtual carrier sensing and allows non-interfering links to communicate simultaneously. We present a performance analysis as well as simulations to evaluate the proposed protocol. Our results show that the directional MAC with spatial reuse can achieve considerable performance improvements over the 802.11 MAC and the protocol proposed in our prior work. It introduces low protocol overhead and has robust performance even when the network is heavily congested. --- paper_title: STDMA-based scheduling algorithm for concurrent transmissions in directional millimeter wave networks paper_content: In this paper, a concurrent transmission scheduling algorithm is proposed to enhance the resource utilization efficiency for multi-Gbps millimeter-wave (mmWave) networks. Specifically, we exploit spatial-time division multiple access (STDMA) to improve the system throughput by allowing both non-interfering and interfering links to transmit concurrently, considering the high propagation loss at mmWave band and the utilization of directional antenna. Concurrent transmission scheduling in mmWave networks is formulated as an optimization model to maximize the number of flows scheduled in the network such that the quality of service (QoS) requirement of each flow is satisfied. We further decompose the optimization problem and propose a flip-based heuristic scheduling algorithm with low computational complexity to solve the problem. Extensive simulations demonstrate that the proposed algorithm can significantly improve the network performance in terms of network throughput and the number of supported flows. --- paper_title: Scheduling with Reusability Improvement for Millimeter Wave Based Wireless Personal Area Networks paper_content: IEEE 802.15.3c has recently been formed for developing a millimeter-wave (mmWave)-based wireless personal area networks (WPANs). It utilizes 60 GHz license-free spectrum and provides very high data rate (over 3 Gbps). To deal with the problems of high propagation attenuation and path loss, beamforming antennas are utilized. In this paper, we consider the characteristics of beamforming to design an efficient scheduling algorithm. 
Specifically, we integrate axis alignment and location determination into scheduling mechanism. The designed scheduling algorithm improves the degree of spatial and directional channel reusability and the overall system performance. We evaluate our approach through simulations. The simulation results show that the designed scheduling algorithm performs well and does achieve its objectives. --- paper_title: Beam switching support to resolve link-blockage problem in 60 GHz WPANs paper_content: In this paper, we propose a solution to resolve link blockage problem in 60 GHz WPANs. Line-of-Sight (LOS) link is easily blocked by a moving person, which is concerned as one of the severe problems in 60 GHz systems. Beamforming is a feasible technique to resolve link blockage by switching the beam path from LOS link to a Non-LOS (NLOS) link. We propose and evaluate two kinds of Beam Switching (BS) mechanisms: instant decision based BS and environment learning based BS. We examine these mechanisms in a typical indoor WPAN scenario. Extensive simulations have been carried out, and our results reveal that combining angle-of-arrival with the received signal to noise ratio could make better decision for beam switching. Our work provides valuable observations for beam switching during point-to-point communication using 60 GHz radio. --- paper_title: On frame-based scheduling for directional mmWave WPANs paper_content: Millimeter wave (mmWave) communications in the 60 GHz band can provide multi-gigabit rates for emerging bandwidth-intensive applications, and has thus gained considerable interest recently. In this paper, we investigate the problem of efficient scheduling in mmWave wireless personal area networks (WPAN). We develop a frame-based scheduling directional MAC protocol, termed FDMAC, to achieve the goal of leveraging collision-free concurrent transmissions to fully exploit spatial reuse in mmWave WPANs. The high efficiency of FDMAC is achieved by amortizing the scheduling overhead over multiple concurrent, back-to-back transmissions in a row. The core of FDMAC is a graph coloring-based scheduling algorithm, termed greedy coloring (GC) algorithm, that can compute near-optimal schedules with respect to the total transmission time with low complexity. The proposed FDMAC is analyzed and evaluated under various traffic models and patterns. Its superior performance is validated with extensive simulations. --- paper_title: Multi-User Operation in mmWave Wireless Networks paper_content: In this paper, we investigate the problem of multi-user spatial division multiple access (MU SDMA) operation in mmWave wireless networks, within which directional antennas are used to combat the high path loss incurred in the 60GHz band. We study the feasibility of MU SDMA in mmWave networks and propose two MAC protocols to support CSMA/CA based uplink and downlink MU SDMA transmissions. The proposed protocols adopt virtual carrier sensing and allows multiple users to communicate with an access point (AP) simultaneously. Performance analysis and simulation results both show that the proposed protocols can achieve considerable performance improvements over a system that supports only single user (SU) operation. --- paper_title: Medium Access Control for 60 GHz Outdoor Mesh Networks with Highly Directional Links paper_content: We investigate an architecture for multi-Gigabit outdoor mesh networks operating in the unlicensed 60 GHz "millimeter (mm) wave" band. 
In this band, the use of narrow beams is essential for attaining the required link ranges in order to overcome the higher path loss at mm wave carrier frequencies. However, highly directional links make standard MAC methods for interference management, such as carrier sense multiple access, which rely on neighboring nodes hearing each other, become inapplicable. In this paper, we study the extent to which we can reduce, or even dispense with, interference management, by exploiting the reduction in interference due to the narrow beamwidths and the oxygen absorption characteristic of the 60 GHz band. We provide a probabilistic analysis of the interference incurred due to uncoordinated transmissions, and show that, for the parameters considered, the links in the network can be thought of as pseudo-wired. That is, interference can essentially be ignored in MAC design, and the challenge is to schedule half-duplex transmissions in the face of the "deafness" resulting from highly directional links. We provide preliminary simulation results to validate our approach. --- paper_title: Opportunistic Spatial Reuse in IEEE 802.15.3c Wireless Personal Area Networks paper_content: The IEEE 802.15.3c wireless personal area network (WPAN) standard is designed to support highly directional wireless communications at the 60-GHz frequency band. Highly directional communications are helpful to reduce interference; hence, great potentials are provided for aggregate network throughput improvement via spatial reuse (SR). However, in 802.15.3c WPANs, the SR is limited in use since the 802.15.3c standard does not specify an explicit procedure for the SR. We design a new and simple scheme exploiting the SR, thus improving aggregate network throughput. In the proposed scheme, the 802.15.3c devices measure and report the measured channel status. Then, a controlling device, which is called the piconet coordinator (PNC) in the 802.15.3c WPAN, schedules resources available for each device. The simulation results demonstrate that the proposed scheme contributes to significant aggregate network throughput improvement. The proposed scheme is so simple that it can be easily employed for practical 802.15.3c devices. In addition, it is guaranteed to work well without conflicting with existing 802.15.3c operations. --- paper_title: Directional MAC Protocol for Millimeter Wave based Wireless Personal Area Networks paper_content: Recently, up to 7 GHz license-free spectrum around 60 GHz has been allocated worldwide for high data rate wireless communications. This enables the deployment of WPANs at 60 GHz for short-range multimedia applications up to gigabits per second. In this paper we propose a new scheme to increase the efficiency of MAC layer protocol for WPANs at 60 GHz when directional antennas are used. Our scheme is based on an adaptation of the current IEEE 15.3 standard MAC protocol for WPANs on two aspects. Firstly, we propose a rate-adaptation based scheme to coordinate the directional and omni-directional transmissions in WPANs. Secondly, we propose a novel channel time allocation algorithm, which enables spatial reuse TDMA. The analytical results reveal that our algorithm significantly increases the system capacity. --- paper_title: Enhanced MAC layer protocol for millimeter wave based WPAN paper_content: The IEEE 802.15.3 standard defines a widely-accepted MAC layer protocol for high data rate WPANs. However, resource allocation schemes are not specified in the IEEE 802.15.3 MAC. 
In this paper, an enhanced IEEE 802.15.3 MAC is proposed for 60 GHz based WPANs. The enhancement relies on a novel resource management scheme. By exploiting the advanced features of the 60 GHz technology, e.g. the usage of direction antennas, our resource management scheme could realize spatial reuse TDMA in 60 GHz based WPANs. We have implemented our protocol in OPNET. Based on this simulation platform we have evaluated the advantages of using our additions to the IEEE 802.15.3 MAC. We show that our enhanced MAC protocol significantly improves the throughput and delay characteristics in 60 GHz based WPANs. --- paper_title: Angle of arrival extended S-V model for the 60 GHz wireless indoor channel paper_content: In this paper we measure and characterize 55-65 GHz wireless channels for a typical indoor environment. An Angle of Arrival (AoA) Modified Saleh-Valenzuela (S-V) model is used. Key AoA Modified S-V model parameters such as cluster decay factor, ray decay factor, cluster arrival rate, ray arrival rate and aoa ray angular standard deviation are extracted from the measured data. --- paper_title: Augmenting data center networks with multi-gigabit wireless links paper_content: The 60 GHz wireless technology that is now emerging has the potential to provide dense and extremely fast connectivity at low cost. In this paper, we explore its use to relieve hotspots in oversubscribed data center (DC) networks. By experimenting with prototype equipment, we show that the DC environment is well suited to a deployment of 60GHz links contrary to concerns about interference and link reliability. Using directional antennas, many wireless links can run concurrently at multi-Gbps rates on top-of-rack (ToR) switches. The wired DC network can be used to sidestep several common wireless problems. By analyzing production traces of DC traffic for four real applications, we show that adding a small amount of network capacity in the form of wireless flyways to the wired DC network can improve performance. However, to be of significant value, we find that one hop indirect routing is needed. Informed by our 60GHz experiments and DC traffic analysis, we present a design that uses DC traffic levels to select and adds flyways to the wired DC network. Trace-driven evaluations show that network-limited DC applications with predictable traffic workloads running on a 1:2 oversubscribed network can be sped up by 45% in 95% of the cases, with just one wireless device per ToR switch. With two devices, in 40% of the cases, the performance is identical to that of a non-oversubscribed network. --- paper_title: Millimeter Wave WPAN: Cross-Layer Modeling and Multi-Hop Architecture paper_content: The 7 GHz of unlicensed spectrum in the 60 GHz band offers the potential for multiGigabit indoor wireless personal area networking (WPAN). With recent advances in the speed of silicon (CMOS and SiGe) processes, low-cost transceiver realizations in this "millimeter (mm) wave" band are within reach. However, mm wave communication links are more fragile than those at lower frequencies (e.g., 2.4 or 5 GHz) because of larger propagation losses and reduced diffraction around obstacles. On the other hand, directional antennas that provide directivity gains and reduction in delay spread are far easier to implement at mm-scale wavelengths. 
In this paper, we present a cross-layer modeling methodology and a novel multihop medium access control (MAC) architecture for efficient utilization of 60 GHz spectrum, taking into account the preceding physical characteristics. We propose an in-room WPAN architecture in which every link is constrained to be directional, for improved power efficiency (due to directivity gains) and simplicity of implementation (due to reduced delay spread). We develop an elementary diffraction-based model to determine network link connectivity, and define a multihop MAC protocol that accounts for directional transmission/reception, procedures for topology discovery and recovery from link blockages. --- paper_title: 3D beamforming for wireless data centers paper_content: Contrary to prior assumptions, recent measurements show that data center traffic is not constrained by network bisection bandwidth, but is instead prone to congestion loss caused by short traffic bursts. Compared to the cost and complexity of modifying data center architectures, a much more attractive option is to augment wired links with flexible wireless links in the 60 GHz band. Current proposals, however, are severely constrained by two factors. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles between the endpoints. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. In this paper, we explore the feasibility of a new wireless primitive for data centers, 3D beamforming. We explore the design space, and show how bouncing 60 GHz wireless links off reflective ceilings can address both link blockage and link interference, thus improving link range and number of current transmissions in the data center. --- paper_title: Millimeter Wave WPAN: Cross-Layer Modeling and Multi-Hop Architecture paper_content: The 7 GHz of unlicensed spectrum in the 60 GHz band offers the potential for multiGigabit indoor wireless personal area networking (WPAN). With recent advances in the speed of silicon (CMOS and SiGe) processes, low-cost transceiver realizations in this "millimeter (mm) wave" band are within reach. However, mm wave communication links are more fragile than those at lower frequencies (e.g., 2.4 or 5 GHz) because of larger propagation losses and reduced diffraction around obstacles. On the other hand, directional antennas that provide directivity gains and reduction in delay spread are far easier to implement at mm-scale wavelengths. In this paper, we present a cross-layer modeling methodology and a novel multihop medium access control (MAC) architecture for efficient utilization of 60 GHz spectrum, taking into account the preceding physical characteristics. We propose an in-room WPAN architecture in which every link is constrained to be directional, for improved power efficiency (due to directivity gains) and simplicity of implementation (due to reduced delay spread). We develop an elementary diffraction-based model to determine network link connectivity, and define a multihop MAC protocol that accounts for directional transmission/reception, procedures for topology discovery and recovery from link blockages. --- paper_title: Augmenting data center networks with multi-gigabit wireless links paper_content: The 60 GHz wireless technology that is now emerging has the potential to provide dense and extremely fast connectivity at low cost. 
In this paper, we explore its use to relieve hotspots in oversubscribed data center (DC) networks. By experimenting with prototype equipment, we show that the DC environment is well suited to a deployment of 60GHz links contrary to concerns about interference and link reliability. Using directional antennas, many wireless links can run concurrently at multi-Gbps rates on top-of-rack (ToR) switches. The wired DC network can be used to sidestep several common wireless problems. By analyzing production traces of DC traffic for four real applications, we show that adding a small amount of network capacity in the form of wireless flyways to the wired DC network can improve performance. However, to be of significant value, we find that one hop indirect routing is needed. Informed by our 60GHz experiments and DC traffic analysis, we present a design that uses DC traffic levels to select and adds flyways to the wired DC network. Trace-driven evaluations show that network-limited DC applications with predictable traffic workloads running on a 1:2 oversubscribed network can be sped up by 45% in 95% of the cases, with just one wireless device per ToR switch. With two devices, in 40% of the cases, the performance is identical to that of a non-oversubscribed network. --- paper_title: Robust Topology Engineering in Multiradio Multichannel Wireless Networks paper_content: Topology engineering concerns with the problem of automatic determination of physical layer parameters to form a network with desired properties. In this paper, we investigate the joint power control, channel assignment, and radio interface selection for robust provisioning of link bandwidth in infrastructure multiradio multichannel wireless networks in presence of channel variability and external interference. To characterize the logical relationship between spatial contention constraints and transmit power, we formulate the joint power control and radio-channel assignment as a generalized disjunctive programming problem. The generalized Benders decomposition technique is applied for decomposing the radio-channel assignment (combinatorial constraints) and network resource allocation (continuous constraints) so that the problem can be solved efficiently. The proposed algorithm is guaranteed to converge to the optimal solution within a finite number of iterations. We have evaluated our scheme using traces collected from two wireless testbeds and simulation studies in Qualnet. Experiments show that the proposed algorithm is superior to existing schemes in providing larger interference margin, and reducing outage and packet loss probabilities. --- paper_title: Codebook Beam Switching Based Relay Scheme for 60 GHz Anti-blockage Communication paper_content: A relay solution to link-blockage problem in 60GHz communication systems is proposed based on codebook beam switching. The directional link is easily blocked by a moving person and furniture, which is regarded as one of the severe problems in 60GHz systems. Beam switching mechanism is feasible to resolve link-blockage by switching beams from the blocked path to an optimal relay path. Typically, the selection of backup relay path is crucial. Based on beam training and environment learning, a beam pair sequence list (BPSL) for relay paths is proposed for accurate selection of relay path and rapid switching of communication link. The generation and evaluation of beam pair sequence are presented. Based on BPSL, a complete and closed-form link switching process is developed. 
Theoretical analysis and numerical results show that for IEEE 802.15.3c indoor channel model CM1, the proposed scheme has good performance in complexity, applicability and system throughput. --- paper_title: Joint Scalable Coding and Routing for 60 GHz Real-Time Live HD Video Streaming Applications paper_content: Transmission of high-definition (HD) video is a promising application for 60 GHz wireless links, since very high transmission rates (up to several Gbit/s) are possible. In particular we consider a sports stadium broadcasting system where signals from multiple cameras are transmitted to a central location. Due to the high pathloss of 60 GHz radiation over the large distances encountered in this scenario, the use of relays might be required. The current paper analyzes the joint selection of the routes (relays) and the compression rates from the various sources for maximization of the overall video quality. We consider three different scenarios: (i) each source transmits only to one relay and the relay can receive only one data stream, and (ii) each source can transmit only to a single relay, but relays can aggregate streams from different sources and forward to the destination, and (iii) the source can split its data stream into parallel streams, which can be transmitted via different relays to the destination. For each scenario, we derive the mathematical formulations of the optimization problem and re-formulate them as convex mixed-integer programming, which can guarantee optimal solutions. Extensive simulations demonstrate that high-quality transmission is possible for at least ten cameras over distances of 300 m. Furthermore, optimization of the video quality gives results that can significantly outperform algorithms that maximize data rates. --- paper_title: Millimeter Wave WPAN: Cross-Layer Modeling and Multi-Hop Architecture paper_content: The 7 GHz of unlicensed spectrum in the 60 GHz band offers the potential for multiGigabit indoor wireless personal area networking (WPAN). With recent advances in the speed of silicon (CMOS and SiGe) processes, low-cost transceiver realizations in this "millimeter (mm) wave" band are within reach. However, mm wave communication links are more fragile than those at lower frequencies (e.g., 2.4 or 5 GHz) because of larger propagation losses and reduced diffraction around obstacles. On the other hand, directional antennas that provide directivity gains and reduction in delay spread are far easier to implement at mm-scale wavelengths. In this paper, we present a cross-layer modeling methodology and a novel multihop medium access control (MAC) architecture for efficient utilization of 60 GHz spectrum, taking into account the preceding physical characteristics. We propose an in-room WPAN architecture in which every link is constrained to be directional, for improved power efficiency (due to directivity gains) and simplicity of implementation (due to reduced delay spread). We develop an elementary diffraction-based model to determine network link connectivity, and define a multihop MAC protocol that accounts for directional transmission/reception, procedures for topology discovery and recovery from link blockages. --- paper_title: Wireless networking with directional antennas for 60 GHz systems paper_content: 60 GHz wireless networks have the potential to support high data rate applications, but have short range transmission limitations due to larger propagation losses and reduced diffraction around obstacles. 
On the other hand, directional antennas are easier to implement at millimeter wavelengths and can provide benefits such as spatial reuse and higher transmission range. This paper proposes a network architecture for 60 GHz wireless personal area networks (WPANs) using directional antennas. It describes protocols for neighbor discovery, medium access, and multi-hop route establishment that exploit directional antennas to improve network performance and maintain connectivity. As a result, the proposed method can be used as a practical, low-cost solution to overcome the difficulty of short range and large propagation loss in 60 GHz systems. --- paper_title: A robust 60 GHz wireless network with parallel relaying paper_content: The challenge at millimeter-wave frequencies is that the propagation characteristics approximates to that of light. In a nonline-of-sight scenario, when even the mobile station (MS) is near the base station, the attenuation may be tens of dBs due to shadowing and obstructions. Increasing the number of base stations reduces such effects but at the expense of cost and complexity. An attractive method to mitigate such shadowing effects is to use dedicated active or passive relay stations. Proposed here is a network infrastructure in the form of a 3D pyramid. It consists of a single access point with four (but not restricted to) active relays operating in parallel in a medium sized room of 400 m/sup 2/. Simulation is performed in a sophisticated 3D ray tracing tool. Human shadowing densities of 1 person/400 m/sup 2/ up to 1 person/1 m/sup 2/ are set to test the robustness of such a system. Results show that comparing to a normal system with just a single access point either mounted on the ceiling or at the same level as a MS; the pyramid relaying system provides superior coverage and capacity. --- paper_title: 3D beamforming for wireless data centers paper_content: Contrary to prior assumptions, recent measurements show that data center traffic is not constrained by network bisection bandwidth, but is instead prone to congestion loss caused by short traffic bursts. Compared to the cost and complexity of modifying data center architectures, a much more attractive option is to augment wired links with flexible wireless links in the 60 GHz band. Current proposals, however, are severely constrained by two factors. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles between the endpoints. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. In this paper, we explore the feasibility of a new wireless primitive for data centers, 3D beamforming. We explore the design space, and show how bouncing 60 GHz wireless links off reflective ceilings can address both link blockage and link interference, thus improving link range and number of current transmissions in the data center. --- paper_title: Wireless data center networking with steered-beam mmWave links paper_content: This paper presents a new type of wireless networking applications in data centers using steered-beam mmWave links. By taking advantage of clean LOS channels on top of server racks, robust wireless packet-switching network can be built. The transmission latency can be reduced by flexibly bridging adjacent rows of racks wirelessly without using long cables and multiple switches. 
Eliminating cables and switches also reduces equipment costs as well as server installation and reconfiguration costs. Security can be physically enhanced with controlled directivity and negligible wall penetration. The aggregate data transmission BW per given volume is expected to scale as the fourth power of carrier frequency. The paper also deals with the architecture of such network configurations and a preliminary demonstration system. --- paper_title: Mirror mirror on the ceiling: flexible wireless links for data centers paper_content: Modern data centers are massive, and support a range of distributed applications across potentially hundreds of server racks. As their utilization and bandwidth needs continue to grow, traditional methods of augmenting bandwidth have proven complex and costly in time and resources. Recent measurements show that data center traffic is often limited by congestion loss caused by short traffic bursts. Thus an attractive alternative to adding physical bandwidth is to augment wired links with wireless links in the 60 GHz band. We address two limitations with current 60 GHz wireless proposals. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. We propose and evaluate a new wireless primitive for data centers, 3D beamforming, where 60 GHz signals bounce off data center ceilings, thus establishing indirect line-of-sight between any two racks in a data center. We build a small 3D beamforming testbed to demonstrate its ability to address both link blockage and link interference, thus improving link range and number of concurrent transmissions in the data center. In addition, we propose a simple link scheduler and use traffic simulations to show that these 3D links significantly expand wireless capacity compared to their 2D counterparts. --- paper_title: Augmenting data center networks with multi-gigabit wireless links paper_content: The 60 GHz wireless technology that is now emerging has the potential to provide dense and extremely fast connectivity at low cost. In this paper, we explore its use to relieve hotspots in oversubscribed data center (DC) networks. By experimenting with prototype equipment, we show that the DC environment is well suited to a deployment of 60GHz links contrary to concerns about interference and link reliability. Using directional antennas, many wireless links can run concurrently at multi-Gbps rates on top-of-rack (ToR) switches. The wired DC network can be used to sidestep several common wireless problems. By analyzing production traces of DC traffic for four real applications, we show that adding a small amount of network capacity in the form of wireless flyways to the wired DC network can improve performance. However, to be of significant value, we find that one hop indirect routing is needed. Informed by our 60GHz experiments and DC traffic analysis, we present a design that uses DC traffic levels to select and adds flyways to the wired DC network. Trace-driven evaluations show that network-limited DC applications with predictable traffic workloads running on a 1:2 oversubscribed network can be sped up by 45% in 95% of the cases, with just one wireless device per ToR switch. With two devices, in 40% of the cases, the performance is identical to that of a non-oversubscribed network. 
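The flyway selection problem described in the abstract above (adding a small number of 60 GHz links on top of an oversubscribed wired fabric, driven by measured traffic) can be illustrated with a simple greedy heuristic. The sketch below is only a rough illustration of the idea, not the authors' algorithm; the demand matrix, the one-radio-per-ToR budget, and the scoring are assumptions made for the example.

```python
# Hypothetical sketch of greedy 60 GHz "flyway" placement between top-of-rack
# (ToR) switches, in the spirit of the traffic-driven link addition described
# above. Demand values, radio budget, and scoring are illustrative only.

def place_flyways(demand, radios_per_tor=1):
    """demand: dict mapping (src_tor, dst_tor) -> offered load (e.g., Gbit/s).
    Returns a list of (tor_a, tor_b, load) pairs to bridge with a flyway."""
    # Aggregate demand into undirected ToR pairs.
    pairs = {}
    for (s, d), load in demand.items():
        key = tuple(sorted((s, d)))
        pairs[key] = pairs.get(key, 0.0) + load

    used = {}      # radios already committed per ToR
    flyways = []
    # Serve the hottest pairs first, subject to the per-ToR radio budget.
    for (a, b), load in sorted(pairs.items(), key=lambda kv: kv[1], reverse=True):
        if used.get(a, 0) < radios_per_tor and used.get(b, 0) < radios_per_tor:
            flyways.append((a, b, load))
            used[a] = used.get(a, 0) + 1
            used[b] = used.get(b, 0) + 1
    return flyways

if __name__ == "__main__":
    demand = {("tor1", "tor7"): 8.0, ("tor2", "tor7"): 5.5, ("tor1", "tor3"): 1.2}
    print(place_flyways(demand))
```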
--- paper_title: Beam switching support to resolve link-blockage problem in 60 GHz WPANs paper_content: In this paper, we propose a solution to resolve link blockage problem in 60 GHz WPANs. Line-of-Sight (LOS) link is easily blocked by a moving person, which is concerned as one of the severe problems in 60 GHz systems. Beamforming is a feasible technique to resolve link blockage by switching the beam path from LOS link to a Non-LOS (NLOS) link. We propose and evaluate two kinds of Beam Switching (BS) mechanisms: instant decision based BS and environment learning based BS. We examine these mechanisms in a typical indoor WPAN scenario. Extensive simulations have been carried out, and our results reveal that combining angle-of-arrival with the received signal to noise ratio could make better decision for beam switching. Our work provides valuable observations for beam switching during point-to-point communication using 60 GHz radio. --- paper_title: Joint Scalable Coding and Routing for 60 GHz Real-Time Live HD Video Streaming Applications paper_content: Transmission of high-definition (HD) video is a promising application for 60 GHz wireless links, since very high transmission rates (up to several Gbit/s) are possible. In particular we consider a sports stadium broadcasting system where signals from multiple cameras are transmitted to a central location. Due to the high pathloss of 60 GHz radiation over the large distances encountered in this scenario, the use of relays might be required. The current paper analyzes the joint selection of the routes (relays) and the compression rates from the various sources for maximization of the overall video quality. We consider three different scenarios: (i) each source transmits only to one relay and the relay can receive only one data stream, and (ii) each source can transmit only to a single relay, but relays can aggregate streams from different sources and forward to the destination, and (iii) the source can split its data stream into parallel streams, which can be transmitted via different relays to the destination. For each scenario, we derive the mathematical formulations of the optimization problem and re-formulate them as convex mixed-integer programming, which can guarantee optimal solutions. Extensive simulations demonstrate that high-quality transmission is possible for at least ten cameras over distances of 300 m. Furthermore, optimization of the video quality gives results that can significantly outperform algorithms that maximize data rates. --- paper_title: Millimeter Wave WPAN: Cross-Layer Modeling and Multi-Hop Architecture paper_content: The 7 GHz of unlicensed spectrum in the 60 GHz band offers the potential for multiGigabit indoor wireless personal area networking (WPAN). With recent advances in the speed of silicon (CMOS and SiGe) processes, low-cost transceiver realizations in this "millimeter (mm) wave" band are within reach. However, mm wave communication links are more fragile than those at lower frequencies (e.g., 2.4 or 5 GHz) because of larger propagation losses and reduced diffraction around obstacles. On the other hand, directional antennas that provide directivity gains and reduction in delay spread are far easier to implement at mm-scale wavelengths. In this paper, we present a cross-layer modeling methodology and a novel multihop medium access control (MAC) architecture for efficient utilization of 60 GHz spectrum, taking into account the preceding physical characteristics. 
We propose an in-room WPAN architecture in which every link is constrained to be directional, for improved power efficiency (due to directivity gains) and simplicity of implementation (due to reduced delay spread). We develop an elementary diffraction-based model to determine network link connectivity, and define a multihop MAC protocol that accounts for directional transmission/reception, procedures for topology discovery and recovery from link blockages. --- paper_title: Improving 60 GHz Indoor Connectivity with Relaying paper_content: The 60 GHz technology has a great potential to provide wireless communication at multi-gigabit rates in future home networks. Maintaining network connectivity with 60 GHz links, which are highly susceptible to propagation and penetration losses, is a major challenge. The quality and the robustness of the 60 GHz links can be improved by employing relay nodes in the network. In this paper, the contribution of relaying to the connectivity and the quality of the 60 GHz radio links is studied by modeling three indoor scenarios. It is shown analytically and through simulations that having a relay node in a 60 GHz network decreases the average free-space path loss by 33% in the worst case scenario. The effects of relay device position and the obstacle density on the improvement of the average received signal level are investigated with a verified 3D ray tracing tool. A comparative simulation study on the performance of different relay configurations under various network conditions is conducted. The results show that even a single relay device positioned at the height of other nodes can considerably improve 50% of the links in a 60 GHz indoor network. It is also shown that additional relay nodes do not contribute to 60 GHz indoor connectivity significantly if there are two properly positioned relay devices in a moderately populated network. --- paper_title: Multi-Hop Concurrent Transmission in Millimeter Wave WPANs with Directional Antenna paper_content: Millimeter-wave (mmWave) communications is a promising enabling technology for high rate (Giga-bit) multimedia applications. However, because oxygen absorption peaks at 60 GHz, mmWave signal power degrades significantly over distance. Therefore, a traffic flow transmitting over multiple short hops is preferred to improve flow throughput. In this paper, we first design a hop selection metric for the piconet controller (PNC) to select appropriate relay hops for a traffic flow, aiming to improve the flow throughput and balance the traffic load across the network. We then propose a multi-hop concurrent transmission (MHCT) scheme to exploit the spatial capacity of the mmWave WPAN. Extensive simulations show that the proposed MHCT scheme can significantly improve the traffic flow throughput and network throughput. --- paper_title: Directional Relay with Spatial Time Slot Scheduling for mmWave WPAN Systems paper_content: In this paper, we propose a spatial time slot scheduling algorithm for relay operation to improve the throughput performance of millimeter-wave wireless personal area network (mmWave WPAN) systems which employ directional antennas. The upcoming mmWave WPAN is designed for high definition TV (HDTV) transmission, high speed wireless docking and gaming, etc.
Based on the fact that the significant path loss of millimeter-wave environments provides good space isolation, we have proposed a coexistence mechanism that shares time slots between relay and direct transmissions to guarantee throughput for the above data-rate-greedy applications. This paper is an extension that addresses spatial time slot scheduling for relay operation, taking the effect of directional antennas into consideration. We model the throughput maximization with scheduling as an integer optimization and solve it by transforming the problem into a max-weight matching problem on a bipartite graph. We propose a scheduling algorithm based on the Kuhn-Munkres algorithm, which can be used to solve the max-weight matching problem. Simulation results show that up to 25% throughput improvement is achieved compared with a random scheduling method. --- paper_title: Deflect Routing for Throughput Improvement in Multi-Hop Millimeter-Wave WPAN System paper_content: In this paper, we propose a cross-layer deflect routing scheme that improves the effective throughput of a multi-hop millimeter-wave wireless personal area network (mmWave WPAN) system. The upcoming mmWave WPAN is designed to provide Gbps-order transmission capability, targeting applications like high definition TV (HDTV) transmission, high speed wireless docking, gaming, etc. Adopting multi-hop relay offers solutions to issues in mmWave WPAN systems such as the limited coverage range caused by the significant path loss and the dramatic communication link changes due to unexpected blockage of the line-of-sight (LOS) path. However, multi-hop relay on the other hand decreases the effective throughput because of the required extra time. As a result, the compromised throughput may not be able to support the above applications, which are greedy for data rate. Inspired by the fact that the significant path loss of the millimeter-wave environment can provide good space isolation, our proposed scheme improves the multi-hop throughput by sharing the time slots of the relay path with the direct path for other transmissions. We also propose two algorithms, namely random fit deflect routing (RFDR) and best fit deflect routing (BFDR), to find a relay path on which the interference due to time slot sharing is low enough to guarantee the concurrent transmissions. Computer simulations show that, in a realistic 60 GHz environment, the effective multi-hop throughput can be improved by up to 45%. ---
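The scheduler above casts relay time-slot assignment as a max-weight matching on a bipartite graph solved with the Kuhn-Munkres algorithm. As a hedged illustration of that formulation (not the authors' implementation), the sketch below uses SciPy's linear_sum_assignment, which solves the same assignment problem; the weight matrix is invented for the example.

```python
# Illustrative sketch only: assigning relay transmissions to time slots as a
# max-weight bipartite matching, as in the Kuhn-Munkres-based scheduler
# described above. The weights below are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

# weight[i, j]: estimated throughput gain of running relay transmission i
# in time slot j (e.g., accounting for interference with the direct link).
weight = np.array([
    [5.0, 1.0, 0.0],
    [2.0, 4.0, 1.0],
    [0.0, 2.0, 3.0],
])

# linear_sum_assignment solves the assignment (max-weight matching) problem.
rows, cols = linear_sum_assignment(weight, maximize=True)
schedule = list(zip(rows, cols))
print("relay -> slot:", schedule, "total weight:", weight[rows, cols].sum())
```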
Title: A survey of single and multi-hop link schedulers for mmWave wireless systems Section 1: INTRODUCTION Description 1: In this section, write about the advances and significance of 60 GHz, or Millimeter-Wave (mmWave), communications, regulatory changes, and their implications for future wireless networks. Mention challenges such as high path loss, link blockage problems, and potential solutions involving directional antennas and relay nodes. Section 2: SINGLE HOP Description 2: Discuss the implementation of single-hop communication in 60 GHz systems, primarily focusing on the IEEE 802.15.3c specification. Review various algorithms and methods for scheduling non-interfering links to maximize spatial reuse and network capacity. Section 3: MULTI-HOP Description 3: Explain the link blockage problem unique to 60 GHz systems and the necessity for multi-hop communication. Describe the types of relays (active and passive) used to overcome this issue and examine specific protocols and algorithms designed to maintain network connectivity and optimize relay placement. Section 4: Active Relays Description 4: Detail the different strategies involving active relays for maintaining network connectivity, such as discovery processes, normal operation modes, and handling of lost nodes. Highlight specific protocols and their methodologies for employing active relays in dynamic network environments. Section 5: Passive Relays Description 5: Explore the use of passive relays to address link blockage, especially in fixed network scenarios like data centers. Discuss various positioning strategies and scheduling algorithms that exploit ceiling reflectors or strategically placed passive relays to enhance coverage and capacity. Section 6: HYBRID APPROACHES Description 6: Illustrate approaches that combine spatial reuse with relay path selection to maximize network throughput. Review algorithms such as random fit deflect routing (RFDR) and best fit deflect routing (BFDR) and consider concurrent transmission schemes that utilize multi-hop routes efficiently. Section 7: CONCLUSION Description 7: Summarize the key problems and solutions discussed throughout the survey. Highlight areas where future research is needed, including distributed solutions for maximizing both spatial reuse and transmission data rates, optimizing passive relay placement, and minimizing transmission delays in video applications. ...
Electromagnetic Tracking in Medicine—A Review of Technology, Validation, and Applications
8
--- paper_title: Electromagnetic tracking in the clinical environment paper_content: When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments. --- paper_title: Computer-aided navigation in neurosurgery paper_content: The article comprises three main parts: a historical review on navigation, the mathematical basics for calculation and the clinical applications of navigation devices. Main historical steps are described from the first idea till the realisation of the frame-based and frameless navigation devices including robots. In particular the idea of robots can be traced back to the Iliad of Homer, the first testimony of European literature over 2500 years ago. In the second part the mathematical calculation of the mapping between the navigation and the image space is demonstrated, including different registration modalities and error estimations. The error of the navigation has to be divided into the technical error of the device calculating its own position in space, the registration error due to inaccuracies in the calculation of the transformation matrix between the navigation and the image space, and the application error caused additionally by anatomical shift of the brain structures during operation. In the third part the main clinical fields of application in modern neurosurgery are demonstrated, such as localisation of small intracranial lesions, skull-base surgery, intracerebral biopsies, intracranial endoscopy, functional neurosurgery and spinal navigation. At the end of the article some possible objections to navigation-aided surgery are discussed. 
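Several of the navigation systems above rely on computing the rigid mapping between tracker (navigation) space and image space from corresponding fiducial points, and on reporting a registration error. The following sketch shows one standard way to do this (an SVD-based least-squares fit with an RMS fiducial registration error); it is a generic illustration under assumed fiducial coordinates, not the registration procedure of any particular system cited here.

```python
# Minimal sketch of rigid point-based registration between tracker space and
# image space using the SVD-based Kabsch/Horn solution. Fiducials are made up.
import numpy as np

def rigid_register(tracker_pts, image_pts):
    """Return rotation R and translation t such that R @ p_tracker + t ~ p_image."""
    ct, ci = tracker_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (tracker_pts - ct).T @ (image_pts - ci)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = ci - R @ ct
    return R, t

def fiducial_registration_error(R, t, tracker_pts, image_pts):
    residual = (R @ tracker_pts.T).T + t - image_pts
    return np.sqrt((residual ** 2).sum(axis=1).mean())   # RMS FRE

if __name__ == "__main__":
    tracker = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
    # Hypothetical image-space fiducials: rotated and translated tracker points.
    Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
    image = tracker @ Rz.T + np.array([10.0, 20.0, 5.0])
    R, t = rigid_register(tracker, image)
    print("FRE [mm]:", fiducial_registration_error(R, t, tracker, image))
```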
--- paper_title: Image-guided interventions : technology and applications paper_content: Overview and History of Image-Guided Interventions.- Tracking Devices.- Visualization in Image-Guided Interventions.- Augmented Reality.- Software.- Rigid Registration.- Nonrigid Registration.- Model-Based Image Segmentation for Image-Guided Interventions.- Imaging Modalities.- MRI-Guided FUS and its Clinical Applications.- Neurosurgical Applications.- Computer-Assisted Orthopedic Surgery.- Thoracoabdominal Interventions.- Real-Time Interactive MRI for Guiding Cardiovascular Surgical Interventions.- Three-Dimensional Ultrasound Guidance and Robot Assistance for Prostate Brachytherapy.- Radiosurgery.- Radiation Oncology.- Assessment of Image-Guided Interventions. --- paper_title: Optimisation and evaluation of an electromagnetic tracking device for high-accuracy three-dimensional ultrasound imaging of the carotid arteries. paper_content: Electromagnetic tracking devices provide a flexible, low cost solution for three-dimensional ultrasound (3-D US) imaging. They are, however, susceptible to interference. A commercial device (Ascension pcBIRD) was evaluated to assess the accuracy in locating the scan probe as part of a digital, freehand 3-D US imaging system aimed at vascular applications. The device was optimised by selecting a measurement rate and filter setting that minimised the mean deviation in repeated position and orientation measurements. Experimental evaluation of accuracy indicated that, overall, absolute errors were small: the RMS absolute error was 0.2 mm (range: -0.7 to 0.5 mm) for positional measurements over translations up to 90 mm, and 0.2 degrees (range: -0.8 to 0.9 degrees ) for rotational measurements up to 30 degrees. In the case of position measurements, the absolute errors were influenced by the location of the scanner relative to the scan volume. We conclude that the device tested provides an accuracy sufficient for use within a freehand 3-D US system for carotid artery imaging. --- paper_title: Intraventricular catheter placement by electromagnetic navigation safely applied in a paediatric major head injury patient paper_content: INTRODUCTION ::: In the management of severe head injuries, the use of intraventricular catheters for intracranial pressure (ICP) monitoring and the option of cerebrospinal fluid drainage is gold standard. In children and adolescents, the insertion of a cannula in a compressed ventricle in case of elevated intracranial pressure is difficult; therefore, a pressure sensor is placed more often intraparenchymal as an alternative option. ::: ::: ::: DISCUSSION ::: In cases of persistent elevated ICP despite maximal brain pressure management, the use of an intraventricular monitoring device with the possibility of cerebrospinal fluid drainage is favourable. We present the method of intracranial catheter placement by means of an electromagnetic navigation technique. --- paper_title: Volumetric characterization of the Aurora magnetic tracker system for image-guided transorbital endoscopic procedures. paper_content: In some medical procedures, it is difficult or impossible to maintain a line of sight for a guidance system. For such applications, people have begun to use electromagnetic trackers. Before a localizer can be effectively used for an image-guided procedure, a characterization of the localizer is required. 
The purpose of this work is to perform a volumetric characterization of the fiducial localization error (FLE) in the working volume of the Aurora magnetic tracker by sampling the magnetic field using a tomographic grid. Since the Aurora magnetic tracker will be used for image-guided transorbital procedures we chose a working volume that was close to the average size of the human head. A Plexiglass grid phantom was constructed and used for the characterization of the Aurora magnetic tracker. A volumetric map of the magnetic space was performed by moving the flat Plexiglass phantom up in increments of 38.4 mm from 9.6 mm to 201.6 mm. The relative spatial and the random FLE were then calculated. Since the target of our endoscopic guidance is the orbital space behind the optic nerve, the maximum distance between the field generator and the sensor was calculated depending on the placement of the field generator from the skull. For the different field generator placements we found the average random FLE to be less than 0.06 mm for the 6D probe and 0.2 mm for the 5D probe. We also observed an average relative spatial FLE of less than 0.7 mm for the 6D probe and 1.3 mm for the 5D probe. We observed that the error increased as the distance between the field generator and the sensor increased. We also observed a minimum error occurring between 48 mm and 86 mm from the base of the tracker. --- paper_title: Dynamic Response of Electromagnetic Spatial Displacement Trackers paper_content: Overall system latency-the elapsed time from input human motion until the immediate consequences of that input are available in the display-is one of the most frequently cited shortcoming of current virtual environment VE technology. Given that spatial displacement trackers are employed to monitor head and hand position and orientation in many VE applications, the dynamic response intrinsic to these devices is an unavoidable contributor to overall system latency. In this paper, we describe a testbed and method for measurement of tracker dynamic response that use a motorized rotary swing arm to sinusoidally displace the VE sensor at a number of frequencies spanning the bandwidth of volitional human movement. During the tests, actual swing arm angle and VE sensor reports are collected and time stamped. By calibrating the time stamping technique, the tracker's internal transduction and processing time are separated from data transfer and host computer software execution latencies. We have used this test-bed to examine several VE sensors-most recently to compare latency, gain, and noise characteristics of two commercially available electromagnetic trackers: Ascension Technology Corp.'s Flock of BirdsTM and Polhemus Inc.'s FastrakTM. --- paper_title: Advanced superconducting gradiometer/Magnetometer arrays and a novel signal processing technique paper_content: Recent developments in superconducting magnetic gradiometer technology have led to the construction of advanced ultrasensitive gradiometer/magnetometer arrays. Details of construction techniques and data showing operational capabilities are presented. The most recent of the gradiometer/magnetometer arrays simultaneously measures five independent spatial gradients of the magnetic field and three vector components of the magnetic field. The measured signals from this array are subjected to a novel signal processing technique which provides detailed information about the magnetic signal source. 
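The volumetric characterization above reports two common figures of merit: the random fiducial localization error (jitter of repeated readings of a static sensor) and the relative spatial error against known phantom distances. The snippet below is a minimal sketch of how such figures can be computed; the synthetic readings and the assumed 50 mm hole spacing stand in for real phantom data.

```python
# Hedged sketch of two tracker error figures: positional jitter (random FLE,
# spread of repeated readings about their mean) and relative distance error
# against a known grid spacing. Sample data and spacing are illustrative.
import numpy as np

def jitter_rms(samples):
    """samples: (N, 3) repeated position readings of a static sensor."""
    dev = samples - samples.mean(axis=0)
    return np.sqrt((dev ** 2).sum(axis=1).mean())

def relative_distance_error(mean_pos_a, mean_pos_b, known_distance_mm):
    """Measured minus nominal distance between two phantom hole positions."""
    return np.linalg.norm(mean_pos_a - mean_pos_b) - known_distance_mm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    at_hole_a = np.array([100.0, 200.0, 50.0]) + rng.normal(0, 0.1, (200, 3))
    at_hole_b = np.array([150.2, 200.0, 50.0]) + rng.normal(0, 0.1, (200, 3))
    print("jitter [mm]:", jitter_rms(at_hole_a))
    print("50 mm distance error [mm]:",
          relative_distance_error(at_hole_a.mean(0), at_hole_b.mean(0), 50.0))
```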
--- paper_title: Accuracy of electromagnetic tracking with a prototype field generator in an interventional OR setting. paper_content: PURPOSE ::: The authors have studied the accuracy and robustness of a prototype electromagnetic window field generator (WFG) in an interventional radiology suite with a robotic C-arm. The overall purpose is the development of guidance systems combining real-time imaging with tracking of flexible instruments for bronchoscopy, laparoscopic ultrasound, endoluminal surgery, endovascular therapy, and spinal surgery. ::: ::: ::: METHODS ::: The WFG has a torus shape, which facilitates x-ray imaging through its centre. The authors compared the performance of the WFG to that of a standard field generator (SFG) under the influence of the C-arm. Both accuracy and robustness measurements were performed with the C-arm in different positions and poses. ::: ::: ::: RESULTS ::: The system was deemed robust for both field generators, but the accuracy was notably influenced as the C-arm was moved into the electromagnetic field. The SFG provided a smaller root-mean-square position error but was more influenced by the C-arm than the WFG. The WFG also produced a smaller maximum error and error variance. ::: ::: ::: CONCLUSIONS ::: Electromagnetic (EM) tracking with the new WFG during C-arm based fluoroscopy guidance seems to be a step forward, and with a correction scheme implemented it should be feasible. --- paper_title: Design and application of an assessment protocol for electromagnetic tracking systems. paper_content: This paper defines a simple protocol for competitive and quantified evaluation of electromagnetic tracking systems such as the NDI Aurora (A) and Ascension microBIRD with dipole transmitter (B). It establishes new methods and a new phantom design which assesses the reproducibility and allows comparability with different tracking systems in a consistent environment. A machined base plate was designed and manufactured in which a 50 mm grid of holes was precisely drilled for position measurements. In the center a circle of 32 equispaced holes enables the accurate measurement of rotation. The sensors can be clamped in a small mount which fits into pairs of grid holes on the base plate. Relative positional/orientational errors are found by subtracting the known distances/rotations between the machined locations from the differences of the mean observed positions/rotations. To measure the influence of metallic objects we inserted rods made of steel (SST 303, SST 416), aluminum, and bronze into the sensitive volume between sensor and emitter. We calculated the fiducial registration error and fiducial location error with a standard stylus calibration for both tracking systems and assessed two different methods of stylus calibration. The positional jitter amounted to 0.14 mm (A) and 0.08 mm (B). For a given distance of 50 mm, a relative positional error of 0.96 mm ± 0.68 mm (range: -0.06 mm to 2.23 mm) was found for (A) and 1.14 mm ± 0.78 mm (range: -3.72 mm to 1.57 mm) for (B). The relative rotation error was found to be 0.51 degrees (A)/0.04 degrees (B). The most relevant distortion caused by metallic objects results from SST 416. The maximum error (4.2 mm for (A), ≥ 100 mm for (B)) occurs when the rod is close to the sensor (20 mm). While (B) is more sensitive with respect to metallic objects, (A) is less accurate concerning orientation measurements. (B) showed a systematic error when distances are calculated.
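The assessment protocol above uses a stylus (pivot) calibration to obtain the tool tip position before computing fiducial errors. A common algebraic formulation stacks one linear equation per tracked pose and solves for the tip offset and pivot point in a least-squares sense; the sketch below illustrates that generic approach with synthetic poses and is not the specific calibration method used in the cited study.

```python
# Minimal sketch, under assumed inputs, of algebraic pivot ("stylus")
# calibration: from tracked poses of a sensor while the stylus tip pivots
# about a fixed point, solve for the tip offset in sensor coordinates and
# the pivot point in tracker coordinates.
import numpy as np

def pivot_calibration(rotations, positions):
    """rotations: list of 3x3 sensor orientations R_i; positions: list of
    sensor origins p_i. For each pose, R_i @ t_tip + p_i = p_pivot."""
    A, b = [], []
    for R, p in zip(rotations, positions):
        A.append(np.hstack([R, -np.eye(3)]))
        b.append(-np.asarray(p, float))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x[:3], x[3:]          # (tip offset, pivot point)

if __name__ == "__main__":
    # Synthetic poses pivoting about a fixed point, for illustration only.
    rng = np.random.default_rng(1)
    true_tip, true_pivot = np.array([0.0, 0.0, 120.0]), np.array([10.0, -5.0, 40.0])
    Rs, ps = [], []
    for _ in range(30):
        axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
        angle = rng.uniform(0, 0.6)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues
        Rs.append(R)
        ps.append(true_pivot - R @ true_tip)
    tip, pivot = pivot_calibration(Rs, ps)
    print("tip offset:", tip, "pivot point:", pivot)
```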
--- paper_title: Method for estimating dynamic EM tracking accuracy of surgical navigation tools paper_content: Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. Many high-accuracy measurement devices can affect the EM measurements being validated. EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included. --- paper_title: Magnetic Position and Orientation Tracking System paper_content: Three-axis generation and sensing of quasi-static magneticdipole fields provide information sufficient to determine both the position and orientation of the sensor relative to the source. Linear rotation transformations based upon the previous measurements are applied to both the source excitation and sensor output vectors, yielding quantities that are linearly propotional to small changes in the position and orientation. Changes are separated using linear combinations of sensor output vectors, transformed to the desired coordinate frame, and used to update the previous measurements. Practical considerations for a head-tracking application are discussed. --- paper_title: SPASYN-an electromagnetic relative position and orientation tracking system paper_content: Two relatively remote independent body coordinate frames are related in both position and orientation (six degrees of freedom) using precise electromagnetic field measurements. Antenna triads are fixed in each body frame. Variously polarized excitations in one body are correlated with signals detected in the remote body. Near-field and far-field processing strategies are presented with applications. --- paper_title: Image-Guided Endoscopic Surgery: Results of Accuracy and Performance in a Multicenter Clinical Study Using an Electromagnetic Tracking System paper_content: Image-guided surgery has recently been described in the literature as a useful technology for improved functional endoscopic sinus surgery localization. Image-guided surgery yields accurate knowledge of the surgical field boundaries, allowing safer and more thorough sinus surgery. We have previously reviewed our initial experience with The InstaTrak System. This article presents a multicenter clinical study (n=55) that assesses the system's capability for localizing structures in critical surgical sites. The purpose of this paper is to present quantitative data on accuracy and performance. 
We describe several new advances including an automated registration technique that eliminates the redundant computed tomography scan, compensation for head movement, and the ability to use interchangeable instruments. --- paper_title: Application of electromagnetic navigation in surgical treatment of intracranial tumors: analysis of 12 cases. paper_content: OBJECTIVE ::: To explore the application and characteristics of electromagnetic navigation in neurosurgical operation. ::: ::: ::: METHODS ::: Neurosurgical operations with the assistance of electromagnetic navigation were performed in 12 patients with intracranial tumors. ::: ::: ::: RESULTS ::: Total removal of the tumor was achieved in 8 cases, subtotal removal in 3 and removal of the majority of the tumor in 1 case. The error in the navigation averaged 1.9+/-0.9 mm and the time consumed by preoperative preparation was 19+/-2 min with the exception in 1 case. ::: ::: ::: CONCLUSION ::: In comparison with optic navigation, electromagnetic navigation offers better convenience and absence of signal blockage, and with a head frame, automatic registration can be achieved. --- paper_title: A Paired-Orientation Alignment Problem in a Hybrid Tracking System for Computer Assisted Surgery paper_content: Coordinate Alignment (CA) is an important problem in hybrid tracking systems involving two or more tracking devices. CA typically associates the measurements from two or more tracking systems with respect to distinct base frames and makes them comparable in the same aligned coordinate system. In this article, we discuss a sub-problem, Paired-Orientation Alignment (POA), in the category of CA. This sub-problem occurs during the development of an integrated electromagnetic and inertial attitude (orientation) tracking system, where only the orientation information is acquired from the two tracking devices. The problem is modeled as a matrix equation YC = D with constraints, which can be solved as a least-squares problem using quaternions. A closed-form analytical solution is given by the pseudo-inverse matrix. This method is specifically for registering the paired-orientation measurements between two coordinate systems, without using position information. The algorithm is illustrated by simulations and proof-of-concept tracking experiments. --- paper_title: Study on an experimental AC electromagnetic tracking system paper_content: 3D tracking system is one of the key devices to realize the sense of immersion and human computer interaction in a virtual or augmented reality system. This paper presents the design of an experimental AC electromagnetic 3D tracking system that is based on AC magnetic field transmitting and sensing. The proposed system is composed of 3-axis orthogonal magnetic sensor, 3-axis orthogonal magnetic transmitter, 2-axis accelerometers, data acquisition and processing system etc. After obtaining the orientation of the receiver by measuring earth magnetic and gravity field with the DC output of magnetic sensors and accelerometers, the position can be calculated from the received AC magnetic field generated by the magnetic transmitter. The design of the experimental system on the basis of theoretical analysis is presented in detail and the results of the actual experiment prove the feasibility of the proposed system. --- paper_title: Evaluation of dynamic electromagnetic tracking deviation paper_content: Electromagnetic tracking systems (EMTS's) are widely used in clinical applications. 
Many reports have evaluated their static behavior, and errors caused by metallic objects have been examined. Although there exist some publications concerning the dynamic behavior of EMTSs, the measurement protocols are either difficult to reproduce with respect to the movement path or can only be accomplished with high technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective measurement protocol that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar. This assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as the rotation center and length, are determined by static measurement with satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations describing pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums for different velocities. We repeated the measurements with different metal objects (rods made of stainless steel type 303 and 416) between field generator and pendulum. ::: We found a root mean square error (e_RMS) of 1.02 mm with respect to the distance of the sensor position to the fit plane (maximum error e_max = 2.31 mm, minimum error e_min = -2.36 mm). The e_RMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms. --- paper_title: Towards image guided robotic surgery: multi-arm tracking through hybrid localization paper_content: Objective ::: Use of robotic assisted surgery has been increasing in recent years, due both to the continuous increase in the number of applications and to the clinical benefits that surgical robots can provide. Currently, robotic assisted surgery relies on endoscopic video for navigation, providing only surface visualization, thus limiting subsurface vision. To be able to visualize and identify subsurface information, techniques in image-guidance can be used. As part of designing an image guidance system, all arms of the robot need to be co-localized in a common coordinate system. --- paper_title: An improved calibration framework for electromagnetic tracking devices paper_content: Electromagnetic trackers have many favorable characteristics but are notorious for their sensitivity to magnetic field distortions resulting from metal and electronic equipment in the environment. We categorize existing tracker calibration methods and present an improved technique for reducing the static position and orientation errors that are inherent to these devices. A quaternion-based formulation provides a simple and fast computational framework for representing orientation errors. Our experimental apparatus consists of a 6-DOF mobile platform and an optical position measurement system, allowing the collection of full-pose data at nearly arbitrary orientations of the receiver. A polynomial correction technique is applied and evaluated using a Polhemus Fastrak, resulting in a substantial improvement of tracking accuracy.
Finally, we apply advanced visualization algorithms to give new insight into the nature of the magnetic distortion field. --- paper_title: Dynamic Response of Electromagnetic Spatial Displacement Trackers paper_content: Overall system latency-the elapsed time from input human motion until the immediate consequences of that input are available in the display-is one of the most frequently cited shortcoming of current virtual environment VE technology. Given that spatial displacement trackers are employed to monitor head and hand position and orientation in many VE applications, the dynamic response intrinsic to these devices is an unavoidable contributor to overall system latency. In this paper, we describe a testbed and method for measurement of tracker dynamic response that use a motorized rotary swing arm to sinusoidally displace the VE sensor at a number of frequencies spanning the bandwidth of volitional human movement. During the tests, actual swing arm angle and VE sensor reports are collected and time stamped. By calibrating the time stamping technique, the tracker's internal transduction and processing time are separated from data transfer and host computer software execution latencies. We have used this test-bed to examine several VE sensors-most recently to compare latency, gain, and noise characteristics of two commercially available electromagnetic trackers: Ascension Technology Corp.'s Flock of BirdsTM and Polhemus Inc.'s FastrakTM. --- paper_title: Investigation of Attitude Tracking Using an Integrated Inertial and Magnetic Navigation System for Hand-Held Surgical Instruments paper_content: Due to the need for accurate navigation in minimally invasive surgery, many methods have been introduced to the operating room for tracking the position and orientation of instruments. This paper considers the subproblem of using integrated inertial and magnetic sensing to track the attitude (orientation) of surgical instruments. In this scenario, it is usually assumed that the sensor is quasi-static and the surrounding magnetic field is steady. For practical hand-held surgical instruments, perturbations exist due to intended and unintended (e.g., tremor) motion and due to distortion of the surrounding magnetic field. We consider the problem of estimating the gravity and magnetic field in the inertial sensor frame with small perturbations. The dynamics of the gravity and magnetic field is studied under perturbations, their relationships to gyroscope measurements are analyzed, and Kalman filters (KFs) are formulated to reduce these perturbations. The estimated gravity and magnetic values (outputs of the KFs) are subsequently used in an extended KF for attitude estimation. In this filter, the prediction model is given by the system dynamics, formulated using quaternions, and the observation model is given by vector analysis of the estimated gravity and magnetic field. Experiments are performed to validate the algorithms under clinically realistic motions. The complete system demonstrates an improvement in the accuracy of the attitude estimate in the presence of small perturbations, and satisfies the specified accuracy requirement of 1°. --- paper_title: A review of RFID localization: Applications and techniques paper_content: Indoor localization has been actively researched recently due to security and safety as well as service matters. Previous research and development for indoor localization includes infrared, wireless LAN and ultrasonic. 
However, these technologies suffer either from the limited accuracy or lacking of the infrastructure. Radio Frequency Identification (RFID) is very attractive because of reasonable system price, and reader reliability. The RFID localization can be categorized into tag and reader localizations. In this paper, major localization techniques for both tag and reader localizations are reviewed to provide the readers state of the art of the indoor localization algorithms. The advantage and disadvantage of each technique for particular applications were also discussed. --- paper_title: A new system to perform continuous target tracking for radiation and surgery using non-ionizing alternating current electromagnetics paper_content: Abstract A new technology based on alternating current (AC) nonionizing electromagnetic fields has been developed to enable precise localization and continuous tracking of mobile soft tissue targets or tumors during radiation therapy and surgery. The technology utilizes miniature, permanently implanted, wireless transponders (Beacon™ Transponders) and a novel array to enable objective localization and continuous tracking of targets in three-dimensional space. The characteristics of this system include use of safe, nonionizing electromagnetic fields; negligible tissue interaction, delivery of inherently objective three-dimensional data, continuous operation during external beam radiation therapy or surgery. Feasibility testing has shown the system was capable of locating and continuously tracking transponders with submillimeter accuracy at tracking rates of up to 10 Hz and was unaffected by operational linear accelerator. Preclinical animal studies have shown the transponders to be suitable for permanent implantation and early human clinical studies have demonstrated the feasibility of inserting transponders into the prostate. This system will enable accurate initial patient setup and provide real-time target tracking demanded by today's highly conformal radiation techniques and has significant potential as a surgical navigation platform. --- paper_title: Advanced superconducting gradiometer/Magnetometer arrays and a novel signal processing technique paper_content: Recent developments in superconducting magnetic gradiometer technology have led to the construction of advanced ultrasensitive gradiometer/magnetometer arrays. Details of construction techniques and data showing operational capabilities are presented. The most recent of the gradiometer/magnetometer arrays simultaneously measures five independent spatial gradients of the magnetic field and three vector components of the magnetic field. The measured signals from this array are subjected to a novel signal processing technique which provides detailed information about the magnetic signal source. --- paper_title: Review on Patents about Magnetic Localisation Systems for in vivo Catheterizations paper_content: Abstract: in vivo Catheterizations are usually performed by physicians using X-Ray fluoroscopic guide and contrast-media. The X-Ray exposure both of the patient and of the operators can induce collateral effects. The present review describes the status of the art on recent patents about magnetic position/orientation indicators capable to drive the probe during in-vivo medical diagnostic or interventional procedures. They are based on the magnetic field produced by sources and revealed by sensors. 
Possible solutions are: the modulated magnetic field produced by a set of coils positioned externally to the patient is measured by sensors installed on the intra-body probe; the magnetic field produced by a thin permanent magnet installed on the intra-body probe is measured by magnetic field sensors positioned outside the patient body. In either cases, position and orientation of the probe are calculated in real time: this allows the elimination of repetitive X-Ray scans used to monitor the probe. The aim of the proposed systems is to drive the catheter inside the patient vascular tree with a reduction of the X-Ray exposure both of the patient and of the personnel involved in the intervention. The present paper intends also to highlight advantages/disadvantages of the presented solutions. --- paper_title: Accuracy of a wireless localization system for radiotherapy. paper_content: PURPOSE ::: A system has been developed for patient positioning based on real-time localization of implanted electromagnetic transponders (beacons). This study demonstrated the accuracy of the system before clinical trials. ::: ::: ::: METHODS AND MATERIALS ::: We describe the overall system. The localization component consists of beacons and a source array. A rigid phantom was constructed to place the beacons at known offsets from a localization array. Tests were performed at distances of 80 and 270 mm from the array and at positions in the array plane of up to 8 cm offset. Tests were performed in air and saline to assess the effect of tissue conductivity and with multiple transponders to evaluate crosstalk. Tracking was tested using a dynamic phantom creating a circular path at varying speeds. ::: ::: ::: RESULTS ::: Submillimeter accuracy was maintained throughout all experiments. Precision was greater proximal to the source plane (sigmax = 0.006 mm, sigmay = 0.01 mm, sigmaz = 0.006 mm), but continued to be submillimeter at the end of the designed tracking range at 270 mm from the array (sigmax = 0.27 mm, sigmay = 0.36 mm, sigmaz = 0.48 mm). The introduction of saline and the use of multiple beacons did not affect accuracy. Submillimeter accuracy was maintained using the dynamic phantom at speeds of up to 3 cm/s. ::: ::: ::: CONCLUSION ::: This system has demonstrated the accuracy needed for localization and monitoring of position during treatment. --- paper_title: Method for estimating dynamic EM tracking accuracy of surgical navigation tools paper_content: Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. Many high-accuracy measurement devices can affect the EM measurements being validated. 
EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included. --- paper_title: Accurate localization of RFID tags using phase difference paper_content: Due to their light weight, low power, and practically unlimited identification capacity, radio frequency identification (RFID) tags and associated devices offer distinctive advantages and are widely recognized for their promising potential in context-aware computing; by tagging objects with RFID tags, the environment can be sensed in a cost- and energy-efficient means. However, a prerequisite to fully realizing the potential is accurate localization of RFID tags, which will enable and enhance a wide range of applications. In this paper we show how to exploit the phase difference between two or more receiving antennas to compute accurate localization. Phase difference based localization has better accuracy, robustness and sensitivity when integrated with other measurements compared to the currently popular technique of localization using received signal strength. Using a software-defined radio setup, we show experimental results that support accurate localization of RFID tags and activity recognition based on phase difference. --- paper_title: Magnetic Position and Orientation Tracking System paper_content: Three-axis generation and sensing of quasi-static magneticdipole fields provide information sufficient to determine both the position and orientation of the sensor relative to the source. Linear rotation transformations based upon the previous measurements are applied to both the source excitation and sensor output vectors, yielding quantities that are linearly propotional to small changes in the position and orientation. Changes are separated using linear combinations of sensor output vectors, transformed to the desired coordinate frame, and used to update the previous measurements. Practical considerations for a head-tracking application are discussed. --- paper_title: Phase difference based RFID navigation for medical applications paper_content: RFID localization is a promising new field of work that is eagerly awaited for many different types of applications. For use in a medical context, special requirements and limitations must be taken into account, especially regarding accuracy, reliability and operating range. In this paper we present an experimental setup for a medical navigation system based on RFID. For this we applied a machine learning algorithm, namely support vector regression, to phase difference data gathered from multiple RFID receivers. The performance was tested on six datasets of different shape and placement within the volume spanned by the receivers. In addition, two grid based training sets of different size were considered for the regression. Our results show that it is possible to reach an accuracy of tag localization that is sufficient for some medical applications. Although we could not reach an overall accuracy of less than one millimeter in our experiments so far, the deviation was limited to two millimeters in most cases and the general results indicate that application of RFID localization even to highly critical applications, e. g., for brain surgery, will be possible soon. 
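The RFID navigation study above trains a support vector regression model on phase-difference measurements from several receivers to predict tag position. The following sketch reproduces only the general idea with a synthetic geometry: the receiver layout, wavelength, feature encoding (sine/cosine of the phase differences), and SVR settings are all assumptions for the example, not the authors' setup.

```python
# Illustrative sketch (not the authors' implementation) of regressing a tag
# position from receiver phase differences with support vector regression.
# Training data, geometry, and wavelength are synthetic.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

WAVELENGTH = 0.33  # metres, roughly UHF RFID; an assumption for this toy model
RECEIVERS = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 1]], float)

def features(pos):
    """Sine/cosine of phase differences relative to receiver 0 (crudely
    handling phase wrap-around) for a tag at position pos."""
    d = np.linalg.norm(RECEIVERS - pos, axis=1)
    phase = (2 * np.pi * d / WAVELENGTH) % (2 * np.pi)
    diff = phase[1:] - phase[0]
    return np.concatenate([np.cos(diff), np.sin(diff)])

rng = np.random.default_rng(0)
train_pos = rng.uniform([0.1, 0.1, 0.2], [0.9, 0.9, 0.8], size=(500, 3))
X = np.array([features(p) for p in train_pos])
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X, train_pos)

test = np.array([0.4, 0.6, 0.5])
print("estimate [m]:", model.predict(features(test).reshape(1, -1))[0])
```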
--- paper_title: SPASYN-an electromagnetic relative position and orientation tracking system paper_content: Two relatively remote independent body coordinate frames are related in both position and orientation (six degrees of freedom) using precise electromagnetic field measurements. Antenna triads are fixed in each body frame. Variously polarized excitations in one body are correlated with signals detected in the remote body. Near-field and far-field processing strategies are presented with applications. --- paper_title: Image-guided interventions : technology and applications paper_content: Overview and History of Image-Guided Interventions.- Tracking Devices.- Visualization in Image-Guided Interventions.- Augmented Reality.- Software.- Rigid Registration.- Nonrigid Registration.- Model-Based Image Segmentation for Image-Guided Interventions.- Imaging Modalities.- MRI-Guided FUS and its Clinical Applications.- Neurosurgical Applications.- Computer-Assisted Orthopedic Surgery.- Thoracoabdominal Interventions.- Real-Time Interactive MRI for Guiding Cardiovascular Surgical Interventions.- Three-Dimensional Ultrasound Guidance and Robot Assistance for Prostate Brachytherapy.- Radiosurgery.- Radiation Oncology.- Assessment of Image-Guided Interventions. --- paper_title: 3D magnetic tracking of a single subminiature coil with a large 2D-array of uniaxial transmitters paper_content: A novel system and method for magnetic tracking of a single subminiature coil is described. The novelty of the method consists in employing a large, 8 × 8 array of coplanar transmitting coils. This allows us to always keep the receiving coil not far from the wide, flat transmitting array, to increase the signal-to-noise ratio, and to decrease the retransmitted interference. The whole transmitting array, 64 coils, is sequentially activated only at the initiation stage to compute the initial position of the receiving coil. The redundancy in the number of transmitters provides fast and unambiguous convergence of the optimization algorithm. At the following tracking stages, a small (8 coils) transmitting subarray is activated. The relatively small subarray size allows us to keep a high update rate and resolution of tracking. For a 50-Hz update rate, the tracking resolution is not worse than 0.25 mm, 0.2° rms at a 200-mm height above the transmitting array's center. This resolution corresponds to a ~1 mm, 0.6° tracking accuracy. The novelty of the method consists as well in optimizing the transmitting coils' geometry to substantially (down to 0.5 mm) reduce the systematic error caused by the inaccuracy of the dipole field approximation. --- paper_title: A new system to perform continuous target tracking for radiation and surgery using non-ionizing alternating current electromagnetics paper_content: Abstract A new technology based on alternating current (AC) nonionizing electromagnetic fields has been developed to enable precise localization and continuous tracking of mobile soft tissue targets or tumors during radiation therapy and surgery. The technology utilizes miniature, permanently implanted, wireless transponders (Beacon™ Transponders) and a novel array to enable objective localization and continuous tracking of targets in three-dimensional space.
The characteristics of this system include use of safe, nonionizing electromagnetic fields; negligible tissue interaction, delivery of inherently objective three-dimensional data, continuous operation during external beam radiation therapy or surgery. Feasibility testing has shown the system was capable of locating and continuously tracking transponders with submillimeter accuracy at tracking rates of up to 10 Hz and was unaffected by operational linear accelerator. Preclinical animal studies have shown the transponders to be suitable for permanent implantation and early human clinical studies have demonstrated the feasibility of inserting transponders into the prostate. This system will enable accurate initial patient setup and provide real-time target tracking demanded by today's highly conformal radiation techniques and has significant potential as a surgical navigation platform. --- paper_title: An Electromagnetic Tracking Method Using Rotating Orthogonal Coils paper_content: In this paper, an electromagnetic tracking method that uses two rotating orthogonal coils is proposed. Two cross-shaped coils can rotate together and be driven in sequence to generate a magnetic field. One of the rotating orthogonal coils is used to track the position of the three-axis sensor and the other is to help solve the orientation of the sensor. As rotation can provide a geometrical relationship between the magnetic source and the sensor, the method does not require the generated magnetic field to imitate the ideal dipole in calculation. A fast noniterative algorithm and simple 1-D mapping along the axis of one coil are used to realize six degree-of-freedom (6DOF) tracking. Thus, the complexity of the magnetic tracking method and the corresponding system can be greatly reduced. Simulation results show that the method has good tracking accuracy and speed, which can be further improved by using an adaptive step-size searching strategy. So, it is concluded that the method can be used as an effective target monitoring method in many application domains such as minimally invasive therapy. --- paper_title: Accuracy of a wireless localization system for radiotherapy. paper_content: PURPOSE ::: A system has been developed for patient positioning based on real-time localization of implanted electromagnetic transponders (beacons). This study demonstrated the accuracy of the system before clinical trials. ::: ::: ::: METHODS AND MATERIALS ::: We describe the overall system. The localization component consists of beacons and a source array. A rigid phantom was constructed to place the beacons at known offsets from a localization array. Tests were performed at distances of 80 and 270 mm from the array and at positions in the array plane of up to 8 cm offset. Tests were performed in air and saline to assess the effect of tissue conductivity and with multiple transponders to evaluate crosstalk. Tracking was tested using a dynamic phantom creating a circular path at varying speeds. ::: ::: ::: RESULTS ::: Submillimeter accuracy was maintained throughout all experiments. Precision was greater proximal to the source plane (sigmax = 0.006 mm, sigmay = 0.01 mm, sigmaz = 0.006 mm), but continued to be submillimeter at the end of the designed tracking range at 270 mm from the array (sigmax = 0.27 mm, sigmay = 0.36 mm, sigmaz = 0.48 mm). The introduction of saline and the use of multiple beacons did not affect accuracy. Submillimeter accuracy was maintained using the dynamic phantom at speeds of up to 3 cm/s. 
CONCLUSION: This system has demonstrated the accuracy needed for localization and monitoring of position during treatment. --- paper_title: Standardized assessment of new electromagnetic field generators in an interventional radiology setting. paper_content: PURPOSE: Two of the main challenges associated with electromagnetic (EM) tracking in computer-assisted interventions (CAIs) are (1) the compensation of systematic distance errors arising from the influence of metal near the field generator (FG) or the tracked sensor and (2) the optimized setup of the FG to maximize tracking accuracy in the area of interest. Recently, two new FGs addressing these issues were proposed for the well-established Aurora® tracking system [Northern Digital, Inc. (NDI), Waterloo, Canada]: the Tabletop 50-70 FG, a planar transmitter with a built-in shield that compensates for metal distortions emanating from treatment tables, and the prototypical Compact FG 7-10, a mobile generator designed to be attached to mobile imaging devices. The purpose of this paper was to assess the accuracy and precision of these new FGs in an interventional radiology setting. METHODS: A standardized assessment protocol, which uses a precisely machined base plate to measure relative error in position and orientation, was applied to the two new FGs as well as to the well-established standard Aurora® Planar FG. The experiments were performed in two different settings: a reference laboratory environment and a computed tomography (CT) scanning room. In each setting, the protocol was applied to three different poses of the measurement plate within the tracking volume of the three FGs. RESULTS: The two new FGs provided higher precision and accuracy within their respective measurement volumes as well as higher robustness with respect to the CT scanner compared to the established FG. Considering all possible 5 cm distances on the grid, the error of the Planar FG was increased by a factor of 5.94 in the clinical environment (4.4 mm) in comparison to the error in the laboratory environment (0.8 mm). In contrast, the mean values for the two new FGs were all below 1 mm, with an increase in the error by factors of only 2.94 (Reference: 0.3 mm; CT: 0.9 mm) and 1.04 (both: 0.5 mm) in the case of the Tabletop FG and the Compact FG, respectively. CONCLUSIONS: Due to their high accuracy and robustness, the Tabletop FG and the Compact FG could eliminate the need for compensation of EM field distortions in certain CT-guided interventions. --- paper_title: A survey of electromagnetic position tracker calibration techniques paper_content: This paper is a comprehensive survey of various techniques used to calibrate electromagnetic position tracking systems. A common framework is established to present the calibration problem as an interpolation problem in 3D. All the known calibration techniques are classified into local and global methods and grouped according to their mathematical models. Both location error and orientation error correction techniques are surveyed. Data acquisition devices and methods, as well as publicly available software implementations, are also reviewed. --- paper_title: Evaluation of dynamic electromagnetic tracking deviation paper_content: Electromagnetic tracking systems (EMTSs) are widely used in clinical applications. Many reports have evaluated their static behavior and examined the errors caused by metallic objects.
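The static figures quoted above (per-axis σ values and positional jitter) come down to simple statistics over repeated readings of a physically fixed sensor. The following sketch shows one way such precision and RMS-jitter numbers can be computed; the synthetic `readings` array and its noise levels are illustrative assumptions, not data from any of the cited studies.

```python
# Minimal sketch: per-axis precision and RMS jitter of a static EM sensor.
# The synthetic measurement array stands in for repeated readings (in mm)
# of a sensor that is held stationary; only the statistics are the point.
import numpy as np

rng = np.random.default_rng(0)
# N repeated position readings (x, y, z) in mm from a stationary sensor.
readings = rng.normal(loc=[120.0, -35.0, 210.0],
                      scale=[0.05, 0.08, 0.06], size=(500, 3))

mean_pos = readings.mean(axis=0)              # estimate of the static position
sigma = readings.std(axis=0, ddof=1)          # per-axis precision (sigma_x, sigma_y, sigma_z)
jitter = np.linalg.norm(readings - mean_pos, axis=1)  # 3D deviation of each sample
rms_jitter = np.sqrt(np.mean(jitter ** 2))    # RMS positional jitter

print("per-axis sigma [mm]:", np.round(sigma, 3))
print("RMS jitter     [mm]:", round(float(rms_jitter), 3))
```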
Although there are some publications concerning the dynamic behavior of EMTSs, the measurement protocols are either difficult to reproduce with respect to the movement path or can only be carried out with considerable technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective modal measurement that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar. This assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as rotation center and length, are determined by static measurement at satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations describing pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums at different velocities. We repeated the measurements with different metal objects (rods made of stainless steel type 303 and 416) between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position to the fitted plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms. --- paper_title: Review on Patents about Magnetic Localisation Systems for in vivo Catheterizations paper_content: In vivo catheterizations are usually performed by physicians using X-ray fluoroscopic guidance and contrast media. The X-ray exposure of both the patient and the operators can induce collateral effects. The present review describes the state of the art of recent patents on magnetic position/orientation indicators capable of guiding the probe during in vivo medical diagnostic or interventional procedures. They are based on the magnetic field produced by sources and revealed by sensors. Possible solutions are: the modulated magnetic field produced by a set of coils positioned externally to the patient is measured by sensors installed on the intra-body probe; or the magnetic field produced by a thin permanent magnet installed on the intra-body probe is measured by magnetic field sensors positioned outside the patient's body. In either case, the position and orientation of the probe are calculated in real time, which allows the elimination of the repetitive X-ray scans used to monitor the probe. The aim of the proposed systems is to guide the catheter inside the patient's vascular tree with a reduction of the X-ray exposure of both the patient and the personnel involved in the intervention. The present paper also highlights the advantages and disadvantages of the presented solutions. --- paper_title: The Effects of Metals and Interfering Fields on Electromagnetic Trackers paper_content: The operation of six degree-of-freedom electromagnetic trackers is based on the spatial properties of the electromagnetic fields generated by three small coils.
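The field model referred to in the last sentence is, in its idealized form, the point magnetic dipole. The sketch below evaluates that textbook model for three orthogonal transmitter coils at a sensor position; it is a generic illustration under stated assumptions (unit coil moments, dipole approximation), not the proprietary pose algorithm of any tracker cited here.

```python
# Sketch of the ideal point-dipole field model on which many EM trackers'
# pose computations are based (textbook physics, not any vendor's algorithm).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(moment, r):
    """Magnetic flux density B [T] of a point dipole with moment `moment`
    [A*m^2] located at the origin, evaluated at position r [m]."""
    r = np.asarray(r, dtype=float)
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4.0 * np.pi * d ** 3) * (3.0 * r_hat * np.dot(moment, r_hat) - moment)

# Three orthogonal transmitter coils -> three field vectors at the sensor.
# The 3x3 matrix of measured fields is what position/orientation solvers invert.
moments = np.eye(3)                         # unit moments along x, y, z (assumption)
sensor_pos = np.array([0.10, 0.05, 0.20])   # sensor 10/5/20 cm from the source
B = np.stack([dipole_field(m, sensor_pos) for m in moments])
print(np.round(B, 12))  # rows: field of each coil at the sensor [T]
```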
Anything in the environment that causes these fields to be distorted will result in measurement noise and/or errors. An experimental investigation was undertaken to measure the effect of external fields present in a typical working environment (namely mains and computer monitor fields) and the presence of metals (25-mm cubes of various types of metals, a large steel bar, and a large steel sheet). A theoretical model is proposed to explain the observations. Two devices were used in this investigation: a Polhemus Fastrak and an Ascension Flock of Birds. --- paper_title: Magnetic Position and Orientation Tracking System paper_content: Three-axis generation and sensing of quasi-static magneticdipole fields provide information sufficient to determine both the position and orientation of the sensor relative to the source. Linear rotation transformations based upon the previous measurements are applied to both the source excitation and sensor output vectors, yielding quantities that are linearly propotional to small changes in the position and orientation. Changes are separated using linear combinations of sensor output vectors, transformed to the desired coordinate frame, and used to update the previous measurements. Practical considerations for a head-tracking application are discussed. --- paper_title: SPASYN-an electromagnetic relative position and orientation tracking system paper_content: Two relatively remote independent body coordinate frames are related in both position and orientation (six degrees of freedom) using precise electromagnetic field measurements. Antenna triads are fixed in each body frame. Variously polarized excitations in one body are correlated with signals detected in the remote body. Near-field and far-field processing strategies are presented with applications. --- paper_title: Electromagnetic tracking in the clinical environment paper_content: When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments. --- paper_title: Immersive 3DUI on one dollar a day paper_content: A convergence between consumer electronics and virtual reality is occurring. 
We present an immersive head-mounted-display-based, wearable 3D user interface that is inexpensive (less than $900 USD), robust (sourceless tracking), and portable (lightweight and untethered). While the current display has known deficiencies, the user tracking quality is within the constraints of many existing applications, while the portability and cost offers opportunities for innovative applications that are not currently feasible. --- paper_title: Method for estimating dynamic EM tracking accuracy of surgical navigation tools paper_content: Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. Many high-accuracy measurement devices can affect the EM measurements being validated. EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included. --- paper_title: Image-guided interventions : technology and applications paper_content: Overview and History of Image-Guided Interventions.- Tracking Devices.- Visualization in Image-Guided Interventions.- Augmented Reality.- Software.- Rigid Registration.- Nonrigid Registration.- Model-Based Image Segmentation for Image-Guided Interventions.- Imaging Modalities.- MRI-Guided FUS and its Clinical Applications.- Neurosurgical Applications.- Computer-Assisted Orthopedic Surgery.- Thoracoabdominal Interventions.- Real-Time Interactive MRI for Guiding Cardiovascular Surgical Interventions.- Three-Dimensional Ultrasound Guidance and Robot Assistance for Prostate Brachytherapy.- Radiosurgery.- Radiation Oncology.- Assessment of Image-Guided Interventions. --- paper_title: Towards unified electromagnetic tracking system assessment-static errors paper_content: Recent advances in Image-Guided Surgery allows physicians to incorporate up-to-date, high quality patient data in the surgical decision making, and sometimes to directly perform operations based on pre- or intra-operatively acquired patient images. Electromagnetic tracking is the fastest growing area within, where the position and orientation of tiny sensors can be determined with sub-millimeter accuracy in the field created by a generator. One of the major barriers to the wider spread of electromagnetic tracking solutions is their susceptibility to ferromagnetic materials and external electromagnetic sources. The research community has long been engaged with the topic to find engineering solutions to increase measurement reliability and accuracy. 
This article gives an overview of related experiments, and presents our recommendation towards a robust method to collect representative data about electromagnetic trackers. --- paper_title: The reliability and accuracy of an electromagnetic motion analysis system when used conjointly with an accelerometer paper_content: The effect of an accelerometer driven electronic postural monitor (Spineangel®) placed within the electromagnetic measurement field of the Polhemus Fastrak™ is unknown. This study assessed the reliability and accuracy of Fastrak™ linear and angular measurements, when the Spineangel® was placed close to the sensor(s) and transmitter. Bland Altman plots and intraclass correlation coefficient (2,1) were used to determine protocol reproducibility and measurement consistency. Excellent reliability was found for linear and angular measurements (0.96, 95% CI: 0.90–0.99; and 1.00, 95% CI: 1.00–1.00, respectively) with the inclusion of Spineangel®; similar results were found, without the inclusion of Spineangel®, for linear and angular measurements, (0.96, 95% CI: 0.89–0.99; and 1.00, 95% CI: 1.00–1.00, respectively). The greatest linear discrepancies between the two test conditions were found to be less than 3.5 mm, while the greatest angular discrepancies were below 3.5°. As the effect on accuracy was minimal, t... --- paper_title: Electromagnetic tracking for US-guided interventions: standardized assessment of a new compact field generator paper_content: PURPOSE ::: One of the main challenges related to electromagnetic tracking in the clinical setting is a placement of the field generator (FG) that optimizes the reliability and accuracy of sensor localization. Recently, a new mobile FG for the NDI Aurora(®) tracking system has been presented. This Compact FG is the first FG that can be attached directly to an ultrasound (US) probe. The purpose of this study was to assess the precision and accuracy of the Compact FG in the presence of nearby mounted US probes. ::: ::: ::: MATERIALS AND METHODS ::: Six different US probes were mounted onto the Compact FG by means of a custom-designed mounting adapter. To assess precision and accuracy of the Compact FG, we employed a standardized assessment protocol. Utilizing a specifically manufactured plate, we measured positional data on three levels of distances from the FG as well as rotational data. ::: ::: ::: RESULTS ::: While some probes had negligible influence on tracking accuracy two probes increased the mean distance error up to 1.5 mm compared with a reference measurement of 0.5 mm. The jitter error consistently stayed below 0.2 mm in all cases. The mean relative error in orientation was found to be smaller than 3°. ::: ::: ::: CONCLUSION ::: Attachment of an US probe to the Compact FG does not have a critical influence on tracking accuracy in most cases. Clinical benefit of this promising mobile FG must be shown in future studies. --- paper_title: Evaluation of dynamic electromagnetic tracking deviation paper_content: Electromagnetic tracking systems (EMTS's) are widely used in clinical applications. Many reports have evaluated their static behavior and errors caused by metallic objects were examined. Although there exist some publications concerning the dynamic behavior of EMTS's the measurement protocols are either difficult to reproduce with respect of the movement path or only accomplished at high technical effort. 
Because dynamic behavior is of major interest with respect to clinical applications we established a simple but effective modal measurement easy to repeat at other laboratories. We built a simple pendulum where the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar. This assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion such as rotation center and length are determined by static measurement at satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well know equations concerning pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums for different velocities. We repeated the measurements with different metal objects (rods made of stainless steel type 303 and 416) between field generator and pendulum. ::: We found a root mean square error ( e RMS ) of 1.02mm with respect to the distance of the sensor position to the fit plane (maximum error e max = 2.31mm, minimum error e min = -2.36mm). The eRMS for positional error amounted to 1.32mm while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8ms. --- paper_title: Electromagnetic tracking in the clinical environment paper_content: When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments. --- paper_title: THE EFFECT OF TRANSPONDER MOTION ON THE ACCURACY OF THE CALYPSO ELECTROMAGNETIC LOCALIZATION SYSTEM paper_content: Purpose: To determine position and velocity-dependent effects in the overall accuracy of the Calypso Electromagnetic localization system, under conditions that emulate transponder motion during normal free breathing. Methods and Materials: Three localization transponders were mounted on a remote-controlled turntable that could move the transponders along a circular trajectory at speeds up to 3 cm/s. 
A stationary calibration established the coordinates of multiple points on each transponder's circular path. Position measurements taken while the transponders were in motion at a constant speed were then compared with the stationary coordinates. Results: No statistically significant changes in the transponder positions in (x,y,z) were detected when the transponders were in motion. Conclusions: The accuracy of the localization system is unaffected by transponder motion. --- paper_title: Design and application of an assessment protocol for electromagnetic tracking systems. paper_content: This paper defines a simple protocol for competitive and quantified evaluation of electromagnetic tracking systems such as the NDI Aurora (A) and Ascension microBIRD with dipole transmitter (B). It establishes new methods and a new phantom design which assesses the reproducibility and allows comparability with different tracking systems in a consistent environment. A machined base plate was designed and manufactured in which a 50 mm grid of holes was precisely drilled for position measurements. In the center a circle of 32 equispaced holes enables the accurate measurement of rotation. The sensors can be clamped in a small mount which fits into pairs of grid holes on the base plate. Relative positional/orientational errors are found by subtracting the known distances/ rotations between the machined locations from the differences of the mean observed positions/ rotation. To measure the influence of metallic objects we inserted rods made of steel (SST 303, SST 416), aluminum, and bronze into the sensitive volume between sensor and emitter. We calculated the fiducial registration error and fiducial location error with a standard stylus calibration for both tracking systems and assessed two different methods of stylus calibration. The positional jitter amounted to 0.14 mm(A) and 0.08 mm(B). A relative positional error of 0.96 mm +/- 0.68 mm, range -0.06 mm; 2.23 mm(A) and 1.14 mm +/- 0.78 mm, range -3.72 mm; 1.57 mm(B) for a given distance of 50 mm was found. The relative rotation error was found to be 0.51 degrees (A)/0.04 degrees (B). The most relevant distortion caused by metallic objects results from SST 416. The maximum error 4.2 mm(A)/ > or = 100 mm(B) occurs when the rod is close to the sensor(20 mm). While (B) is more sensitive with respect to metallic objects, (A) is less accurate concerning orientation measurements. (B) showed a systematic error when distances are calculated. --- paper_title: A hardware and software protocol for the evaluation of electromagnetic tracker accuracy in the clinical environment: a multi-center study paper_content: This paper proposes an assessment protocol that incorporates both hardware and analysis methods for evaluation of electromagnetic tracker accuracy in different clinical environments. The susceptibility of electromagnetic tracker measurement accuracy is both highly dependent on nearby ferromagnetic interference sources and non-isotropic. These inherent limitations combined with the various hardware components and assessment techniques used within different studies makes the direct comparison of measurement accuracy between studies difficult. This paper presents a multicenter study to evaluate electromagnetic devices in different clinical environments using a common hardware phantom and assessment techniques so that results are directly comparable. 
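The base-plate protocols described above reduce to two quantities: the positional jitter of repeated readings and the difference between measured and machined inter-hole distances. A minimal sketch of that analysis follows; the 50 mm pitch matches the protocol, while the grid size, number of repetitions, and the simulated noise and scale error are assumptions for illustration only.

```python
# Sketch of a grid-phantom style analysis: relative distance error between
# hole positions with a known 50 mm pitch, plus positional jitter.
# The measurement arrays are synthetic placeholders for tracker readings.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
PITCH_MM = 50.0

# Nominal (machined) hole locations on a 4 x 4 grid, z = 0.
nominal = np.array([[i * PITCH_MM, j * PITCH_MM, 0.0]
                    for i in range(4) for j in range(4)])
# Simulated tracker output: 30 readings per hole with noise and a slight scale error.
readings = nominal[:, None, :] * 1.002 + rng.normal(0.0, 0.15, size=(len(nominal), 30, 3))

mean_obs = readings.mean(axis=1)  # mean observed position per hole
jitter_rms = np.sqrt(((readings - mean_obs[:, None, :]) ** 2).sum(-1).mean())

# Relative distance error: measured inter-hole distance minus the known distance.
errors = []
for a, b in combinations(range(len(nominal)), 2):
    known = np.linalg.norm(nominal[a] - nominal[b])
    measured = np.linalg.norm(mean_obs[a] - mean_obs[b])
    errors.append(measured - known)
errors = np.array(errors)

print(f"positional jitter (RMS) : {jitter_rms:.3f} mm")
print(f"relative distance error : {errors.mean():.3f} +/- {errors.std(ddof=1):.3f} mm")
```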
Measurement accuracy has been shown to be in the range of 0.79-6.67mm within a 180mm 3 sub-volume of the Aurora measurement space in five different clinical environments. --- paper_title: An Evaluation of the Aurora System as a Flesh-Point Tracking Tool for Speech Production Research paper_content: Purpose Northern Digital Instruments (NDI; Waterloo, Ontario, Canada) manufactures a commercially available magnetometer device called Aurora that features real-time display of sensor position trac... --- paper_title: Standardized assessment of new electromagnetic field generators in an interventional radiology setting. paper_content: PURPOSE ::: Two of the main challenges associated with electromagnetic (EM) tracking in computer-assisted interventions (CAIs) are (1) the compensation of systematic distance errors arising from the influence of metal near the field generator (FG) or the tracked sensor and (2) the optimized setup of the FG to maximize tracking accuracy in the area of interest. Recently, two new FGs addressing these issues were proposed for the well-established Aurora(®) tracking system [Northern Digital, Inc. (NDI), Waterloo, Canada]: the Tabletop 50-70 FG, a planar transmitter with a built-in shield that compensates for metal distortions emanating from treatment tables, and the prototypical Compact FG 7-10, a mobile generator designed to be attached to mobile imaging devices. The purpose of this paper was to assess the accuracy and precision of these new FGs in an interventional radiology setting. ::: ::: ::: METHODS ::: A standardized assessment protocol, which uses a precisely machined base plate to measure relative error in position and orientation, was applied to the two new FGs as well as to the well-established standard Aurora(®) Planar FG. The experiments were performed in two different settings: a reference laboratory environment and a computed tomography (CT) scanning room. In each setting, the protocol was applied to three different poses of the measurement plate within the tracking volume of the three FGs. ::: ::: ::: RESULTS ::: The two new FGs provided higher precision and accuracy within their respective measurement volumes as well as higher robustness with respect to the CT scanner compared to the established FG. Considering all possible 5 cm distances on the grid, the error of the Planar FG was increased by a factor of 5.94 in the clinical environment (4.4 mm) in comparison to the error in the laboratory environment (0.8 mm). In contrast, the mean values for the two new FGs were all below 1 mm with an increase in the error by factors of only 2.94 (Reference: 0.3 mm; CT: 0.9 mm) and 1.04 (both: 0.5 mm) in the case of the Tabletop FG and the Compact FG, respectively. ::: ::: ::: CONCLUSIONS ::: Due to their high accuracy and robustness, the Tabletop FG and the Compact FG could eliminate the need for compensation of EM field distortions in certain CT-guided interventions. --- paper_title: An improved calibration framework for electromagnetic tracking devices paper_content: Electromagnetic trackers have many favorable characteristics but are notorious for their sensitivity to magnetic field distortions resulting from metal and electronic equipment in the environment. We categorize existing tracker calibration methods and present an improved technique for reducing the static position and orientation errors that are inherent to these devices. A quaternion-based formulation provides a simple and fast computational framework for representing orientation errors. 
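A quaternion-based orientation error, as mentioned in the preceding sentence, is typically obtained from the relative rotation between the measured and reference orientations. The following sketch computes that error angle; the Hamilton (w, x, y, z) convention and the example values are assumptions, not taken from the cited work.

```python
# Sketch of a quaternion-based orientation error: the relative rotation between
# a measured and a reference orientation, reported as a single angle in degrees.
# Quaternions are (w, x, y, z); the inputs below are illustrative values only.
import numpy as np

def quat_mul(q, p):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def orientation_error_deg(q_meas, q_ref):
    """Angle of the rotation that takes q_ref to q_meas (unit quaternions)."""
    q_err = quat_mul(q_meas, quat_conj(q_ref))
    w = np.clip(abs(q_err[0]) / np.linalg.norm(q_err), -1.0, 1.0)
    return np.degrees(2.0 * np.arccos(w))

# Example: reference is identity, measurement is a 1.5 deg rotation about z.
theta = np.radians(1.5)
q_ref = np.array([1.0, 0.0, 0.0, 0.0])
q_meas = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(f"orientation error: {orientation_error_deg(q_meas, q_ref):.2f} deg")
```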
Our experimental apparatus consists of a 6-DOF mobile platform and an optical position measurement system, allowing the collection of full-pose data at nearly arbitrary orientations of the receiver. A polynomial correction technique is applied and evaluated using a Polhemus Fastrak resulting in a substantial improvement of tracking accuracy. Finally, we apply advanced visualization algorithms to give new insight into the nature of the magnetic distortion field. --- paper_title: Towards unified electromagnetic tracking system assessment-static errors paper_content: Recent advances in Image-Guided Surgery allows physicians to incorporate up-to-date, high quality patient data in the surgical decision making, and sometimes to directly perform operations based on pre- or intra-operatively acquired patient images. Electromagnetic tracking is the fastest growing area within, where the position and orientation of tiny sensors can be determined with sub-millimeter accuracy in the field created by a generator. One of the major barriers to the wider spread of electromagnetic tracking solutions is their susceptibility to ferromagnetic materials and external electromagnetic sources. The research community has long been engaged with the topic to find engineering solutions to increase measurement reliability and accuracy. This article gives an overview of related experiments, and presents our recommendation towards a robust method to collect representative data about electromagnetic trackers. --- paper_title: Stability of miniature electromagnetic tracking systems paper_content: This study aims at a comparative evaluation of two recently introduced electromagnetic tracking systems under reproducible simulated operating-room (OR) conditions: the recently launched Medtronic StealthStation™ Treon-EM™ and the NDI Aurora™. We investigate if and to what extent these systems provide improved performance and stability in the presence of surgical instruments as possible sources of distortions compared with earlier reports on electromagnetic tracking technology. To investigate possible distortions under pseudo-realistic OR conditions, a large Langenbeck hook, a dental drill with its handle and an ultrasonic (US) scanhead are fixed on a special measurement rack at variable distances from the navigation sensor. The position measurements made by the Treon-EM™ were least affected by the presence of the instruments. The lengths of the mean deviation vectors were 0.21 mm for the Langenbeck hook, 0.23 mm for the drill with handle and 0.56 mm for the US scanhead. The Aurora™ was influenced by the three sources of distortion to a higher degree. A mean deviation vector of 1.44 mm length was observed in the vicinity of the Langenbeck hook, 0.53 mm length with the drill and 2.37 mm due to the US scanhead. The maximum of the root mean squared error (RMSE) for all coordinates in the presence of the Langenbeck hook was 0.3 mm for the Treon™ and 2.1 mm for the Aurora™; the drill caused a maximum RMSE of 0.2 mm with the Treon™ and 1.2 mm with the Aurora™. In the presence of the US scanhead, the maximum RMSE was 1.4 mm for the Treon™ and 5.1 mm for the Aurora™. The new generation of electromagnetic tracking systems has significantly improved compared to common systems that were available in the middle of the 1990s and has reached a high level of technical development. We conclude that, in general, both systems are suitable for routine clinical application. 
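Polynomial correction of static field distortion, as used in the calibration studies above, amounts to fitting a low-order polynomial that maps distorted tracker readings onto reference positions and then applying it to new data. The sketch below illustrates the idea with a quadratic basis and synthetic calibration data; the basis order and the simulated distortion are assumptions, not the specific model of any cited system.

```python
# Minimal sketch of polynomial error correction for an EM tracker: fit a
# low-order polynomial that maps distorted tracker readings to ground-truth
# positions (e.g., from an optical reference), then apply it to new readings.
# The distortion model used to create the synthetic data is an assumption.
import numpy as np

rng = np.random.default_rng(2)

def basis(p):
    """Quadratic polynomial basis in (x, y, z)."""
    x, y, z = p
    return np.array([1.0, x, y, z, x * x, y * y, z * z, x * y, x * z, y * z])

# Synthetic calibration set: ground-truth grid points and distorted readings [mm].
truth = np.array([[x, y, z] for x in range(0, 301, 75)
                             for y in range(0, 301, 75)
                             for z in range(0, 301, 75)], dtype=float)
distorted = truth + 0.00002 * (truth ** 2) + rng.normal(0, 0.1, truth.shape)

A = np.stack([basis(p) for p in distorted])          # design matrix
coeffs, *_ = np.linalg.lstsq(A, truth, rcond=None)   # one column per output coordinate

def correct(p):
    return basis(p) @ coeffs

before = np.linalg.norm(distorted - truth, axis=1)
after = np.linalg.norm(np.stack([correct(p) for p in distorted]) - truth, axis=1)
print(f"mean error before: {before.mean():.2f} mm, after: {after.mean():.2f} mm")
```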
--- paper_title: Optimisation and evaluation of an electromagnetic tracking device for high-accuracy three-dimensional ultrasound imaging of the carotid arteries. paper_content: Electromagnetic tracking devices provide a flexible, low cost solution for three-dimensional ultrasound (3-D US) imaging. They are, however, susceptible to interference. A commercial device (Ascension pcBIRD) was evaluated to assess the accuracy in locating the scan probe as part of a digital, freehand 3-D US imaging system aimed at vascular applications. The device was optimised by selecting a measurement rate and filter setting that minimised the mean deviation in repeated position and orientation measurements. Experimental evaluation of accuracy indicated that, overall, absolute errors were small: the RMS absolute error was 0.2 mm (range: -0.7 to 0.5 mm) for positional measurements over translations up to 90 mm, and 0.2 degrees (range: -0.8 to 0.9 degrees ) for rotational measurements up to 30 degrees. In the case of position measurements, the absolute errors were influenced by the location of the scanner relative to the scan volume. We conclude that the device tested provides an accuracy sufficient for use within a freehand 3-D US system for carotid artery imaging. --- paper_title: Electromagnetic Tracking for Thermal Ablation and Biopsy Guidance: Clinical Evaluation of Spatial Accuracy paper_content: Purpose To evaluate the spatial accuracy of electromagnetic needle tracking and demonstrate the feasibility of ultrasonography (US)–computed tomography (CT) fusion during CT- and US-guided biopsy and radiofrequency ablation procedures. Materials and Methods The authors performed a 20-patient clinical trial to investigate electromagnetic needle tracking during interventional procedures. The study was approved by the institutional investigational review board, and written informed consent was obtained from all patients. Needles were positioned by using CT and US guidance. A commercial electromagnetic tracking device was used in combination with prototype internally tracked needles and custom software to record needle positions relative to previously obtained CT scans. Position tracking data were acquired to evaluate the tracking error, defined as the difference between tracked needle position and reference standard needle position on verification CT scans. Registration between tracking space and image space was obtained by using reference markers attached to the skin ("fiducials"), and different registration methods were compared. The US transducer was tracked to demonstrate the potential use of real-time US-CT fusion for imaging guidance. Results One patient was excluded from analysis because he was unable to follow breathing instructions during the acquisition of CT scans. Nineteen of the 20 patients were evaluable, demonstrating a basic tracking error of 5.8 mm ± 2.6, which improved to 3.5 mm ± 1.9 with use of nonrigid registrations that used previous internal needle positions as additional fiducials. Fusion of tracked US with CT was successful. Patient motion and distortion of the tracking system by the CT table and gantry were identified as sources of error. Conclusions The demonstrated spatial tracking accuracy is sufficient to display clinically relevant preprocedural imaging information during needle-based procedures. 
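Registration between tracking space and image space from skin fiducials, as described above, is commonly solved with a closed-form SVD (Kabsch/Arun-style) method, and its quality is summarized by the fiducial registration error (FRE). Here is a minimal sketch of that computation; the fiducial coordinates, the simulated transform, and the noise level are illustrative assumptions, not the cited study's registration pipeline.

```python
# Sketch of point-based rigid registration between tracking space and image
# space using skin fiducials (classic SVD solution), reporting the FRE.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(3)
fid_image = rng.uniform(-80, 80, size=(6, 3))             # fiducials in image space [mm]
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([10.0, -5.0, 40.0])
# Simulated tracker-space measurements of the same fiducials, with noise.
fid_tracker = (fid_image - t_true) @ R_true + rng.normal(0, 0.2, (6, 3))

R_est, t_est = rigid_register(fid_tracker, fid_image)
residuals = fid_tracker @ R_est.T + t_est - fid_image
fre = np.sqrt((np.linalg.norm(residuals, axis=1) ** 2).mean())
print(f"FRE: {fre:.2f} mm")
```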
Virtual needles displayed within preprocedural images may be helpful for clandestine targets such as arterial phase enhancing liver lesions or during thermal ablations when obscuring gas is released. Electromagnetic tracking may help improve imaging guidance for interventional procedures and warrants further investigation, especially for procedures in which the outcomes are dependent on accuracy. --- paper_title: An evaluation of interference of inflatable penile prostheses with electromagnetic localization and tracking system paper_content: Purpose: ::: The Calypso system is stated by the manufacturer to be contraindicated for cases where the patient has been implanted with a penile prosthesis. This is due to concern for potential metal interference-related reduction of spatial localization and tracking accuracy. Here we quantify the localization and tracking accuracy of the Calypso system in the presence of inflatable penile prosthesis devices from three most widely used models which account for, essentially, 100% of implants in North America. ::: ::: Methods: ::: Phantom studies were first performed to quantify the interference of Calypso localization and tracking accuracy from both varying metal (steel) masses, and from the penile prosthetic devices themselves. The interference of varying steel masses was studied as a function of two factors: (a) the mass and (b) the location of steel material. The Calypso daily quality assurance (QA) phantom with three implanted Beacon® transponders was used to measure any aliasing of position that might occur due to metal interference. After confirming the safety of use in phantom, we implanted Calypso Beacon® transponders in one patient with a previously implanted AMS Model 700 inflatable penile prosthetic device. For each of the 42 delivered treatment fractions, redundant stereotactic ultrasound (US) image guidance was performed to ensure good agreement between US and Calypso guidance. ::: ::: Results: ::: We observed that a steel mass of less than 18 g did not cause any detectable positional aliasing for the Calypso tracking function. The mass of metal material measured to exist in the three penile prosthetic devices studied here (MP35N alloy) was approximately 1 g for each. No positional aliasing was observed for the three prosthetic devices in phantom, and good agreement between redundant US and Calypso was also observed in patient. ::: ::: Conclusions: ::: Both phantom and patient evaluations with the penile prosthetic devices showed no measurable interference with the Calypso system, thus indicating that accurate Calypso-based alignments can be performed in the presence of current industry standard inflatable penile prosthetic devices. --- paper_title: The reliability and accuracy of an electromagnetic motion analysis system when used conjointly with an accelerometer paper_content: The effect of an accelerometer driven electronic postural monitor (Spineangel®) placed within the electromagnetic measurement field of the Polhemus Fastrak™ is unknown. This study assessed the reliability and accuracy of Fastrak™ linear and angular measurements, when the Spineangel® was placed close to the sensor(s) and transmitter. Bland Altman plots and intraclass correlation coefficient (2,1) were used to determine protocol reproducibility and measurement consistency. 
Excellent reliability was found for linear and angular measurements (0.96, 95% CI: 0.90–0.99; and 1.00, 95% CI: 1.00–1.00, respectively) with the inclusion of Spineangel®; similar results were found, without the inclusion of Spineangel®, for linear and angular measurements, (0.96, 95% CI: 0.89–0.99; and 1.00, 95% CI: 1.00–1.00, respectively). The greatest linear discrepancies between the two test conditions were found to be less than 3.5 mm, while the greatest angular discrepancies were below 3.5°. As the effect on accuracy was minimal, t... --- paper_title: Electromagnetic tracking for US-guided interventions: standardized assessment of a new compact field generator paper_content: PURPOSE ::: One of the main challenges related to electromagnetic tracking in the clinical setting is a placement of the field generator (FG) that optimizes the reliability and accuracy of sensor localization. Recently, a new mobile FG for the NDI Aurora(®) tracking system has been presented. This Compact FG is the first FG that can be attached directly to an ultrasound (US) probe. The purpose of this study was to assess the precision and accuracy of the Compact FG in the presence of nearby mounted US probes. ::: ::: ::: MATERIALS AND METHODS ::: Six different US probes were mounted onto the Compact FG by means of a custom-designed mounting adapter. To assess precision and accuracy of the Compact FG, we employed a standardized assessment protocol. Utilizing a specifically manufactured plate, we measured positional data on three levels of distances from the FG as well as rotational data. ::: ::: ::: RESULTS ::: While some probes had negligible influence on tracking accuracy two probes increased the mean distance error up to 1.5 mm compared with a reference measurement of 0.5 mm. The jitter error consistently stayed below 0.2 mm in all cases. The mean relative error in orientation was found to be smaller than 3°. ::: ::: ::: CONCLUSION ::: Attachment of an US probe to the Compact FG does not have a critical influence on tracking accuracy in most cases. Clinical benefit of this promising mobile FG must be shown in future studies. --- paper_title: Technical accuracy of optical and the electromagnetic tracking systems paper_content: Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. 
This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested. --- paper_title: ELECTROMAGNETIC TRACKING SYSTEM FOR NEUROVASCULAR INTERVENTIONS paper_content: Electromagnetic tracking systems (EMTS) are widely used in a computer-assisted surgery, motion capture, kinematic studies and military applications. The main application in the medical domain represents minimal invasive surgery. For this purpose, the EMTS systems are integrated into the surgery device, i.e. needle, endoscope or catheter supporting tracking of this device within the human body without X-ray radiation. However, in neurovascular interventions this technique hardly could be applied so far. This is mainly due to a too large size of the receiver coils which have to be integrated into the microcatheter. In this paper we present such a microcatheter for neurovascular interventions featured with a 5 DOF sensor enabling tracking of the catheter tip by a commercial EMTS system and first evaluation of its accuracy. --- paper_title: Evaluation of dynamic electromagnetic tracking deviation paper_content: Electromagnetic tracking systems (EMTS's) are widely used in clinical applications. Many reports have evaluated their static behavior and errors caused by metallic objects were examined. Although there exist some publications concerning the dynamic behavior of EMTS's the measurement protocols are either difficult to reproduce with respect of the movement path or only accomplished at high technical effort. Because dynamic behavior is of major interest with respect to clinical applications we established a simple but effective modal measurement easy to repeat at other laboratories. We built a simple pendulum where the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar. This assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion such as rotation center and length are determined by static measurement at satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well know equations concerning pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums for different velocities. We repeated the measurements with different metal objects (rods made of stainless steel type 303 and 416) between field generator and pendulum. ::: We found a root mean square error ( e RMS ) of 1.02mm with respect to the distance of the sensor position to the fit plane (maximum error e max = 2.31mm, minimum error e min = -2.36mm). The eRMS for positional error amounted to 1.32mm while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8ms. 
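The pendulum analysis above rests on fitting a plane to the tracked positions and reporting the RMS point-to-plane distance (eRMS). A short sketch of that step follows, using a synthetic swing trajectory as a stand-in for the recorded 8 s of data; the pendulum length, swing frequency, and noise level are assumptions.

```python
# Sketch of the planarity analysis used in a pendulum experiment: fit a
# best-fit plane to tracked sensor positions (via SVD) and report the RMS
# distance of the samples from that plane. The trajectory below is synthetic.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic pendulum samples: a swing in the x-z plane plus tracking noise.
t = np.linspace(0.0, 8.0, 400)
theta = np.radians(20.0) * np.cos(2.0 * np.pi * 0.7 * t)   # swing angle over time
length = 300.0                                              # pendulum length [mm]
pts = np.stack([length * np.sin(theta),
                np.zeros_like(t),
                -length * np.cos(theta)], axis=1)
pts += rng.normal(0.0, 0.8, pts.shape)                      # tracking noise [mm]

centroid = pts.mean(axis=0)
# The plane normal is the singular vector with the smallest singular value.
_, _, Vt = np.linalg.svd(pts - centroid)
normal = Vt[-1]

dist = (pts - centroid) @ normal            # signed point-to-plane distances
e_rms = np.sqrt(np.mean(dist ** 2))
print(f"eRMS to fitted plane: {e_rms:.2f} mm "
      f"(max {dist.max():.2f} mm, min {dist.min():.2f} mm)")
```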
--- paper_title: Evaluation of a new electromagnetic tracking system using a standardized assessment protocol paper_content: This note uses a published protocol to evaluate a newly released 6 degrees of freedom electromagnetic tracking system (Aurora, Northern Digital Inc.). A practice for performance monitoring over time is also proposed. The protocol uses a machined base plate to measure relative error in position and orientation as well as the influence of metallic objects in the operating volume. Positional jitter (ERMS) was found to be 0.17 mm ± 0.19 mm. A relative positional error of 0.25 mm ± 0.22 mm at 50 mm offsets and 0.97 mm ± 1.01 mm at 300 mm offsets was found. The mean of the relative rotation error was found to be 0.20° ± 0.14° with respect to the axial and 0.91° ± 0.68° for the longitudinal rotation. The most significant distortion caused by metallic objects is caused by 400-series stainless steel. A 9.4 mm maximum error occurred when the rod was closest to the emitter, 10 mm away. The improvement compared to older generations of the Aurora with respect to accuracy is substantial. --- paper_title: Electromagnetic tracking in the clinical environment paper_content: When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments. --- paper_title: Quality assurance for clinical implementation of an electromagnetic tracking system paper_content: The Calypso Medical 4D localization system utilizes alternating current electromagnetics for accurate, real-time tumor tracking. A quality assurance program to clinically implement this system is described here. Testing of the continuous electromagnetic tracking system (Calypso Medical Technologies, Seattle, WA) was performed using an in-house developed four-dimensional stage and a quality assurance fixture containing three radiofrequency transponders at independently measured locations. The following tests were performed to validate the Calypso system: (a) Localization and tracking accuracy, (b) system reproducibility, (c) measurement of the latency of the tracking system, and (d) measurement of transmission through the Calypso table overlay and the electromagnetic array. The translational and rotational localization accuracies were found to be within 0.01 cm and 1.0 degree, respectively. 
The reproducibility was within 0.1 cm. The average system latency was measured to be within 303 ms. The attenuation by the Calypso overlay was measured to be 1.0% for both 6 and 18 MV photons. The attenuations by the Calypso array were measured to be 2% and 1.5% for 6 and 18 MV photons, respectively. For oblique angles, the transmission was measured to be 3% for 6 MV, while it was 2% for 18 MV photons. A quality assurance process has been developed for the clinical implementation of an electromagnetic tracking system in radiation therapy. --- paper_title: Accuracy assessment protocols for electromagnetic tracking systems paper_content: Electromagnetic tracking systems have found increasing use in medical applications during the last few years. As with most non-trivial spatial measurement systems, the complex determination of positions and orientations from their underlying raw sensor measurements results in complicated, non-uniform error distributions over the specified measurement volume. This makes it difficult to unambiguously determine accuracy and performance assessments that allow users to judge the suitability of these systems for their particular needs. Various assessment protocols generally emphasize different measurement aspects that typically arise in clinical use. This can easily lead to inconclusive or even contradictory conclusions. We examine some of the major issues involved and discuss three useful calibration protocols. The measurement accuracy of a system can be described in terms of its 'trueness' and its 'precision'. Often, the two are strongly coupled and cannot be easily determined independently. We present a method that allows the two to be disentangled, so that the resultant trueness properly represents the systematic, non-reducible part of the measurement error, and the resultant precision (or repeatability) represents only the statistical, reducible part. Although the discussion is given largely within the context of electromagnetic tracking systems, many of the results are applicable to measurement systems in general. --- paper_title: A framework for calibration of electromagnetic surgical navigation system paper_content: In this paper, we present a framework of calibrating an electromagnetic tracker (Northern Digital's Aurora) using an accurate optical tracker (the Optotrak system, also from Northern Digital). First, registration methods for these two navigation systems are developed. Sub millimeter accuracy registration is achieved for both cases. We also address the latency between the two different trackers. The registration accuracy for dynamic acquired data is greatly improved after we compensate for the tracker latency. In our calibration approach, we sample the measurement field of the Aurora and compute the position and orientation error using the Optotrak measurements and previously computed registration results as "ground truth". Then we approximate the error field using Bernstein polynomials. Another comparative technique we use is to decompose the error space using KD tree, and then approximate each atomic cell with local interpolation. Experimental results show significant improvement in tracking accuracy for both position and orientation. Finally we discuss our future directions. --- paper_title: Assesment of metallic distortions of an electromagnetic tracking system paper_content: Electromagnetic tracking systems are affected by the presence of metal or more general conductive objects. 
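The trueness/precision decomposition advocated in the assessment-protocol abstract above can be illustrated with repeated measurements of a single known point: trueness is the offset of the measurement mean from ground truth, and precision is the spread about that mean. The sketch below shows this separation on synthetic data; the bias and noise values are assumptions.

```python
# Sketch of separating "trueness" (systematic offset of the mean from ground
# truth) from "precision" (spread about the mean) for repeated measurements
# of one reference point. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(5)
ground_truth = np.array([100.0, 50.0, 200.0])                     # known point [mm]
samples = ground_truth + np.array([0.6, -0.3, 0.2]) \
          + rng.normal(0.0, 0.12, size=(200, 3))                  # bias + noise

mean_meas = samples.mean(axis=0)
trueness = np.linalg.norm(mean_meas - ground_truth)               # non-reducible bias
precision_rms = np.sqrt(((samples - mean_meas) ** 2).sum(1).mean())  # statistical spread

print(f"trueness : {trueness:.2f} mm")
print(f"precision: {precision_rms:.2f} mm (RMS about the mean)")
```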
In this paper, the results of two protocols are presented, which assess the amount of distortion caused by certain types of metals. One of the main application areas of electromagnetic tracking systems is the medical field. This paper therefore concentrates on types of metal that are common in a medical environment, such as typical tool and implant materials and OR table steel. Results are obtained and compared for the first generation of Aurora systems (Aurora 1), released in September 2003, and for the new Aurora system (Aurora 2), which was released in September 2005. --- paper_title: Accuracy of electromagnetic tracking with a prototype field generator in an interventional OR setting. paper_content: PURPOSE: The authors have studied the accuracy and robustness of a prototype electromagnetic window field generator (WFG) in an interventional radiology suite with a robotic C-arm. The overall purpose is the development of guidance systems combining real-time imaging with tracking of flexible instruments for bronchoscopy, laparoscopic ultrasound, endoluminal surgery, endovascular therapy, and spinal surgery. METHODS: The WFG has a torus shape, which facilitates x-ray imaging through its centre. The authors compared the performance of the WFG to that of a standard field generator (SFG) under the influence of the C-arm. Both accuracy and robustness measurements were performed with the C-arm in different positions and poses. RESULTS: The system was deemed robust for both field generators, but the accuracy was notably influenced as the C-arm was moved into the electromagnetic field. The SFG provided a smaller root-mean-square position error but was more influenced by the C-arm than the WFG. The WFG also produced a smaller maximum and variance of the error. CONCLUSIONS: Electromagnetic (EM) tracking with the new WFG during C-arm-based fluoroscopy guidance seems to be a step forward, and with a correction scheme implemented it should be feasible. --- paper_title: THE EFFECT OF TRANSPONDER MOTION ON THE ACCURACY OF THE CALYPSO ELECTROMAGNETIC LOCALIZATION SYSTEM paper_content: Purpose: To determine position and velocity-dependent effects in the overall accuracy of the Calypso Electromagnetic localization system, under conditions that emulate transponder motion during normal free breathing. Methods and Materials: Three localization transponders were mounted on a remote-controlled turntable that could move the transponders along a circular trajectory at speeds up to 3 cm/s. A stationary calibration established the coordinates of multiple points on each transponder's circular path. Position measurements taken while the transponders were in motion at a constant speed were then compared with the stationary coordinates. Results: No statistically significant changes in the transponder positions in (x,y,z) were detected when the transponders were in motion. Conclusions: The accuracy of the localization system is unaffected by transponder motion. --- paper_title: Accuracy of navigation: a comparative study of infrared optical and electromagnetic navigation. paper_content: We evaluated the accuracy of navigation systems for measuring the mechanical axis in patients undergoing total knee arthroplasty and in a synthetic bone model. Infrared optical and electromagnetic navigation systems were compared. Both systems were found to be accurate and reproducible in an experimental environment.
However, the accuracies of both systems were affected by erroneous registration, and the optical system was found to be more reproducible. In clinical situations, the mean difference was 1.23 degrees, and difference greater than 3 degrees occurred in 15% of clinical trials. These discordances may have been due to ambiguous anatomic landmarks causing registration errors and the possibility of electromagnetic signal interference in the operating room. --- paper_title: Volumetric characterization of the Aurora magnetic tracker system for image-guided transorbital endoscopic procedures. paper_content: In some medical procedures, it is difficult or impossible to maintain a line of sight for a guidance system. For such applications, people have begun to use electromagnetic trackers. Before a localizer can be effectively used for an image-guided procedure, a characterization of the localizer is required. The purpose of this work is to perform a volumetric characterization of the fiducial localization error (FLE) in the working volume of the Aurora magnetic tracker by sampling the magnetic field using a tomographic grid. Since the Aurora magnetic tracker will be used for image-guided transorbital procedures we chose a working volume that was close to the average size of the human head. A Plexiglass grid phantom was constructed and used for the characterization of the Aurora magnetic tracker. A volumetric map of the magnetic space was performed by moving the flat Plexiglass phantom up in increments of 38.4 mm from 9.6 mm to 201.6 mm. The relative spatial and the random FLE were then calculated. Since the target of our endoscopic guidance is the orbital space behind the optic nerve, the maximum distance between the field generator and the sensor was calculated depending on the placement of the field generator from the skull. For the different field generator placements we found the average random FLE to be less than 0.06 mm for the 6D probe and 0.2 mm for the 5D probe. We also observed an average relative spatial FLE of less than 0.7 mm for the 6D probe and 1.3 mm for the 5D probe. We observed that the error increased as the distance between the field generator and the sensor increased. We also observed a minimum error occurring between 48 mm and 86 mm from the base of the tracker. --- paper_title: Design and application of an assessment protocol for electromagnetic tracking systems. paper_content: This paper defines a simple protocol for competitive and quantified evaluation of electromagnetic tracking systems such as the NDI Aurora (A) and Ascension microBIRD with dipole transmitter (B). It establishes new methods and a new phantom design which assesses the reproducibility and allows comparability with different tracking systems in a consistent environment. A machined base plate was designed and manufactured in which a 50 mm grid of holes was precisely drilled for position measurements. In the center a circle of 32 equispaced holes enables the accurate measurement of rotation. The sensors can be clamped in a small mount which fits into pairs of grid holes on the base plate. Relative positional/orientational errors are found by subtracting the known distances/ rotations between the machined locations from the differences of the mean observed positions/ rotation. To measure the influence of metallic objects we inserted rods made of steel (SST 303, SST 416), aluminum, and bronze into the sensitive volume between sensor and emitter. 
We calculated the fiducial registration error and fiducial location error with a standard stylus calibration for both tracking systems and assessed two different methods of stylus calibration. The positional jitter amounted to 0.14 mm(A) and 0.08 mm(B). A relative positional error of 0.96 mm +/- 0.68 mm, range -0.06 mm; 2.23 mm(A) and 1.14 mm +/- 0.78 mm, range -3.72 mm; 1.57 mm(B) for a given distance of 50 mm was found. The relative rotation error was found to be 0.51 degrees (A)/0.04 degrees (B). The most relevant distortion caused by metallic objects results from SST 416. The maximum error 4.2 mm(A)/ > or = 100 mm(B) occurs when the rod is close to the sensor(20 mm). While (B) is more sensitive with respect to metallic objects, (A) is less accurate concerning orientation measurements. (B) showed a systematic error when distances are calculated. --- paper_title: Systematic distortions in magnetic position digitizers paper_content: W. Birkfellner, F. Watzinger, F. Wanschitz, G. Enislidis, C. Kollmann, D. Rafolt, R. Nowotny, R. Ewers, and H. Bergmann, Medical Physics 25, 2242 (1998); doi: 10.1118/1.598425 --- paper_title: Accuracy of a wireless localization system for radiotherapy. paper_content: PURPOSE ::: A system has been developed for patient positioning based on real-time localization of implanted electromagnetic transponders (beacons). This study demonstrated the accuracy of the system before clinical trials. ::: ::: ::: METHODS AND MATERIALS ::: We describe the overall system. The localization component consists of beacons and a source array. A rigid phantom was constructed to place the beacons at known offsets from a localization array. Tests were performed at distances of 80 and 270 mm from the array and at positions in the array plane of up to 8 cm offset. Tests were performed in air and saline to assess the effect of tissue conductivity and with multiple transponders to evaluate crosstalk. Tracking was tested using a dynamic phantom creating a circular path at varying speeds. ::: ::: ::: RESULTS ::: Submillimeter accuracy was maintained throughout all experiments. Precision was greater proximal to the source plane (sigmax = 0.006 mm, sigmay = 0.01 mm, sigmaz = 0.006 mm), but continued to be submillimeter at the end of the designed tracking range at 270 mm from the array (sigmax = 0.27 mm, sigmay = 0.36 mm, sigmaz = 0.48 mm). The introduction of saline and the use of multiple beacons did not affect accuracy. Submillimeter accuracy was maintained using the dynamic phantom at speeds of up to 3 cm/s.
::: ::: ::: CONCLUSION ::: This system has demonstrated the accuracy needed for localization and monitoring of position during treatment. --- paper_title: Method for estimating dynamic EM tracking accuracy of surgical navigation tools paper_content: Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. Many high-accuracy measurement devices can affect the EM measurements being validated. EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included. --- paper_title: An electromagnetic “Tracker-in-Table” configuration for X-ray fluoroscopy and cone-beam CT-guided surgery paper_content: PURPOSE ::: A novel electromagnetic tracking configuration was characterized and implemented for image-guided surgery incorporating C-arm fluoroscopy and/or cone-beam CT (CBCT). The tracker employed a field generator (FG) with an open rectangular aperture and a frame enclosure with two essentially hollow sides, yielding a design that presents little or no X-ray attenuation across the C-arm orbit. The "Window" FG (WFG) was characterized in comparison with a conventional "Aurora" FG (AFG), and a configuration in which the WFG was incorporated directly into the operating table was investigated in preclinical phantom studies. ::: ::: ::: METHOD ::: The geometric accuracy and field of view (FOV) of the WFG and AFG were evaluated in terms of target registration error (TRE) using an acrylic phantom on an (electromagnetic compatible) experimental bench. The WFG design was incorporated in a prototype operating table featuring a carbon fiber top beneath, which the FG could be translated for positioning under the patient. The X-ray compatibility was evaluated using a prototype mobile C-arm for fluoroscopy and CBCT in an anthropomorphic chest phantom. The susceptibility to EM field distortion associated with surgical tools (e.g., spine screws) and the C-arm itself was investigated in terms of TRE, and calibration methods were tested to provide robust image-world registration with minimal perturbation from the rotational C-arm. ::: ::: ::: RESULTS ::: The WFG demonstrated mean TRE of 1.28 ± 0.79 mm compared to 1.13 ± 0.72 mm for the AFG, with no statistically significant difference between the two (p = 0.32 and n = 250). The WFG exhibited a deeper field of view by ~10 cm providing an equivalent degree of geometric accuracy to a depth of z ~55 cm, compared to z ~45 cm for the AFG. 
Although the presence of a small number of spine screws did not degrade tracker accuracy, the mobile C-arm perturbed the electromagnetic field sufficiently to degrade TRE; however, a calibration method was identified to mitigate the effect. Specifically, the average calibration between posterior-anterior and lateral orientations of the C-arm was found to yield fairly robust registration for any C-arm pose with only a slight reduction in geometric accuracy (1.43 ± 0.31 mm in comparison with 1.28 ± 0.79 mm, p = 0.05). The WFG demonstrated reasonable X-ray compatibility, although the initial design of the window frame included suboptimal material and shape of the side bars that caused a level of streak artifacts in CBCT reconstructions. The streak artifacts were of sufficient magnitude to degrade soft-tissue visibility in CBCT but were negligible in the context of high-contrast imaging tasks (e.g., bone visualization). ::: ::: ::: CONCLUSION ::: The open frame of the WFG offers a potentially valuable configuration for electromagnetic trackers in image-guided surgery applications that are based on X-ray fluoroscopy and/or CBCT. The geometric accuracy and FOV are comparable to the conventional AFG and offers increased depth (z-direction) FOV. Incorporation directly within the operating table offers a streamlined implementation in which the tracker is in place but "invisible," potentially simplifying tableside logistics, avoidance of the sterile field, and compatibility with X-ray imaging. --- paper_title: A hardware and software protocol for the evaluation of electromagnetic tracker accuracy in the clinical environment: a multi-center study paper_content: This paper proposes an assessment protocol that incorporates both hardware and analysis methods for evaluation of electromagnetic tracker accuracy in different clinical environments. The susceptibility of electromagnetic tracker measurement accuracy is both highly dependent on nearby ferromagnetic interference sources and non-isotropic. These inherent limitations combined with the various hardware components and assessment techniques used within different studies makes the direct comparison of measurement accuracy between studies difficult. This paper presents a multicenter study to evaluate electromagnetic devices in different clinical environments using a common hardware phantom and assessment techniques so that results are directly comparable. Measurement accuracy has been shown to be in the range of 0.79-6.67mm within a 180mm 3 sub-volume of the Aurora measurement space in five different clinical environments. --- paper_title: EFFECT OF 3D ULTRASOUND PROBES ON THE ACCURACY OF ELECTROMAGNETIC TRACKING SYSTEMS paper_content: Abstract In the last few years, 3D ultrasound probes have became readily available. New fields of image-guided surgery applications are opened by attaching small electromagnetic position sensors to 3D ultrasound probes. However, nothing is known about the distortions caused by 3D ultrasound probes regarding electromagnetic sensors. Several trials were performed to investigate error-proneness of state-of-the-art electromagnetic tracking systems when used in combination with 3D ultrasound probes. It was found that 3D ultrasound probes do distort electromagnetic sensors more than 2D probes do. When attaching electromagnetic sensors to 3D probes, maximum errors of 5 mm up to 119 mm occur. The distortion strongly depends on the electromagnetic technology as well on the probe technology used. 
Thus, for 3D ultrasound-guided applications using electromagnetic tracking technology, the interference of ultrasound probes and electromagnetic sensors have to be checked carefully. --- paper_title: An Evaluation of the Aurora System as a Flesh-Point Tracking Tool for Speech Production Research paper_content: Purpose Northern Digital Instruments (NDI; Waterloo, Ontario, Canada) manufactures a commercially available magnetometer device called Aurora that features real-time display of sensor position trac... --- paper_title: Method for evaluating compatibility of commercial electromagnetic (EM) microsensor tracking systems with surgical and imaging tables paper_content: Electromagnetic (EM) tracking systems have been successfully used for Surgical Navigation in ENT, cranial, and spine applications for several years. Catheter sized micro EM sensors have also been used in tightly controlled cardiac mapping and pulmonary applications. EM systems have the benefit over optical navigation systems of not requiring a line-of-sight between devices. Ferrous metals or conductive materials that are transient within the EM working volume may impact tracking performance. Effective methods for detecting and reporting EM field distortions are generally well known. Distortion compensation can be achieved for objects that have a static spatial relationship to a tracking sensor. New commercially available micro EM tracking systems offer opportunities for expanded image-guided navigation procedures. It is important to know and understand how well these systems perform with different surgical tables and ancillary equipment. By their design and intended use, micro EM sensors will be located at the distal tip of tracked devices and therefore be in closer proximity to the tables. Our goal was to define a simple and portable process that could be used to estimate the EM tracker accuracy, and to vet a large number of popular general surgery and imaging tables that are used in the United States and abroad. --- paper_title: Accuracy of an electromagnetic tracking device: a study of the optimal range and metal interference. paper_content: The positional and rotational accuracy of a direct-current magnetic tracking device commonly used in biomechanical investigations was evaluated. The effect of different metals was also studied to determine the possibility of interference induced by experimental test fixtures or orthopaedic implants within the working field. Positional and rotational data were evaluated for accuracy and resolution by comparing the device output to known motions as derived from a calibrated grid board or materials testing machine. The effect of different metals was evaluated by placing cylindrical metal samples at set locations throughout the working field and comparing the device readings before and after introducing each metal sample. Positional testing revealed an optimal operational range with the transmitter and receiver separation between 22.5 and 64.0 cm. Within this range the mean positional error was found to be 1.8 percent of the step size, and resolution was determined to be 0.25 mm. The mean rotational error over a 1-20 degree range was found to be 1.6% of the rotational increment with a rotational resolution of 0.1 degrees. Of the metal alloys tested only mild steel produced significant interference, which was maximum when the sample was placed adjacent to the receiver.
At this location the mild steel induced a positional difference of 5.26 cm and an angular difference of 9.75 degrees. The device was found to be insensitive to commonly used orthopaedic alloys. In this study, the electromagnetic tracking device was found to have positional and rotational errors of less than 2 percent, when utilized within its optimal operating range. This accuracy combined with its insensitivity to orthopaedic alloys should make it suitable for a variety of musculoskeletal research investigations. --- paper_title: The Effects of Metals and Interfering Fields on Electromagnetic Trackers paper_content: The operation of six degree-of-freedom electromagnetic trackers is based on the spatial properties of the electromagnetic fields generated by three small coils. Anything in the environment that causes these fields to be distorted will result in measurement noise and/or errors. An experimental investigation was undertaken to measure the effect of external fields present in a typical working environment (namely mains and computer monitor fields) and the presence of metals (25-mm cubes of various types of metals, a large steel bar, and a large steel sheet). A theoretical model is proposed to explain the observations. Two devices were used in this investigation: a Polhemus Fastrak and an Ascension Flock of Birds. --- paper_title: Standardized assessment of new electromagnetic field generators in an interventional radiology setting. paper_content: PURPOSE ::: Two of the main challenges associated with electromagnetic (EM) tracking in computer-assisted interventions (CAIs) are (1) the compensation of systematic distance errors arising from the influence of metal near the field generator (FG) or the tracked sensor and (2) the optimized setup of the FG to maximize tracking accuracy in the area of interest. Recently, two new FGs addressing these issues were proposed for the well-established Aurora(®) tracking system [Northern Digital, Inc. (NDI), Waterloo, Canada]: the Tabletop 50-70 FG, a planar transmitter with a built-in shield that compensates for metal distortions emanating from treatment tables, and the prototypical Compact FG 7-10, a mobile generator designed to be attached to mobile imaging devices. The purpose of this paper was to assess the accuracy and precision of these new FGs in an interventional radiology setting. ::: ::: ::: METHODS ::: A standardized assessment protocol, which uses a precisely machined base plate to measure relative error in position and orientation, was applied to the two new FGs as well as to the well-established standard Aurora(®) Planar FG. The experiments were performed in two different settings: a reference laboratory environment and a computed tomography (CT) scanning room. In each setting, the protocol was applied to three different poses of the measurement plate within the tracking volume of the three FGs. ::: ::: ::: RESULTS ::: The two new FGs provided higher precision and accuracy within their respective measurement volumes as well as higher robustness with respect to the CT scanner compared to the established FG. Considering all possible 5 cm distances on the grid, the error of the Planar FG was increased by a factor of 5.94 in the clinical environment (4.4 mm) in comparison to the error in the laboratory environment (0.8 mm). 
In contrast, the mean values for the two new FGs were all below 1 mm with an increase in the error by factors of only 2.94 (Reference: 0.3 mm; CT: 0.9 mm) and 1.04 (both: 0.5 mm) in the case of the Tabletop FG and the Compact FG, respectively. ::: ::: ::: CONCLUSIONS ::: Due to their high accuracy and robustness, the Tabletop FG and the Compact FG could eliminate the need for compensation of EM field distortions in certain CT-guided interventions. --- paper_title: A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery paper_content: With the increased use and development of image-guided surgical applications, there is a need for methods of analysis of the accuracy and precision of the components which compose these systems. One primary component of an image-guided surgery system is the position tracking system which allows for the localization of a tool within the surgical field and provides information which is translated back to the images. Previously much work has been done in characterizing these systems for spatial accuracy and precision. Much of this previous work examines single tracking systems or modalities. We have devised a method which allows for the characterization of a novel tracking system independent of modality and location. We describe the development of a phantom system which allows for rapid design and creation of surfaces with different geometries. We have also demonstrated a method of analysis of the data generated by this phantom system, and used it to compare Biosense-Webster's CartoXP™, and Northern Digital's Aurora™ magnetic trackers. We have determined that the accuracy and precision of the CartoXP was best, followed closely by the Aurora's dome volume, then the Aurora's cube volume. The mean accuracy for all systems was better than 3mm and decays with distance from the field generator. --- paper_title: A direction space interpolation technique for calibration of electromagnetic surgical navigation systems paper_content: A new generation of electromagnetic (EM) navigation systems with extremely compact sensors have great potential for clinical applications requiring that surgical devices be tracked within a patient's body. However, electro-magnetic field distortions limit the accuracy of such devices. Further, the errors may be sensitive both to position and orientation of EM sensors within the field. This paper presents a computationally efficient method for in-situ 5 DOF calibration of the basic sensors of a typical EM system (Northern Digital's Aurora), and presents preliminary results demonstrating an improvement of approximately 2.6 : 1 positional accuracy and 1.6 : 1 for orientation even when the sensors are moved through arbitrary orientation changes. This work represents one step in a larger effort to understand the field distortions associated with these systems and to develop effective and predictable calibration and registration strategies for their use in stereotactic image-guided interventions. --- paper_title: Quantification of AC electromagnetic tracking system accuracy in a CT scanner environment paper_content: The purpose of this study was to quantify the effects of a computed tomography (CT) scanner environment on the positional accuracy of an AC electromagnetic tracking system, the second generation NDI Aurora.
A three-axis positioning robot was used to move an electromagnetically tracked needle above the CT table throughout a 30cm by 30cm axial plane sampled in 2.5cm steps. The corresponding position data was captured from the Aurora and was registered to the positioning system data using a rigid body transformation minimizing the least squares L2-norm. Data was sampled at varying distances from the CT gantry (three feet, two feet, and one foot) and with the CT table in a nominal position and lowered by 10cm. A coordinate system was defined with the x axis normal to the CT table and the origin at the center of the CT table, and the z axis spanning the table in the lateral direction with the origin at the center of the CT table. In this coordinate system, the positional relationships of each sampled point, the CT table, and the Aurora field generator are clearly defined. This allows error maps to be displayed in accurate spatial relationship to the CT scanner as well as to a representative patient anatomy. By quantifying the distortions in relation to the position of CT scanner components and the Aurora field generator, the optimal working field of view and recommended guidelines for operation can be determined such that targeting inside human anatomy can be done with reasonable expectations of desired performance. --- paper_title: Dynamic Response of Electromagnetic Spatial Displacement Trackers paper_content: Overall system latency-the elapsed time from input human motion until the immediate consequences of that input are available in the display-is one of the most frequently cited shortcomings of current virtual environment (VE) technology. Given that spatial displacement trackers are employed to monitor head and hand position and orientation in many VE applications, the dynamic response intrinsic to these devices is an unavoidable contributor to overall system latency. In this paper, we describe a testbed and method for measurement of tracker dynamic response that use a motorized rotary swing arm to sinusoidally displace the VE sensor at a number of frequencies spanning the bandwidth of volitional human movement. During the tests, actual swing arm angle and VE sensor reports are collected and time stamped. By calibrating the time stamping technique, the tracker's internal transduction and processing time are separated from data transfer and host computer software execution latencies. We have used this test-bed to examine several VE sensors-most recently to compare latency, gain, and noise characteristics of two commercially available electromagnetic trackers: Ascension Technology Corp.'s Flock of Birds™ and Polhemus Inc.'s Fastrak™. --- paper_title: Accuracy assessment for navigated maxillo-facial surgery using an electromagnetic tracking device paper_content: Abstract Purpose To evaluate the accuracy and the usability of an electromagnetic tracking device in maxillo-facial surgery through testing on a phantom skull under operating room (OR) conditions. Material and methods A standard plastic skull phantom was equipped with a custom made model of the maxilla and with target markers and dental brackets. Imaging was performed with a computed tomography (CT) scanner. The extent and robustness of the electromagnetic tracking system's target registration error (TRE) was evaluated under various conditions. Results For each measurement a total of 243 registrations were performed with 5 point registration and 4374 registrations with 6 point registration.
The average target registration error for the 5 point registration under OR conditions was 2.1 mm (SD 0.86) and 1.03 (SD 0.53) for the 6 point registration. Metallic instruments applied to the skull increased the TRE significantly in both registration methods. Conclusion The electromagnetic tracking device showed a high accuracy and performed stable in both registration methods. Electromagnetic interference due to metallic instruments was significant but the extent of TRE was still acceptable in comparison to optical navigation devices. A benefit of EM tracking is the absence of line-of-sight hindrance. The test setting simulating OR conditions has proven suitable for further studies. --- paper_title: Electromagnetic Tracking for Thermal Ablation and Biopsy Guidance: Clinical Evaluation of Spatial Accuracy paper_content: Purpose To evaluate the spatial accuracy of electromagnetic needle tracking and demonstrate the feasibility of ultrasonography (US)–computed tomography (CT) fusion during CT- and US-guided biopsy and radiofrequency ablation procedures. Materials and Methods The authors performed a 20-patient clinical trial to investigate electromagnetic needle tracking during interventional procedures. The study was approved by the institutional investigational review board, and written informed consent was obtained from all patients. Needles were positioned by using CT and US guidance. A commercial electromagnetic tracking device was used in combination with prototype internally tracked needles and custom software to record needle positions relative to previously obtained CT scans. Position tracking data were acquired to evaluate the tracking error, defined as the difference between tracked needle position and reference standard needle position on verification CT scans. Registration between tracking space and image space was obtained by using reference markers attached to the skin ("fiducials"), and different registration methods were compared. The US transducer was tracked to demonstrate the potential use of real-time US-CT fusion for imaging guidance. Results One patient was excluded from analysis because he was unable to follow breathing instructions during the acquisition of CT scans. Nineteen of the 20 patients were evaluable, demonstrating a basic tracking error of 5.8 mm ± 2.6, which improved to 3.5 mm ± 1.9 with use of nonrigid registrations that used previous internal needle positions as additional fiducials. Fusion of tracked US with CT was successful. Patient motion and distortion of the tracking system by the CT table and gantry were identified as sources of error. Conclusions The demonstrated spatial tracking accuracy is sufficient to display clinically relevant preprocedural imaging information during needle-based procedures. Virtual needles displayed within preprocedural images may be helpful for clandestine targets such as arterial phase enhancing liver lesions or during thermal ablations when obscuring gas is released. Electromagnetic tracking may help improve imaging guidance for interventional procedures and warrants further investigation, especially for procedures in which the outcomes are dependent on accuracy. --- paper_title: ELECTROMAGNETIC TRACKING SYSTEM FOR NEUROVASCULAR INTERVENTIONS paper_content: Electromagnetic tracking systems (EMTS) are widely used in a computer-assisted surgery, motion capture, kinematic studies and military applications. The main application in the medical domain represents minimal invasive surgery. 
For this purpose, the EMTS systems are integrated into the surgery device, i.e. needle, endoscope or catheter supporting tracking of this device within the human body without X-ray radiation. However, in neurovascular interventions this technique hardly could be applied so far. This is mainly due to a too large size of the receiver coils which have to be integrated into the microcatheter. In this paper we present such a microcatheter for neurovascular interventions featured with a 5 DOF sensor enabling tracking of the catheter tip by a commercial EMTS system and first evaluation of its accuracy. --- paper_title: Accuracy of navigation: a comparative study of infrared optical and electromagnetic navigation. paper_content: We evaluated the accuracy of navigation systems for measuring the mechanical axis in patients undergoing total knee arthroplasty and in the synthetic bone model. Infrared optical and electromagnetic navigation systems were compared. Both systems were found to be accurate and reproducible in an experimental environment. However, the accuracies of both systems were affected by erroneous registration, and the optical system was found to be more reproducible. In clinical situations, the mean difference was 1.23 degrees, and difference greater than 3 degrees occurred in 15% of clinical trials. These discordances may have been due to ambiguous anatomic landmarks causing registration errors and the possibility of electromagnetic signal interference in the operating room. --- paper_title: Navigation with Electromagnetic Tracking for Interventional Radiology Procedures: A Feasibility Study paper_content: PURPOSE ::: To assess the feasibility of the use of preprocedural imaging for guide wire, catheter, and needle navigation with electromagnetic tracking in phantom and animal models. ::: ::: ::: MATERIALS AND METHODS ::: An image-guided intervention software system was developed based on open-source software components. Catheters, needles, and guide wires were constructed with small position and orientation sensors in the tips. A tetrahedral-shaped weak electromagnetic field generator was placed in proximity to an abdominal vascular phantom or three pigs on the angiography table. Preprocedural computed tomographic (CT) images of the phantom or pig were loaded into custom-developed tracking, registration, navigation, and rendering software. Devices were manipulated within the phantom or pig with guidance from the previously acquired CT scan and simultaneous real-time angiography. Navigation within positron emission tomography (PET) and magnetic resonance (MR) volumetric datasets was also performed. External and endovascular fiducials were used for registration in the phantom, and registration error and tracking error were estimated. ::: ::: ::: RESULTS ::: The CT scan position of the devices within phantoms and pigs was accurately determined during angiography and biopsy procedures, with manageable error for some applications. Preprocedural CT depicted the anatomy in the region of the devices with real-time position updating and minimal registration error and tracking error (<5 mm). PET can also be used with this system to guide percutaneous biopsies to the most metabolically active region of a tumor. ::: ::: ::: CONCLUSIONS ::: Previously acquired CT, MR, or PET data can be accurately codisplayed during procedures with reconstructed imaging based on the position and orientation of catheters, guide wires, or needles. 
Multimodality interventions are feasible by allowing the real-time updated display of previously acquired functional or morphologic imaging during angiography, biopsy, and ablation. --- paper_title: Accuracy assessment for navigated maxillo-facial surgery using an electromagnetic tracking device paper_content: Abstract Purpose To evaluate the accuracy and the usability of an electromagnetic tracking device in maxillo-facial surgery through testing on a phantom skull under operating room (OR) conditions. Material and methods A standard plastic skull phantom was equipped with a custom made model of the maxilla and with target markers and dental brackets. Imaging was performed with a computed tomography (CT) scanner. The extent and robustness of the electromagnetic tracking system’s target registration error (TRE) was evaluated under various conditions. Results For each measurement a total of 243 registrations were performed with 5 point registration and 4374 registrations with 6 point registration. The average target registration error for the 5 point registration under OR conditions was 2.1 mm (SD 0.86) and 1.03 (SD 0.53) for the 6 point registration. Metallic instruments applied to the skull increased the TRE significantly in both registration methods. Conclusion The electromagnetic tracking device showed a high accuracy and performed stable in both registration methods. Electromagnetic interference due to metallic instruments was significant but the extent of TRE was still acceptable in comparison to optical navigation devices. A benefit of EM tracking is the absence of line-of-sight hindrance. The test setting simulating OR conditions has proven suitable for further studies. --- paper_title: How Does Electromagnetic Navigation Stack Up Against Infrared Navigation in Minimally Invasive Total Knee Arthroplasties paper_content: Abstract Forty-six primary total knee arthroplasties were performed using either an electromagnetic (EM) or infrared (IR) navigation system. In this IRB-approved study, patients were evaluated clinically and for accuracy using spiral computed tomographic imaging and 36-in standing radiographs. Although EM navigation was subject to metal interference, it was not as drastic as line-of-sight interference with IR navigation. Mechanical alignment was ideal in 92.9% of EM and 90.0% of IR cases based on spiral computed tomographic imaging and 100% of EM and 95% of IR cases based on x-ray. Individual measurements of component varus/valgus and sagittal measurements showed EM to be equivalent to IR, with both systems producing subdegree accuracy in 95% of the readings. --- paper_title: Technical accuracy of optical and the electromagnetic tracking systems paper_content: Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. 
The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested. --- paper_title: Electromagnetic navigation bronchoscopy: A descriptive analysis paper_content: Electromagnetic navigation bronchoscopy (ENB) is an exciting new bronchoscopic technique that promises accurate navigation to peripheral pulmonary target lesions, using technology similar to a car global positioning system (GPS) unit. Potential uses for ENB include biopsy of peripheral lung lesions, pleural dye marking of nodules for surgical wedge resection, placement of fiducial markers for stereotactic radiotherapy, and therapeutic insertion of brachytherapy catheters into malignant tissue. This article will describe the ENB procedure, review the published literature, compare ENB to existing biopsy techniques, and outline the challenges for widespread implementation of this new technology. --- paper_title: Evaluation of dynamic electromagnetic tracking deviation paper_content: Electromagnetic tracking systems (EMTS's) are widely used in clinical applications. Many reports have evaluated their static behavior and errors caused by metallic objects were examined. Although there exist some publications concerning the dynamic behavior of EMTS's the measurement protocols are either difficult to reproduce with respect of the movement path or only accomplished at high technical effort. Because dynamic behavior is of major interest with respect to clinical applications we established a simple but effective modal measurement easy to repeat at other laboratories. We built a simple pendulum where the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar. This assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion such as rotation center and length are determined by static measurement at satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well know equations concerning pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums for different velocities. We repeated the measurements with different metal objects (rods made of stainless steel type 303 and 416) between field generator and pendulum. 
We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position to the fit plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for positional error amounted to 1.32 mm while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms. --- paper_title: Navigation Systems for Ablation paper_content: Navigation systems, devices, and intraprocedural software are changing the way interventional oncology is practiced. Before the development of precision navigation tools integrated with imaging systems, thermal ablation of hard-to-image lesions was highly dependent on operator experience, spatial skills, and estimation of positron emission tomography–avid or arterial-phase targets. Numerous navigation systems for ablation bring the opportunity for standardization and accuracy that extends the operator's ability to use imaging feedback during procedures. In this report, existing systems and techniques are reviewed and specific clinical applications for ablation are discussed to better define how these novel technologies address specific clinical needs and fit into clinical practice. --- paper_title: Electromagnetic tracking in the clinical environment paper_content: When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments.
Validation of image processing methods is important because the performance of such methods can have an impact on the performance of the larger systems and consequently on decisions and actions based on the use of these systems. Most validation studies compare the direct or indirect results of a method with a reference that is assumed to be very close or equal to the correct solution. In this paper, we propose a model for defining and reporting reference-based validation protocols in medical image processing. --- paper_title: THE EFFECT OF TRANSPONDER MOTION ON THE ACCURACY OF THE CALYPSO ELECTROMAGNETIC LOCALIZATION SYSTEM paper_content: Purpose: To determine position and velocity-dependent effects in the overall accuracy of the Calypso Electromagnetic localization system, under conditions that emulate transponder motion during normal free breathing. Methods and Materials: Three localization transponders were mounted on a remote-controlled turntable that could move the transponders along a circular trajectory at speeds up to 3 cm/s. A stationary calibration established the coordinates of multiple points on each transponder's circular path. Position measurements taken while the transponders were in motion at a constant speed were then compared with the stationary coordinates. Results: No statistically significant changes in the transponder positions in (x,y,z) were detected when the transponders were in motion. Conclusions: The accuracy of the localization system is unaffected by transponder motion. --- paper_title: Accuracy of navigation: a comparative study of infrared optical and electromagnetic navigation. paper_content: We evaluated the accuracy of navigation systems for measuring the mechanical axis in patients undergoing total knee arthroplasty and in the synthetic bone model. Infrared optical and electromagnetic navigation systems were compared. Both systems were found to be accurate and reproducible in an experimental environment. However, the accuracies of both systems were affected by erroneous registration, and the optical system was found to be more reproducible. In clinical situations, the mean difference was 1.23 degrees, and difference greater than 3 degrees occurred in 15% of clinical trials. These discordances may have been due to ambiguous anatomic landmarks causing registration errors and the possibility of electromagnetic signal interference in the operating room. --- paper_title: Design and application of an assessment protocol for electromagnetic tracking systems. paper_content: This paper defines a simple protocol for competitive and quantified evaluation of electromagnetic tracking systems such as the NDI Aurora (A) and Ascension microBIRD with dipole transmitter (B). It establishes new methods and a new phantom design which assesses the reproducibility and allows comparability with different tracking systems in a consistent environment. A machined base plate was designed and manufactured in which a 50 mm grid of holes was precisely drilled for position measurements. In the center a circle of 32 equispaced holes enables the accurate measurement of rotation. The sensors can be clamped in a small mount which fits into pairs of grid holes on the base plate. Relative positional/orientational errors are found by subtracting the known distances/ rotations between the machined locations from the differences of the mean observed positions/ rotation. 
To measure the influence of metallic objects we inserted rods made of steel (SST 303, SST 416), aluminum, and bronze into the sensitive volume between sensor and emitter. We calculated the fiducial registration error and fiducial location error with a standard stylus calibration for both tracking systems and assessed two different methods of stylus calibration. The positional jitter amounted to 0.14 mm(A) and 0.08 mm(B). A relative positional error of 0.96 mm +/- 0.68 mm, range -0.06 mm; 2.23 mm(A) and 1.14 mm +/- 0.78 mm, range -3.72 mm; 1.57 mm(B) for a given distance of 50 mm was found. The relative rotation error was found to be 0.51 degrees (A)/0.04 degrees (B). The most relevant distortion caused by metallic objects results from SST 416. The maximum error 4.2 mm(A)/ > or = 100 mm(B) occurs when the rod is close to the sensor(20 mm). While (B) is more sensitive with respect to metallic objects, (A) is less accurate concerning orientation measurements. (B) showed a systematic error when distances are calculated. --- paper_title: Electromagnetic Servoing—A New Tracking Paradigm paper_content: Electromagnetic (EM) tracking is highly relevant for many computer assisted interventions. This is in particular due to the fact that the scientific community has not yet developed a general solution for tracking of flexible instruments within the human body. Electromagnetic tracking solutions are highly attractive for minimally invasive procedures, since they do not require line of sight. However, a major problem with EM tracking solutions is that they do not provide uniform accuracy throughout the tracking volume and the desired, highest accuracy is often only achieved close to the center of tracking volume. In this paper, we present a solution to the tracking problem, by mounting an EM field generator onto a robot arm. Proposing a new tracking paradigm, we take advantage of the electromagnetic tracking to detect the sensor within a specific sub-volume, with known and optimal accuracy. We then use the more accurate and robust robot positioning for obtaining uniform accuracy throughout the tracking volume. Such an EM servoing methodology guarantees optimal and uniform accuracy, by allowing us to always keep the tracked sensor close to the center of the tracking volume. In this paper, both dynamic accuracy and accuracy distribution within the tracking volume are evaluated using optical tracking as ground truth. In repeated evaluations, the proposed method was able to reduce the overall error from 6.64±7.86 mm to a significantly improved accuracy of 3.83±6.43 mm. In addition, the combined system provides a larger tracking volume, which is only limited by the reach of the robot and not the much smaller tracking volume defined by the magnetic field generator. --- paper_title: Accuracy of a wireless localization system for radiotherapy. paper_content: PURPOSE ::: A system has been developed for patient positioning based on real-time localization of implanted electromagnetic transponders (beacons). This study demonstrated the accuracy of the system before clinical trials. ::: ::: ::: METHODS AND MATERIALS ::: We describe the overall system. The localization component consists of beacons and a source array. A rigid phantom was constructed to place the beacons at known offsets from a localization array. Tests were performed at distances of 80 and 270 mm from the array and at positions in the array plane of up to 8 cm offset. 
Tests were performed in air and saline to assess the effect of tissue conductivity and with multiple transponders to evaluate crosstalk. Tracking was tested using a dynamic phantom creating a circular path at varying speeds. ::: ::: ::: RESULTS ::: Submillimeter accuracy was maintained throughout all experiments. Precision was greater proximal to the source plane (sigmax = 0.006 mm, sigmay = 0.01 mm, sigmaz = 0.006 mm), but continued to be submillimeter at the end of the designed tracking range at 270 mm from the array (sigmax = 0.27 mm, sigmay = 0.36 mm, sigmaz = 0.48 mm). The introduction of saline and the use of multiple beacons did not affect accuracy. Submillimeter accuracy was maintained using the dynamic phantom at speeds of up to 3 cm/s. ::: ::: ::: CONCLUSION ::: This system has demonstrated the accuracy needed for localization and monitoring of position during treatment. --- paper_title: Computer-aided navigation in neurosurgery paper_content: The article comprises three main parts: a historical review on navigation, the mathematical basics for calculation and the clinical applications of navigation devices. Main historical steps are described from the first idea till the realisation of the frame-based and frameless navigation devices including robots. In particular the idea of robots can be traced back to the Iliad of Homer, the first testimony of European literature over 2500 years ago. In the second part the mathematical calculation of the mapping between the navigation and the image space is demonstrated, including different registration modalities and error estimations. The error of the navigation has to be divided into the technical error of the device calculating its own position in space, the registration error due to inaccuracies in the calculation of the transformation matrix between the navigation and the image space, and the application error caused additionally by anatomical shift of the brain structures during operation. In the third part the main clinical fields of application in modern neurosurgery are demonstrated, such as localisation of small intracranial lesions, skull-base surgery, intracerebral biopsies, intracranial endoscopy, functional neurosurgery and spinal navigation. At the end of the article some possible objections to navigation-aided surgery are discussed. --- paper_title: Image-guided neurosurgery with 3-dimensional multimodal imaging data on a stereoscopic monitor. paper_content: BACKGROUND ::: In the past 2 decades, intraoperative navigation technology has changed preoperative and intraoperative strategies and methodology tremendously. ::: ::: ::: OBJECTIVE ::: To report our first experiences with a stereoscopic navigation system based on multimodality-derived, patient-specific 3-dimensional (3-D) information displayed on a stereoscopic monitor and controlled by a virtual user interface. ::: ::: ::: METHODS ::: For the planning of each case, a 3-D multimodality model was created on the Dextroscope. The 3-D model was transferred to a console in the operating room that was running Dextroscope-compatible software and included a stereoscopic LCD (liquid crystal display) monitor (DexVue). Surgery was carried out with a standard frameless navigation system (VectorVision, BrainLAB) that was linked to DexVue. Making use of the navigational space coordinates provided by the VectorVision system during surgery, we coregistered the patient's 3-D model with the actual patient in the operating room. 
The 3-D model could then be displayed as seen along the axis of a handheld probe or the microscope view. The DexVue data were viewed with polarizing glasses and operated via a 3-D interface controlled by a cordless mouse containing inertial sensors. The navigational value of DexVue was evaluated postoperatively with a questionnaire. A total of 39 evaluations of 21 procedures were available. ::: ::: ::: RESULTS ::: In all 21 cases, the connection of VectorVision with DexVue worked reliably, and consistent spatial concordance of the navigational information was displayed on both systems. The questionnaires showed that in all cases the stereoscopic 3-D data were preferred for navigation. In 38 of 39 evaluations, the spatial orientation provided by the DexVue system was regarded as an improvement. In no case was there worsened spatial orientation. ::: ::: ::: CONCLUSION ::: We consider navigating primarily with stereoscopic, 3-D multimodality data an improvement over navigating with image planes, and we believe that this technology enables a more intuitive intraoperative interpretation of the displayed navigational information and hence an easier surgical implementation of the preoperative plan. --- paper_title: A comparison of optical and electromagnetic computer-assisted navigation systems for fluoroscopic targeting. paper_content: Objectives: Freehand targeting using fluoroscopic guidance is routine for placement of interlocking screws associated with intramedullary nailing and for insertion of screws for reconstruction of pelvic and acetabular injuries. New technologies that use fluoroscopy with the assistance of computer guidance have the potential to improve accuracy and reduce radiation exposure to patient and surgeon. We sought to compare 2 fluoroscopic navigation tracking technologies, optical and electromagnetic versus standard freehand fluoroscopic targeting in a standardized model. Intervention: Three experienced orthopaedic trauma surgeons placed 3.2-mm guide pins through test foam blocks that simulate cancellous bone. The entry site for each pin was within a circular (18-mm) entry zone. On the opposite surface of the test block (130-mm across), the target was a 1-mm-diameter radioopaque spherical ball marker. Each surgeon placed 10 pins using freehand targeting (control group), navigation using Medtronic iON StealthStation (Optical A), navigation using BrainLAB VectorVision (Optical B), or navigation using GE Medical Systems InstaTrak 3500 system (EM). Outcome Measurements: Data were collected for accuracy (the distance from the exit site of the guidewire to the target spherical ball marker), fluoroscopy time (seconds), and total number of individual fluoroscopy images taken. Results: The 2 optical systems and the electromagnetic system provided significantly improved accuracy compared to freehand technique. The average distance from the target was significantly (3.5 times) greater for controls (7.1 mm) than for each of the navigated systems (Optical A = 2.1 mm, Optical B = 1.9 mm, EM = 2.4 mm; P < 0.05). The ability to place guidewires in a 5-mm safe zone surrounding the target sphere was also significantly improved with the optical systems and the EM system (99% of wires in the safe zone) compared to controls (47% in the safe zone) (P < 0.05). Fluoroscopy time (seconds) and number of fluoroscopy images were similar among the three navigated groups (P > 0.05).
Each of these parameters was significantly less when using the computer-guided systems than for freehand-unguided insertion (P < 0.01). Conclusions: Both optical and electromagnetic computer-assisted guidance systems have the potential to improve accuracy and reduce radiation use for freehand fluoroscopic targeting in orthopaedic surgery. --- paper_title: Standardized assessment of new electromagnetic field generators in an interventional radiology setting. paper_content: PURPOSE ::: Two of the main challenges associated with electromagnetic (EM) tracking in computer-assisted interventions (CAIs) are (1) the compensation of systematic distance errors arising from the influence of metal near the field generator (FG) or the tracked sensor and (2) the optimized setup of the FG to maximize tracking accuracy in the area of interest. Recently, two new FGs addressing these issues were proposed for the well-established Aurora(®) tracking system [Northern Digital, Inc. (NDI), Waterloo, Canada]: the Tabletop 50-70 FG, a planar transmitter with a built-in shield that compensates for metal distortions emanating from treatment tables, and the prototypical Compact FG 7-10, a mobile generator designed to be attached to mobile imaging devices. The purpose of this paper was to assess the accuracy and precision of these new FGs in an interventional radiology setting. ::: ::: ::: METHODS ::: A standardized assessment protocol, which uses a precisely machined base plate to measure relative error in position and orientation, was applied to the two new FGs as well as to the well-established standard Aurora(®) Planar FG. The experiments were performed in two different settings: a reference laboratory environment and a computed tomography (CT) scanning room. In each setting, the protocol was applied to three different poses of the measurement plate within the tracking volume of the three FGs. ::: ::: ::: RESULTS ::: The two new FGs provided higher precision and accuracy within their respective measurement volumes as well as higher robustness with respect to the CT scanner compared to the established FG. Considering all possible 5 cm distances on the grid, the error of the Planar FG was increased by a factor of 5.94 in the clinical environment (4.4 mm) in comparison to the error in the laboratory environment (0.8 mm). In contrast, the mean values for the two new FGs were all below 1 mm with an increase in the error by factors of only 2.94 (Reference: 0.3 mm; CT: 0.9 mm) and 1.04 (both: 0.5 mm) in the case of the Tabletop FG and the Compact FG, respectively. ::: ::: ::: CONCLUSIONS ::: Due to their high accuracy and robustness, the Tabletop FG and the Compact FG could eliminate the need for compensation of EM field distortions in certain CT-guided interventions. --- paper_title: A buyer's guide to electromagnetic tracking systems for clinical applications paper_content: When choosing an Electromagnetic Tracking System (EMTS) for image-guided procedures, it is desirable for the system to be usable for different procedures and environments. Several factors influence this choice. To date, the only factors that have been studied extensively, are the accuracy and the susceptibility of electromagnetic tracking systems to distortions caused by ferromagnetic materials. In this paper we provide a holistic overview of the factors that should be taken into account when choosing an EMTS. 
These factors include: the system’s refresh rate, the number of sensors that need to be tracked, the size of the navigated region, system interaction with the environment, can the sensors be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. We evaluate the Aurora EMTS (Northern Digital Inc., Waterloo, Ontario, Canada) and the 3D Guidance EMTS with the flat-panel and the short-range field generators (Ascension Technology Corp., Burlington, Vermont, USA) in three clinical environments. We show that these systems are applicable to specific procedures or in specific environments, but that, no single system is currently optimal for all environments and procedures we evaluated. --- paper_title: Effect of metal and sampling rate on accuracy of Flock of Birds electromagnetic tracking system. paper_content: Electromagnetic tracking devices are used in many biomechanics applications. Previous studies have shown that metal located within the working field of direct current electromagnetic tracking devices produces significant errors. However, the effect of sampling rate on the errors produced in a metallic environment has never been studied. In this study, the accuracy of Ascension Technologies' Flock of Birds was evaluated at sampling rates of 20, 60, 100, and 140 Hz, in the presence of both aluminum and steel. Aluminum interference caused an increase in measurement error as the sampling rate increased. Conversely, steel interference caused a decrease in measurement error as the sampling rate increased. We concluded that the accuracy of the Flock of Birds tracking system can be optimized in the presence of metal by careful choice in sampling rate. --- paper_title: Publication bias in clinical research paper_content: In a retrospective survey, 487 research projects approved by the Central Oxford Research Ethics Committee between 1984 and 1987, were studied for evidence of publication bias. As of May, 1990, 285 of the studies had been analysed by the investigators, and 52% of these had been published. Studies with statistically significant results were more likely to be published than those finding no difference between the study groups (adjusted odds ratio [OR] 2.32; 95% confidence interval [Cl] 1.25-4.28). Studies with significant results were also more likely to lead to a greater number of publications and presentations and to be published in journals with a high citation impact factor. An increased likelihood of publication was also associated with a high rating by the investigator of the importance of the study results, and with increasing sample size. The tendency towards publication bias was greater with observational and laboratory-based experimental studies (OR = 3.79; 95% Cl = 1.47-9.76) than with randomised clinical trials (OR = 0.84; 95% Cl = 0.34-2.09). We have confirmed the presence of publication bias in a cohort of clinical research studies. These findings suggest that conclusions based only on a review of published data should be interpreted cautiously, especially for observational studies. Improved strategies are needed to identify the results of unpublished as well as published studies. 
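The assessments cited above repeatedly summarize tracker performance as positional accuracy against a plate or grid of known geometry plus the jitter of repeated static readings. As a purely illustrative aid (not taken from any of the cited systems), the minimal Python sketch below computes those two summary numbers; the 50 mm grid spacing, function names, and synthetic data are assumptions.

```python
# Illustrative sketch only: accuracy/precision summary for an EM tracker evaluated
# against a plate with a known hole spacing. Names and data are hypothetical.
import numpy as np

def jitter_mm(samples):
    """Precision: RMS deviation of repeated static readings (N x 3, in mm) from their mean."""
    centered = samples - samples.mean(axis=0)
    return float(np.sqrt((centered ** 2).sum(axis=1).mean()))

def distance_errors(grid_positions, spacing_mm=50.0):
    """Accuracy: measured distances between consecutive grid points minus the known spacing."""
    measured = np.linalg.norm(np.diff(grid_positions, axis=0), axis=1)
    return measured - spacing_mm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    static = rng.normal([100.0, 50.0, -200.0], 0.3, size=(200, 3))   # repeated reads of one fixed pose
    row = np.arange(5)[:, None] * np.array([50.0, 0.0, 0.0]) + rng.normal(0.0, 0.8, size=(5, 3))
    print(f"jitter: {jitter_mm(static):.2f} mm")
    errs = distance_errors(row)
    print(f"5 cm distance error: mean {errs.mean():.2f} mm, max {np.abs(errs).max():.2f} mm")
```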
--- paper_title: Image-guided interventions : technology and applications paper_content: Overview and History of Image-Guided Interventions.- Tracking Devices.- Visualization in Image-Guided Interventions.- Augmented Reality.- Software.- Rigid Registration.- Nonrigid Registration.- Model-Based Image Segmentation for Image-Guided Interventions.- Imaging Modalities.- MRI-Guided FUS and its Clinical Applications.- Neurosurgical Applications.- Computer-Assisted Orthopedic Surgery.- Thoracoabdominal Interventions.- Real-Time Interactive MRI for Guiding Cardiovascular Surgical Interventions.- Three-Dimensional Ultrasound Guidance and Robot Assistance for Prostate Brachytherapy.- Radiosurgery.- Radiation Oncology.- Assessment of Image-Guided Interventions. --- paper_title: Successful placement of postpyloric enteral tubes using electromagnetic guidance in critically ill children* paper_content: OBJECTIVES ::: Initiation of postpyloric feeding is often delayed by difficulties in placement of enteral tubes. We evaluated the effectiveness of bedside postpyloric enteral tube (PET) placement using an electromagnetic (EM)-guided device. We hypothesized that: 1) EM-guided placement of PETs would be successful more often than standard blind placement with a shorter total time to successful placement and 2) the EM-guided technique would have similar overall costs to the standard technique. ::: ::: ::: DESIGN ::: Prospective cohort trial with serial control groups in a pediatric intensive care unit at a tertiary care children's hospital. ::: ::: ::: INTERVENTIONS ::: We collected data on a cohort of consecutive pediatric intensive care unit patients who underwent PET placement by standard blind technique followed by a cohort who underwent EM-guided placement. The primary outcome measure was successful placement determined by abdominal radiography. ::: ::: ::: MEASUREMENTS AND MAIN RESULTS ::: One hundred seven patients were evaluated in the trial: 57 in the standard group and 50 in the EM-guided group. Demographic data, percent intubated, and admission diagnosis were similar in both groups. Forty-one of 50 patients (82%) in the EM-guided group had successful placement compared with 22 of 57 in the standard group (38%) (p < 0.0001). The average time to successful placement was 1.7 vs. 21 hours in the EM-guided group and standard group, respectively (p < 0.0001). Children in the EM-guided group received fewer radiographs (p = 0.007) and were given more prokinetic drugs (p = 0.045). There were no episodes of pneumothorax in either group. After controlling for prokinetic drug use, EM-guided placement was more likely to result in successful placement than the standard blind technique (odds ratio 6.4, 95% confidence interval 2.5-16.3). An annual placement rate of 250 PETs by EM guidance, based on our institution's current utilization rates, is associated with a cost savings of $55.46 per PET placed. ::: ::: ::: CONCLUSION ::: EM guidance is an efficient and cost-effective method of bedside PET placement. --- paper_title: Navigated targeting of liver lesions: pitfalls of electromagnetic tracking paper_content: One of the major challenges related to percutaneous radiofrequency ablation (RFA) of liver tumors is the exact placement of the instrument within the lesion. Previous studies have shown the benefit of computer-assisted needle insertion based on optical tracking of both the instrument and internal fiducials used for registration. However, the concept has not been accepted for clinical use. 
This may in part be attributed to the line-of-sight constraint imposed by optical tracking systems which results in the use of needles thick enough to avoid bending. Electromagnetic (EM) tracking systems allow the localization of medical instruments without line-of-sight requirements, but are known to be less robust to the influence of metallic and/or ferromagnetic objects. In this paper, we apply a previously introduced fiducial-based system for navigated needle insertion with an EM tracking system and assess the overall targeting error using a static phantom in two different settings: in a non-metallic environment (REF) and on a CT stretcher (CT). While accurate needle insertion could be achieved in the reference environment (REF: 2.6±0.7 mm), targeting errors dropped drastically in the presence of metal (CT: 10.4±6.1 mm). For accurate and robust computer-assisted needle insertion, EM field distortions should thus either be avoided by assuring a suitable environment or by using methods for shielding or error compensation. --- paper_title: Calibration of electromagnetic tracking devices paper_content: Electromagnetic tracking devices are often used to track location and orientation of a user in a virtual reality environment. Their precision, however, is not always high enough because of the dependence of the system on the local electromagnetic field which can be altered easily by many external factors. The purpose of this article is to give an overview of the calibration techniques used to improve the precision of the electromagnetic tracking devices and to present a new method that compensates both the position and orientation errors. It is shown numerically that significant improvements in the precision of the detected position and orientation can be achieved with a small number of calibration measurements to be taken. Unresolved problems and research topics related to the proposed method are discussed. --- paper_title: Algorithm for calibration of the electromagnetic tracking system paper_content: Electromagnetic tracking systems (EMTS) are increasingly used in computer assisted surgery systems, motion capture units and military systems. The main application in the medical domain represents the support of minimally invasive surgeries. However, those electromagnetic tracking systems suffer from low accuracy and systematic errors caused by conductive and ferromagnetic media. In this paper we present a method for the calibration of electromagnetic tracking systems, which could be adopted in the development of novel electromagnetic tracking devices. The proposed method was evaluated on an experimental setup of a tracking system. --- paper_title: A New Calibration Procedure for Magnetic Tracking Systems paper_content: In this study, we suggest a new approach for the calibration of magnetic tracking systems that allows us to calibrate the entire system in a single setting. The suggested approach is based on solving a system of equations involving all the system parameters. These parameters include: 1) the magnetic positions of the transmitting coils; 2) their magnetic moments; 3) the magnetic position of the sensor; 4) its sensitivity; and 5) the gain of the sensor output amplifier. We choose a set of parameters that define the origin, orientation, and scale of the reference coordinate system and consider them as constants in the above system of equations. Another set of constants is the sensor output measured at a number of arbitrary positions. 
The unknowns in the above equations are all the other system parameters. To define the origin and orientation of the reference coordinate system, we first relate it to a physical object, e.g., to the transmitter housing. We then use special supports to align the sensor with the edges of the transmitter housing and measure the sensor output at a number of aligned positions. To define the scale of the reference coordinate system, we measure the distance between two arbitrary sensor locations with a precise instrument (a caliper). This is the only parameter that should be calibrated with the help of an external measurement tool. To illustrate the efficiency of the new approach, we applied the calibration procedure to a magnetic tracking system employing 64 transmitting coils. We have measured the systematic tracking errors before and after applying the calibration. The systematic tracking errors were reduced by an order of magnitude due to applying the new calibration procedure. --- paper_title: An improved calibration framework for electromagnetic tracking devices paper_content: Electromagnetic trackers have many favorable characteristics but are notorious for their sensitivity to magnetic field distortions resulting from metal and electronic equipment in the environment. We categorize existing tracker calibration methods and present an improved technique for reducing the static position and orientation errors that are inherent to these devices. A quaternion-based formulation provides a simple and fast computational framework for representing orientation errors. Our experimental apparatus consists of a 6-DOF mobile platform and an optical position measurement system, allowing the collection of full-pose data at nearly arbitrary orientations of the receiver. A polynomial correction technique is applied and evaluated using a Polhemus Fastrak resulting in a substantial improvement of tracking accuracy. Finally, we apply advanced visualization algorithms to give new insight into the nature of the magnetic distortion field. --- paper_title: Electromagnetic tracking in the clinical environment paper_content: When choosing an electromagnetic tracking system (EMTS) for image-guided procedures several factors must be taken into consideration. Among others these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc., and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments. 
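Several of the calibration papers above correct static position errors by fitting a low-order polynomial that maps distorted electromagnetic readings onto reference (for example, optically measured) positions. The sketch below illustrates that general idea in Python with a quadratic basis and a least-squares fit; it is a hedged illustration only, not the specific correction used in any cited work, and all names and the synthetic distortion are assumptions.

```python
# Illustrative polynomial position-correction sketch (quadratic basis, least squares).
import numpy as np

def basis(p):
    """Quadratic polynomial basis in x, y, z for an N x 3 array of positions."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.column_stack([np.ones(len(p)), x, y, z,
                            x * y, x * z, y * z, x ** 2, y ** 2, z ** 2])

def fit_correction(measured, reference):
    """Least-squares polynomial mapping distorted readings onto reference positions."""
    coeffs, *_ = np.linalg.lstsq(basis(measured), reference, rcond=None)
    return coeffs                      # shape (10, 3): one column per output axis

def apply_correction(coeffs, measured):
    return basis(measured) @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = rng.uniform(-200.0, 200.0, size=(300, 3))      # reference (e.g., optical) positions
    distorted = truth + 0.0002 * truth[:, :1] ** 2 + rng.normal(0.0, 0.2, truth.shape)
    c = fit_correction(distorted, truth)
    resid = np.linalg.norm(apply_correction(c, distorted) - truth, axis=1)
    print(f"mean residual after correction: {resid.mean():.2f} mm")
```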
--- paper_title: Standardized assessment of new electromagnetic field generators in an interventional radiology setting. paper_content: PURPOSE ::: Two of the main challenges associated with electromagnetic (EM) tracking in computer-assisted interventions (CAIs) are (1) the compensation of systematic distance errors arising from the influence of metal near the field generator (FG) or the tracked sensor and (2) the optimized setup of the FG to maximize tracking accuracy in the area of interest. Recently, two new FGs addressing these issues were proposed for the well-established Aurora(®) tracking system [Northern Digital, Inc. (NDI), Waterloo, Canada]: the Tabletop 50-70 FG, a planar transmitter with a built-in shield that compensates for metal distortions emanating from treatment tables, and the prototypical Compact FG 7-10, a mobile generator designed to be attached to mobile imaging devices. The purpose of this paper was to assess the accuracy and precision of these new FGs in an interventional radiology setting. ::: ::: ::: METHODS ::: A standardized assessment protocol, which uses a precisely machined base plate to measure relative error in position and orientation, was applied to the two new FGs as well as to the well-established standard Aurora(®) Planar FG. The experiments were performed in two different settings: a reference laboratory environment and a computed tomography (CT) scanning room. In each setting, the protocol was applied to three different poses of the measurement plate within the tracking volume of the three FGs. ::: ::: ::: RESULTS ::: The two new FGs provided higher precision and accuracy within their respective measurement volumes as well as higher robustness with respect to the CT scanner compared to the established FG. Considering all possible 5 cm distances on the grid, the error of the Planar FG was increased by a factor of 5.94 in the clinical environment (4.4 mm) in comparison to the error in the laboratory environment (0.8 mm). In contrast, the mean values for the two new FGs were all below 1 mm with an increase in the error by factors of only 2.94 (Reference: 0.3 mm; CT: 0.9 mm) and 1.04 (both: 0.5 mm) in the case of the Tabletop FG and the Compact FG, respectively. ::: ::: ::: CONCLUSIONS ::: Due to their high accuracy and robustness, the Tabletop FG and the Compact FG could eliminate the need for compensation of EM field distortions in certain CT-guided interventions. --- paper_title: Electromagnetic navigation bronchoscopy paper_content: In 2009, lung cancer was estimated to be the second most common form of cancer diagnosed in men, after prostate, and the second, after breast cancer, in women. It is estimated that it caused 159,390 deaths more than breast, colon and prostate cancers combined. While age-adjusted death rates for this cancer have been declining since 2000, they remain high. --- paper_title: A survey of electromagnetic position tracker calibration techniques paper_content: This paper is a comprehensive survey of various techniques used to calibrate electromagnetic position tracking systems. A common framework is established to present the calibration problem as the interpolation problem in 3D. All the known calibration techniques are classified into local and global methods and grouped according to their mathematical models. Both the location error and the orientation error correction techniques are surveyed. Data acquisition devices and methods as well as publicly available software implementations are reviewed, too. 
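The calibration survey cited above frames tracker calibration as a 3-D interpolation problem: errors measured on a regular grid of calibration points are interpolated to correct arbitrary readings. The following Python sketch shows one such local, lookup-table style correction using SciPy's RegularGridInterpolator; the grid size, the synthetic distortion, and all identifiers are illustrative assumptions rather than a published implementation.

```python
# Illustrative lookup-table correction: tabulate the error vector on a calibration
# grid and apply it to new readings by trilinear interpolation. The error field is
# tabulated at the grid coordinates and queried at raw readings, which is adequate
# when distortions are small relative to the grid spacing.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_correction(xs, ys, zs, measured_grid, true_grid):
    """xs, ys, zs: 1-D grid coordinates; *_grid: arrays of shape (nx, ny, nz, 3)."""
    error = true_grid - measured_grid
    # one scalar interpolator per coordinate keeps the sketch simple
    return [RegularGridInterpolator((xs, ys, zs), error[..., k],
                                    bounds_error=False, fill_value=0.0)
            for k in range(3)]

def correct(interpolators, raw_points):
    """Add the interpolated error vector to each raw N x 3 reading."""
    correction = np.stack([f(raw_points) for f in interpolators], axis=-1)
    return raw_points + correction

if __name__ == "__main__":
    xs = ys = zs = np.linspace(-150.0, 150.0, 7)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    true_grid = np.stack([X, Y, Z], axis=-1)
    measured_grid = true_grid + 0.02 * true_grid            # synthetic 2% scale distortion
    interps = build_correction(xs, ys, zs, measured_grid, true_grid)
    raw = np.array([[10.0, -20.0, 35.0]]) * 1.02
    print(correct(interps, raw))                            # approximately [[10., -20., 35.]]
```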
--- paper_title: Interpolation Volume Calibration: A Multisensor Calibration Technique for Electromagnetic Trackers paper_content: AC electromagnetic trackers are well suited for head tracking but are adversely affected by conductive and ferromagnetic materials. Tracking performance can be improved by mapping the tracking volume to produce coefficients that correct position and orientation (PnO) measurements caused by stationary distorting materials. The mapping process is expensive and time consuming, requiring complicated high-precision equipment to provide registration of the measurements to the source reference frame. In this study, we develop a new approach to mapping that provides registration of mapping measurements without precision equipment. Our method, i.e., the interpolation volume calibration system, uses two simple fixtures, each with multiple sensors in a rigid geometry, to determine sensor PnO in a distorted environment without mechanical measurements or other tracking technologies. We test our method in a distorted tracking environment, constructing a lookup table of the magnetic field that is used as the basis for distortion compensation. The new method compares favorably with the traditional approach providing a significant reduction in cost and effort. --- paper_title: Systematic distortions in magnetic position digitizers paper_content: W. Birkfellner, F. Watzinger, F. Wanschitz, G. Enislidis, C. Kollmann, D. Rafolt, R. Nowotny, R. Ewers, and H. Bergmann. Medical Physics 25, 2242 (1998); doi: 10.1118/1.598425. --- paper_title: Magneto-Optical Tracking of Flexible Laparoscopic Ultrasound: Model-Based Online Detection and Correction of Magnetic Tracking Errors paper_content: Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. 
For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. Initial 3-D rms error of 6.91 mm were reduced to 3.15 mm. --- paper_title: Electromagnetic tracking method and apparatus for compensation of metal artifacts using modular arrays of reference sensors paper_content: An electromagnetic tracking method includes generating an electromagnetic field (14) in a region of interest (16). The electromagnetic field is subject to distortion in response to a presence of metalartifacts proximate the electromagnetic field. An array of reference sensors (30,50,102,104,110) having a predefined known configuration are disposed proximate the region of interest. A first set oflocations of the array of reference sensors is determined with respect to the electromagnetic field generator (12) in response to an excitation of one or more of the reference sensors via the electromagnetic field. A second mechanism (28), other than the electromagnetic field, determines a first portion of a second set of locations of at least one or more sensors of the array of reference sensorswith respect to the second mechanism, the second mechanism being in a known spatial relationship with the electromagnetic field generator. A remainder portion of the second set of locations of the reference sensors of the array of reference sensors is determined in response to (i) the first portion of the second set of locations determined using the second mechanism and (ii) the predefined known configuration of the array of reference sensors. The method further includes compensating for metal distortion of the electromagnetic field in the region of interest as a function of the first and second sets of reference sensor locations of the array of reference sensors. --- paper_title: New approaches to online estimation of electromagnetic tracking errors for laparoscopic ultrasonography paper_content: In abdominal surgery, a laparoscopic ultrasound transducer is commonly used to detect lesions such as metastases. The determination and visualization of the position and orientation of its flexible tip in relation to the patient or other surgical instruments can be a great support for surgeons using the transducer intraoperatively. This difficult subject has recently received attention from the scientific community. Electromagnetic tracking systems can be applied to track the flexible tip; however, current limitations of electromagnetic tracking include its accuracy and sensibility, i.e., the magnetic field can be distorted by ferromagnetic material. This paper presents two novel methods for estimation of electromagnetic tracking error. Based on optical tracking of the laparoscope, as well as on magneto-optic and visual tracking of the transducer, these methods automatically detect in 85% of all cases whether tracking is erroneous or not, and reduce tracking errors by up to 2.5 mm. --- paper_title: An improved calibration framework for electromagnetic tracking devices paper_content: Electromagnetic trackers have many favorable characteristics but are notorious for their sensitivity to magnetic field distortions resulting from metal and electronic equipment in the environment. We categorize existing tracker calibration methods and present an improved technique for reducing the static position and orientation errors that are inherent to these devices. A quaternion-based formulation provides a simple and fast computational framework for representing orientation errors. 
Our experimental apparatus consists of a 6-DOF mobile platform and an optical position measurement system, allowing the collection of full-pose data at nearly arbitrary orientations of the receiver. A polynomial correction technique is applied and evaluated using a Polhemus Fastrak resulting in a substantial improvement of tracking accuracy. Finally, we apply advanced visualization algorithms to give new insight into the nature of the magnetic distortion field. --- paper_title: A direction space interpolation technique for calibration of electromagnetic surgical navigation systems paper_content: A new generation of electromagnetic (EM) navigation systems with extremely compact sensors have great potential for clinical applications requiring that surgical devices be tracked within a patient’s body. However, electro-magnetic field distortions limit the accuracy of such devices. Further, the errors may be sensitive both to position and orientation of EM sensors within the field. This paper presents a computationally efficient method for in-situ 5 DOF calibration of the basic sensors of a typical EM system (Northern Digital’s Aurora), and presents preliminary results demonstrating an improvement of approximately 2.6 : 1 positional accuracy and 1.6 : 1 for orientation even when the sensors are moved through arbitrary orientation changes. This work represents one step in a larger effort to understand the field distortions associated with these systems and to develop effective and predictable calibration and registration strategies for their use in stereotactic image-guided interventions. --- paper_title: Intraoperative Magnetic Tracker Calibration Using a Magneto-Optic Hybrid Tracker for 3-D Ultrasound-Based Navigation in Laparoscopic Surgery paper_content: This paper describes a ultrasound (3-D US) system that aims to achieve augmented reality (AR) visualization during laparoscopic surgery, especially for the liver. To acquire 3-D US data of the liver, the tip of a laparoscopic ultrasound probe is tracked inside the abdominal cavity using a magnetic tracker. The accuracy of magnetic trackers, however, is greatly affected by magnetic field distortion that results from the close proximity of metal objects and electronic equipment, which is usually unavoidable in the operating room. In this paper, we describe a calibration method for intraoperative magnetic distortion that can be applied to laparoscopic 3-D US data acquisition; we evaluate the accuracy and feasibility of the method by in vitro and in vivo experiments. Although calibration data can be acquired freehand using a magneto-optic hybrid tracker, there are two problems associated with this method--error caused by the time delay between measurements of the optical and magnetic trackers, and instability of the calibration accuracy that results from the uniformity and density of calibration data. A temporal calibration procedure is developed to estimate the time delay, which is then integrated into the calibration, and a distortion model is formulated by zeroth-degree to fourth-degree polynomial fitting to the calibration data. In the in vivo experiment using a pig, the positional error caused by magnetic distortion was reduced from 44.1 to 2.9 mm. The standard deviation of corrected target positions was less than 1.0 mm. 
Freehand acquisition of calibration data was performed smoothly using a magneto-optic hybrid sampling tool through a trocar under guidance by realtime 3-D monitoring of the tool trajectory; data acquisition time was less than 2 min. The present study suggests that our proposed method could correct for magnetic field distortion inside the patient's abdomen during a laparoscopic procedure within a clinically permissible period of time, as well as enabling an accurate 3-D US reconstruction to be obtained that can be superimposed onto live endoscopic images. --- paper_title: Electromagnetic navigation bronchoscopy paper_content: In 2009, lung cancer was estimated to be the second most common form of cancer diagnosed in men, after prostate, and the second, after breast cancer, in women. It is estimated that it caused 159,390 deaths more than breast, colon and prostate cancers combined. While age-adjusted death rates for this cancer have been declining since 2000, they remain high. --- paper_title: Model for defining and reporting reference-based validation protocols in medical image processing paper_content: Objectives Image processing tools are often embedded in larger systems. Validation of image processing methods is important because the performance of such methods can have an impact on the performance of the larger systems and consequently on decisions and actions based on the use of these systems. Most validation studies compare the direct or indirect results of a method with a reference that is assumed to be very close or equal to the correct solution. In this paper, we propose a model for defining and reporting reference-based validation protocols in medical image processing. --- paper_title: Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study paper_content: The shift to minimally invasive abdominal surgery has increased reliance on image guidance during surgical procedures. However, these images are most often presented independently, increasing the cognitive workload for the surgeon and potentially increasing procedure time. When warm ischemia of an organ is involved, time is an important factor to consider. To address these limitations, we present a more intuitive visualization that combines images in a common augmented reality environment. In this paper, we assess surgeon performance under the guidance of the conventional visualization system and our fusion system using a phantom study that mimics the tumour resection of partial nephrectomy. The RMS error between the fused images was 2.43mm, which is sufficient for our purposes. A faster planning time for the resection was achieved using our fusion visualization system. This result is a positive step towards decreasing risks associated with long procedure times in minimally invasive abdominal interventions. --- paper_title: A Navigation Platform for Guidance of Beating Heart Transapical Mitral Valve Repair paper_content: Traditional surgical approaches for repairing diseased mitral valves (MVs) have relied on placing the patient on cardiopulmonary bypass (on pump), stopping the heart and accessing the arrested heart directly. However, because this approach has the potential for adverse neurological, vascular, and immunological sequelae, less invasive beating heart alternatives are desirable. Emerging beating heart techniques have been developed to offer high-risk patients MV repair using ultrasound guidance alone without stopping the heart. 
This paper describes the first porcine trials of the NeoChord DS1000 (Minnetonka, MN), employed to attach neochordae to a MV leaflet using the traditional ultrasound-guided protocol augmented by dynamic virtual geometric models. The distance errors of the tracked tool tip from the intended midline trajectory (5.2 ± 2.4 mm versus 16.8 ± 10.9 mm, p = 0.003), navigation times (16.7 ± 8.0 s versus 92.0 ± 84.5 s, p = 0.004), and total path lengths (225.2 ± 120.3 mm versus 1128.9 ± 931.1 mm, p = 0.003) were significantly shorter in the augmented ultrasound compared to navigation with ultrasound alone, indicating a substantial improvement in the safety and simplicity of the procedure. --- paper_title: CT-guided percutaneous lung biopsy: comparison of conventional CT fluoroscopy to CT fluoroscopy with electromagnetic navigation system in 60 consecutive patients. paper_content: PURPOSE ::: To determine if use of an electromagnetic navigation system (EMN) decreases radiation dose and procedure time of CT fluoroscopy guided lung biopsy in lesions smaller than 2.5 cm. ::: ::: ::: MATERIALS/METHODS ::: 86 consecutive patients with small lung masses (<2.5 cm) were approached. 60 consented and were randomized to undergo biopsy with CT fluoroscopy (CTF) (34 patients) or EMN (26 patients). Technical failure required conversion to CTF in 8/26 EMN patients; 18 patients completed biopsy with EMN. Numerous biopsy parameters were compared as described below. ::: ::: ::: RESULTS ::: Average fluoroscopy time using CTF was 28.2 s compared to 35.0 s for EMN (p=0.1). Average radiation dose was 117 mGy using CTF and 123 mGy for EMN (p=0.7). Average number of needle repositions was 3.7 for CTF and 4.4 for EMN (p=0.4). Average procedure time was 15 min for CTF and 20 min for EMN (p=0.01). There were 7 pneumothoraces in the CTF group and 6 pneumothoraces in the EMN group (p=0.7). One pneumothorax in the CTF group and 3 pneumothoraces in the EMN group required chest tube placement (p=0.1). One pneumothorax patient in each group required hospital admission. Diagnostic specimens were obtained in 31/34 patients in the CTF group and 22/26 patients in the EMN group (p=0.4). ::: ::: ::: CONCLUSIONS ::: EMN was not statistically different from CTF for fluoroscopy time, radiation dose, number of needle repositions, incidence of pneumothorax, need for chest tube, or diagnostic yield. Procedure time was increased with EMN. --- paper_title: How Does Electromagnetic Navigation Stack Up Against Infrared Navigation in Minimally Invasive Total Knee Arthroplasties paper_content: Forty-six primary total knee arthroplasties were performed using either an electromagnetic (EM) or infrared (IR) navigation system. In this IRB-approved study, patients were evaluated clinically and for accuracy using spiral computed tomographic imaging and 36-in standing radiographs. Although EM navigation was subject to metal interference, it was not as drastic as line-of-sight interference with IR navigation. Mechanical alignment was ideal in 92.9% of EM and 90.0% of IR cases based on spiral computed tomographic imaging and 100% of EM and 95% of IR cases based on x-ray. Individual measurements of component varus/valgus and sagittal measurements showed EM to be equivalent to IR, with both systems producing subdegree accuracy in 95% of the readings. 
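Studies such as the two above quantify guidance quality with metrics like the total path length of the tracked tool tip and its deviation from a planned (midline) trajectory. The short Python sketch below computes these two metrics from a logged N x 3 trajectory; the data format, names, and random example track are assumptions made for illustration, not the analysis pipeline of any cited study.

```python
# Illustrative trajectory metrics for a tracked tool tip.
import numpy as np

def path_length(tip_positions):
    """Total length of an N x 3 tool-tip trajectory (same units as the input, e.g. mm)."""
    return float(np.linalg.norm(np.diff(tip_positions, axis=0), axis=1).sum())

def deviation_from_line(tip_positions, line_point, line_direction):
    """Perpendicular distance of each sample from a planned straight trajectory."""
    d = np.asarray(line_direction, dtype=float)
    d /= np.linalg.norm(d)
    rel = tip_positions - np.asarray(line_point, dtype=float)
    return np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)

if __name__ == "__main__":
    track = np.cumsum(np.random.default_rng(2).normal(0.0, 0.5, size=(500, 3)), axis=0)
    print(f"total path length: {path_length(track):.1f} mm")
    dev = deviation_from_line(track, line_point=[0.0, 0.0, 0.0], line_direction=[0.0, 0.0, 1.0])
    print(f"mean deviation from planned trajectory: {dev.mean():.1f} mm")
```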
--- paper_title: Intraventricular catheter placement by electromagnetic navigation safely applied in a paediatric major head injury patient paper_content: INTRODUCTION ::: In the management of severe head injuries, the use of intraventricular catheters for intracranial pressure (ICP) monitoring and the option of cerebrospinal fluid drainage is gold standard. In children and adolescents, the insertion of a cannula in a compressed ventricle in case of elevated intracranial pressure is difficult; therefore, a pressure sensor is placed more often intraparenchymal as an alternative option. ::: ::: ::: DISCUSSION ::: In cases of persistent elevated ICP despite maximal brain pressure management, the use of an intraventricular monitoring device with the possibility of cerebrospinal fluid drainage is favourable. We present the method of intracranial catheter placement by means of an electromagnetic navigation technique. --- paper_title: A new method for three-dimensional laparoscopic ultrasound model reconstruction paper_content: Background ::: Laparoscopic ultrasound is an important modality in the staging of gastrointestinal tumors. Correct staging depends on good spatial understanding of the regional tumor infiltration. Three-dimensional (3D) models may facilitate the evaluation of tumor infiltration. The aim of the study was to perform a volumetric test and a clinical feasibility test of a new 3D method using standard laparoscopic ultrasound equipment. --- paper_title: The reliability and accuracy of an electromagnetic motion analysis system when used conjointly with an accelerometer paper_content: The effect of an accelerometer driven electronic postural monitor (Spineangel®) placed within the electromagnetic measurement field of the Polhemus Fastrak™ is unknown. This study assessed the reliability and accuracy of Fastrak™ linear and angular measurements, when the Spineangel® was placed close to the sensor(s) and transmitter. Bland Altman plots and intraclass correlation coefficient (2,1) were used to determine protocol reproducibility and measurement consistency. Excellent reliability was found for linear and angular measurements (0.96, 95% CI: 0.90–0.99; and 1.00, 95% CI: 1.00–1.00, respectively) with the inclusion of Spineangel®; similar results were found, without the inclusion of Spineangel®, for linear and angular measurements, (0.96, 95% CI: 0.89–0.99; and 1.00, 95% CI: 1.00–1.00, respectively). The greatest linear discrepancies between the two test conditions were found to be less than 3.5 mm, while the greatest angular discrepancies were below 3.5°. As the effect on accuracy was minimal, t... --- paper_title: Intracranial Image-Guided Neurosurgery: Experience with a new Electromagnetic Navigation System paper_content: Summary.Summary.Background: The aim of image-guided neurosurgery is to accurately project computed tomography (CT) or magnetic resonance imaging (MRI) data into the operative field for defining anatomical landmarks, pathological structures and tumour margins. To achieve this end, different image-guided and computer-assisted, so-called “neuronavigation” systems have been developed in order to offer the neurosurgeon precise spatial information.Method: The present study reports on the experience gained with a prototype of the NEN-NeuroGuardTM neuronavigation system (Nicolet Biomedical, Madison, WI, USA). It utilises a pulsed DC electromagnetic field for determining the location in space of surgical instruments to which miniaturised sensors are attached. 
The system was evaluated in respect to its usefulness, ease of integration into standard neurosurgical procedures, reliability and accuracy.Findings: The NEN-system was used with success in 24 intracranial procedures for lesions including both gliomas and cerebral metastases. It allowed real-time display of surgical manoeuvres on pre-operative CT or MR images without a stereotactic frame or a robotic arm. The mean registration error associated with MRI was 1.3 mm (RMS error) and 1.5 mm (RMS error) with CT-data. The average intra-operative target-localising error was 3.2 mm (± 1.5 mm SD). Thus, the equipment was of great help in planning and performing skin incisions and craniotomies as well as in reaching deep-seated lesions with a minimum of trauma.Interpretation: The NEN-NeuroGuardTM system is a very user-friendly and reliable tool for image-guided neurosurgery. It does not have the limitations of a conventional stereotactic frame. Due to its electromagnetic technology it avoids the “line-of-sight” problem often met by optical navigation systems since its sensors remain active even when situated deep inside the skull or hidden, for example, by drapes or by the surgical microscope. --- paper_title: Electromagnetic navigation bronchoscopy: A descriptive analysis paper_content: Electromagnetic navigation bronchoscopy (ENB) is an exciting new bronchoscopic technique that promises accurate navigation to peripheral pulmonary target lesions, using technology similar to a car global positioning system (GPS) unit. Potential uses for ENB include biopsy of peripheral lung lesions, pleural dye marking of nodules for surgical wedge resection, placement of fiducial markers for stereotactic radiotherapy, and therapeutic insertion of brachytherapy catheters into malignant tissue. This article will describe the ENB procedure, review the published literature, compare ENB to existing biopsy techniques, and outline the challenges for widespread implementation of this new technology. --- paper_title: Magnetic Assisted Navigation in Electrophysiology and Cardiac Resynchronisation: A Review paper_content: Magnetic assisted navigation is a new innovation that may prove useful in catheter ablation of cardiac arrhythmias and cardiac resynchronization therapy. The ability to steer extremely floppy catheters and guidewires may allow for these to be positioned safely in previously inaccessible areas of the heart. The integration of other new technology, such as image integration and electroanatomic mapping systems, should advance our abilities further. Although studies have shown the technology to be feasible, with the advantage to the physician of decreased radiation exposure, studies need to be performed to show additional benefit over standard techniques. --- paper_title: Navigation Systems for Ablation paper_content: Navigation systems, devices, and intraprocedural software are changing the way interventional oncology is practiced. Before the development of precision navigation tools integrated with imaging systems, thermal ablation of hard-to-image lesions was highly dependent on operator experience, spatial skills, and estimation of positron emission tomography–avid or arterial-phase targets. Numerous navigation systems for ablation bring the opportunity for standardization and accuracy that extends the operator's ability to use imaging feedback during procedures. 
In this report, existing systems and techniques are reviewed and specific clinical applications for ablation are discussed to better define how these novel technologies address specific clinical needs and fit into clinical practice. --- paper_title: Evaluation of magnetic scope navigation in screening endoscopic examination of colorectal cancer paper_content: BackgroundColorectal cancer is the most common cancer in Europe. Early diagnosis and treatment gives the patient a chance for complete recovery. Screening colonoscopies in the symptom-free patients are currently performed on a wide scale. The examinations are performed under local anesthesia which does not eliminate all discomfort and pain related to the examination. The aim of this study was to evaluate magnetic scope navigation in screening endoscopic examinations performed to detect early-stage colorectal cancer.MethodsThe study group consisted of 200 patients, aged 40–65 years, who were free from colon cancer symptoms. All patients underwent complete colonoscopy under local anesthesia. The equipment could be fitted with the scope that allows three-dimensional observation of instrument localization in the bowel. The examination was performed by three experienced endoscopists, each of whom performed over 5,000 colonoscopies. The patients were randomized to two groups: those whose equipment did not have 3D navigation (group I) and those whose equipment did have 3D navigation (group II). Each group consisted of 100 cases matched by gender, age, and BMI. The authors compared the duration of introducing instrument to cecum, the pulse rate before the examination and at the time the instrument reached the cecum, and subjective pain evaluation by the patient on the visual analog scale.ResultsGroup I consisted of 54 women and 46 men with a mean age of 54.6 years and mean BMI of 27.8 kg/m2, and group II had 58 women and 42 men, mean age of 55.1 years and mean BMI of 26.4 kg/m2. The average time it took for the instrument to reach the cecum was 216s in group I and 181s in group II (P < 0.05). Pain measured on the 10-point VAS scale was 2.44 in group I and 1.85 in group II (P < 0.05). The results showed a significantly shorter time for the instrument to reach the cecum in group II and significantly lower pain intensity during the examination was reported by the group II patients. No significant differences were found in the pulse measurements between the groups (P = 0.5).Conclusions3D navigation during colonoscopy decreases the time for the instrument to reach the cecum and lowers pain intensity subjectively reported by the patients. The use of 3D and the possibility to observe instrument localization and maneuvers brings more comfort to the patients. --- paper_title: Comparison of transabdominal ultrasound and electromagnetic transponders for prostate localization paper_content: The aim of this study is to compare two methodologies of prostate localization in a large cohort of patients. Daily prostate localization using B-mode ultrasound has been performed at the Nebraska Medical Center since 2000. More recently, a technology using electromagnetic transponders implanted within the prostate was introduced into our clinic (Calypso®). With each technology, patients were localized initially using skin marks. Localization error distributions were determined from offsets between the initial setup positions and those determined by ultrasound or Calypso. 
Ultrasound localization data were summarized from 16619 imaging sessions spanning 7 years; Calypso localization data consist of 1524 fractions in 41 prostate patients treated in the course of a clinical trial at five institutions and 640 localizations from the first 16 patients treated with our clinical system. Ultrasound and Calypso patients treated between March and September 2007 at the Nebraska Medical Center were analyzed and compared, allowing a single institutional comparison of the two technologies. In this group of patients, the isocenter determined by ultrasound-based localization is on average 5.3 mm posterior to that determined by Calypso, while the systematic and random errors and PTV margins calculated from the ultrasound localizations were 3-4 times smaller than those calculated from the Calypso localizations. Our study finds that there are systematic differences between Calypso and ultrasound for prostate localization. --- paper_title: Electromagnetically navigated laparoscopic ultrasound. paper_content: A three-dimensional (3D) representation of laparoscopic ultrasound examinations could be helpful in diagnostic and therapeutic laparoscopy, but has not yet been realised with flexible laparoscopic ultrasound probes. Therefore, an electromagnetic navigation system was integrated into the tip of a conventional laparoscopic ultrasound probe. Navigated 3D laparoscopic ultrasound was compared with the imaging data of 3D navigated transcutaneous ultrasound and 3D computed tomography (CT) scan. The 3D CT scan served as the "gold standard". Clinical applicability in standardized operating room (OR) settings, imaging quality, diagnostic potential, and accuracy in volumetric assessment of various well-defined hepatic lesions were analyzed. Navigated 3D laparoscopic ultrasound facilitates exact definition of tumor location and margins. As compared with the "gold standard" of the 3D CT scans, 3D laparoscopic ultrasound has a tendency to underestimate the volume of the region of interest (ROI) (Δ3.1%). A comparison of 3D laparoscopy and transcutaneous 3D ultrasonography demonstrated clearly that the former is more accurate for volumetric assessment of the ROI and facilitates a more detailed display of the lesions. 3D laparoscopic ultrasound imaging with a navigated probe is technically feasible. The technique facilitates detailed ultrasound evaluation of laparoscopic procedures that involve visual, in-depth, and volumetric perception of complex liver pathologies. Navigated 3D laparoscopic ultrasound may have the potential to promote the practical role of laparoscopic ultrasonography, and become a valuable tool for local ablative therapy. In this article, our clinical experiences with a certified prototype of a 3D laparoscopic ultrasound probe, as well as its in vitro and in vivo evaluation, are reported. --- paper_title: A novel technique for tailoring frontal osteoplastic flaps using the ENT magnetic navigation system. paper_content: CONCLUSION ::: The ENT magnetic navigation system is potentially useful and offers the most accurate technique for harvesting frontal osteoplastic flaps. It represents a valid tool in the wide range of instruments available to rhinologists. ::: ::: ::: OBJECTIVE ::: Precise delineation of the boundaries of the frontal sinus is a crucial step when harvesting a frontal osteoplastic flap. We present a novel technique using the ENT magnetic navigation system. 
::: ::: ::: METHODS ::: Nineteen patients affected by different pathologies involving the frontal sinus underwent an osteoplastic flap procedure using the ENT magnetic navigation system between January 2009 and April 2011. ::: ::: ::: RESULTS ::: The ENT magnetic navigation system was found to be a safe and accurate tool for delineating the frontal sinus boundaries. No intraoperative complications occurred during the osteoplastic procedures. --- paper_title: Computer-aided navigation in neurosurgery paper_content: The article comprises three main parts: a historical review on navigation, the mathematical basics for calculation and the clinical applications of navigation devices. Main historical steps are described from the first idea till the realisation of the frame-based and frameless navigation devices including robots. In particular the idea of robots can be traced back to the Iliad of Homer, the first testimony of European literature over 2500 years ago. In the second part the mathematical calculation of the mapping between the navigation and the image space is demonstrated, including different registration modalities and error estimations. The error of the navigation has to be divided into the technical error of the device calculating its own position in space, the registration error due to inaccuracies in the calculation of the transformation matrix between the navigation and the image space, and the application error caused additionally by anatomical shift of the brain structures during operation. In the third part the main clinical fields of application in modern neurosurgery are demonstrated, such as localisation of small intracranial lesions, skull-base surgery, intracerebral biopsies, intracranial endoscopy, functional neurosurgery and spinal navigation. At the end of the article some possible objections to navigation-aided surgery are discussed. --- paper_title: Next generation distal locking for intramedullary nails using an electromagnetic X-ray-radiation-free real-time navigation system. paper_content: BACKGROUND ::: Distal locking marks one challenging step during intramedullary nailing that can lead to an increased irradiation and prolonged operation times. The aim of this study was to evaluate the reliability and efficacy of an X-ray-radiation-free real-time navigation system for distal locking procedures. ::: ::: ::: METHODS ::: A prospective randomized cadaver study with 50 standard free-hand fluoroscopic-guided and 50 electromagnetic-guided distal locking procedures was performed. All procedures were timed using a stopwatch. Intraoperative fluoroscopy exposure time and absorbed radiation dose (mGy) readings were documented. All tibial nails were locked with two mediolateral and one anteroposterior screw. Successful distal locking was accomplished once correct placement of all three screws was confirmed. ::: ::: ::: RESULTS ::: Successful distal locking was achieved in 98 cases. No complications were encountered using the electromagnetic navigation system. Eight complications arose during free-hand fluoroscopic distal locking. Undetected secondary drill slippage on the ipsilateral cortex accounted for most problems followed by undetected intradrilling misdirection causing a fissural fracture of the contralateral cortex while screw insertion in one case. Compared with the free-hand fluoroscopic technique, electromagnetically navigated distal locking provides a median time benefit of 244 seconds without using ionizing radiation. 
::: ::: ::: CONCLUSION ::: Compared with the standard free-hand fluoroscopic technique, the electromagnetic guidance system used in this study showed high reliability and was associated with less complications, took significantly less time, and used no radiation exposure for distal locking procedures. ::: ::: ::: LEVEL OF EVIDENCE ::: Therapeutic study, level II. --- paper_title: Image-guided neurosurgery with 3-dimensional multimodal imaging data on a stereoscopic monitor. paper_content: BACKGROUND ::: In the past 2 decades, intraoperative navigation technology has changed preoperative and intraoperative strategies and methodology tremendously. ::: ::: ::: OBJECTIVE ::: To report our first experiences with a stereoscopic navigation system based on multimodality-derived, patient-specific 3-dimensional (3-D) information displayed on a stereoscopic monitor and controlled by a virtual user interface. ::: ::: ::: METHODS ::: For the planning of each case, a 3-D multimodality model was created on the Dextroscope. The 3-D model was transferred to a console in the operating room that was running Dextroscope-compatible software and included a stereoscopic LCD (liquid crystal display) monitor (DexVue). Surgery was carried out with a standard frameless navigation system (VectorVision, BrainLAB) that was linked to DexVue. Making use of the navigational space coordinates provided by the VectorVision system during surgery, we coregistered the patient's 3-D model with the actual patient in the operating room. The 3-D model could then be displayed as seen along the axis of a handheld probe or the microscope view. The DexVue data were viewed with polarizing glasses and operated via a 3-D interface controlled by a cordless mouse containing inertial sensors. The navigational value of DexVue was evaluated postoperatively with a questionnaire. A total of 39 evaluations of 21 procedures were available. ::: ::: ::: RESULTS ::: In all 21 cases, the connection of VectorVision with DexVue worked reliably, and consistent spatial concordance of the navigational information was displayed on both systems. The questionnaires showed that in all cases the stereoscopic 3-D data were preferred for navigation. In 38 of 39 evaluations, the spatial orientation provided by the DexVue system was regarded as an improvement. In no case was there worsened spatial orientation. ::: ::: ::: CONCLUSION ::: We consider navigating primarily with stereoscopic, 3-D multimodality data an improvement over navigating with image planes, and we believe that this technology enables a more intuitive intraoperative interpretation of the displayed navigational information and hence an easier surgical implementation of the preoperative plan. --- paper_title: [Magnetic-field-based navigation system for ultrasound-guided interventions]. paper_content: PURPOSE ::: To describe a new magnetic field-based navigation system for ultrasound-guided interventional procedures. ::: ::: ::: METHODS ::: The navigation system supports biopsies either in plane or out of plane. To evaluate the accuracy of the system, targets in four standard silicone phantoms were punctured. Six cystic and six solid lesions were present in every phantom. The success of the puncture was controlled by aspiration of the content of the lesions, which was coloured. Furthermore, liver lesions of different sizes (2-6 mm) and depths (3-6 cm) were produced by injecting a viscous fluid and punctured in three swine carcasses. 
Punctures of the gallbladder and nephrostomies were carried out as well. ::: ::: ::: RESULTS ::: All 48 targets were successfully punctured using the in-plane and out-of-plane modes in the phantom. In the carcasses, all 16 lesions were reached in plane. In the out-of-plane mode, two 6 cm deep lesions were not reached. All other 14 biopsies were successful. Nephrostomy of the non-dilated renal pelvis and puncture of the gallbladder were successfully carried out. ::: ::: ::: CONCLUSIONS ::: The described navigation system is a promising tool for fast and safe performance of ultrasound-guided interventional procedures. --- paper_title: Placement of Intraventricular Catheters Using Flexible Electromagnetic Navigation and a Dynamic Reference Frame: A New Technique paper_content: Background: Catheterization of narrow ventricles may prove difficult, resulting in misplacement or inefficient trials with potential damage to brain tissue. Material and Met --- paper_title: Freehand placement of depth electrodes using electromagnetic frameless stereotactic guidance. paper_content: The presurgical evaluation of patients with epilepsy often requires an intracranial study in which both subdural grid electrodes and depth electrodes are needed. Performing a craniotomy for grid placement with a stereotactic frame in place can be problematic, especially in young children, leading some surgeons to consider frameless stereotaxy for such surgery. The authors report on the use of a system that uses electromagnetic impulses to track the tip of the depth electrode. Ten pediatric patients with medically refractory focal lobar epilepsy required placement of both subdural grid and intraparenchymal depth electrodes to map seizure onset. Presurgical frameless stereotaxic targeting was performed using a commercially available electromagnetic image-guided system. Freehand depth electrode placement was then performed with intraoperative guidance using an electromagnetic system that provided imaging of the tip of the electrode, something that has not been possible using visually or sonically based systems. Accuracy of placement of depth electrodes within the deep structures of interest was confirmed postoperatively using CT and CT/MR imaging fusion. Depth electrodes were appropriately placed in all patients. Electromagnetic-tracking-based stereotactic targeting improves the accuracy of freehand placement of depth electrodes in patients with medically refractory epilepsy. The ability to track the electrode tip, rather than the electrode tail, is a major feature that enhances accuracy. Additional advantages of electromagnetic frameless guidance are discussed. --- paper_title: Electromagnetic navigation improves minimally invasive robot-assisted lung brachytherapy. paper_content: Objective: Recent advances in minimally invasive thoracic surgery have renewed an interest in the role of interstitial brachytherapy for lung cancer. Our previous work has demonstrated that a minimally invasive robot-assisted (MIRA) lung brachytherapy system produced results that were equal to or better than those obtained with standard video-assisted thoracic surgery (VATS) and comparable to results with open surgery. 
The purpose of this project was to evaluate the performance of an integrated system for MIRA lung brachytherapy that incorporated modified electromagnetic navigation and ultrasound image guidance with robotic assistance.Methods: The experimental test-bed consisted of a VATS box, ZEUS® and AESOP® surgical robotic arms, a seed injector, an ultrasound machine, video monitors, a computer, and an endoscope. Our previous custom-designed electromagnetic navigational software and the robotic controller were modified and incorporated into the MIRA III system to become the next-generation MIRA IV. In... --- paper_title: An Evaluation of the Aurora System as a Flesh-Point Tracking Tool for Speech Production Research paper_content: Purpose Northern Digital Instruments (NDI; Waterloo, Ontario, Canada) manufactures a commercially available magnetometer device called Aurora that features real-time display of sensor position trac... --- paper_title: Free-hand CT-based electromagnetically guided interventions: accuracy, efficiency and dose usage. paper_content: The purpose of this paper was to evaluate computed tomography (CT) based electromagnetically tip-tracked (EMT) interventions in various clinical applications. An EMT system was utilized to perform percutaneous interventions based on CT datasets. Procedure times and spatial accuracy of needle placement were analyzed using logging data in combination with periprocedurally acquired CT control scans. Dose estimations in comparison to a set of standard CT-guided interventions were carried out. Reasons for non-completion of planned interventions were analyzed. Twenty-five procedures scheduled for EMT were analyzed, 23 of which were successfully completed using EMT. The average time for performing the procedure was 23.7 ± 17.2 min. Time for preparation was 5.8 ± 7.3 min while the interventional (skin-to-target) time was 2.7 ± 2.4 min. The average puncture length was 7.2 ± 2.5 cm. Spatial accuracy was 3.1 ± 2.1 mm. Non-completed procedures were due to patient movement and reference fixation problems. Radiation doses (dosis-length-product) were significantly lower (p = 0.012) for EMT-based interventions (732 ± 481 mGy x cm) in comparison to the control group of standard CT-guided interventions (1343 ± 1054 mGy x cm). Electromagnetic navigation can accurately guide percutaneous interventions in a variety of indications. Accuracy and time usage permit the routine use of the utilized system. Lower radiation exposure for EMT-based punctures provides a relevant potential for dose saving. --- paper_title: Treatment response assessment of radiofrequency ablation for hepatocellular carcinoma: usefulness of virtual CT sonography with magnetic navigation. paper_content: PURPOSE ::: Virtual CT sonography using magnetic navigation provides cross sectional images of CT volume data corresponding to the angle of the transducer in the magnetic field in real-time. The purpose of this study was to clarify the value of this virtual CT sonography for treatment response of radiofrequency ablation for hepatocellular carcinoma. ::: ::: ::: PATIENTS AND METHODS ::: Sixty-one patients with 88 HCCs measuring 0.5-1.3 cm (mean±SD, 1.0±0.3 cm) were treated by radiofrequency ablation. For early treatment response, dynamic CT was performed 1-5 days (median, 2 days). We compared early treatment response between axial CT images and multi-angle CT images using virtual CT sonography. 
::: ::: ::: RESULTS ::: Residual tumor stains on axial CT images and multi-angle CT images were detected in 11.4% (10/88) and 13.6% (12/88) after the first session of RFA, respectively (P=0.65). Two patients were diagnosed as showing hyperemic enhancement after the initial radiofrequency ablation on axial CT images and showed local tumor progression shortly thereafter because of unnoticed residual tumors. Only virtual CT sonography with magnetic navigation retrospectively showed the residual tumor as circular enhancement. In the safety margin analysis, 10 patients were excluded because of residual tumors. A safety margin of more than 5 mm was confirmed on virtual CT sonographic images and transverse CT images in 71.8% (56/78) and 82.1% (64/78), respectively (P=0.13). The safety margin was overestimated on axial CT images in 8 nodules. ::: ::: ::: CONCLUSION ::: Virtual CT sonography with magnetic navigation was useful in evaluating the treatment response of radiofrequency ablation therapy for hepatocellular carcinoma. --- paper_title: Three-Dimensional Electromagnetic Navigation vs. Fluoroscopy for Endovascular Aneurysm Repair: A Prospective Feasibility Study in Patients paper_content: Purpose: To evaluate the in vivo feasibility of a 3-dimensional (3D) electromagnetic (EM) navigation system with electromagnetically-tracked catheters in endovascular aneurysm repair (EVAR). Methods: The pilot study included 17 patients undergoing EVAR with a bifurcated stent-graft. Ten patients were assigned to the control group, in which a standard EVAR procedure was used. The remaining 7 patients (intervention group) underwent an EVAR procedure during which a cone-beam computed tomography image was acquired after implantation of the main stent-graft. The 3D image was presented on the navigation screen. From the contralateral side, the tip of an electromagnetically-tracked catheter was visualized in the 3D image and positioned in front of the contralateral cuff in the main stent-graft. A guidewire was inserted through the catheter and blindly placed into the stent-graft. The placement of the guidewire was verified by fluoroscopy before the catheter was pushed over the guidewire. If the guidewire was incorrec... --- paper_title: Out-of-Plane Computed-Tomography-Guided Biopsy Using a Magnetic-Field-Based Navigation System paper_content: The purpose of this article is to report our clinical experience with out-of-plane computed-tomography (CT)-guided biopsies using a magnetic-field-based navigation system. Between February 2002 and March 2003, 20 patients underwent CT-guided biopsy in which an adjunct magnetic-field-based navigation system was used to aid an out-of-plane biopsy approach. Eighteen patients had an underlying primary malignancy. All biopsies involved the use of a coaxial needle system in which an outer 18G guide needle was inserted to the lesion using the navigation system and an inner 22G needle was then used to obtain fine-needle aspirates. Complications and technical success were recorded. Target lesions were located in the adrenal gland (n = 7), liver (n = 6), pancreas (n = 3), lung (n = 2), retroperitoneal lymph node (n = 1), and pelvis (n = 1). The mean lesion size (maximum transverse diameter) was 26.5 mm (range: 8–70 mm) and the mean and median cranial–caudal distance, between the transaxial planes of the final needle tip location and the needle insertion site, was 40 mm (range: 18–90 mm). The needle tip was successfully placed within the lesion in all 20 biopsies.
A diagnosis of malignancy was obtained in 14 biopsies. Benign diagnoses were encountered in the remaining six biopsies and included a benign adrenal gland (n = 2), fibroelastic tissue (n = 1), hepocytes with steatosis (n = 2) and reactive hepatocytes (n = 1). No complications were encountered. A magnetic-field-based navigation system is an effective adjunct tool for accurate and safe biopsy of lesions that require an out-of-plane CT approach. --- paper_title: Magnetic navigation in ultrasound-guided interventional radiology procedures paper_content: Aim To evaluate the usefulness of magnetic navigation in ultrasound (US)-guided interventional procedures. Materials and methods Thirty-seven patients who were scheduled for US-guided interventional procedures (20 liver cancer ablation procedures and 17 other procedures) were included. Magnetic navigation with three-dimensional (3D) computed tomography (CT), magnetic resonance imaging (MRI), 3D US, and position-marking magnetic navigation were used for guidance. The influence on clinical outcome was also evaluated. Results Magnetic navigation facilitated applicator placement in 15 of 20 ablation procedures for liver cancer in which multiple ablations were performed; enhanced guidance in two small liver cancers invisible on conventional US but visible at CT or MRI; and depicted the residual viable tumour after transcatheter arterial chemoembolization for liver cancer in one procedure. In four of 17 other interventional procedures, position-marking magnetic navigation increased the visualization of the needle tip. Magnetic navigation was beneficial in 11 (55%) of 20 ablation procedures; increased confidence but did not change management in five (25%); added some information but did not change management in two (10%); and made no change in two (10%). In the other 17 interventional procedures, the corresponding numbers were 1 (5.9%), 2 (11.7%), 7 (41.2%), and 7 (41.2%), respectively (p = 0.002). Conclusion Magnetic navigation in US-guided interventional procedure provides solutions in some difficult cases in which conventional US guidance is not suitable. It is especially useful in complicated interventional procedures such as ablation for liver cancer. --- paper_title: Image-Guided Endoscopic Surgery: Results of Accuracy and Performance in a Multicenter Clinical Study Using an Electromagnetic Tracking System paper_content: Image-guided surgery has recently been described in the literature as a useful technology for improved functional endoscopic sinus surgery localization. Image-guided surgery yields accurate knowledge of the surgical field boundaries, allowing safer and more thorough sinus surgery. We have previously reviewed our initial experience with The InstaTrak System. This article presents a multicenter clinical study (n=55) that assesses the system's capability for localizing structures in critical surgical sites. The purpose of this paper is to present quantitative data on accuracy and performance. We describe several new advances including an automated registration technique that eliminates the redundant computed tomography scan, compensation for head movement, and the ability to use interchangeable instruments. 
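The registration step that several of the entries above rely on, mapping positions reported in the tracker's coordinate frame into the coordinate frame of the pre-procedural CT, is most commonly a paired-point rigid fit followed by a fiducial registration error (FRE) check. The sketch below is only an illustration of that generic step and is not the registration algorithm of InstaTrak or any other cited system; the fiducial coordinates, noise level, and function names are invented for the example.

```python
import numpy as np

def rigid_register(tracker_pts, image_pts):
    """Least-squares rigid transform (R, t) mapping tracker points onto image points
    (Kabsch/SVD solution to the paired-point problem)."""
    c_t, c_i = tracker_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (tracker_pts - c_t).T @ (image_pts - c_i)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_i - R @ c_t
    return R, t

def fre(R, t, tracker_pts, image_pts):
    """Root-mean-square fiducial registration error, in the units of the inputs (mm here)."""
    residuals = (tracker_pts @ R.T + t) - image_pts
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

# Hypothetical example: four skin fiducials digitized in tracker space (mm), and the same
# fiducials in CT space simulated as a known rotation + translation + 0.5 mm noise.
rng = np.random.default_rng(0)
tracker = rng.uniform(-60.0, 60.0, size=(4, 3))
angle = np.radians(25.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
image = tracker @ R_true.T + np.array([12.0, -4.0, 30.0]) + rng.normal(0.0, 0.5, (4, 3))

R, t = rigid_register(tracker, image)
print(f"FRE = {fre(R, t, tracker, image):.2f} mm")  # small for this synthetic case
```

In practice the FRE computed this way is the kind of "registration error" figure the clinical reports above quote; a low FRE does not by itself guarantee a low error at the target, which is why several of the studies also report target localization error separately.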
--- paper_title: Real-time FDG PET Guidance during Biopsies and Radiofrequency Ablation Using Multimodality Fusion with Electromagnetic Navigation paper_content: Combined electromagnetic device tracking and CT/US/fluorine 18 fluorodeoxyglucose (FDG) PET fusion allows successful biopsy and ablation of lesions that either demonstrate heterogeneous FDG uptake or are not well seen or are totally inapparent at conventional diagnostic imaging. --- paper_title: Fusion of MRI and sonography image for breast cancer evaluation using real-time virtual sonography with magnetic navigation: first experience. paper_content: OBJECTIVE ::: We recently developed a real-time virtual sonography (RVS) system that enables simultaneous display of both sonography and magnetic resonance imaging (MRI) cutaway images of the same site in real time. The aim of this study was to evaluate the role of RVS in the management of enhancing lesions visualized with MRI. ::: ::: ::: METHODS ::: Between June 2006 and April 2007, 65 patients underwent MRI for staging of known breast cancer at our hospital. All patients were examined using mammography, sonography, MRI and RVS before surgical resection. Results were correlated with histopathologic findings. MRI was obtained on a 1.5 T imager, with the patient in the supine position using a flexible body surface coil. Detection rate was determined for index tumors and incidental enhancing lesions (IELs), with or without RVS. ::: ::: ::: RESULTS ::: Overall sensitivity for detecting index tumors was 85% (55/65) for mammography, 91% (59/65) for sonography, 97% (63/65) for MRI and 98% (64/65) for RVS. Notably, in one instance in which the cancer was not seen on MRI, RVS detected it with the supplementation of sonography. IELs were found in 26% (17/65) of the patients. Of 23 IELs that were detected by MRI, 30% (7/23) of IELs could be identified on repeated sonography alone, but 83% (19/23) of them were identified using the RVS system (P = 0.001). The RVS system was able to correctly project enhanced MRI information onto a body surface, as we checked sonography form images. ::: ::: ::: CONCLUSIONS ::: Our results suggest that the RVS system can identify enhancing breast lesions with excellent accuracy. --- paper_title: Electromagnetic navigation platform for endovascular surgery: how to develop sensorized catheters and guidewires paper_content: Background ::: ::: Endovascular procedures are nowadays limited by difficulties arising from the use of 2D images and are associated with dangerous X-ray exposure and the injection of nephrotoxic contrast medium. ::: ::: ::: ::: Methods ::: ::: An electromagnetic navigator is proposed to guide endovascular procedures with reduced radiation dose and contrast medium injection. Five DOF electromagnetic sensors are calibrated and used to track in real time the positions and orientation of endovascular catheters and guidewires, while intraoperative 3D rotational angiography is used to acquire 3D models of patient anatomy. A preliminary prototype is developed to prove the feasibility of the system using an anthropomorphic phantom. ::: ::: ::: ::: Results ::: ::: The spatial accuracy of the system was evaluated during 70 targeting trials obtaining an overall accuracy of 1.2 ± 0.3 mm; system usability was positively evaluated by three surgeons. ::: ::: ::: ::: Conclusions ::: ::: The strategy proposed to sensorize endovascular instruments paves the way for the development of surgical strategies with reduced radiation dose and contrast medium injection. 
Further in vitro, animal and clinical experiments are necessary for complete surgical validation. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Assessment of the accuracy of infrared and electromagnetic navigation using an industrial robot: Which factors are influencing the accuracy of navigation? paper_content: Our objectives were to detect factors that influence the accuracy of surgical navigation (magnitude of deformity, plane of deformity, position of the navigation bases) and compare the accuracy of infrared with electromagnetic navigation. Human cadaveric femora were used. A robot connected with a computer moved one of the bony fragments in a desired direction. The bases of the infrared navigation (BrainLab) and the receivers of the electromagnetic device (Fastrak-Pohlemus) were attached to the proximal and distal parts of the bone. For the first part of the study, deformities were classified in eight groups (e.g., 0 to 5°). For the second part, the bases were initially placed near the osteotomy and then far away. The mean absolute differences between both navigation system measurements and the robotic angles were significantly affected by the magnitude of angulation with better accuracy for smaller angulations (p < 0.001). The accuracy of infrared navigation was significantly better in the frontal and sagittal plane. Changing the position of the navigation bases near and far away from the deformity apex had no significant effect on the accuracy of infrared navigation; however, it influenced the accuracy of electromagnetic navigation in the frontal plane (p < 0.001). In conclusion, the use of infrared navigation systems for corrections of small angulation-deformities in the frontal or sagittal plane provides the most accurate results, irrespectively from the positioning of the navigation bases. © 2011 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 29: 1476–1483, 2011 --- paper_title: Upgrade of an optical navigation system with a permanent electromagnetic position control: a first step towards "navigated control" for liver surgery. paper_content: INTRODUCTION ::: The main problems of navigation in liver surgery are organ movement and deformation. With a combination of direct optical and indirect electromagnetic tracking technology, visualisation and positional control of surgical instruments within three-dimensional ultrasound data and registration of organ movements can be realised simultaneously. ::: ::: ::: METHODS ::: Surgical instruments for liver resection were localised with an infrared-based navigation system (Polaris). Movements of the organ itself were registered using an electromagnetic navigation system (Aurora). The combination of these two navigation techniques and a new surgical navigation procedure focussed on a circumscribed critical dissection area were applied for the first time in liver resections. ::: ::: ::: RESULTS ::: This new technique was effectively implemented. The position of the surgical instrument was localised continuously. Repeated position control with observation of the navigation screen was not necessary. During surgical resection, a sonic warning signal was activated when the surgical instrument entered a "no touch" area--an area of reduced safety margin. ::: ::: ::: CONCLUSION ::: Optical tracking of surgical instruments and simultaneous electromagnetic registration of organ position is feasible in liver resection. 
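The last entry above describes an audible warning that fires when the tracked instrument enters a predefined "no touch" area of reduced safety margin. A minimal sketch of how such a check might look inside a navigation loop is given below; the spherical zone model, the coordinates, and the function names are assumptions made for illustration and do not reflect the cited system's actual implementation.

```python
import math

# Hypothetical "no touch" zones in registered image coordinates:
# (centre_x, centre_y, centre_z) in mm, radius in mm. Approximating each protected
# structure plus its safety margin by a sphere is an assumption of this sketch only.
NO_TOUCH_ZONES = [
    ((102.0, 64.5, 210.0), 15.0),
    ((88.0, 80.0, 195.0), 10.0),
]

def zone_violated(tip_mm, zones):
    """Return the first zone whose safety margin the tracked tip has entered, else None."""
    for centre, radius in zones:
        if math.dist(tip_mm, centre) < radius:
            return centre, radius
    return None

def navigation_step(tip_mm):
    """One iteration of the guidance loop: check the already-registered tip position."""
    hit = zone_violated(tip_mm, NO_TOUCH_ZONES)
    if hit is not None:
        # A real system would trigger the audible alarm here; we just report it.
        print(f"WARNING: instrument tip {tip_mm} is inside the no-touch zone centred at {hit[0]}")
    return hit

# Example poll with a made-up tracked tip position (mm).
navigation_step((100.0, 70.0, 205.0))
```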
--- paper_title: Accuracy considerations in image-guided cardiac interventions: experience and lessons learned paper_content: MOTIVATION ::: Medical imaging and its application in interventional guidance has revolutionized the development of minimally invasive surgical procedures leading to reduced patient trauma, fewer risks, and shorter recovery times. However, a frequently posed question with regard to an image guidance system is "how accurate is it?" On one hand, the accuracy challenge can be posed in terms of the tolerable clinical error associated with the procedure; on the other hand, accuracy is bound by the limitations of the system's components, including modeling, patient registration, and surgical instrument tracking, all of which ultimately impact the overall targeting capabilities of the system. ::: ::: ::: METHODS ::: While these processes are not unique to any interventional specialty, this paper discusses them in the context of two different cardiac image guidance platforms: a model-enhanced ultrasound platform for intracardiac interventions and a prototype system for advanced visualization in image-guided cardiac ablation therapy. ::: ::: ::: RESULTS ::: Pre-operative modeling techniques involving manual, semi-automatic and registration-based segmentation are discussed. The performance and limitations of clinically feasible approaches for patient registration evaluated both in the laboratory and in the operating room are presented. Our experience with two different magnetic tracking systems for instrument and ultrasound transducer localization is reported. Ultimately, the overall accuracy of the systems is discussed based on both in vitro and preliminary in vivo experience. ::: ::: ::: CONCLUSION ::: While clinical accuracy is specific to a particular patient and procedure and vastly dependent on the surgeon's experience, the system's engineering limitations are critical to determine whether the clinical requirements can be met. --- paper_title: Successful placement of postpyloric enteral tubes using electromagnetic guidance in critically ill children* paper_content: OBJECTIVES ::: Initiation of postpyloric feeding is often delayed by difficulties in placement of enteral tubes. We evaluated the effectiveness of bedside postpyloric enteral tube (PET) placement using an electromagnetic (EM)-guided device. We hypothesized that: 1) EM-guided placement of PETs would be successful more often than standard blind placement with a shorter total time to successful placement and 2) the EM-guided technique would have similar overall costs to the standard technique. ::: ::: ::: DESIGN ::: Prospective cohort trial with serial control groups in a pediatric intensive care unit at a tertiary care children's hospital. ::: ::: ::: INTERVENTIONS ::: We collected data on a cohort of consecutive pediatric intensive care unit patients who underwent PET placement by standard blind technique followed by a cohort who underwent EM-guided placement. The primary outcome measure was successful placement determined by abdominal radiography. ::: ::: ::: MEASUREMENTS AND MAIN RESULTS ::: One hundred seven patients were evaluated in the trial: 57 in the standard group and 50 in the EM-guided group. Demographic data, percent intubated, and admission diagnosis were similar in both groups. Forty-one of 50 patients (82%) in the EM-guided group had successful placement compared with 22 of 57 in the standard group (38%) (p < 0.0001). The average time to successful placement was 1.7 vs. 
21 hours in the EM-guided group and standard group, respectively (p < 0.0001). Children in the EM-guided group received fewer radiographs (p = 0.007) and were given more prokinetic drugs (p = 0.045). There were no episodes of pneumothorax in either group. After controlling for prokinetic drug use, EM-guided placement was more likely to result in successful placement than the standard blind technique (odds ratio 6.4, 95% confidence interval 2.5-16.3). An annual placement rate of 250 PETs by EM guidance, based on our institution's current utilization rates, is associated with a cost savings of $55.46 per PET placed. ::: ::: ::: CONCLUSION ::: EM guidance is an efficient and cost-effective method of bedside PET placement. --- paper_title: Application of electromagnetic navigation in surgical treatment of intracranial tumors: analysis of 12 cases. paper_content: OBJECTIVE ::: To explore the application and characteristics of electromagnetic navigation in neurosurgical operation. ::: ::: ::: METHODS ::: Neurosurgical operations with the assistance of electromagnetic navigation were performed in 12 patients with intracranial tumors. ::: ::: ::: RESULTS ::: Total removal of the tumor was achieved in 8 cases, subtotal removal in 3 and removal of the majority of the tumor in 1 case. The error in the navigation averaged 1.9+/-0.9 mm and the time consumed by preoperative preparation was 19+/-2 min with the exception in 1 case. ::: ::: ::: CONCLUSION ::: In comparison with optic navigation, electromagnetic navigation offers better convenience and absence of signal blockage, and with a head frame, automatic registration can be achieved. --- paper_title: A Navigation Platform for Guidance of Beating Heart Transapical Mitral Valve Repair paper_content: Traditional surgical approaches for repairing diseased mitral valves (MVs) have relied on placing the patient on cardiopulmonary bypass (on pump), stopping the heart and accessing the arrested heart directly. However, because this approach has the potential for adverse neurological, vascular, and immunological sequelae, less invasive beating heart alternatives are desirable. Emerging beating heart techniques have been developed to offer high-risk patients MV repair using ultrasound guidance alone without stopping the heart. This paper describes the first porcine trials of the NeoChord DS1000 (Minnetonka, MN), employed to attach neochordae to a MV leaflet using the traditional ultrasound-guided protocol augmented by dynamic virtual geometric models. The distance errors of the tracked tool tip from the intended midline trajectory (5.2 ± 2.4 mm versus 16.8 ± 10.9 mm, p = 0.003), navigation times (16.7 ± 8.0 s versus 92.0 ± 84.5 s, p = 0.004), and total path lengths (225.2 ± 120.3 mm versus 1128.9 ± 931.1 mm, p = 0.003) were significantly shorter in the augmented ultrasound compared to navigation with ultrasound alone,1 indicating a substantial improvement in the safety and simplicity of the procedure. --- paper_title: Implementation of an electromagnetic tracking system for accurate intrahepatic puncture needle guidance: accuracy results in an in vitro model. paper_content: RATIONALE AND OBJECTIVES ::: Electromagnetic tracking potentially may be used to guide percutaneous needle-based interventional procedures. The accuracy of electromagnetic guided-needle puncture procedures has not been specifically characterized. This article reports the functional accuracy of a needle guidance system featuring real-time tracking of respiratory-related target motion. 
::: ::: ::: MATERIALS AND METHODS ::: A needle puncture algorithm based on a "free-hand" needle puncture technique for percutaneous intrahepatic portocaval systemic shunt was employed. Preoperatively obtained computed tomographic images were displayed on a graphical user interface and registered with the electromagnetically tracked needle position. The system and procedure was tested on an abdominal torso phantom containing a liver model mounted on a motor-driven platform to simulate respiratory excursion. The liver model featured two hollow tubes to simulate intrahepatic vessels. Registration and respiratory motion tracking was performed using four skin fiducials and a needle fiducial within the liver. Success rates for 15 attempts at simultaneous puncture of the two "vessels" of different luminal diameters guided by the electromagnetic tracking system were recorded. ::: ::: ::: RESULTS ::: Successful "vessel" puncture occurred in 0%, 33%, and 53% of attempts for 3-, 5-, and 7-mm diameter "vessels," respectively. Using a two-dimensional accuracy prediction analysis, predicted accuracy exceeded actual puncture accuracy by 25%-35% for all vessel diameters. Accuracy outcome improved when depth-only errors were omitted from the analysis. ::: ::: ::: CONCLUSIONS ::: Actual puncture success rate approximates predicted rates for target vessels 5 mm in diameter or greater when depth errors are excluded. Greater accuracy for smaller diameter vessels would be desirable for implementation in a broader range of clinical applications. --- paper_title: How Does Electromagnetic Navigation Stack Up Against Infrared Navigation in Minimally Invasive Total Knee Arthroplasties paper_content: Abstract Forty-six primary total knee arthroplasties were performed using either an electromagnetic (EM) or infrared (IR) navigation system. In this IRB-approved study, patients were evaluated clinically and for accuracy using spiral computed tomographic imaging and 36-in standing radiographs. Although EM navigation was subject to metal interference, it was not as drastic as line-of-sight interference with IR navigation. Mechanical alignment was ideal in 92.9% of EM and 90.0% of IR cases based on spiral computed tomographic imaging and 100% of EM and 95% of IR cases based on x-ray. Individual measurements of component varus/valgus and sagittal measurements showed EM to be equivalent to IR, with both systems producing subdegree accuracy in 95% of the readings. --- paper_title: Intracranial Image-Guided Neurosurgery: Experience with a new Electromagnetic Navigation System paper_content: Summary.Summary.Background: The aim of image-guided neurosurgery is to accurately project computed tomography (CT) or magnetic resonance imaging (MRI) data into the operative field for defining anatomical landmarks, pathological structures and tumour margins. To achieve this end, different image-guided and computer-assisted, so-called “neuronavigation” systems have been developed in order to offer the neurosurgeon precise spatial information.Method: The present study reports on the experience gained with a prototype of the NEN-NeuroGuardTM neuronavigation system (Nicolet Biomedical, Madison, WI, USA). It utilises a pulsed DC electromagnetic field for determining the location in space of surgical instruments to which miniaturised sensors are attached. 
The system was evaluated in respect to its usefulness, ease of integration into standard neurosurgical procedures, reliability and accuracy.Findings: The NEN-system was used with success in 24 intracranial procedures for lesions including both gliomas and cerebral metastases. It allowed real-time display of surgical manoeuvres on pre-operative CT or MR images without a stereotactic frame or a robotic arm. The mean registration error associated with MRI was 1.3 mm (RMS error) and 1.5 mm (RMS error) with CT-data. The average intra-operative target-localising error was 3.2 mm (± 1.5 mm SD). Thus, the equipment was of great help in planning and performing skin incisions and craniotomies as well as in reaching deep-seated lesions with a minimum of trauma.Interpretation: The NEN-NeuroGuardTM system is a very user-friendly and reliable tool for image-guided neurosurgery. It does not have the limitations of a conventional stereotactic frame. Due to its electromagnetic technology it avoids the “line-of-sight” problem often met by optical navigation systems since its sensors remain active even when situated deep inside the skull or hidden, for example, by drapes or by the surgical microscope. --- paper_title: Electromagnetic navigation bronchoscopy: A descriptive analysis paper_content: Electromagnetic navigation bronchoscopy (ENB) is an exciting new bronchoscopic technique that promises accurate navigation to peripheral pulmonary target lesions, using technology similar to a car global positioning system (GPS) unit. Potential uses for ENB include biopsy of peripheral lung lesions, pleural dye marking of nodules for surgical wedge resection, placement of fiducial markers for stereotactic radiotherapy, and therapeutic insertion of brachytherapy catheters into malignant tissue. This article will describe the ENB procedure, review the published literature, compare ENB to existing biopsy techniques, and outline the challenges for widespread implementation of this new technology. --- paper_title: Assessment of the ablated area after radiofrequency ablation by the spread of bubbles: comparison with virtual sonography with magnetic navigation. paper_content: BACKGROUND/AIMS ::: The purpose of this study was to investigate whether bubble images after radiofrequency ablation (RFA) can predict the ablated area. ::: ::: ::: METHODOLOGY ::: The spread of bubbles 5 minutes after RFA were compared with the unenhanced area of virtual sonography with magnetic navigation in two RFA methods: expandable needle and cool-tip needle. ::: ::: ::: RESULTS ::: Thirty-one hepatocellular carcinoma nodules were treated by RFA with either an expandable needle or cool-tip needle (n=14 and n=17, respectively) and examined. In the 14 nodules treated by expandable needle, bubble images (puncture direction; r=0.833, p=0.0002, perpendicular direction; r=0.803, p=0.0005) were closely correlated with the unenhanced area of virtual sonography. On the other hand, in 17 nodules treated by cool-tip needle, there was no correlation between the bubble images and virtual sonography (puncture direction; r=0.590, p=0.0127, perpendicular direction; r=0.342, p=0.180). ::: ::: ::: CONCLUSIONS ::: The observation of bubbles with the expandable needle can accurately predict the ablated area and is helpful for assessing local control of RFA. 
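The analysis in the entry directly above (for example, r = 0.833 between the bubble spread and the unenhanced area on virtual CT sonography) comes down to a Pearson correlation over paired per-nodule measurements. The sketch below shows that computation with fabricated numbers; the values, variable names, and sample size are illustrative only and are not the study's data.

```python
from scipy import stats

# Fabricated per-nodule measurements (mm) along one direction, standing in for the kind of
# paired data behind the correlations reported in the abstract above; NOT the study's values.
bubble_extent_mm     = [28, 31, 25, 35, 30, 27, 33, 29, 26, 34, 32, 28, 30, 36]
unenhanced_extent_mm = [29, 33, 24, 36, 31, 26, 35, 28, 27, 33, 34, 27, 31, 37]

r, p = stats.pearsonr(bubble_extent_mm, unenhanced_extent_mm)
print(f"Pearson r = {r:.3f}, p = {p:.4g}")
# A strong, significant r is what supports using the bubble spread as an immediate
# surrogate for the ablated area with that needle type.
```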
--- paper_title: Review on Patents about Magnetic Localisation Systems for in vivo Catheterizations paper_content: Abstract: In vivo catheterizations are usually performed by physicians using X-ray fluoroscopic guidance and contrast media. The X-ray exposure of both the patient and the operators can induce collateral effects. The present review describes the state of the art of recent patents on magnetic position/orientation indicators capable of guiding the probe during in vivo medical diagnostic or interventional procedures. They are based on a magnetic field produced by sources and detected by sensors. Possible solutions are: the modulated magnetic field produced by a set of coils positioned externally to the patient is measured by sensors installed on the intra-body probe; or the magnetic field produced by a thin permanent magnet installed on the intra-body probe is measured by magnetic field sensors positioned outside the patient's body. In either case, the position and orientation of the probe are calculated in real time: this allows the elimination of the repetitive X-ray scans used to monitor the probe. The aim of the proposed systems is to guide the catheter inside the patient's vascular tree with a reduction of the X-ray exposure of both the patient and the personnel involved in the intervention. The present paper also highlights the advantages and disadvantages of the presented solutions. --- paper_title: Evaluation of magnetic scope navigation in screening endoscopic examination of colorectal cancer paper_content: Background: Colorectal cancer is the most common cancer in Europe. Early diagnosis and treatment gives the patient a chance for complete recovery. Screening colonoscopies in symptom-free patients are currently performed on a wide scale. The examinations are performed under local anesthesia, which does not eliminate all discomfort and pain related to the examination. The aim of this study was to evaluate magnetic scope navigation in screening endoscopic examinations performed to detect early-stage colorectal cancer. Methods: The study group consisted of 200 patients, aged 40–65 years, who were free from colon cancer symptoms. All patients underwent complete colonoscopy under local anesthesia. The equipment could be fitted with a scope that allows three-dimensional observation of instrument localization in the bowel. The examination was performed by three experienced endoscopists, each of whom had performed over 5,000 colonoscopies. The patients were randomized to two groups: those whose equipment did not have 3D navigation (group I) and those whose equipment did have 3D navigation (group II). Each group consisted of 100 cases matched by gender, age, and BMI. The authors compared the duration of introducing the instrument to the cecum, the pulse rate before the examination and at the time the instrument reached the cecum, and subjective pain evaluation by the patient on the visual analog scale. Results: Group I consisted of 54 women and 46 men with a mean age of 54.6 years and mean BMI of 27.8 kg/m2, and group II had 58 women and 42 men, with a mean age of 55.1 years and mean BMI of 26.4 kg/m2. The average time it took for the instrument to reach the cecum was 216 s in group I and 181 s in group II (P < 0.05). Pain measured on the 10-point VAS scale was 2.44 in group I and 1.85 in group II (P < 0.05). The results showed a significantly shorter time for the instrument to reach the cecum in group II, and significantly lower pain intensity during the examination was reported by the group II patients.
No significant differences were found in the pulse measurements between the groups (P = 0.5). Conclusions: 3D navigation during colonoscopy decreases the time for the instrument to reach the cecum and lowers pain intensity subjectively reported by the patients. The use of 3D navigation and the possibility of observing instrument localization and maneuvers bring more comfort to the patients. --- paper_title: Electromagnetic Navigation to Aid Radiofrequency Ablation and Biopsy of Lung Tumors paper_content: Purpose We evaluated an electromagnetic (EM) navigation system (Veran Medical Technologies Inc, St. Louis, MO) to determine its potential to reduce the number of skin punctures and instrument adjustments during computed tomography-guided percutaneous ablation and biopsy of lung nodules. Description Ten patients undergoing lung percutaneous ablation were prospectively enrolled. The mean age was 70 years. Positioning of the needle device was verified with computed tomographic fluoroscopy prior to the execution of any biopsy or ablation. Each EM navigation-guided procedure was defined as an EM intervention. Evaluation Nineteen EM interventions were performed. When an EM-guided biopsy was performed, the intervention was done immediately prior to ablation. For all 19 EM interventions, only one skin puncture was required. The mean number of instrument adjustments required was 1.2 (range, 0 to 2). The mean time for each EM intervention was 5.2 minutes (range, 1 to 20 minutes). Pneumothorax occurred in 5 patients (50%). Only the number of instrument adjustments was significantly related to the pneumothorax rate (p = 0.005). Conclusions EM navigation is feasible and seems to be a useful aid for image-guided procedures. Early experience suggests a low number of skin punctures and instrument adjustments using the EM navigation system. Instrument adjustments were a key factor in pneumothorax development. --- paper_title: [Ultrasound-guided interventions using magnetic field navigation. First experiences with Ultra-Guide 2000 under operative conditions]. paper_content: Ultrasound-guided interventions are presently performed as free-hand-type procedures or using biopsy transducers. In this article we report on our experience with a new navigation system for sonographically guided interventional procedures under OR conditions. The US-Guide 2000™ is an electromagnetic guidance system that assists physicians in ultrasound-guided interventional procedures. This system accommodates both in-plane and out-of-plane needle placement. We evaluated this system for the first time under OR conditions. Overall, 39 interventional procedures (23 thermoablations of malignant liver lesions, 16 diagnostic punctures) were performed. All targets were reached successfully without any complications. No interactions with other OR devices were seen. The US-Guide 2000™, as a virtual needle-guiding system, allows safe and accurate sonographically assisted interventions. The major advantage is the possibility of out-of-plane needle placement and the combination of the flexibility of free-hand-type procedures with the accuracy of a biopsy transducer. This increases the safety of punctures, especially when lesions are difficult to reach and/or are situated next to vulnerable structures. It also reduces interventional trauma. --- paper_title: A novel technique for tailoring frontal osteoplastic flaps using the ENT magnetic navigation system.
paper_content: CONCLUSION ::: The ENT magnetic navigation system is potentially useful and offers the most accurate technique for harvesting frontal osteoplastic flaps. It represents a valid tool in the wide range of instruments available to rhinologists. ::: ::: ::: OBJECTIVE ::: Precise delineation of the boundaries of the frontal sinus is a crucial step when harvesting a frontal osteoplastic flap. We present a novel technique using the ENT magnetic navigation system. ::: ::: ::: METHODS ::: Nineteen patients affected by different pathologies involving the frontal sinus underwent an osteoplastic flap procedure using the ENT magnetic navigation system between January 2009 and April 2011. ::: ::: ::: RESULTS ::: The ENT magnetic navigation system was found to be a safe and accurate tool for delineating the frontal sinus boundaries. No intraoperative complications occurred during the osteoplastic procedures. --- paper_title: Next generation distal locking for intramedullary nails using an electromagnetic X-ray-radiation-free real-time navigation system. paper_content: BACKGROUND ::: Distal locking marks one challenging step during intramedullary nailing that can lead to an increased irradiation and prolonged operation times. The aim of this study was to evaluate the reliability and efficacy of an X-ray-radiation-free real-time navigation system for distal locking procedures. ::: ::: ::: METHODS ::: A prospective randomized cadaver study with 50 standard free-hand fluoroscopic-guided and 50 electromagnetic-guided distal locking procedures was performed. All procedures were timed using a stopwatch. Intraoperative fluoroscopy exposure time and absorbed radiation dose (mGy) readings were documented. All tibial nails were locked with two mediolateral and one anteroposterior screw. Successful distal locking was accomplished once correct placement of all three screws was confirmed. ::: ::: ::: RESULTS ::: Successful distal locking was achieved in 98 cases. No complications were encountered using the electromagnetic navigation system. Eight complications arose during free-hand fluoroscopic distal locking. Undetected secondary drill slippage on the ipsilateral cortex accounted for most problems followed by undetected intradrilling misdirection causing a fissural fracture of the contralateral cortex while screw insertion in one case. Compared with the free-hand fluoroscopic technique, electromagnetically navigated distal locking provides a median time benefit of 244 seconds without using ionizing radiation. ::: ::: ::: CONCLUSION ::: Compared with the standard free-hand fluoroscopic technique, the electromagnetic guidance system used in this study showed high reliability and was associated with less complications, took significantly less time, and used no radiation exposure for distal locking procedures. ::: ::: ::: LEVEL OF EVIDENCE ::: Therapeutic study, level II. --- paper_title: Freehand placement of depth electrodes using electromagnetic frameless stereotactic guidance. paper_content: The presurgical evaluation of patients with epilepsy often requires an intracranial study in which both subdural grid electrodes and depth electrodes are needed. Performing a craniotomy for grid placement with a stereotactic frame in place can be problematic, especially in young children, leading some surgeons to consider frameless stereotaxy for such surgery. The authors report on the use of a system that uses electromagnetic impulses to track the tip of the depth electrode. 
Ten pediatric patients with medically refractory focal lobar epilepsy required placement of both subdural grid and intraparenchymal depth electrodes to map seizure onset. Presurgical frameless stereotaxic targeting was performed using a commercially available electromagnetic image-guided system. Freehand depth electrode placement was then performed with intraoperative guidance using an electromagnetic system that provided imaging of the tip of the electrode, something that has not been possible using visually or sonically based systems. Accuracy of placement of depth electrodes within the deep structures of interest was confirmed postoperatively using CT and CT/MR imaging fusion. Depth electrodes were appropriately placed in all patients. Electromagnetic-tracking-based stereotactic targeting improves the accuracy of freehand placement of depth electrodes in patients with medically refractory epilepsy. The ability to track the electrode tip, rather than the electrode tail, is a major feature that enhances accuracy. Additional advantages of electromagnetic frameless guidance are discussed. --- paper_title: Free-hand CT-based electromagnetically guided interventions: accuracy, efficiency and dose usage. paper_content: The purpose of this paper was to evaluate computed tomography (CT) based electromagnetically tip-tracked (EMT) interventions in various clinical applications. An EMT system was utilized to perform percutaneous interventions based on CT datasets. Procedure times and spatial accuracy of needle placement were analyzed using logging data in combination with periprocedurally acquired CT control scans. Dose estimations in comparison to a set of standard CT-guided interventions were carried out. Reasons for non-completion of planned interventions were analyzed. Twenty-five procedures scheduled for EMT were analyzed, 23 of which were successfully completed using EMT. The average time for performing the procedure was 23.7 ± 17.2 min. Time for preparation was 5.8 ± 7.3 min while the interventional (skin-to-target) time was 2.7 ± 2.4 min. The average puncture length was 7.2 ± 2.5 cm. Spatial accuracy was 3.1 ± 2.1 mm. Non-completed procedures were due to patient movement and reference fixation problems. Radiation doses (dosis-length-product) were significantly lower (p = 0.012) for EMT-based interventions (732 ± 481 mGy x cm) in comparison to the control group of standard CT-guided interventions (1343 ± 1054 mGy x cm). Electromagnetic navigation can accurately guide percutaneous interventions in a variety of indications. Accuracy and time usage permit the routine use of the utilized system. Lower radiation exposure for EMT-based punctures provides a relevant potential for dose saving. --- paper_title: Image guided surgery of paranasal sinuses and anterior skull base - Five years experience with the InstaTrak ® -System* paper_content: SUMMARY We report on our experience with navigational tools in paranasal sinus and anterior skull base surgery, especially with electromagnetic guidance systems. During the last five years we operated over 80 selected cases with the InstaTrak ® system from VTI (Lawrence, MS, USA). Applicability and user friendliness were explored. The InstaTrak ® 3500 employs a Sun ® Workstation and is a frameless and free-arm and navigation system. Two different suction devices, used as sensors (receivers), and one transmitter are interconnected to this workstation. 
The position of the tip of the aspirator is displayed as a pair of crosshairs on the screen in axial, coronal and sagittal planes of the patient’s CTscan on the computerscreen online. Our results showed high accuracy-level, usually better than one millimeter and a setup-time less than ten minutes, on average. No additional personnel is required in the OR. We believe that the system enhances efficacy in selected cases like revision surgery, tumor surgery or difficult anterior skull base surgery. However, one should consider that medicolegal responsibility stays always with the surgeon and not with any navigation system. --- paper_title: Electromagnetic navigation for percutaneous guide-wire insertion: Accuracy and efficiency compared to conventional fluoroscopic guidance paper_content: The combination of electromagnetic (EM) navigation with intraoperative fluoroscopic images has the potential to create the ideal environment for spinal surgical applications. This technology enhances standard intraoperative fluoroscopic information for localization of the pedicle entry point and trajectory and may be an effective alternative to other image-guided surgery (IGS) systems. This study was performed to assess the accuracy and time efficiency (placement and fluoroscopy) using EM navigation versus conventional fluoroscopy in the placement of pedicle guide-wires. ::: ::: Kirschner wire (K-wire) placement was performed in cadavers from T8 to S1 using EM navigation versus conventional fluoroscopy. Time for set-up, placement, and fluoroscopy was recorded. After insertion, the accuracy for each level was assessed for the presence and location of facet joint, pedicle, or vertebral cortical perforation using computed tomography imaging with multiplanar reconstructions. ::: ::: K-wire placements were 100% successful for both methods. Comparing EM-based IGS-assisted placement with the conventional fluoroscopy method showed a longer set-up time of 9.6 min versus 3.6 min, respectively. However, mean placement times of 6.3 min versus 9.7 min (P = 0.005) and mean fluoroscopy times of 11 s versus 48 s (P < 0.0001) were both shorter for the EM group. There were no significant differences in the proportion of pedicle, vertebral body, or facet joint breaches. A higher proportion of ideal trajectories was achieved in the EM group. Therefore, we have shown that an EM IGS system can assist the spine surgeon in minimally invasive pedicle screw insertion by providing high-accuracy K-wire placement with a significant reduction in fluoroscopy time. --- paper_title: Electromagnetic-guided postpyloric tube placement in children: Pilot study of its use as a rescue therapy paper_content: Summary Background & aims Postpyloric feeding is frequently indicated in clinical practice. Postpyloric enteral tube placement can be cumbersome, necessitating use of fluoroscopy and/or endoscopy. Guiding devices have been developed to facilitate bedside tube placement. We aimed to evaluate the feasibility and safety of postpyloric enteral tube placement in children using an electromagnetic-guided system as a rescue strategy in case blind tube insertion failed. Method In a prospective pilot study in 10 children postpyloric enteral tube placement using an electromagnetic-guided placement was attempted after blind placement failure. Results Postpyloric enteral tube placement was successful in 6 of the 10 included patients. No adverse events occurred and it was well tolerated. 
Conclusion This pilot study suggests that electromagnetic-guided placement as a rescue technique for postpyloric enteral tube placement can prevent the use of fluoroscopy and/or endoscopic placement in a substantial portion of patients. --- paper_title: Needle and catheter navigation using electromagnetic tracking for computer-assisted C-arm CT interventions paper_content: Integrated solutions for navigation systems with CT, MR or US systems become more and more popular for medical products. Such solutions improve the medical workflow and reduce hardware, space and cost requirements. The purpose of our project was to develop a new electromagnetic navigation system for interventional radiology which is integrated into C-arm CT systems. The application is focused on minimally invasive percutaneous interventions performed under local anaesthesia. Together with a vacuum-based patient immobilization device and newly developed navigation tools (needles, panels) we developed a safe and fully automatic navigation system. The radiologist can directly start with navigated interventions after loading images without any prior user interaction. The complete system is adapted to the requirements of the radiologist and to the clinical workflow. For evaluation of the navigation system we performed different phantom studies and achieved an average accuracy of better than 2.0 mm. --- paper_title: Magnetic navigation in ultrasound-guided interventional radiology procedures paper_content: Aim To evaluate the usefulness of magnetic navigation in ultrasound (US)-guided interventional procedures. Materials and methods Thirty-seven patients who were scheduled for US-guided interventional procedures (20 liver cancer ablation procedures and 17 other procedures) were included. Magnetic navigation with three-dimensional (3D) computed tomography (CT), magnetic resonance imaging (MRI), 3D US, and position-marking magnetic navigation were used for guidance. The influence on clinical outcome was also evaluated. Results Magnetic navigation facilitated applicator placement in 15 of 20 ablation procedures for liver cancer in which multiple ablations were performed; enhanced guidance in two small liver cancers invisible on conventional US but visible at CT or MRI; and depicted the residual viable tumour after transcatheter arterial chemoembolization for liver cancer in one procedure. In four of 17 other interventional procedures, position-marking magnetic navigation increased the visualization of the needle tip. Magnetic navigation was beneficial in 11 (55%) of 20 ablation procedures; increased confidence but did not change management in five (25%); added some information but did not change management in two (10%); and made no change in two (10%). In the other 17 interventional procedures, the corresponding numbers were 1 (5.9%), 2 (11.7%), 7 (41.2%), and 7 (41.2%), respectively (p = 0.002). Conclusion Magnetic navigation in US-guided interventional procedures provides solutions in some difficult cases in which conventional US guidance is not suitable. It is especially useful in complicated interventional procedures such as ablation for liver cancer. --- paper_title: A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery paper_content: With the increased use and development of image-guided surgical applications, there is a need for methods of analysis of the accuracy and precision of the components which compose these systems. One primary component of an image-guided surgery system is the position tracking system, which allows for the localization of a tool within the surgical field and provides information which is translated back to the images. Previously much work has been done in characterizing these systems for spatial accuracy and precision. Much of this previous work examines single tracking systems or modalities. We have devised a method which allows for the characterization of a novel tracking system independent of modality and location. We describe the development of a phantom system which allows for rapid design and creation of surfaces with different geometries. We have also demonstrated a method of analysis of the data generated by this phantom system, and used it to compare Biosense-Webster's CartoXP™ and Northern Digital's Aurora™ magnetic trackers. We have determined that the accuracy and precision of the CartoXP was best, followed closely by the Aurora's dome volume, then the Aurora's cube volume. The mean accuracy for all systems was better than 3 mm and decayed with distance from the field generator. --- paper_title: CT-guided percutaneous lung biopsy: comparison of conventional CT fluoroscopy to CT fluoroscopy with electromagnetic navigation system in 60 consecutive patients. paper_content: PURPOSE ::: To determine if use of an electromagnetic navigation system (EMN) decreases radiation dose and procedure time of CT fluoroscopy-guided lung biopsy in lesions smaller than 2.5 cm. ::: ::: ::: MATERIALS/METHODS ::: 86 consecutive patients with small lung masses (<2.5 cm) were approached. 60 consented and were randomized to undergo biopsy with CT fluoroscopy (CTF) (34 patients) or EMN (26 patients). Technical failure required conversion to CTF in 8/26 EMN patients; 18 patients completed biopsy with EMN. Numerous biopsy parameters were compared as described below. ::: ::: ::: RESULTS ::: Average fluoroscopy time using CTF was 28.2 s compared to 35.0 s for EMN (p=0.1). Average radiation dose was 117 mGy using CTF and 123 mGy for EMN (p=0.7). Average number of needle repositions was 3.7 for CTF and 4.4 for EMN (p=0.4). Average procedure time was 15 min for CTF and 20 min for EMN (p=0.01). There were 7 pneumothoraces in the CTF group and 6 pneumothoraces in the EMN group (p=0.7). One pneumothorax in the CTF group and 3 pneumothoraces in the EMN group required chest tube placement (p=0.1). One pneumothorax patient in each group required hospital admission. Diagnostic specimens were obtained in 31/34 patients in the CTF group and 22/26 patients in the EMN group (p=0.4). ::: ::: ::: CONCLUSIONS ::: EMN was not statistically different from CTF for fluoroscopy time, radiation dose, number of needle repositions, incidence of pneumothorax, need for chest tube, or diagnostic yield. Procedure time was increased with EMN. --- paper_title: How Does Electromagnetic Navigation Stack Up Against Infrared Navigation in Minimally Invasive Total Knee Arthroplasties paper_content: Abstract Forty-six primary total knee arthroplasties were performed using either an electromagnetic (EM) or infrared (IR) navigation system. In this IRB-approved study, patients were evaluated clinically and for accuracy using spiral computed tomographic imaging and 36-in standing radiographs. Although EM navigation was subject to metal interference, it was not as drastic as line-of-sight interference with IR navigation.
Mechanical alignment was ideal in 92.9% of EM and 90.0% of IR cases based on spiral computed tomographic imaging and 100% of EM and 95% of IR cases based on x-ray. Individual measurements of component varus/valgus and sagittal measurements showed EM to be equivalent to IR, with both systems producing subdegree accuracy in 95% of the readings. --- paper_title: Intracranial Image-Guided Neurosurgery: Experience with a new Electromagnetic Navigation System paper_content: Summary.Summary.Background: The aim of image-guided neurosurgery is to accurately project computed tomography (CT) or magnetic resonance imaging (MRI) data into the operative field for defining anatomical landmarks, pathological structures and tumour margins. To achieve this end, different image-guided and computer-assisted, so-called “neuronavigation” systems have been developed in order to offer the neurosurgeon precise spatial information.Method: The present study reports on the experience gained with a prototype of the NEN-NeuroGuardTM neuronavigation system (Nicolet Biomedical, Madison, WI, USA). It utilises a pulsed DC electromagnetic field for determining the location in space of surgical instruments to which miniaturised sensors are attached. The system was evaluated in respect to its usefulness, ease of integration into standard neurosurgical procedures, reliability and accuracy.Findings: The NEN-system was used with success in 24 intracranial procedures for lesions including both gliomas and cerebral metastases. It allowed real-time display of surgical manoeuvres on pre-operative CT or MR images without a stereotactic frame or a robotic arm. The mean registration error associated with MRI was 1.3 mm (RMS error) and 1.5 mm (RMS error) with CT-data. The average intra-operative target-localising error was 3.2 mm (± 1.5 mm SD). Thus, the equipment was of great help in planning and performing skin incisions and craniotomies as well as in reaching deep-seated lesions with a minimum of trauma.Interpretation: The NEN-NeuroGuardTM system is a very user-friendly and reliable tool for image-guided neurosurgery. It does not have the limitations of a conventional stereotactic frame. Due to its electromagnetic technology it avoids the “line-of-sight” problem often met by optical navigation systems since its sensors remain active even when situated deep inside the skull or hidden, for example, by drapes or by the surgical microscope. --- paper_title: Electromagnetic navigation bronchoscopy: A descriptive analysis paper_content: Electromagnetic navigation bronchoscopy (ENB) is an exciting new bronchoscopic technique that promises accurate navigation to peripheral pulmonary target lesions, using technology similar to a car global positioning system (GPS) unit. Potential uses for ENB include biopsy of peripheral lung lesions, pleural dye marking of nodules for surgical wedge resection, placement of fiducial markers for stereotactic radiotherapy, and therapeutic insertion of brachytherapy catheters into malignant tissue. This article will describe the ENB procedure, review the published literature, compare ENB to existing biopsy techniques, and outline the challenges for widespread implementation of this new technology. --- paper_title: Assessment of the ablated area after radiofrequency ablation by the spread of bubbles: comparison with virtual sonography with magnetic navigation. 
paper_content: BACKGROUND/AIMS ::: The purpose of this study was to investigate whether bubble images after radiofrequency ablation (RFA) can predict the ablated area. ::: ::: ::: METHODOLOGY ::: The spread of bubbles 5 minutes after RFA were compared with the unenhanced area of virtual sonography with magnetic navigation in two RFA methods: expandable needle and cool-tip needle. ::: ::: ::: RESULTS ::: Thirty-one hepatocellular carcinoma nodules were treated by RFA with either an expandable needle or cool-tip needle (n=14 and n=17, respectively) and examined. In the 14 nodules treated by expandable needle, bubble images (puncture direction; r=0.833, p=0.0002, perpendicular direction; r=0.803, p=0.0005) were closely correlated with the unenhanced area of virtual sonography. On the other hand, in 17 nodules treated by cool-tip needle, there was no correlation between the bubble images and virtual sonography (puncture direction; r=0.590, p=0.0127, perpendicular direction; r=0.342, p=0.180). ::: ::: ::: CONCLUSIONS ::: The observation of bubbles with the expandable needle can accurately predict the ablated area and is helpful for assessing local control of RFA. --- paper_title: Navigation Systems for Ablation paper_content: Navigation systems, devices, and intraprocedural software are changing the way interventional oncology is practiced. Before the development of precision navigation tools integrated with imaging systems, thermal ablation of hard-to-image lesions was highly dependent on operator experience, spatial skills, and estimation of positron emission tomography–avid or arterial-phase targets. Numerous navigation systems for ablation bring the opportunity for standardization and accuracy that extends the operator's ability to use imaging feedback during procedures. In this report, existing systems and techniques are reviewed and specific clinical applications for ablation are discussed to better define how these novel technologies address specific clinical needs and fit into clinical practice. --- paper_title: A comparison of blood loss in minimally invasive surgery with and without electromagnetic computer navigation in total knee arthroplasty. paper_content: OBJECTIVE ::: To compare the blood loss after minimally invasive surgery total knee arthroplasty (MIS-TKA) between the procedures performed with and without electromagnetic computer navigation. ::: ::: ::: MATERIAL AND METHOD ::: Eighty patients were recruited for a cohort study of the minimally invasive surgery total knee arthroplasty (MIS-TKA) for the treatment of osteoarthritis. They were divided into two groups, 40 patients had a computer-assisted surgery procedure for the minimally invasive surgery total knee arthroplasty (CAS-MIS-TKA) and the other 40 patients had a conventional procedure for the minimally invasive surgery total knee arthroplasty (MIS-TKA). The surgery in both groups was carried out by a single surgeon at one institution using a uniform approach. The blood loss in each group was evaluated and analyzed for the statistical difference. ::: ::: ::: RESULTS ::: The result showed that the mean blood loss from the drainage of the CAS-MIS-TKA group (389.88 +/- 215.57 milliliters) was slightly lower than the MIS-TKA group (425.25 +/- 269.40 milliliters), which had no significant difference (p-value 0.519). Moreover, the whole blood loss in the CAS-MIS-TKA group (948.45 +/- 431.63 milliliters) was slightly lower than the MIS-TKA group (1075.32 +/- 419.02 milliliters). 
The difference was also not statistically significant. ::: ::: ::: CONCLUSION ::: Electromagnetic computer-assisted surgery did not reduce blood loss in the minimally invasive surgery total knee arthroplasty (MIS-TKA). --- paper_title: Reduction in patient-reported acute morbidity in prostate cancer patients treated with 81-Gy Intensity-modulated radiotherapy using reduced planning target volume margins and electromagnetic tracking: assessing the impact of margin reduction study. paper_content: OBJECTIVE ::: To investigate whether patient-reported quality of life after high-dose external beam intensity-modulated radiotherapy for prostate cancer can be improved by decreasing planning target volume margins while using real-time tumor tracking. ::: ::: ::: METHODS ::: Study patients underwent radiotherapy with nominal 3-mm margins and electromagnetic real-time tracking. Morbidity was assessed before and at the end of radiotherapy using Expanded Prostate Cancer Index Composite (EPIC) questionnaires. Changes in scores were compared between the Assessing Impact of Margin Reduction (AIM) study cohort and the comparator Prostate Cancer Outcomes and Satisfaction with Treatment Quality Assessment (PROST-QA) cohort, treated with conventional margins. ::: ::: ::: RESULTS ::: The 64 patients in the prospective AIM study had generally less favorable clinical characteristics than the 153 comparator patients. Study patients had similar or slightly poorer pretreatment EPIC scores than comparator patients in bowel, urinary, and sexual domains. AIM patients receiving radiotherapy had less bowel morbidity than the comparator group as measured by changes in mean bowel and/or rectal domain EPIC scores from pretreatment to 2 months after start of treatment (-1.5 vs -16.0, P = .001). Using a change in EPIC score >0.5 baseline standard deviation as the measure of clinical relevance, AIM study patients experienced meaningful decline in only 1 health-related quality of life domain (urinary) whereas decline in 3 health-related quality of life domains (urinary, sexual, and bowel/rectal) was observed in the PROST-QA comparator cohort. ::: ::: ::: CONCLUSIONS ::: Prostate cancer patients treated with reduced margins and tumor tracking had less radiotherapy-related morbidity than their counterparts treated with conventional margins. Highly contoured intensity-modulated radiotherapy shows promise as a successful strategy for reducing morbidity in prostate cancer treatment. --- paper_title: The Cathlocator: a novel non-radiological method for the localization of enteral tubes. paper_content: Safe placement of nasogastric tubes requires reliable positioning of the tip of the tube within the stomach. Radiology and aspiration are currently used to confirm tube position, but suffer from significant problems of cost and efficacy, respectively. We have developed a novel method to locate the position of a catheter tip within the body, using the detection of low energy electromagnetic field generated in a coil located in the catheter with an external hand-held unit (Cathlocator). In vitro, the unit detected the distance of the coil from the detector with an accuracy of 0.1 cm over a range of 4-12 cm. In vivo studies were performed in 11 healthy volunteers using a purpose-built manometric assembly that incorporated the signal generating coil in its tip. 
In all subjects the Cathlocator showed the position of the signal generating coil to be cranial to the xiphisternum when manometric and transmucosal potential difference criteria showed it to be located above the lower oesophageal sphincter. When the coil was within the stomach, the Cathlocator identified its position within the epigastric, umbilical and left hypochondrial regions of the abdomen. The distance of the coil from the surface was significantly greater when in the duodenum (mean +/- s.e.m. 7.6 +/- 0.3 cm; P < 0.001) and oesophagus (8.6 +/- 0.2 cm; P < 0.002) than in the stomach (5.0 +/- 0.4 cm). In one subject studied twice there was a close correlation between the location and depth measured by the device on each occasion. The Cathlocator is a novel non-radiological device that has the potential to be useful in the placement of gastrointestinal catheters. --- paper_title: A novel technique for tailoring frontal osteoplastic flaps using the ENT magnetic navigation system. paper_content: CONCLUSION ::: The ENT magnetic navigation system is potentially useful and offers the most accurate technique for harvesting frontal osteoplastic flaps. It represents a valid tool in the wide range of instruments available to rhinologists. ::: ::: ::: OBJECTIVE ::: Precise delineation of the boundaries of the frontal sinus is a crucial step when harvesting a frontal osteoplastic flap. We present a novel technique using the ENT magnetic navigation system. ::: ::: ::: METHODS ::: Nineteen patients affected by different pathologies involving the frontal sinus underwent an osteoplastic flap procedure using the ENT magnetic navigation system between January 2009 and April 2011. ::: ::: ::: RESULTS ::: The ENT magnetic navigation system was found to be a safe and accurate tool for delineating the frontal sinus boundaries. No intraoperative complications occurred during the osteoplastic procedures. --- paper_title: Computer-aided navigation in neurosurgery paper_content: The article comprises three main parts: a historical review on navigation, the mathematical basics for calculation and the clinical applications of navigation devices. Main historical steps are described from the first idea till the realisation of the frame-based and frameless navigation devices including robots. In particular the idea of robots can be traced back to the Iliad of Homer, the first testimony of European literature over 2500 years ago. In the second part the mathematical calculation of the mapping between the navigation and the image space is demonstrated, including different registration modalities and error estimations. The error of the navigation has to be divided into the technical error of the device calculating its own position in space, the registration error due to inaccuracies in the calculation of the transformation matrix between the navigation and the image space, and the application error caused additionally by anatomical shift of the brain structures during operation. In the third part the main clinical fields of application in modern neurosurgery are demonstrated, such as localisation of small intracranial lesions, skull-base surgery, intracerebral biopsies, intracranial endoscopy, functional neurosurgery and spinal navigation. At the end of the article some possible objections to navigation-aided surgery are discussed. --- paper_title: Freehand placement of depth electrodes using electromagnetic frameless stereotactic guidance.
paper_content: The presurgical evaluation of patients with epilepsy often requires an intracranial study in which both subdural grid electrodes and depth electrodes are needed. Performing a craniotomy for grid placement with a stereotactic frame in place can be problematic, especially in young children, leading some surgeons to consider frameless stereotaxy for such surgery. The authors report on the use of a system that uses electromagnetic impulses to track the tip of the depth electrode. Ten pediatric patients with medically refractory focal lobar epilepsy required placement of both subdural grid and intraparenchymal depth electrodes to map seizure onset. Presurgical frameless stereotaxic targeting was performed using a commercially available electromagnetic image-guided system. Freehand depth electrode placement was then performed with intraoperative guidance using an electromagnetic system that provided imaging of the tip of the electrode, something that has not been possible using visually or sonically based systems. Accuracy of placement of depth electrodes within the deep structures of interest was confirmed postoperatively using CT and CT/MR imaging fusion. Depth electrodes were appropriately placed in all patients. Electromagnetic-tracking-based stereotactic targeting improves the accuracy of freehand placement of depth electrodes in patients with medically refractory epilepsy. The ability to track the electrode tip, rather than the electrode tail, is a major feature that enhances accuracy. Additional advantages of electromagnetic frameless guidance are discussed. --- paper_title: Fixation, registration, and image-guided navigation using a thermoplastic facial mask in electromagnetic navigation-guided radiofrequency thermocoagulation. paper_content: OBJECTIVE ::: For fixation, registration, and image-guided navigation, the aim of this study was to evaluate a thermoplastic facial mask with plastic markers in achieving frameless stereotactic radiofrequency thermocoagulation (RFT). ::: ::: ::: STUDY DESIGN ::: A thermoplastic facial mask was remolded according to each subject's face. Six markers were placed on the surface and 6 inside. Series of 1.25-mm- and 2.5-mm-slice computerized tomography (CT) scans were made to provide radiologic data. During the phantom study, each plastic sphere inside was selected in turn as the target for frameless stereotaxy. The clinical Hartel puncture of the foramen ovale (FO) was imitated using an electromagnetic navigation system. Navigation-guided RFT was tried in 3 patients. ::: ::: ::: RESULTS ::: The mean location error was 1.29 mm (SD ± 0.39 mm). No significant difference (P > .05) was proven between 1.25-mm and 2.5-mm CT slice acquisition for the image datasets used. The FO punctures in clinical trials were successful and confirmed by CT. ::: ::: ::: CONCLUSIONS ::: Registration and fixation via a fiducial marker-based thermoplastic facial mask is accurate and feasible for use in navigation-guided RFT. --- paper_title: Image guided surgery of paranasal sinuses and anterior skull base - Five years experience with the InstaTrak ® -System* paper_content: SUMMARY We report on our experience with navigational tools in paranasal sinus and anterior skull base surgery, especially with electromagnetic guidance systems. During the last five years we operated over 80 selected cases with the InstaTrak ® system from VTI (Lawrence, MS, USA). Applicability and user friendliness were explored. 
The InstaTrak® 3500 employs a Sun® Workstation and is a frameless, free-arm navigation system. Two different suction devices, used as sensors (receivers), and one transmitter are interconnected to this workstation. The position of the tip of the aspirator is displayed as a pair of crosshairs in axial, coronal and sagittal planes of the patient's CT scan on the computer screen online. Our results showed a high accuracy level, usually better than one millimeter, and a setup time of less than ten minutes on average. No additional personnel are required in the OR. We believe that the system enhances efficacy in selected cases like revision surgery, tumor surgery or difficult anterior skull base surgery. However, one should consider that medicolegal responsibility always stays with the surgeon and not with any navigation system. --- paper_title: A comparison of image guidance systems for sinus surgery paper_content: Objective: Intraoperative computed tomographic guidance systems are available which utilize either electromagnetic (radiofrequency) or optical (infrared) signals to localize instruments within the surgical field. The objective of this study was to compare the use of these two different image guidance technologies for sinus surgery. Study Design: Prospective cohort study. Methods: The electromagnetic-based InstaTrak system (n = 24) and the optical-based Stealth-Station (n = 49) were compared in a series of 73 consecutive sinus surgeries which utilized image guidance technology. Results: Both the electromagnetic and optical systems provided anatomic localization to within 2 mm during surgery. Intraoperative reregistration was effective in correcting for any anatomic drift. There were no intraoperative complications. Mean operative times were 156.3 ± 8.9 minutes for the electromagnetic and 139.2 ± 17.7 minutes for the optical system (P <.05). The average intraoperative blood loss did not differ significantly between groups (electromagnetic, 190.6 ± 28.7 mL; optical, 172.4 ± 23.0 mL). Each system was noted to have limitations. The presence of metallic objects in the operative field interfered with functioning of the electromagnetic system, whereas the optical system required a clear line of sight to be maintained between the infrared camera and surgical handpiece. Both systems required specialized headsets to be worn by patients during surgery to monitor head position. The electromagnetic system also required these headsets to be worn during the preoperative computed tomography scan. Conclusion: Although these two image guidance systems both proved valuable for anatomic localization during sinus surgery, individual preferences can be based on distinct differences in their design and operation. --- paper_title: An electromagnetic navigation system for transbronchial interventions with a novel approach to respiratory motion compensation. paper_content: PURPOSE ::: Bronchoscopic interventions, such as transbronchial needle aspiration (TBNA), are commonly performed procedures to diagnose and stage lung cancer. However, due to the complex structure of the lung, one of the main challenges is to find the exact position to perform a biopsy and to actually hit the biopsy target (e.g., a lesion). Today, most interventions are accompanied by fluoroscopy to verify the position of the biopsy instrument, which means additional radiation exposure for the patient and the medical staff. Furthermore, the diagnostic yield of TBNA is particularly low for peripheral lesions.
::: ::: ::: METHODS ::: To overcome these problems the authors developed an image-guided, electromagnetic navigation system for transbronchial interventions. The system provides real time positioning information for the bronchoscope and a transbronchial biopsy instrument with only one preoperatively acquired computed tomography image. A twofold respiratory motion compensation method based on a particle filtering approach allows for guidance through the entire respiratory cycle. In order to evaluate our system, 18 transbronchial interventions were performed in seven ventilated swine lungs using a thorax phantom. ::: ::: ::: RESULTS ::: All tracked bronchoscope positions were corrected to the inside of the tracheobronchial tree and 80.2% matched the correct bronchus. During regular respiratory motion, the mean overall targeting error for bronchoscope tracking and TBNA needle tracking was with compensation on 10.4 ± 1.7 and 10.8 ± 3.0 mm, compared to 14.4 ± 1.9 and 13.3 ± 2.7 mm with compensation off. The mean fiducial registration error (FRE) was 4.2 ± 1.1 mm. ::: ::: ::: CONCLUSIONS ::: The navigation system with the proposed respiratory motion compensation method allows for real time guidance during bronchoscopic interventions, and thus could increase the diagnostic yield of transbronchial biopsy. --- paper_title: Electromagnetic-guided postpyloric tube placement in children: Pilot study of its use as a rescue therapy paper_content: Summary Background & aims Postpyloric feeding is frequently indicated in clinical practice. Postpyloric enteral tube placement can be cumbersome, necessitating use of fluoroscopy and/or endoscopy. Guiding devices have been developed to facilitate bedside tube placement. We aimed to evaluate the feasibility and safety of postpyloric enteral tube placement in children using an electromagnetic-guided system as a rescue strategy in case blind tube insertion failed. Method In a prospective pilot study in 10 children postpyloric enteral tube placement using an electromagnetic-guided placement was attempted after blind placement failure. Results Postpyloric enteral tube placement was successful in 6 of the 10 included patients. No adverse events occurred and it was well tolerated. Conclusion This pilot study suggests that electromagnetic-guided placement as a rescue technique for postpyloric enteral tube placement can prevent the use of fluoroscopy and/or endoscopic placement in a substantial portion of patients. --- paper_title: Novel Magnetic Technology for Intraoperative Intracranial Frameless Navigation: In Vivo and in Vitro Results paper_content: OBJECTIVE ::: To characterize the accuracy of the Magellan electromagnetic navigation system (Biosense Webster, Tirat HaCarmel, Israel) and to demonstrate the feasibility of its use in image-guided neurosurgical applications. ::: ::: ::: DESCRIPTION OF INSTRUMENTATION ::: The Magellan system was developed to provide real-time tracking of the distal tips of flexible catheters, steerable endoscopes, and other surgical instruments, using ultra-low electromagnetic fields and a novel miniature position sensor for image-correlated intraoperative navigation and mapping applications. ::: ::: ::: METHODS ::: An image registration procedure was performed, and static and qualitative accuracies were assessed in a series of phantom, animal, and human neurosurgical studies. ::: ::: ::: EXPERIENCE AND RESULTS ::: During the human study phase, an accuracy error of up to 5 mm was deemed acceptable. 
Results demonstrated that this degree of accuracy was maintained throughout all procedures. All anatomic landmarks were reached with precision and were accurately viewed on the display screen. Navigation that relied on the system was also successful. No interference with operating room equipment was noted. The accuracy of the system was maintained during regular surgical procedures, using standard surgical tools. ::: ::: ::: CONCLUSION ::: The system provides precise lesion localization without limiting the line of vision, the mobility of the surgeon, or the flexibility of instruments. Electromagnetic navigation promises new advances in neuronavigation and frameless stereotactic surgery. --- paper_title: Electromagnetic navigation in lung cancer: research update. paper_content: Unfortunately, flexible bronchoscopy, the least invasive bronchoscopic procedure, is of limited value for obtaining tissue from lesions in the peripheral segments of the lung. Biopsy success is further compromised if the lesion is less than 3 cm in diameter. The main limitation of flexible bronchoscopy is the difficulty in reaching peripheral lesions with the accessory tools. In this paper, we will discuss a new bronchoscopic advance in the diagnosis and treatment of lung cancer. Once extended beyond the tip of the bronchoscope, these tools are difficult to guide to the desired location. Localizing the lesion under fluoroscopy is difficult, and alternative diagnostic guidance methods, such as computer tomography-guided bronchoscopy and endobronchial ultrasound, are more demanding. Therefore, new methods for navigation and localization are needed. One of these new technologies is electromagnetic navigation bronchoscopy. The aim of this special report is to provide an analysis of the published literature. A literature search was constructed and performed on PubMed to identify the literature from 2000 to 2008. The search words were 'electromagnetic navigation', 'coin lesion', 'solitary pulmonary nodule' and 'lung cancer'. We review a number of recent studies that utilize electromagnetic navigation and guidance, and analyze their performance characteristics for clinical applications of the technology. Electromagnetic navigation is likely to play an increasing and integral role in the diagnosis and staging of lung cancer in the near future. Electromagnetic registration may impact both the staging and diagnosis of peripheral lesions. --- paper_title: Image-Guided Endoscopic Surgery: Results of Accuracy and Performance in a Multicenter Clinical Study Using an Electromagnetic Tracking System paper_content: Image-guided surgery has recently been described in the literature as a useful technology for improved functional endoscopic sinus surgery localization. Image-guided surgery yields accurate knowledge of the surgical field boundaries, allowing safer and more thorough sinus surgery. We have previously reviewed our initial experience with The InstaTrak System. This article presents a multicenter clinical study (n=55) that assesses the system's capability for localizing structures in critical surgical sites. The purpose of this paper is to present quantitative data on accuracy and performance. We describe several new advances including an automated registration technique that eliminates the redundant computed tomography scan, compensation for head movement, and the ability to use interchangeable instruments. --- paper_title: Preliminary experience with electromagnetic navigation system in TKA. 
paper_content: Abstract Accuracy of implant positioning and precise reconstruction of leg alignment offers the best way to achieve good long-term results in total knee arthroplasty. Computer instrumentation was developed to improve the final position of the component and restore the mechanical axis. Current navigation systems use either optical or electromagnetic tracking. The advantage of the Electromagnetic (EM) navigation system is that no line-of-sight issues are present. However, special iron-free instruments are required. This report analyzes the postoperative radiological results of 32 knees treated using an EM system. All the measurements were recorded using software able to subtend angles automatically by five physicians, three radiologist and two orthopedic residents not involved with the surgery. Each radiograph was measured three times, in random order, and at delayed intervals. We found an ideal alignment for the mechanical axis (180 ± 3°) in 30 out of 32 cases, whereas all the patients achieved a value of 90° ± 3° for both femoral and tibial frontal component angles. An apparently over-corrected implant position for the sagittal femoral component was reported, with a mean value of 11.2° ± 3.6. The mean position of the tibial component was 90.6° ± 2.8; just four measurements were outside of the ± 3° of the desired value. EM is safe and there were no complications related to this system. An almost perfect correlation was found between the mechanical axis value of the EM navigation system (179.8° ± 1.8) and the median value of the all reviewers (180.3° ± 1.9) with a difference of 0.5°. --- paper_title: Fusion of MRI and sonography image for breast cancer evaluation using real-time virtual sonography with magnetic navigation: first experience. paper_content: OBJECTIVE ::: We recently developed a real-time virtual sonography (RVS) system that enables simultaneous display of both sonography and magnetic resonance imaging (MRI) cutaway images of the same site in real time. The aim of this study was to evaluate the role of RVS in the management of enhancing lesions visualized with MRI. ::: ::: ::: METHODS ::: Between June 2006 and April 2007, 65 patients underwent MRI for staging of known breast cancer at our hospital. All patients were examined using mammography, sonography, MRI and RVS before surgical resection. Results were correlated with histopathologic findings. MRI was obtained on a 1.5 T imager, with the patient in the supine position using a flexible body surface coil. Detection rate was determined for index tumors and incidental enhancing lesions (IELs), with or without RVS. ::: ::: ::: RESULTS ::: Overall sensitivity for detecting index tumors was 85% (55/65) for mammography, 91% (59/65) for sonography, 97% (63/65) for MRI and 98% (64/65) for RVS. Notably, in one instance in which the cancer was not seen on MRI, RVS detected it with the supplementation of sonography. IELs were found in 26% (17/65) of the patients. Of 23 IELs that were detected by MRI, 30% (7/23) of IELs could be identified on repeated sonography alone, but 83% (19/23) of them were identified using the RVS system (P = 0.001). The RVS system was able to correctly project enhanced MRI information onto a body surface, as we checked sonography form images. ::: ::: ::: CONCLUSIONS ::: Our results suggest that the RVS system can identify enhancing breast lesions with excellent accuracy. 
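Several of the study abstracts above, including the TKA navigation report that precedes this note, summarize accuracy as a mean ± SD, the share of cases falling inside a ±3° window around the 180° mechanical axis, and the agreement between navigation readouts and independent radiographic measurements. As a purely illustrative sketch of how such summary statistics are computed (the angle arrays below are hypothetical placeholders, not data from any cited study), a short Python fragment might look like this:

```python
import numpy as np

# Illustrative values only -- NOT the data from the cited TKA study.
# Hypothetical mechanical-axis angles (degrees) reported by the EM navigation
# system and measured independently on long-leg radiographs.
em_angles = np.array([179.5, 181.2, 178.4, 180.9, 182.6, 179.0, 177.2, 180.3])
xray_angles = np.array([179.8, 180.7, 178.9, 181.3, 182.1, 179.4, 176.8, 180.0])

target, tolerance = 180.0, 3.0  # "ideal" alignment band of 180 +/- 3 degrees

def summarize(angles):
    """Mean +/- SD and the fraction of cases inside the +/-3 degree band."""
    within = np.abs(angles - target) <= tolerance
    return angles.mean(), angles.std(ddof=1), within.mean()

em_mean, em_sd, em_frac = summarize(em_angles)
xr_mean, xr_sd, xr_frac = summarize(xray_angles)

# Agreement between the two measurement methods.
r = np.corrcoef(em_angles, xray_angles)[0, 1]   # Pearson correlation
mean_diff = np.mean(em_angles - xray_angles)    # bias (degrees)

print(f"EM:    {em_mean:.1f} +/- {em_sd:.1f} deg, {em_frac:.0%} within +/-3 deg")
print(f"X-ray: {xr_mean:.1f} +/- {xr_sd:.1f} deg, {xr_frac:.0%} within +/-3 deg")
print(f"Pearson r = {r:.2f}, mean difference = {mean_diff:.1f} deg")
```

The same pattern (mean, spread, fraction within tolerance, and correlation between two measurement methods) underlies most of the accuracy figures quoted in this reference list.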
--- paper_title: Electromagnetic navigation bronchoscopy paper_content: In 2009, lung cancer was estimated to be the second most common form of cancer diagnosed in men, after prostate, and the second, after breast cancer, in women. It is estimated that it caused 159,390 deaths more than breast, colon and prostate cancers combined. While age-adjusted death rates for this cancer have been declining since 2000, they remain high. --- paper_title: Successful placement of postpyloric enteral tubes using electromagnetic guidance in critically ill children* paper_content: OBJECTIVES ::: Initiation of postpyloric feeding is often delayed by difficulties in placement of enteral tubes. We evaluated the effectiveness of bedside postpyloric enteral tube (PET) placement using an electromagnetic (EM)-guided device. We hypothesized that: 1) EM-guided placement of PETs would be successful more often than standard blind placement with a shorter total time to successful placement and 2) the EM-guided technique would have similar overall costs to the standard technique. ::: ::: ::: DESIGN ::: Prospective cohort trial with serial control groups in a pediatric intensive care unit at a tertiary care children's hospital. ::: ::: ::: INTERVENTIONS ::: We collected data on a cohort of consecutive pediatric intensive care unit patients who underwent PET placement by standard blind technique followed by a cohort who underwent EM-guided placement. The primary outcome measure was successful placement determined by abdominal radiography. ::: ::: ::: MEASUREMENTS AND MAIN RESULTS ::: One hundred seven patients were evaluated in the trial: 57 in the standard group and 50 in the EM-guided group. Demographic data, percent intubated, and admission diagnosis were similar in both groups. Forty-one of 50 patients (82%) in the EM-guided group had successful placement compared with 22 of 57 in the standard group (38%) (p < 0.0001). The average time to successful placement was 1.7 vs. 21 hours in the EM-guided group and standard group, respectively (p < 0.0001). Children in the EM-guided group received fewer radiographs (p = 0.007) and were given more prokinetic drugs (p = 0.045). There were no episodes of pneumothorax in either group. After controlling for prokinetic drug use, EM-guided placement was more likely to result in successful placement than the standard blind technique (odds ratio 6.4, 95% confidence interval 2.5-16.3). An annual placement rate of 250 PETs by EM guidance, based on our institution's current utilization rates, is associated with a cost savings of $55.46 per PET placed. ::: ::: ::: CONCLUSION ::: EM guidance is an efficient and cost-effective method of bedside PET placement. --- paper_title: Magnetic versus manual catheter navigation for ablation of free wall accessory pathways in children. paper_content: BACKGROUND ::: Transcatheter ablation of accessory pathway (AP)-mediated tachycardia is routinely performed in children. Little data exist regarding the use of magnetic navigation (MN) and its potential benefits for ablation of AP-mediated tachycardia in this population. ::: ::: ::: METHODS AND RESULTS ::: We performed a retrospective review of prospectively gathered data in children undergoing radiofrequency ablation at our institution since the installation of MN (Stereotaxis Inc, St. Louis, MO) in March 2009. The efficacy and safety between an MN-guided approach and standard manual techniques for mapping and ablation of AP-mediated tachycardia were compared. 
During the 26-month study period, 145 patients underwent radiofrequency ablation for AP-mediated tachycardia. Seventy-three patients were ablated with MN and 72 with a standard manual approach. There were no significant differences in demographic factors between the 2 groups with a mean cohort age of 13.1±4.0 years. Acute success rates were equivalent with 68 of 73 (93.2%) patients in the MN group being successfully ablated versus 68 of 72 (94.4%) patients in the manual group (P=0.889). During a median follow-up of 21.4 months, there were no recurrences in the MN group and 2 recurrences in the manual group (P=0.388). There were no differences in time to effect, number of lesions delivered, or average ablation power. There was also no difference in total procedure time, but fluoroscopy time was significantly reduced in the MN group at 14.0 (interquartile range, 3.8-23.9) minutes compared with the manual group at 28.1 (interquartile range, 15.3-47.3) minutes (P<0.001). There were no complications in either group. ::: ::: ::: CONCLUSIONS ::: MN is a safe and effective approach to ablate AP-mediated tachycardia in children. --- paper_title: Update on radiation-based therapies for prostate cancer. paper_content: PURPOSE OF REVIEW ::: This overview summarizes recent developments in radiation-based therapy for prostate cancer. ::: ::: ::: RECENT FINDINGS ::: Radiation dose escalation continues to be validated as an effective strategy in prostate cancer. Adjuvant radiation therapy became the standard of care after long-term follow-up of the pivotal Southwest Oncology Group 8794 trial demonstrated an overall survival benefit in patients with pT3 disease or positive margin after prostatectomy. Strategies such as hypofractionation and stereotactic body radiation therapy are becoming more common but have yet to be validated in a large trial. New technologies such as Calypso 4D real-time tumor tracking and volumetric-modulated arc therapy promise to potentially increase cure rates and decrease toxicity due to increased accuracy of radiation delivery. ::: ::: ::: SUMMARY ::: Radiation therapy continues to play a prominent role in the management of prostate cancer. However, new strategies and technologies such as hypofractionation, stereotactic body radiation therapy, volumetric-modulated arc therapy, and Calypso tumor tracking must be prospectively validated. --- paper_title: A review of RFID localization: Applications and techniques paper_content: Indoor localization has been actively researched recently due to security and safety as well as service matters. Previous research and development for indoor localization includes infrared, wireless LAN and ultrasonic. However, these technologies suffer either from the limited accuracy or lacking of the infrastructure. Radio Frequency Identification (RFID) is very attractive because of reasonable system price, and reader reliability. The RFID localization can be categorized into tag and reader localizations. In this paper, major localization techniques for both tag and reader localizations are reviewed to provide the readers state of the art of the indoor localization algorithms. The advantage and disadvantage of each technique for particular applications were also discussed. 
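The RFID localization review above surveys range-based and signal-strength-based techniques for estimating tag or reader position. One generic member of that family, least-squares multilateration from range estimates, can be sketched as follows; the anchor layout and noise model are assumptions made purely for illustration and are not taken from the cited review.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a 2D tag position from anchor positions and range estimates.

    Linearizes the circle equations against the first anchor and solves the
    resulting over-determined system by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical reader/anchor layout (meters) and noisy range estimates to a tag.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
true_tag = np.array([2.0, 3.0])
rng = np.random.default_rng(0)
ranges = [np.linalg.norm(true_tag - a) + rng.normal(0, 0.05) for a in anchors]

print("estimated tag position:", multilaterate(anchors, ranges))
```

Linearizing against a reference anchor turns the nonlinear circle equations into an over-determined linear system, which is why at least three non-collinear anchors are needed for a 2D fix.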
--- paper_title: A new system to perform continuous target tracking for radiation and surgery using non-ionizing alternating current electromagnetics paper_content: Abstract A new technology based on alternating current (AC) nonionizing electromagnetic fields has been developed to enable precise localization and continuous tracking of mobile soft tissue targets or tumors during radiation therapy and surgery. The technology utilizes miniature, permanently implanted, wireless transponders (Beacon™ Transponders) and a novel array to enable objective localization and continuous tracking of targets in three-dimensional space. The characteristics of this system include use of safe, nonionizing electromagnetic fields; negligible tissue interaction, delivery of inherently objective three-dimensional data, continuous operation during external beam radiation therapy or surgery. Feasibility testing has shown the system was capable of locating and continuously tracking transponders with submillimeter accuracy at tracking rates of up to 10 Hz and was unaffected by operational linear accelerator. Preclinical animal studies have shown the transponders to be suitable for permanent implantation and early human clinical studies have demonstrated the feasibility of inserting transponders into the prostate. This system will enable accurate initial patient setup and provide real-time target tracking demanded by today's highly conformal radiation techniques and has significant potential as a surgical navigation platform. --- paper_title: Study on an experimental AC electromagnetic tracking system paper_content: 3D tracking system is one of the key devices to realize the sense of immersion and human computer interaction in a virtual or augmented reality system. This paper presents the design of an experimental AC electromagnetic 3D tracking system that is based on AC magnetic field transmitting and sensing. The proposed system is composed of 3-axis orthogonal magnetic sensor, 3-axis orthogonal magnetic transmitter, 2-axis accelerometers, data acquisition and processing system etc. After obtaining the orientation of the receiver by measuring earth magnetic and gravity field with the DC output of magnetic sensors and accelerometers, the position can be calculated from the received AC magnetic field generated by the magnetic transmitter. The design of the experimental system on the basis of theoretical analysis is presented in detail and the results of the actual experiment prove the feasibility of the proposed system. --- paper_title: In Vivo Validation of a Hybrid Tracking System for Navigation of an Ultrathin Bronchoscope Within Peripheral Airways paper_content: Transbronchial biopsy of peripheral lung nodules is hindered by the inability to access lesions endoluminally due to the large diameter of conventional bronchoscopes. An ultrathin scanning fiber bronchoscope has recently been developed to advance image-guided biopsy several branching generations deeper into the peripheral airways. However, navigating a potentially complex 3-D path to the region of interest presents a challenge to the bronchoscopist. An accompanying guidance system has also been developed to track the bronchoscope through the airways, and display its position and intended path on a virtual display. Intraoperative localization of the bronchoscope was achieved by combining electromagnetic tracking (EMT) and image-based tracking (IBT). An error-state Kalman filter was used to model the disagreement between the two tracking sources. 
The positional tracking error was reduced from 14.22 and 14.92 mm by independent EMT and IBT, respectively, to 6.74 mm using the hybrid approach. Hybrid tracking of the scope orientation and respiratory motion compensation further improved tracking accuracy and stability, resulting in an average tracking error of 3.33 mm and 10.01°. --- paper_title: Review on Patents about Magnetic Localisation Systems for in vivo Catheterizations paper_content: Abstract: in vivo Catheterizations are usually performed by physicians using X-Ray fluoroscopic guide and contrast-media. The X-Ray exposure both of the patient and of the operators can induce collateral effects. The present review describes the status of the art on recent patents about magnetic position/orientation indicators capable to drive the probe during in-vivo medical diagnostic or interventional procedures. They are based on the magnetic field produced by sources and revealed by sensors. Possible solutions are: the modulated magnetic field produced by a set of coils positioned externally to the patient is measured by sensors installed on the intra-body probe; the magnetic field produced by a thin permanent magnet installed on the intra-body probe is measured by magnetic field sensors positioned outside the patient body. In either cases, position and orientation of the probe are calculated in real time: this allows the elimination of repetitive X-Ray scans used to monitor the probe. The aim of the proposed systems is to drive the catheter inside the patient vascular tree with a reduction of the X-Ray exposure both of the patient and of the personnel involved in the intervention. The present paper intends also to highlight advantages/disadvantages of the presented solutions. --- paper_title: Passive tracking of catheters and guidewires by contrast‐enhanced MR fluoroscopy paper_content: Passive MR tracking of catheters and guidewires is usually done by dynamically imaging a single thick slab, subtracting a baseline image, and combining the result with a previously acquired MR angiogram. In the in vitro and in vivo experiments reported here, it is demonstrated that this approach may be greatly simplified by using a suitable intravascular contrast agent. The proposed method, contrast-enhanced MR fluoroscopy, combines tracking and angiography into a single sequence and allows direct visualization of the magnetically prepared parts of catheters and guidewires with respect to the vasculature at a frame rate of about one image per 1.5 seconds. Contrast-enhanced MR fluoroscopy, although still limited in temporal resolution, thus obviates the need for subtraction and overlay techniques and eliminates the sensitivity of tracking to subject motion between acquisitions. Magn Reson Med 45:17–23, 2001. © 2001 Wiley-Liss, Inc. --- paper_title: Quality assurance for clinical implementation of an electromagnetic tracking system paper_content: The Calypso Medical 4D localization system utilizes alternating current electromagnetics for accurate, real-time tumor tracking. A quality assurance program to clinically implement this system is described here. Testing of the continuous electromagnetic tracking system (Calypso Medical Technologies, Seattle, WA) was performed using an in-house developed four-dimensional stage and a quality assurance fixture containing three radiofrequency transponders at independently measured locations. 
The following tests were performed to validate the Calypso system: (a) Localization and tracking accuracy, (b) system reproducibility, (c) measurement of the latency of the tracking system, and (d) measurement of transmission through the Calypso table overlay and the electromagnetic array. The translational and rotational localization accuracies were found to be within 0.01 cm and 1.0 degree, respectively. The reproducibility was within 0.1 cm. The average system latency was measured to be within 303 ms. The attenuation by the Calypso overlay was measured to be 1.0% for both 6 and 18 MV photons. The attenuations by the Calypso array were measured to be 2% and 1.5% for 6 and 18 MV photons, respectively. For oblique angles, the transmission was measured to be 3% for 6 MV, while it was 2% for 18 MV photons. A quality assurance process has been developed for the clinical implementation of an electromagnetic tracking system in radiation therapy. --- paper_title: Electromagnetic Servoing—A New Tracking Paradigm paper_content: Electromagnetic (EM) tracking is highly relevant for many computer assisted interventions. This is in particular due to the fact that the scientific community has not yet developed a general solution for tracking of flexible instruments within the human body. Electromagnetic tracking solutions are highly attractive for minimally invasive procedures, since they do not require line of sight. However, a major problem with EM tracking solutions is that they do not provide uniform accuracy throughout the tracking volume and the desired, highest accuracy is often only achieved close to the center of tracking volume. In this paper, we present a solution to the tracking problem, by mounting an EM field generator onto a robot arm. Proposing a new tracking paradigm, we take advantage of the electromagnetic tracking to detect the sensor within a specific sub-volume, with known and optimal accuracy. We then use the more accurate and robust robot positioning for obtaining uniform accuracy throughout the tracking volume. Such an EM servoing methodology guarantees optimal and uniform accuracy, by allowing us to always keep the tracked sensor close to the center of the tracking volume. In this paper, both dynamic accuracy and accuracy distribution within the tracking volume are evaluated using optical tracking as ground truth. In repeated evaluations, the proposed method was able to reduce the overall error from 6.64±7.86 mm to a significantly improved accuracy of 3.83±6.43 mm. In addition, the combined system provides a larger tracking volume, which is only limited by the reach of the robot and not the much smaller tracking volume defined by the magnetic field generator. --- paper_title: A Machine Learning Approach for Deformable Guide-Wire Tracking in Fluoroscopic Sequences paper_content: Deformable guide-wire tracking in fluoroscopic sequences is a challenging task due to the low signal to noise ratio of the images and the apparent complex motion of the object of interest. Common tracking methods are based on data terms that do not differentiate well between medical tools and anatomic background such as ribs and vertebrae. A data term learned directly from fluoroscopic sequences would be more adapted to the image characteristics and could help to improve tracking. In this work, our contribution is to learn the relationship between features extracted from the original image and the tracking error. 
By randomly deforming a guide-wire model around its ground truth position in one single reference frame, we explore the space spanned by these features. Therefore, a guide-wire motion distribution model is learned to reduce the intrinsic dimensionality of this feature space. Random deformations and the corresponding features can then be automatically generated. In a regression approach, the function mapping this space to the tracking error is learned. The resulting data term is integrated into a tracking framework based on a second-order MAP-MRF formulation which is optimized by QPBO moves, yielding high-quality tracking results. Experiments conducted on two fluoroscopic sequences show that our approach is a promising alternative for deformable tracking of guide-wires. --- paper_title: New spatial localizer based on fiber optics with applications in 3D ultrasound imaging paper_content: Spatial localizers provide a reference coordinate system and make the tracking of various objects in 3D space feasible. A number of different spatial localizers are currently available. Several factors that determine the suitability of a position sensor for a specific clinical application are accuracy, ease of use, and robustness of performance when used in a clinical environment. In this paper, we present a new and low-cost sensor with performance unaffected by the materials present in the operating environment. This new spatial localizer consists of a flexible tape with a number of fiber optic sensors along its length. The main idea is that we can obtain the position and orientation of the end of the tape with respect to its base. The end and base of the tape are locations along its length determined by the physical location of the fiber optic sensors. Using this tape, we tracked an ultrasound probe and formed 3D US data sets. In order to validate the geometric accuracy of those 3D data sets, we measured known volumes of water-filled balloons. Our results indicate that we can measure volumes with accuracy between 2-16 percent. Given the fact that the sensor is under further development and refinement, we expect that this sensor could be an accurate, cost-effective and robust alternative in many medical applications, e.g., image-guided surgery and 3D ultrasound imaging. --- paper_title: Hybrid bronchoscope tracking using a magnetic tracking sensor and image registration paper_content: In this paper, we propose a hybrid method for tracking a bronchoscope that uses a combination of magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation is used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion.
The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz. --- paper_title: Standardized assessment of new electromagnetic field generators in an interventional radiology setting. paper_content: PURPOSE ::: Two of the main challenges associated with electromagnetic (EM) tracking in computer-assisted interventions (CAIs) are (1) the compensation of systematic distance errors arising from the influence of metal near the field generator (FG) or the tracked sensor and (2) the optimized setup of the FG to maximize tracking accuracy in the area of interest. Recently, two new FGs addressing these issues were proposed for the well-established Aurora(®) tracking system [Northern Digital, Inc. (NDI), Waterloo, Canada]: the Tabletop 50-70 FG, a planar transmitter with a built-in shield that compensates for metal distortions emanating from treatment tables, and the prototypical Compact FG 7-10, a mobile generator designed to be attached to mobile imaging devices. The purpose of this paper was to assess the accuracy and precision of these new FGs in an interventional radiology setting. ::: ::: ::: METHODS ::: A standardized assessment protocol, which uses a precisely machined base plate to measure relative error in position and orientation, was applied to the two new FGs as well as to the well-established standard Aurora(®) Planar FG. The experiments were performed in two different settings: a reference laboratory environment and a computed tomography (CT) scanning room. In each setting, the protocol was applied to three different poses of the measurement plate within the tracking volume of the three FGs. ::: ::: ::: RESULTS ::: The two new FGs provided higher precision and accuracy within their respective measurement volumes as well as higher robustness with respect to the CT scanner compared to the established FG. Considering all possible 5 cm distances on the grid, the error of the Planar FG was increased by a factor of 5.94 in the clinical environment (4.4 mm) in comparison to the error in the laboratory environment (0.8 mm). In contrast, the mean values for the two new FGs were all below 1 mm with an increase in the error by factors of only 2.94 (Reference: 0.3 mm; CT: 0.9 mm) and 1.04 (both: 0.5 mm) in the case of the Tabletop FG and the Compact FG, respectively. ::: ::: ::: CONCLUSIONS ::: Due to their high accuracy and robustness, the Tabletop FG and the Compact FG could eliminate the need for compensation of EM field distortions in certain CT-guided interventions. --- paper_title: Upgrade of an optical navigation system with a permanent electromagnetic position control: a first step towards "navigated control" for liver surgery. paper_content: INTRODUCTION ::: The main problems of navigation in liver surgery are organ movement and deformation. With a combination of direct optical and indirect electromagnetic tracking technology, visualisation and positional control of surgical instruments within three-dimensional ultrasound data and registration of organ movements can be realised simultaneously. ::: ::: ::: METHODS ::: Surgical instruments for liver resection were localised with an infrared-based navigation system (Polaris). Movements of the organ itself were registered using an electromagnetic navigation system (Aurora). The combination of these two navigation techniques and a new surgical navigation procedure focussed on a circumscribed critical dissection area were applied for the first time in liver resections. 
::: ::: ::: RESULTS ::: This new technique was effectively implemented. The position of the surgical instrument was localised continuously. Repeated position control with observation of the navigation screen was not necessary. During surgical resection, a sonic warning signal was activated when the surgical instrument entered a "no touch" area--an area of reduced safety margin. ::: ::: ::: CONCLUSION ::: Optical tracking of surgical instruments and simultaneous electromagnetic registration of organ position is feasible in liver resection. --- paper_title: Image-guided interventions: technology and applications paper_content: Contents: Overview and History of Image-Guided Interventions; Tracking Devices; Visualization in Image-Guided Interventions; Augmented Reality; Software; Rigid Registration; Nonrigid Registration; Model-Based Image Segmentation for Image-Guided Interventions; Imaging Modalities; MRI-Guided FUS and its Clinical Applications; Neurosurgical Applications; Computer-Assisted Orthopedic Surgery; Thoracoabdominal Interventions; Real-Time Interactive MRI for Guiding Cardiovascular Surgical Interventions; Three-Dimensional Ultrasound Guidance and Robot Assistance for Prostate Brachytherapy; Radiosurgery; Radiation Oncology; Assessment of Image-Guided Interventions. --- paper_title: Robust Intraoperative US Probe Tracking Using a Monocular Endoscopic Camera paper_content: In the context of minimally-invasive procedures involving both endoscopic video and ultrasound, we present a vision-based method to track the ultrasound probe using a standard monocular video laparoscopic instrument. This approach requires only cosmetic modification to the ultrasound probe and obviates the need for magnetic tracking of either instrument. We describe an Extended Kalman Filter framework that solves for both the feature correspondence and pose estimation, and is able to track a 3D pattern on the surface of the ultrasound probe in near real-time. The tracking capability is demonstrated by performing an ultrasound calibration of a visually-tracked ultrasound probe, using a standard endoscopic video camera. Ultrasound calibration resulted in a mean TRE of 2.3 mm, and comparison with an external optical tracker demonstrated a mean FRE of 4.4 mm between the two tracking systems. --- paper_title: A non-disruptive technology for robust 3d tool tracking for ultrasound-guided interventions paper_content: In the past decade ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. The main limitation however is limited visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool position/orientations. The 3D tracking accuracy was 0.36 ± 0.16 mm throughout the imaging volume of 55° × 27° × 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom.
This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large scale clinical impact. ---
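Several of the tracking papers summarized above, such as the hybrid ultrathin-bronchoscope tracker and the monocular ultrasound-probe tracker, rely on (extended or error-state) Kalman filtering to fuse and smooth noisy pose measurements. The sketch below is a deliberately simplified, generic illustration of that idea, a constant-velocity Kalman filter run over simulated noisy 3D positions; it is not a reimplementation of any cited method, and all parameter values are assumptions.

```python
import numpy as np

def kalman_smooth(measurements, dt=0.1, meas_noise=2.0, accel_noise=0.5):
    """Constant-velocity Kalman filter over noisy 3D position measurements.

    State x = [px, py, pz, vx, vy, vz]; only positions are observed.
    Returns the filtered position estimates.
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                      # p_{k+1} = p_k + v_k * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])    # we observe positions only
    Q = accel_noise**2 * np.eye(6)                  # crude process-noise model
    R = meas_noise**2 * np.eye(3)                   # measurement noise

    x = np.zeros(6)
    x[:3] = measurements[0]
    P = np.eye(6) * 10.0
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        out.append(x[:3].copy())
    return np.array(out)

# Hypothetical example: a sensor moving on a straight line with additive noise.
rng = np.random.default_rng(1)
t = np.arange(0, 5, 0.1)
truth = np.stack([10 * t, 2 * t, np.full_like(t, 50.0)], axis=1)   # mm
noisy = truth + rng.normal(0, 2.0, truth.shape)
filtered = kalman_smooth(noisy)
print("raw RMS error      (mm):", np.sqrt(np.mean((noisy - truth) ** 2)))
print("filtered RMS error (mm):", np.sqrt(np.mean((filtered - truth) ** 2)))
```

In the cited hybrid systems the same predict/update structure applies, but the state, the measurement models (EM sensor versus image registration), and the noise handling are considerably more elaborate.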
Title: Electromagnetic Tracking in Medicine—A Review of Technology, Validation, and Applications Section 1: Introduction Description 1: Introduce the concept of electromagnetic (EM) tracking and its importance in modern patient care, especially in computer-assisted interventions (CAI). Section 2: Methods of Literature Research Description 2: Describe the methodology employed to systematically search and categorize relevant literature on EM tracking in medicine. Section 3: Fundamentals Description 3: Provide an overview of the physical principles, magnetic sensors, tracking techniques, field generators, sources of error, and commercial systems relevant to EM tracking. Section 4: System Assessment Description 4: Discuss various assessment protocols, study results, and the discussion around them to evaluate the accuracy and robustness of different EM tracking systems. Section 5: Distortion Compensation Description 5: Explain different approaches to optimize EM tracking systems by passive protection and active compensation to improve accuracy and robustness in clinical settings. Section 6: Clinical Applications Description 6: Highlight diverse medical applications of EM tracking, discuss commercial systems available, and review clinical evidence supporting their effectiveness. Section 7: Summary and Outlook Description 7: Summarize the findings from the literature review and offer outlooks on future developments and potential advancements in the field of EM tracking in medicine. Section 8: Conclusion Description 8: Conclude the paper by summarizing the usefulness of EM tracking in specific medical applications and emphasizing the importance of evaluation and risk analysis in clinical practice.
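The "Fundamentals" and "System Assessment" parts of the outline above revolve around registering tracker space to image space and reporting error metrics such as the RMS fiducial registration error (FRE) quoted in several of the abstracts. As an illustrative sketch only (standard SVD-based point registration applied to hypothetical fiducials, not code from any cited system), the computation can look like this:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) fit mapping src -> dst.

    Classic SVD-based point registration (Kabsch/Horn). Returns R, t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after applying (R, t)."""
    residual = (np.asarray(src) @ R.T + t) - np.asarray(dst)
    return np.sqrt(np.mean(np.sum(residual**2, axis=1)))

# Hypothetical fiducials: tracker-space points and their image-space positions.
rng = np.random.default_rng(2)
tracker_pts = rng.uniform(-50, 50, (6, 3))          # mm
theta = 0.3
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
true_t = np.array([5.0, -2.0, 10.0])
image_pts = tracker_pts @ true_R.T + true_t + rng.normal(0, 0.5, (6, 3))

R, t = rigid_register(tracker_pts, image_pts)
print("FRE (mm):", round(fre(tracker_pts, image_pts, R, t), 3))
```

FRE measured on the fiducials themselves tends to underestimate the clinically more relevant target registration error (TRE) at points away from the fiducials, which is one of the validation issues such a review typically discusses.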
Project scheduling under uncertainty: Survey and research potentials
12
--- paper_title: One-machine rescheduling heuristics with efficiency and stability as criteria paper_content: Abstract Heuristics for the problem of rescheduling a machine on occurrence of an unforeseen disruption are developed. The criteria include minimization of the makespan (schedule efficiency) and the impact of the schedule change (schedule stability). The impact of schedule change is a non-regular performance measure defined in two ways: (1) the starting time deviations between the new schedule and the original schedule, and (2) a measure of the sequence difference between the two schedules. Three local search procedures are developed for the bicriterion problem and a set of experiments are conducted to test the efficacy of the heuristics. The heuristic solutions are shown to be effective in that the schedule stability can be increased significantly with little or no sacrifice in makespan. --- paper_title: Predictable scheduling of a job shop subject to breakdowns paper_content: Schedule modification may delay or render infeasible the execution of external activities planned on the basis of the predictive schedule. Thus it is of interest to develop predictive schedules which can absorb disruptions without affecting planned external activities, while maintaining high shop performance. We present a predictable scheduling approach where the predictive schedule is built with such objectives. The procedure inserts additional idle time into the schedule to absorb the impacts of breakdowns. The amount and location of the additional idle time is determined from the breakdown and repair distributions as well as the structure of the predictive schedule. The effects of disruptions on planned support activities are measured by the deviations of job completion times in the realized schedule from those in the predictive schedule. We apply our approach to minimizing maximum lateness in a job shop environment with random machine breakdowns, and show that it provides high predictability with minor sacrifices in shop performance. --- paper_title: Scheduling of Projects with Stochastic Evolution Structure paper_content: The so-called classical project networks used by the network techniques CPM, PERT, and MPM, only allow for modelling projects whose evolution in time is uniquely specified in advance (cf. Elmaghraby 1977 and Moder et al. 1983). Here, each project activity is carried out exactly once during a single project execution and it is not possible to return to activities previously performed (that is, no feedback is permitted). Many practical projects, however, do not meet those conditions, for example, R&D projects and projects in production management where quality control is included and thus some feedback may occur. --- paper_title: Stochastic network project scheduling with non-consumable limited resources paper_content: Abstract This paper presents a newly developed resource constrained scheduling model for a PERT type project. Several non-consumable activity related resources, such as machines or manpower, are imbedded in the model. Each activity in a project requires resources of various types with fixed capacities. Each type of resource is in limited supply with a resource limit that is fixed at the same level throughout the project duration. For each activity, its duration is a random variable with given density function. The problem is to determine starting time values Sij for each activity (i,j) entering the project, i.e., the timing of feeding-in resources for that activity. 
Values Sij are not calculated beforehand and are random values conditional on our decisions. The model's objective is to minimize the expected project duration. Determination of values Sij is carried out at decision points when at least one activity is ready to be operated and there are free available resources. If, at a certain point of time, more than one activity is ready to be operated but the available amount of resources is limited, a competition among the activities is carried out in order to choose those activities which can be supplied by the resources and which have to be operated first. We suggest carrying out the competition by solving a zero-one integer programming problem to maximize the total contribution of the accepted activities to the expected project duration. For each activity, its contribution is the product of the average duration of the activity and its probability of being on the critical path in the course of the project's realization. Those probability values are calculated via simulation. Solving a zero-one integer programming problem at each decision point results in the following policy: the project management takes all measures to first operate those activities that, being realized, have the greatest effect of decreasing the expected project duration. Only afterwards, does the management take care of other activities. A heuristic algorithm for resource constrained project scheduling is developed. A numerical example is presented. --- paper_title: Probe Backtrack Search for Minimal Perturbation in Dynamic Scheduling paper_content: This paperdescribes an algorithm designed to minimally reconfigure schedulesin response to a changing environment. External factors havecaused an existing schedule to become invalid, perhaps due tothe withdrawal of resources, or because of changes to the setof scheduled activities. The total shift in the start and endtimes of already scheduled activities should be kept to a minimum.This optimization requirement may be captured using a linearoptimization function over linear constraints. However, the disjunctivenature of the resource constraints impairs traditional mathematicalprogramming approaches. The unimodular probing algorithm interleavesconstraint programming and linear programming. The linear programmingsolver handles only a controlled subset of the problem constraints,to guarantee that the values returned are discrete. Using probebacktracking, a complete, repair-based method for search, thesevalues are simply integrated into constraint programming. Unimodularprobing is compared with alternatives on a set of dynamic schedulingbenchmarks, demonstrating its effectiveness.In the final discussion, we conjecture that analogous probebacktracking strategies may obtain performance improvements overconventional backtrack algorithms for a broad range of constraintsatisfaction and optimization problems. --- paper_title: A polynomial activity insertion algorithm in a multi-resource schedule with cumulative constraints and multiple modes paper_content: Abstract In this paper, a polynomial activity insertion algorithm in a multi-resource schedule with cumulative constraints, general precedence relations and multiple modes is proposed. Insertion objective is to minimize the resulting impact on maximum lateness, while keeping some essential characteristics of the initial schedule. A new disjunctive arc-based representation with multiple capacities associated to resource arcs is proposed. 
Under specific constraints, some simple dominance rules make it possible to find an optimal insertion position in the digraph with low computational requirements. --- paper_title: Matchup Scheduling with Multiple Resources, Release Dates and Disruptions paper_content: This paper considers the rescheduling of operations with release dates and multiple resources when disruptions prevent the use of a preplanned schedule. The overall strategy is to follow the preschedule until a disruption occurs. After a disruption, part of the schedule is reconstructed to match up with the preschedule at some future time. Conditions are given for the optimality of this approach. A practical implementation is compared with the alternatives of preplanned static scheduling and myopic dynamic scheduling. A set of practical test problems demonstrates the advantages of the matchup approach. We also explore the solution of the matchup scheduling problem and show the advantages of an integer programming approach for allocating resources to jobs. --- paper_title: Rescheduling manufacturing systems: A framework of strategies, policies, and methods paper_content: Many manufacturing facilities generate and update production schedules, which are plans that state when certain controllable activities (e.g., processing of jobs by resources) should take place. Production schedules help managers and supervisors coordinate activities to increase productivity and reduce operating costs. Because a manufacturing system is dynamic and unexpected events occur, rescheduling is necessary to update a production schedule when the state of the manufacturing system makes it infeasible. Rescheduling updates an existing production schedule in response to disruptions or other changes. Though many studies discuss rescheduling, there are no standard definitions or classification of the strategies, policies, and methods presented in the rescheduling literature. This paper presents definitions appropriate for most applications of rescheduling manufacturing systems and describes a framework for understanding rescheduling strategies, policies, and methods. This framework is based on a wide variety of experimental and practical approaches that have been described in the rescheduling literature. The paper also discusses studies that show how rescheduling affects the performance of a manufacturing system, and it concludes with a discussion of how understanding rescheduling can bring closer some aspects of scheduling theory and practice.
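The one-machine rescheduling entry above measures schedule stability in two complementary ways: the total deviation of job start times between the revised and original schedules, and a measure of how much the job sequence itself changed. The sketch below is a minimal illustration, assuming schedules are given as dictionaries of job start times; it uses a simple pairwise order-inversion count as a stand-in for the paper's sequence-difference measure, which may be defined differently.

```python
from itertools import combinations

def start_time_deviation(original, revised):
    """Sum of |revised start - original start| over all jobs.

    original, revised: dicts mapping job id -> start time.
    """
    return sum(abs(revised[j] - original[j]) for j in original)

def sequence_difference(original, revised):
    """Number of job pairs whose relative order differs between the two schedules.

    Illustrative proxy only; the exact metric in the cited paper may differ.
    """
    jobs = list(original)
    return sum(
        1
        for a, b in combinations(jobs, 2)
        if (original[a] < original[b]) != (revised[a] < revised[b])
    )

# Hypothetical 4-job example: a disruption delays job "B" and swaps it with "C".
orig = {"A": 0, "B": 3, "C": 5, "D": 8}
new = {"A": 0, "B": 6, "C": 3, "D": 9}
print(start_time_deviation(orig, new))  # 6
print(sequence_difference(orig, new))   # 1 (only the B-C pair changed order)
```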
--- paper_title: Match-up scheduling under a machine breakdown paper_content: When a machine breakdown forces a modified flow shop (MFS) out of the prescribed state, the proposed strategy reschedules part of the initial schedule to match up with the preschedule at some point. The objective is to create a new schedule that is consistent with the other production planning decisions like material flow, tooling and purchasing by utilizing the time critical decision making concept. We propose a new rescheduling strategy and a match-up point determination procedure through a feedback mechanism to increase both the schedule quality and stability. The proposed approach is compared with alternative reactive scheduling methods under different experimental settings. --- paper_title: Analysis of reactive scheduling problems in a job shop environment paper_content: Abstract In this paper, we study the reactive scheduling problems in a stochastic manufacturing environment. Specifically, we test the several scheduling policies under machine breakdowns in a classical job shop system. In addition, we measure the effect of system size and type of work allocation (uniform and bottleneck) on the system performance. The performance of the system is measured for the mean tardiness and makespan criteria. We also investigate a partial scheduling scheme under both deterministic and stochastic environments for several system configurations. --- paper_title: Rescheduling of identical parallel machines under machine eligibility constraints paper_content: Abstract In this study, we address a rescheduling problem in parallel machine environments under machine eligibility constraints. We consider total flow time as efficiency measure and the number of jobs processed on different machines in the initial and revised schedules as a stability measure. We present an optimizing algorithm for minimizing the stability measure subject to the constraint that the efficiency measure is at its minimum level. We then propose several heuristic procedures to generate a set of approximate efficient schedules relative to efficiency and stability measures. --- paper_title: Knowledge-based reactive scheduling paper_content: Abstract Reactive scheduling has emerged as a new concept in production planning and control over the past few years. It is attracting the increased interest of both academic and industrial researchers in developing available knowledge-based techniques in real-time shop floor control applications and providing advanced tools for subsequent industrial applications. In this paper, we provide an overview of research results in the domain of knowledge-based reactive scheduling and some related industrial applications. Since reactive scheduling is a new and not well-defined paradigm, we start by examining some definitions of the problem given by different practitioners in the field. We then examine alternative knowledge-representation technologies and reasoning approaches which, because of t heir flexibility and reactive capability, are often applied in real-time decision-making environments. This is followed by a review of some reported industrial applications, and a summary on major areas for further researc... --- paper_title: Reactive Scheduling Systems paper_content: In most practical environments, scheduling is an ongoing reactive process where evolving and changing circumstances continually force reconsideration and revision of pre-established plans. 
Scheduling research has traditionally ignored this “process view” of the problem, focusing instead on optimization of performance under idealized assumptions of environmental stability and solution executability. In this paper, we present work aimed at the development of reactive scheduling systems, which approach scheduling as a problem of maintaining a prescriptive solution over time, and emphasize objectives (e.g., solution continuity, system responsiveness) which relate directly to effective development and use of schedules in dynamic environments. We describe OPIS, a scheduling system designed to incrementally revise schedules in response to changes to solution constraints. OPIS implements a constraint-directed approach to reactive scheduling. Constraint analysis is used to prioritize outstanding problems in the current schedule, identify important modification goals, and estimate the possibilities for efficient and non-disruptive schedule modification. This information, in turn, provides a basis for selecting among a set of alternative modification actions, which differ in conflict resolution and schedule improvement capabilities, computational requirements and expected disruptive effects. --- paper_title: Project scheduling : a research handbook paper_content: Scope and Relevance of Project Scheduling.- The Project Scheduling Process.- Classification of Project Scheduling Problems.- Temporal Analysis: The Basic Deterministic Case.- Temporal Analysis: Advanced Topics.- The Resource-Constrained Project Scheduling Problem.- Resource-Constrained Scheduling: Advanced Topics.- Project Scheduling with Multiple Activity Execution Modes.- Stochastic Project Scheduling.- Robust and Reactive Scheduling. --- paper_title: Preselective strategies for the optimization of stochastic project networks under resource constraints paper_content: This article deals with a stochastic version of the optimization problem for project networks under resource constraints. In this, activity durations are assumed to be realized according to some joint probability distribution and the aim of optimization is to minimize the expected overall project cost (monotonically increasing with project duration). Certain strategies are known that constitute feasible solutions to this problem, the best studied of which are the so-called ES strategies (“earliest start” with regard to fixed project structures). In this paper, a considerably broader class of strategies is introduced, namely preselective strategies. It is shown that this generalization, for which an algorithmic approach remains possible, preserves almost all the desirable behavior known for ES strategies. In particular, the number of “essential” strategies remains finite and even minimal optimum-determining sets of such strategies can, in general, be characterized. Also, the analytic behavior is still proper and there is considerable “stability” to weak convergence of the joint distribution of activity durations as well as to a. e. convergence of the cost function. Last but not least, possible generalization to arbitrary regular cost functions is again imminent. --- paper_title: Understanding Simulation Solutions to Resource Constrained Project Scheduling Problems with Stochastic Task Durations paper_content: AbstractThe project scheduling problem domain is an important research and applications area of engineering management. 
Recently introduced project scheduling software such as Risk+, @Risk for Project, SCRAM and Risk Master have facilitated the use of simulation to solve project scheduling problems with stochastic task durations. Practitioners, however, should be made aware that the solution algorithm used in these software systems is based on the implicit assumption of perfect information, an assumption that jeopardizes the feasibility of solution results. This paper discusses the impact of assuming perfect information, introduces a multi-period stochastic programming based model of the project scheduling problem with stochastic task durations, and presents an alternative simulation algorithm that does not assume the availability of perfect information. A simple case study is used to illustrate the practical implications of applying simulation to address project scheduling problems with stochastic task d... --- paper_title: Scheduling tasks with AND/OR precedence constraints paper_content: In traditional precedence-constrained scheduling a task is ready to execute when all its predecessors are complete. Such a task is called an AND task. The authors allow certain tasks, known as OR tasks, to be ready when just one of their predecessors is complete. They analyze the complexity of two types of real-time AND/OR task scheduling problems. In the first type of problem, all predecessors of every OR task must eventually be completed, but in the second type of problem, some OR predecessors may be left unscheduled. The authors present two priority-driven heuristic algorithms that may be used to schedule AND/OR task systems on m processors to minimize completion time, and analyze the worst-case performance of these algorithms. > --- paper_title: Scheduling of project networks paper_content: The paper deals with the network optimization problem of minimizing regular project cost subject to an arbitrary precedence relation on the sets of activities and to arbitrarily many resource constraints. The treatment is done via a purely structural approach that considerably extends the disjunctive graph concept. It is based on so-called feasible posets and includes a quite deep and useful representation theorem. This theorem permits many insights concerning the analytical behaviour of the optimal value function, the description and counting of all essentially different optimization problems, the nature of Graham anomalies, connections with the on-line stochastic generalizations, and several others. In addition, it also allows the design of a quite powerful class of branch-and-bound algorithms for such problems, which is based on an iterative construction of feasible posets. Using so-called distance matrices, this approach permits the restriction of the exponential part of the algorithm to the often comparatively small set of ‘resource and cost essential’ jobs. The paper reports on computational experience with this algorithm for examples from the building industry and includes a rough comparison with the integer programming approach by Talbot and Patterson. --- paper_title: Using tabu search to schedule activities of stochastic resource-constrained projects paper_content: Abstract In this paper, a higher level heuristic procedure “tabu search” is proposed to provide good solutions to resource-constrained, randomized activity duration project scheduling problems. Our adaptation of tabu search uses multiple tabu lists, randomized short-term memory, and multiple starting schedules as a means of search diversification. 
The proposed method proves to be an efficient way to find good solutions to both deterministic and stochastic problems. For the deterministic problems, most of the optimal schedules of the test projects investigated are found. Computational results are presented which establish the superiority of tabu search over the existing heuristic algorithms. --- paper_title: Project Scheduling with Stochastic Activity Interruptions paper_content: In this chapter we address the problem of scheduling the activities of a resource-constrained project, some of which may be interrupted by an uncertain amount of time. The resources may be, for example, machines in a jobshop, computers with specialized software packages (as those needed for engineering designs), or highly specialized technicians. --- paper_title: Minimizing weighted tardiness of jobs with stochastic interruptions in parallel machines paper_content: Abstract In this paper, we address the problem of minimizing expected total weighted tardiness of jobs that have stochastic interruptions and that are processed on a set of parallel machines. Our research generalizes the problem of scheduling parallel machines to minimize total weighted tardiness.
The proposed solution method is based on the scatter search methodology and implements an innovative structured combination procedure. Extensive computational testing with more than 400 problem instances shows the merit of the proposed solution method. --- paper_title: A Stochastic Branch-and-Bound Approach to Activity Crashing in Project Management paper_content: Many applications such as project scheduling, workflow modeling, or business process re-engineering incorporate the common idea that a product, task, or service consisting of interdependent time-related activities should be produced or performed within given time limits. In real-life applications, certain measures like the use of additional manpower, the assignment of highly-skilled personnel to specific jobs, or the substitution of equipment are often considered as means of increasing the probability of meeting a due date and thus avoiding penalty costs. This paper investigates the problem of selecting, from a set of possible measures of this kind, the combination of measures that is the most cost-efficient. Assuming stochastic activity durations, the computation of the optimal combination of measures may be very expensive in terms of runtime. In this article, we introduce a powerful stochastic optimization approach to determine a set of efficient measures that crash selected activities in a stochastic activity network. Our approach modifies the conventional Stochastic Branch-and-Bound, using a heuristic--instead of exact methods--to solve the deterministic subproblem. This modification spares computational time and by doing so provides an appropriate method for solving various related applications of combinatorial stochastic optimization. A comparative computational study shows that our approach not only outperforms standard techniques but also definitely improves conventional Stochastic Branch-and-Bound. --- paper_title: Critical path planning under uncertainty paper_content: This paper concerns a CPM network in which individual job times are random variables. Specifically the time for each job consists of a component which is a linear function of the investment (up to some maximum) in that job and a random variable that is independent of the investment. It is desired to find the minimum investment required as a function of expected project completion time. The problem is solved by a cutting plane technique in which the investment allocations yield feasibility cuts. Because of the special structure of this problem, these cuts can be generated by solving a sequence of longest path problems in an acyclic network. --- paper_title: Activity time-cost tradeoffs under time and cost chance constraints paper_content: We describe a stochastic extension of the critical path method time-cost tradeoff model. This extension includes four fundamental formulations of time-cost tradeoff models that represent different assumptions of the effect of the changing performance speed on the frequency distribution parameters of the activity duration, as well as the effect of the random activity duration on the activity cost. The formulations are based on clean fractile methods and therefore avoid the unfeasible necessity for additional groups of experts' estimates for each possible performance speed. Our formulations also enable us to consider both stochastic time and cost parameters. 
We developed several ideas for formulating the relationships between time-cost tradeoffs and two chance constraints for a single activity, the time chance constraint and the cost chance constraint. The developed formulas for multiobjective models can easily be incorporated into time-cost optimization procedures, thus enabling the procedures to optimize cost risks as well as time risks. --- paper_title: A fuzzy set approach to activity scheduling for product development paper_content: Industries need to effectively manage their product development processes to reduce the product development time and cost. Due to incomplete design information at the early stage of product development, the duration of each activity is difficult to estimate accurately. The objective of this research is to develop a methodology to schedule product development projects having imprecise temporal information. The research problem is formulated as a fuzzy constraint satisfaction problem and a new method based on possibility theory is proposed to determine the satisfaction degrees of fuzzy temporal constraints. Based on the proposed method, a fuzzy scheduling procedure is developed to construct a schedule with the least possibility of being late and to maximize the satisfaction degrees of all fuzzy temporal constraints. Moreover, the computational efficiency of the proposed approach is also discussed. The proposed methodology can produce more satisfactory schedules in an uncertain product development environment. --- paper_title: A fuzzy robust scheduling approach for product development projects paper_content: Abstract Efficient scheduling of a product development project is difficult, since a development project is usually unique in nature and high level of design imprecision exists at the early stages of product development. Moreover, risk-averse project managers are often more interested in estimating the risk of a schedule being late over all potential realizations. The objective of this research is to develop a robust scheduling methodology based on fuzzy set theory for uncertain product development projects. The imprecise temporal parameters involved in the project are represented by fuzzy sets. A measure of schedule robustness based on qualitative possibility theory is proposed to guide the search process to determine the robust schedule; i.e., the schedule with the best worst-case performance. A genetic algorithm approach is developed for solving the problem with acceptable performance. An example of electronic product development project is used to illustrate the concept developed. --- paper_title: Uncertainty Modelling in Software Development Projects (With Case Study) paper_content: A project scheduling model tailored specifically for software development projects is proposed in this study. The model incorporates uncertainties related to activity durations and network topology. The first type of uncertainty exists due to error-prone coding which might result in elongated task durations caused by validation and debugging sessions. Furthermore, in practice, macro-activities represent groups of sub-tasks in order to simplify the planning and monitoring of the project. Due to the aggregation, it is more difficult to be precise on the duration of a macro-activity. --- paper_title: Stochastic Versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncertainty paper_content: I. The General Framework.- 1. Multiobjective programming under uncertainty : scope and goals of the book.- 2. 
Multiobjective programming : basic concepts and approaches.- 3. Stochastic programming : numerical solution techniques by semi-stochastic approximation methods.- 4. Fuzzy programming : a survey of recent developments.- II. The Stochastic Approach.- 1. Overview of different approaches for solving stochastic programming problems with multiple objective functions.- 2. "STRANGE" : an interactive method for multiobjective stochastic linear programming, and "STRANGE-MOMIX" : its extension to integer variables.- 3. Application of STRANGE to energy studies.- 4. Multiobjective stochastic linear programming with incomplete information : a general methodology.- 5. Computation of efficient solutions of stochastic optimization problems with applications to regression and scenario analysis.- III. The Fuzzy Approach.- 1. Interactive decision-making for multiobjective programming problems with fuzzy parameters.- 2. A possibilistic approach for multiobjective programming problems. Efficiency of solutions.- 3. "FLIP" : an interactive method for multiobjective linear programming with fuzzy coefficients.- 4. Application of "FLIP" method to farm structure optimization under uncertainty.- 5. "FULPAL" : an interactive method for solving (multiobjective) fuzzy linear programming problems.- 6. Multiple objective linear programming problems in the presence of fuzzy coefficients.- 7. Inequality constraints between fuzzy numbers and their use in mathematical programming.- 8. Using fuzzy logic with linguistic quantifiers in multiobjective decision making and optimization: A step towards more human-consistent models.- IV. Stochastic Versus Fuzzy Approaches and Related Issues.- 1. Stochastic versus possibilistic multiobjective programming.- 2. A comparison study of "STRANGE" and "FLIP".- 3. Multiobjective mathematical programming with inexact data. --- paper_title: Project scheduling : a research handbook paper_content: Scope and Relevance of Project Scheduling.- The Project Scheduling Process.- Classification of Project Scheduling Problems.- Temporal Analysis: The Basic Deterministic Case.- Temporal Analysis: Advanced Topics.- The Resource-Constrained Project Scheduling Problem.- Resource-Constrained Scheduling: Advanced Topics.- Project Scheduling with Multiple Activity Execution Modes.- Stochastic Project Scheduling.- Robust and Reactive Scheduling. --- paper_title: A fuzzy project scheduling approach to minimize schedule risk for product development paper_content: The efficient management of product development projects is important to reduce the required development time and cost. However, each project is unique in nature and the duration of activities involed in a project often cannot be predicted accurately. The uncertainty of activity duration may lead to incorrect scheduling decisions. The objective of this research is to develop a fuzzy scheduling methodology to deal with this problem. Possibility theory is used to model the uncertain and flexible temporal information. The concept of schedule risk is proposed to evaluate the schedule performance. A fuzzy beam search algorithm is developed to determine a schedule with the minimum schedule risk and the start time of each activity is selected to maximize the minimum satisfaction degrees of all temporal constraints. In addition, the properties of schedule risk are also discussed. We show that the proposed methodology can assist project managers in selecting a schedule with the least possibility of being late in an uncertain scheduling environment. 
An example with an electronic product development project is used to illustrate the developed approach. --- paper_title: Fuzzy project scheduling system for software development paper_content: Abstract This paper presents an FPS (Fuzzy Project Scheduling) decision support system applied to software project scheduling. The purpose of the FPS system is to allocate resources (software engineers) among dependent activities (system design, user interface design, modules implementation, modules integration and tests), taking into account one of two criteria: project completion time and maximum lateness, under uncertain time parameters of activities. By time parameters we understand durations, ready times and due dates of particular activities. Uncertainty of these parameters is modelled by means of L-R fuzzy numbers. A general procedure for transforming the fuzzy scheduling problem into a set of associate deterministic problems is based on the use of α-cuts. Optimistic and pessimistic schedules are heuristically generated for given α-levels. Aggregation of optimistic and pessimitic values of a minimized criterion for all α-levels gives a fuzzy result. Comparison of fuzzy result is based on a compensation of areas determined by the membership functions. The FPS system is described and its application to software project management is presented on a real example. --- paper_title: Fuzzy priority heuristics for project scheduling paper_content: Abstract This paper presents a generalization of the known priority heuristic method for solving resource-constrained project scheduling problems (RCPS) with uncertain time parameters. The generalization consists of handling fuzzy time parameters instead of crisp ones. In order to create priority lists, a fuzzy ordering procedure has been proposed. The serial and parallel scheduling procedures which usually operate on these lists have also been extended to handle fuzzy time parameters. The performance of the method is presented on an example problem. --- paper_title: Reactive Scheduling - Improving the Robustness of Schedules and Restricting the Effects of Shop Floor Disturbances by Fuzzy Reasoning paper_content: Abstract Practical scheduling usually has to reach to many unpredictable events and uncertainties in the production environment. Although often possible in theory, it is undesirable to reschedule from scratch in such cases. Since the surrounding organization will be prepared for the predicted schedule, it is important to change only those features of the schedule that are necessary. We show how, on one side, fuzzy logic can be used to support the construction of schedules that are robust with respect to changes due to certain types of event. On the other side, we show how a reaction can be restricted to a small environment by means of fuzzy constraints and a repair-based problem-solving strategy. We demonstrate the proposed representation and problem-solving method by introducing a scheduling application in a steelmaking plant. We construct a preliminary schedule by taking into account only the most likely duration of operations. This schedule is iteratively "repaired" until some threshold evaluation is found. A repair is found with a local search procedure based on Tabu Search. Finally, we show which events can lead to reactive scheduling and how this is supported by the repair strategy. 
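Several of the fuzzy approaches above, notably the FPS system, transform a fuzzy scheduling problem into a family of deterministic problems by taking α-cuts of the fuzzy activity durations and solving optimistic and pessimistic variants at each level. The sketch below is a deliberately simplified illustration, assuming triangular fuzzy durations and a serial chain of activities with no resource constraints; the function names and example data are invented.

```python
def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number tri = (a, m, b) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def makespan_bounds(fuzzy_durations, alpha):
    """Optimistic/pessimistic makespan of a serial chain of activities.

    fuzzy_durations: list of triangular fuzzy durations (a, m, b).
    With activities in series, the makespan bounds are simply the sums of the
    lower and upper duration bounds at the chosen alpha level.
    """
    cuts = [alpha_cut(d, alpha) for d in fuzzy_durations]
    optimistic = sum(lo for lo, _ in cuts)
    pessimistic = sum(hi for _, hi in cuts)
    return optimistic, pessimistic

# Hypothetical three-activity development project (durations in days).
durations = [(2, 3, 5), (4, 6, 9), (1, 2, 4)]
for alpha in (0.0, 0.5, 1.0):
    print(alpha, makespan_bounds(durations, alpha))
```

Aggregating such optimistic and pessimistic results over several α-levels is what yields the fuzzy makespan used for comparing schedules in the FPS-style approaches.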
--- paper_title: On the optimal management of project risk paper_content: The uncertainty of project networks has been mainly considered as the randomness of duration of the activities. However, another major problem for project managers is the uncertainty due to the randomness of the amount of resources required by each activity which can be expressed by the randomness of its cost. Such randomness can seriously affect the discounted cost of the project and it may be strongly correlated with the duration of the activity.In this paper, a model considering the randomness of both the cost and the duration of each activity is introduced and the problem of project scheduling is studied in terms of the project's discounted cost and of the risk of not meeting its completion time. The adoption of the earliest (latest) starting time for each activity decreases (increases) the risk of delays but increases (decreases) the discounted cost of the project. Therefore, an optimal compromise has to be achieved. This problem of optimization is studied in terms of the probability of the duration and of the discounted cost of the project falling outside the acceptable domain (Risk function) using the concept of float factor as major decision variable. This last concept is proposed to help the manager to synthetize the large number of the decision variables representing each schedule for the studied project. Numerical results are also presented for a specific project network. --- paper_title: ROBUSTNESS MEASURES AND ROBUST SCHEDULING FOR JOB SHOPS paper_content: Abstract A robust schedule is defined as a schedule that is insensitive to unforeseen shop floor disturbances given an assumed control policy. In this paper, a definition of schedule robustness is developed which comprises two components: post-disturbance make-span and post-disturbance makespan variability. We have developed robustness measures and robust scheduling methods for the case where a “right-shift” control policy is used. On occurrence of a disruption, the right-shift policy maintains the scheduling sequence while delaying the unfinished jobs as much as necessary to accommodate the disruption. An exact measure of schedule robustness is derived for the case in which only a single disruption occurs within the planning horizon. A surrogate measure is developed for the more complex case in which multiple disruptions may occur. This surrogate measure is then embedded in a genetic algorithm to generate robust schedules for job-shops. Experimental results show that robust schedules significantly outper... --- paper_title: Enhancing real-time schedules to tolerate transient faults paper_content: We present a scheme to guarantee that the execution of real-time tasks can tolerate transient and intermittent faults assuming any queue-based scheduling technique. The scheme is based on reserving sufficient slack: in a schedule such that a task can be re-executed before its deadline without compromising guarantees given to other tasks. Only enough slack is reserved in the schedule to guarantee fault tolerance if at most one fault occurs within a time interval. This results in increased schedulability and a very low percentage of deadline misses even if no restriction is placed on the fault separation. We provide two algorithms to solve the problem of adding fault tolerance to a queue of real-time tasks. The first is a dynamic programming optimal solution and the second is a greedy heuristic which closely approximates the optimal. 
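The transient-fault entry above reserves just enough slack so that any single task can be re-executed before its deadline. The check below is a simplified sketch for a single processor executing a fixed queue of tasks back to back, with invented task data; real schedulers would also account for release times and the assumed minimum separation between faults.

```python
def tolerates_single_fault(tasks):
    """tasks: list of (execution_time, deadline) in schedule order.

    A fault in task k forces one re-execution of k, shifting the finish times
    of tasks k..n by execution_time[k]; the schedule tolerates a single fault
    if every such shifted finish time still meets its deadline.
    """
    finish, finishes = 0, []
    for c, d in tasks:
        finish += c
        finishes.append(finish)

    for k, (c_k, _) in enumerate(tasks):
        # Re-executing task k delays task k and all later tasks by c_k.
        if any(finishes[j] + c_k > tasks[j][1] for j in range(k, len(tasks))):
            return False
    return True

# Hypothetical task sets (execution time, deadline):
print(tolerates_single_fault([(2, 6), (3, 10), (1, 14)]))  # True: enough slack
print(tolerates_single_fault([(2, 3), (3, 6), (1, 7)]))    # False: no room to re-run task 1
```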
--- paper_title: The Shifting Bottleneck Procedure for Job Shop Scheduling paper_content: We describe an approximation method for solving the minimum makespan problem of job shop scheduling. It sequences the machines one by one, successively, taking each time the machine identified as a bottleneck among the machines not yet sequenced. Every time after a new machine is sequenced, all previously established sequences are locally reoptimized. Both the bottleneck identification and the local reoptimization procedures are based on repeatedly solving certain one-machine scheduling problems. Besides this straight version of the Shifting Bottleneck Procedure, we have also implemented a version that applies the procedure to the nodes of a partial search tree. Computational testing shows that our approach yields consistently better results than other procedures discussed in the literature. A high point of our computational testing occurred when the enumerative version of the Shifting Bottleneck Procedure found in a little over five minutes an optimal schedule to a notorious ten machines/ten jobs problem on which many algorithms have been run for hours without finding an optimal solution. --- paper_title: Predictable scheduling of a job shop subject to breakdowns paper_content: Schedule modification may delay or render infeasible the execution of external activities planned on the basis of the predictive schedule. Thus it is of interest to develop predictive schedules which can absorb disruptions without affecting planned external activities, while maintaining high shop performance. We present a predictable scheduling approach where the predictive schedule is built with such objectives. The procedure inserts additional idle time into the schedule to absorb the impacts of breakdowns. The amount and location of the additional idle time is determined from the breakdown and repair distributions as well as the structure of the predictive schedule. The effects of disruptions on planned support activities are measured by the deviations of job completion times in the realized schedule from those in the predictive schedule. We apply our approach to minimizing maximum lateness in a job shop environment with random machine breakdowns, and show that it provides high predictability with minor sacrifices in shop performance. --- paper_title: On the optimal management of project risk paper_content: The uncertainty of project networks has been mainly considered as the randomness of duration of the activities. However, another major problem for project managers is the uncertainty due to the randomness of the amount of resources required by each activity which can be expressed by the randomness of its cost. Such randomness can seriously affect the discounted cost of the project and it may be strongly correlated with the duration of the activity.In this paper, a model considering the randomness of both the cost and the duration of each activity is introduced and the problem of project scheduling is studied in terms of the project's discounted cost and of the risk of not meeting its completion time. The adoption of the earliest (latest) starting time for each activity decreases (increases) the risk of delays but increases (decreases) the discounted cost of the project. Therefore, an optimal compromise has to be achieved. 
This problem of optimization is studied in terms of the probability of the duration and of the discounted cost of the project falling outside the acceptable domain (Risk function) using the concept of float factor as major decision variable. This last concept is proposed to help the manager to synthetize the large number of the decision variables representing each schedule for the studied project. Numerical results are also presented for a specific project network. --- paper_title: Robust scheduling of a two-machine flow shop with uncertain processing times paper_content: This paper focuses on manufacturing environments where job processing times are uncertain. In these settings, scheduling decision makers are exposed to the risk that an optimal schedule with respect to a deterministic or stochastic model will perform poorly when evaluated relative to actual processing times. Since the quality of scheduling decisions is frequently judged as if processing times were known a priori, robust scheduling, i.e., determining a schedule whose performance (compared to the associated optimal schedule) is relatively insensitive to the potential realizations of job processing times, provides a reasonable mechanism for hedging against the prevailing processing time uncertainty. In this paper we focus on a two-machine flow shop environment in which the processing times of jobs are uncertain and the performance measure of interest is system makespan. We present a measure of schedule robustness that explicitly considers the risk of poor system performance over all potential realizations of job processing times. We discuss two alternative frameworks for structuring processing time uncertainty. For each case, we define the robust scheduling problem, establish problem complexity, discuss properties of robust schedules, and develop exact and heuristic solution approaches. Computational results indicate that robust schedules provide effective hedges against processing time uncertainty while maintaining excellent expected makespan performance. --- paper_title: Robust Discrete Optimization and Its Applications paper_content: Preface. 1. Approaches to Handle Uncertainty In Decision Making. 2. A Robust Discrete Optimization Framework. 3. Computational Complexity Results of Robust Discrete Optimization Problems. 4. Easily Solvable Cases of Robust Discrete Optimization Problems. 5. Algorithmic Developments for Difficult Robust Discrete Optimization Problems. 6. Robust 1-Median Location Problems: Dynamic Aspects and Uncertainty. 7. Robust Scheduling Problems. 8. Robust Uncapacitated Network Design and International Sourcing Problems. 9. Robust Discrete Optimization: Past Successes and Future Challenges. --- paper_title: Robust Scheduling to Hedge Against Processing Time Uncertainty in Single-Stage Production paper_content: Schedulers confronted with significant processing time uncertainty often discover that a schedule which is optimal with respect to a deterministic or stochastic scheduling model yields quite poor performance when evaluated relative to the actual processing times. In these environments, the notion of schedule robustness, i.e., determining the schedule with the best worst-case performance compared to the corresponding optimal solution over all potential realizations of job processing times, is a more appropriate guide to schedule selection. In this paper, we formalize the robust scheduling concept for scheduling situations with uncertain or variable processing times. 
To illustrate the development of solution approaches for a robust scheduling problem, we consider a single-machine environment where the performance criterion of interest is the total flow time over all jobs. We define two measures of schedule robustness, formulate the robust scheduling problem, establish its complexity, describe properties of the optimal schedule, and present exact and heuristic solution procedures. Extensive computational results are reported to demonstrate the efficiency and effectiveness of the proposed solution procedures. --- paper_title: RanGen: A Random Network Generator for Activity-on-the-Node Networks paper_content: In this paper, we describe RanGen, a random network generator for generating activity-on-the-node networks and accompanying data for different classes of project scheduling problems. The objective is to construct random networks which satisfy preset values of the parameters used to control the hardness of a problem instance. Both parameters which are related to the network topology and resource-related parameters are implemented. The network generator meets the shortcomings of former network generators since it employs a wide range of different parameters which have been shown to serve as possible predictors of the hardness of different project scheduling problems. Some of them have been implemented in former network generators while others have not. --- paper_title: On the construction of stable project baseline schedules paper_content: The vast majority of project scheduling efforts assume complete information about the scheduling problem to be solved and a static deterministic environment within which the pre-computed baseline schedule will be executed. In reality, however, project activities are subject to considerable uncertainty, which generally leads to numerous schedule disruptions. It is of interest to develop pre-schedules that can absorb disruptions in activity durations without affecting the planning of other activities, such that co-ordination of resources and material procurement for each of the activities can be performed as smoothly as possible. The objective of this paper is to develop and evaluate various approaches for constructing a stable preschedule, which is unlikely to undergo major changes when it needs to be repaired as a reaction to minor activity duration disruptions. --- paper_title: β-Robust scheduling for single-machine systems with uncertain processing times paper_content: In scheduling environments with processing time uncertainty, system performance is determined by both the sequence in which jobs are ordered and the actual processing times of jobs. For these situations, the risk of achieving substandard system performance can be an important measure of scheduling effectiveness. To hedge this risk requires an explicit consideration of both the mean and the variance of system performance associated with alternative schedules, and motivates a β-robustness objective to capture the likelihood that a schedule yields actual performance no worse than a given target level. In this paper we focus on β-robust scheduling issues in single-stage production environments with uncertain processing times. We define a general β-robust scheduling objective, formulate the β-robust scheduling problem that results when job processing times are independent random variables and the performance measure of interest is the total flow time across all jobs, establish problem complexity, and develop exact and heuristic solution approaches. 
We then extend the β-robust scheduling model to consider situations where the uncertainty associated with individual job processing times can be selectively controlled through resource allocation. Computational results are reported to demonstrate the efficiency and effectiveness of the solution procedures. --- paper_title: Scheduling independent tasks to reduce mean finishing time paper_content: Sequencing to minimize mean finishing time (or mean time in system) is not only desirable to the user, but it also tends to minimize at each point in time the storage required to hold incomplete tasks. In this paper a deterministic model of independent tasks is introduced and new results are derived which extend and generalize the algorithms known for minimizing mean finishing time. In addition to presenting and analyzing new algorithms it is shown that the most general mean-finishing-time problem for independent tasks is polynomial complete, and hence unlikely to admit of a non-enumerative solution. --- paper_title: Stability and resource allocation in project planning paper_content: The majority of resource-constrained project scheduling efforts assume perfect information about the scheduling problem to be solved and a static deterministic environment within which the precomputed baseline schedule is executed. In reality, project activities are subject to considerable uncertainty, which generally leads to numerous schedule disruptions. In this paper, we present a resource allocation model that protects a given baseline schedule against activity duration variability. A branch-and-bound algorithm is developed that solves the proposed resource allocation problem. We report on computational results obtained on a set of benchmark problems.
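The β-robust scheduling entries above evaluate a sequence by the probability that its total flow time stays at or below a target level. On a single machine, total flow time is the sum of completion times, so the job in position i (counting from 1) contributes with weight n - i + 1. The sketch below adds a normality assumption purely for illustration (the entries above only require independent processing times) and uses hypothetical data.

```python
import math

def beta_robustness(sequence, target):
    """P(total flow time <= target) for one machine, independent normal times.

    sequence: list of (mean, std) processing-time parameters in processing order.
    Total flow time = sum of completion times = sum over 0-indexed positions i of
    (n - i) * p_i, so under normality it is normal with the moments below.
    """
    n = len(sequence)
    mean = sum((n - i) * mu for i, (mu, _) in enumerate(sequence))
    var = sum(((n - i) * sd) ** 2 for i, (_, sd) in enumerate(sequence))
    z = (target - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical 3-job instance: (mean, std) processing times.
jobs = [(2.0, 0.2), (4.0, 1.5), (3.0, 0.3)]
spt_by_mean = sorted(jobs)  # shortest expected processing time first
print(beta_robustness(spt_by_mean, target=20))  # high: risky job scheduled last
print(beta_robustness(jobs, target=20))         # lower: risky job weighted heavily
```

The example shows why a β-robust sequence can differ from the expected-value-optimal one: placing a high-variance job early inflates the variance of total flow time and lowers the probability of meeting the target.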
--- paper_title: Criticality in Resource Constrained Networks paper_content: Project managers readily adopted the concept of the critical path as an aid to identifying those activities most worthy of their attention and possible action. However, current project management packages do not offer a useful measure of criticality in resource constrained projects. A revised method of calculating resource constrained float is presented, together with a discussion of its use in project management. While resource constrained criticality appears to be a practical and useful tool in the analysis of project networks, care is needed in its interpretation as any calculation of such float is conditional on the particular resource allocation employed. A number of other measures of an activity's importance in a network are described and compared in an application to an aircraft development. A quantitative comparison of the measures is developed based on a simulation of the process of management identifying the key activities and directing their control efforts. Resource constrained float appears to be a useful single measure of an activity's importance, encapsulating several useful pieces of management information. However, there are some circumstances in which other measures might be preferred. --- paper_title: A new method for workshop real time scheduling paper_content: Workshop real time scheduling is one of the key factors in improving manufacturing system efficiency. This is especially true for workshops in which various products are processed simultaneously, and use multipurpose machines. Real time scheduling is appropriate to handle perturbations in the environment of the manufacturing process, a major issue at the shop floor level. The products to be processed have release times and due dates and the resources are multipurpose machines. A decision support system for real time scheduling is described. It is based on an original approach, aiming at searching for characteristics of a set of schedules compatible with the main manufacturing constraints to be satisfied. This set of schedules is obtained by defining sequences of groups of permutable operations for every resource. A method to find such a set is described. We emphasize the use of this group sequence as a decision support system. Significant states and events requiring real time decisions are identified and ... --- paper_title: Characterization of a set of schedules in a multiple resource context paper_content: ABSTRACT For real time scheduling, a decision aid approach, based on the characterization of a set of schedules compatible with the problem constraints, is proposed. This set is specified in terms of sequences of permutable operations on the resources. Generating such a group sequence is described for the scheduling problem where products have to be performed according to a specified routeing with release times and due dates. Each product operation requires a set of resources, each of them being selected inside a pool of several resources. --- paper_title: Sensitivity analysis of scheduling algorithms paper_content: Abstract We are interested in this work in studying the performances of static scheduling policies in presence of on-line disturbances. In the general case, the sensitivity of the schedules, i.e., the degradation of the performance of the solution due to the disturbances, is linear in the magnitude of the perturbation. 
Our main result within this paper is to show that in some scheduling contexts, namely the case of independent tasks, the sensitivity can be guaranteed not to exceed the square root of the magnitude of the perturbation. --- paper_title: A new method for workshop real time scheduling paper_content: Workshop real time scheduling is one of the key factors in improving manufacturing system efficiency. This is especially true for workshops in which various products are processed simultaneously, and use multipurpose machines. Real time scheduling is appropriate to handle perturbations in the environment of the manufacturing process, a major issue at the shop floor level. The products to be processed have release times and due dates and the resources are multipurpose machines. A decision support system for real time scheduling is described. It is based on an original approach, aiming at searching for characteristics of a set of schedules compatible with the main manufacturing constraints to be satisfied. This set of schedules is obtained by defining sequences of groups of permutable operations for every resource. A method to find such a set is described. We emphasize the use of this group sequence as a decision support system. Significant states and events requiring real time decisions are identified and ... --- paper_title: Stochastic network project scheduling with non-consumable limited resources paper_content: Abstract This paper presents a newly developed resource constrained scheduling model for a PERT type project. Several non-consumable activity related resources, such as machines or manpower, are imbedded in the model. Each activity in a project requires resources of various types with fixed capacities. Each type of resource is in limited supply with a resource limit that is fixed at the same level throughout the project duration. For each activity, its duration is a random variable with given density function. The problem is to determine starting time values Sij for each activity (i,j) entering the project, i.e., the timing of feeding-in resources for that activity. Values Sij are not calculated beforehand and are random values conditional on our decisions. The model's objective is to minimize the expected project duration. Determination of values Sij is carried out at decision points when at least one activity is ready to be operated and there are free available resources. If, at a certain point of time, more than one activity is ready to be operated but the available amount of resources is limited, a competition among the activities is carried out in order to choose those activities which can be supplied by the resources and which have to be operated first. We suggest carrying out the competition by solving a zero-one integer programming problem to maximize the total contribution of the accepted activities to the expected project duration. For each activity, its contribution is the product of the average duration of the activity and its probability of being on the critical path in the course of the project's realization. Those probability values are calculated via simulation. Solving a zero-one integer programming problem at each decision point results in the following policy: the project management takes all measures to first operate those activities that, being realized, have the greatest effect of decreasing the expected project duration. Only afterwards, does the management take care of other activities. 
A heuristic algorithm for resource constrained project scheduling is developed. A numerical example is presented. ---
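The decision-point competition described in the stochastic resource-constrained scheduling abstract above can be illustrated with a small sketch: among the activities ready to start, choose the subset that maximizes total contribution (mean duration times estimated criticality probability) without exceeding the free capacity of any resource type. The paper formulates this as a zero-one integer program; the brute-force enumeration below is only an illustrative stand-in for small candidate sets, and all names are hypothetical.

```typescript
// Illustrative sketch of the decision-point selection: enumerate subsets of the
// ready activities, keep only resource-feasible ones, and return the subset with
// the largest total contribution. A real implementation would solve the 0-1
// integer program directly; n is small at a decision point, so 2^n is tolerable here.

interface ReadyActivity {
  id: string;
  meanDuration: number;
  criticalityProb: number;        // estimated via simulation in the paper
  demand: Record<string, number>; // resource type -> units required
}

function selectActivities(
  ready: ReadyActivity[],
  freeCapacity: Record<string, number>
): ReadyActivity[] {
  let best: ReadyActivity[] = [];
  let bestValue = -Infinity;

  for (let mask = 0; mask < (1 << ready.length); mask++) {
    const subset = ready.filter((_, i) => (mask & (1 << i)) !== 0);

    // Feasibility: total demand per resource type must fit in the free capacity.
    const feasible = Object.keys(freeCapacity).every(r =>
      subset.reduce((s, a) => s + (a.demand[r] ?? 0), 0) <= freeCapacity[r]
    );
    if (!feasible) continue;

    // Contribution = mean duration x probability of lying on the critical path.
    const value = subset.reduce((s, a) => s + a.meanDuration * a.criticalityProb, 0);
    if (value > bestValue) {
      bestValue = value;
      best = subset;
    }
  }
  return best;
}
```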
Title: Project scheduling under uncertainty: Survey and research potentials Section 1: Introduction Description 1: Introduce the context and importance of project scheduling, highlighting the challenges and sources of uncertainty. Section 2: Reactive Scheduling Description 2: Discuss the strategies for revising or re-optimizing baseline schedules in response to unexpected events. Section 3: Generating a Baseline Schedule Description 3: Outline the approaches for developing baseline schedules, including considerations for variability and robustness. Section 4: Stochastic Project Scheduling Description 4: Provide an overview of the literature and methodologies for scheduling under stochastic conditions, focusing on resource-constrained problems and scheduling policies. Section 5: Scheduling Policies Description 5: Explain different scheduling policies used in stochastic project scheduling to handle uncertainties in activity durations. Section 6: Stochastic Activity Interruptions Description 6: Discuss the methodologies for handling interruptions and time/cost trade-offs in a stochastic environment. Section 7: Fuzzy Project Scheduling Description 7: Describe the use of fuzzy set theory to model activity durations under imprecision rather than uncertainty. Section 8: Proactive (Robust) Project Scheduling Description 8: Examine techniques for constructing robust schedules that can withstand uncertainties. Section 9: Redundancy-Based Techniques Description 9: Discuss the application of redundancy, either in resources or time, to enhance schedule robustness. Section 10: Robust Machine Scheduling Techniques Description 10: Explore robust scheduling methods and their applications in machine scheduling, highlighting potential adaptations for project environments. Section 11: Sensitivity Analysis Description 11: Review the methods for analyzing schedule sensitivity to parameter changes and their relevance to project scheduling. Section 12: Summary and Suggestions for Further Research Description 12: Summarize the findings and suggest directions for future research in project scheduling under uncertainty.
Animation on the web: a survey
11
--- paper_title: Representing Progressive Dynamic 3D Meshes and Applications paper_content: Dynamic 3D mesh sequences, also called 3D animation, have been widely used in the movie and gaming industries. However, the huge storage requirements of dynamic mesh data make it problematic for a number of applications such as rendering, transmission over a network, and displaying on mobile devices. This paper proposes a multiresolution representation of 3D animation that allows 3D animation to be displayed progressively. The proposed method transforms the traditional 3D animation representation into a progressive representation, which takes up less storage and memory space. The progressive representation is constructed by compressing the animation into a base animation with sequenced refining operators. The base animation, also called a thumbnail animation in this paper, is viewable and can be reconstructed after only a few seconds of transmission; the more detailed animation can then be refined smoothly by applying the refining operators. As a result, the resolution of the animation can change freely in real time, where the resolution can be increased automatically or even controlled by the user. Moreover, the progressive animation is more suitable for transmission and display over the network. --- paper_title: Displaced dynamic expression regression for real-time facial tracking and animation paper_content: We present a fully automatic approach to real-time facial tracking and animation with a single video camera. Our approach does not need any calibration for each individual user. It learns a generic regressor from public image datasets, which can be applied to any user and arbitrary video cameras to infer accurate 2D facial landmarks as well as the 3D facial shape from 2D video frames. The inferred 2D landmarks are then used to adapt the camera matrix and the user identity to better match the facial expressions of the current user. The regression and adaptation are performed in an alternating manner. With more and more facial expressions observed in the video, the whole process converges quickly with accurate facial tracking and animation. In experiments, our approach demonstrates a level of robustness and accuracy on par with state-of-the-art techniques that require a time-consuming calibration step for each individual user, while running at 28 fps on average. We consider our approach to be an attractive solution for wide deployment in consumer-level applications. --- paper_title: Predictive compression of dynamic 3D meshes paper_content: An efficient algorithm for compression of dynamic time-consistent 3D meshes is presented. Such a sequence of meshes contains a large degree of temporal statistical dependencies that can be exploited for compression using DPCM. The vertex positions are predicted at the encoder from a previously decoded mesh. The difference vectors are further clustered in an octree approach. Only a representative for a cluster of difference vectors is further processed, providing a significant reduction of data rate. The representatives are scaled and quantized and finally entropy coded using CABAC, the arithmetic coding technique used in H.264/MPEG4-AVC. The mesh is then reconstructed at the encoder for prediction of the next mesh. In our experiments we compare the efficiency of the proposed algorithm in terms of bit-rate and quality with static mesh coding and interpolator compression, indicating a significant improvement in compression efficiency.
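The DPCM scheme sketched in the last abstract reduces, at its core, to predicting each frame from the previously decoded one and coding quantized residuals. A minimal TypeScript sketch of that core follows; the octree clustering of difference vectors and the CABAC entropy coder used in the paper are deliberately omitted, and the uniform quantization step size is an assumption.

```typescript
// Minimal sketch of temporal DPCM for dynamic meshes: predict frame t from the
// previously *decoded* frame, quantize the residual, and reconstruct. The
// clustering and entropy-coding stages of the cited coder are left out.

type Vec3 = [number, number, number];

function encodeFrame(current: Vec3[], prevDecoded: Vec3[], step: number): Int32Array {
  const symbols = new Int32Array(current.length * 3);
  for (let i = 0; i < current.length; i++) {
    for (let k = 0; k < 3; k++) {
      const residual = current[i][k] - prevDecoded[i][k]; // temporal prediction
      symbols[i * 3 + k] = Math.round(residual / step);   // uniform quantization
    }
  }
  return symbols; // these integer symbols would then be entropy coded
}

function decodeFrame(symbols: Int32Array, prevDecoded: Vec3[], step: number): Vec3[] {
  return prevDecoded.map((p, i) => [
    p[0] + symbols[i * 3 + 0] * step,
    p[1] + symbols[i * 3 + 1] * step,
    p[2] + symbols[i * 3 + 2] * step,
  ] as Vec3);
}

// The decoder's output must also serve as the encoder's reference for the next
// frame, so that quantization drift cannot accumulate over the sequence.
```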
--- paper_title: Static 3D triangle mesh compression overview paper_content: 3D triangle meshes are extremely used to model discrete surfaces, and almost always represented with two tables: one for geometry and another for connectivity. While the raw size of a triangle mesh is of around 200 bits per vertex, by coding cleverly (and separately) those two distinct kinds of information it is possible to achieve compression ratios of 15:1 or more. Different techniques must be used depending on whether single-rate vs. progressive bitstreams are sought; and, in the latter case, on whether or not hierarchically nested meshes are desirable during reconstruction. --- paper_title: Blast: a binary large structured transmission format for the web paper_content: Recent advances in Web technology, especially real-time 3D content using WebGL, require an efficient way to transfer binary data. Images, audio and video have respective HTML tags and accompanying data formats that transparently handle binary transmission and decompression. 3D data, on the other hand, has to be handled explicitly by the client application. In contrast to images, audio and video, 3D data is inhomogeneous and neither common formats nor compression algorithms have been established for the Web. Despite the many existing formats for binary transmission of 3D data none has been able to provide a general binary format for all kinds of 3D data including meshes, textures, animations, and materials. Existing formats are domain-specific and fixed on a certain set of input data and thus too specific to handle other types of data. Blast is a general container format for structured binary transmission on the Web that can be used for all types of 3D scene data. Instead of defining a fixed set of encodings and compression algorithms Blast exploits the code on demand paradigm to provide a simple yet powerful encoder-agnostic basis to leverage existing domain-specific solutions and compression techniques. Because streaming is of primary importance for a good user experience Blast is designed on the basis of self-contained chunks to enable JavaScript clients to utilize Web Workers for parallel decoding and to provide early feedback to the user. --- paper_title: Streaming compressed 3D data on the web using JavaScript and WebGL paper_content: With the development of Web3D technologies, the delivery and visualization of 3D models on the web is now possible and is bound to increase both in the industry and for the general public. However the interactive remote visualization of 3D graphic data in a web browser remains a challenging issue. Indeed, most of existing systems suffer from latency (due to the data downloading time) and lack of adaptation to heterogeneous networks and client devices (i.e. the lack of levels of details); these drawbacks seriously affect the quality of user experience. This paper presents a technical solution for streaming and visualization of compressed 3D data on the web. Our approach leans upon three strong features: (1) a dedicated progressive compression algorithm for 3D graphic data with colors producing a binary compressed format which allows a progressive decompression with several levels of details; (2) the introduction of a JavaScript halfedge data structure allowing complex geometrical and topological operations on a 3D mesh; (3) the multi-thread JavaScript/WebGL implementation of the decompression scheme allowing 3D data streaming in a web browser. 
Experiments and comparison with existing solutions show promising results in terms of latency, adaptability and quality of user experience. --- paper_title: SRC - a streamable format for generalized web-based 3D data transmission paper_content: A problem that still remains with today's technologies for 3D asset transmission is the lack of progressive streaming of all relevant mesh and texture data, with a minimal number of HTTP requests. Existing solutions, like glTF or X3DOM's geometry formats, either send all data within a single batch, or they introduce an unnecessarily large number of requests. Furthermore, there is still no established format for a joined, interleaved transmission of geometry data and texture data. Within this paper, we propose a new container file format, entitled Shape Resource Container (SRC). Our format is optimized for progressive, Web-based transmission of 3D mesh data with a minimum number of HTTP requests. It is highly configurable, and more powerful and flexible than previous formats, as it enables a truly progressive transmission of geometry data, partial sharing of geometry between meshes, direct GPU uploads, and an interleaved transmission of geometry and texture data. We also demonstrate how our new mesh format, as well as a wide range of other mesh formats, can be conveniently embedded in X3D scenes, using a new, minimalistic X3D ExternalGeometry node. --- paper_title: Progressive Animation Sequences paper_content: 3D animations have been widely used in the movie and gaming industries. However, the huge storage requirements of dynamic mesh data make it difficult for a number of applications such as rendering, transmitting over a network, etc. This study proposes a multiresolution representation for 3D animation. The results show that the coarse animation appears by using only 10% of the original faces. More detailed animation appears by using 80% of original faces, but only requires 3% of original storage space. Consequently, this approach enables a continuous level of detail for dynamic 3D mesh sequences. --- paper_title: Mesh Geometry Compression for Mobile Graphics paper_content: This paper presents a compression scheme for mesh geometry, which is suitable for mobile graphics. The main focus is to enable real-time decoding of compressed vertex positions while providing reasonable compression ratios. Our scheme is based on local quantization of vertex positions with mesh partitioning. To prevent visual seams along the partitioning boundaries, we constrain the locally quantized cells of all mesh partitions to have the same size and aligned local axes. We propose a mesh partitioning algorithm to minimize the size of locally quantized cells, which relates to the distortion of a restored mesh. Vertex coordinates are stored in main memory and transmitted to graphics hardware for rendering in the quantized form, saving memory space and system bus bandwidth. Decoding operation is combined with model geometry transformation, and the only overhead to restore vertex positions is one matrix multiplication for each mesh partition. --- paper_title: Feature Oriented Progressive Lossless Mesh Coding paper_content: A feature-oriented generic progressive lossless mesh coder (FOLProM) is proposed to encode triangular meshes with arbitrarily complex geometry and topology. In this work, a sequence of levels of detail (LODs) is generated through iterative vertex set split and bounding volume subdivision.
The incremental geometry and connectivity updates associated with each vertex set split and/or bounding volume subdivision are entropy coded. Due to the visual importance of sharp geometric features, the whole geometry coding process is optimized for a better presentation of geometric features, especially at low coding bitrates. Feature-oriented optimization in FOLProM is performed in hierarchy control and adaptive quantization. Efficient coordinate representation and prediction schemes are employed to reduce the entropy of data significantly. Furthermore, a simple yet efficient connectivity coding scheme is proposed. It is shown that FOLProM offers a significant rate-distortion (R-D) gain over the prior art, which is especially obvious at low bitrates. --- paper_title: Progressive Animation Sequences paper_content: 3D animations have been widely used in the movie and gaming industries. However, the huge storage requirements of dynamic mesh data make it difficult for a number of applications such as rendering, transmitting over a network, etc. This study proposes a multiresolution representation for 3D animation. The results show that the coarse animation appears by using only 10% of the original faces. More detailed animation appears by using 80% of original faces, but only requires 3% of original storage space. Consequently, this approach enables a continuous level of detail for dynamic 3D mesh sequences. --- paper_title: Geometry videos: a new representation for 3D animations paper_content: We present the "Geometry Video," a new data structure to encode animated meshes. Being able to encode animated meshes in a generic source-independent format allows people to share experiences. Changing the viewpoint allows more interaction than the fixed view supported by 2D video. Geometry videos are based on the "Geometry Image" mesh representation introduced by Gu et al. [4]. Our novel data structure provides a way to treat an animated mesh as a video sequence (i.e., 3D image) and is well suited for network streaming. This representation also offers the possibility of applying and adapting existing mature video processing and compression techniques (such as MPEG encoding) to animated meshes. This paper describes an algorithm to generate geometry videos from animated meshes. The main insight of this paper is that Geometry Videos re-sample and re-organize the geometry information in such a way that it becomes very compressible. They provide a unified and intuitive method for level-of-detail control, both in terms of mesh resolution (by scaling the two spatial dimensions) and of frame rate (by scaling the temporal dimension). Geometry Videos have a very uniform and regular structure. Their resource and computational requirements can be calculated exactly, hence making them also suitable for applications requiring level of service guarantees. --- paper_title: Progressive compression for lossless transmission of triangle meshes paper_content: Lossless transmission of 3D meshes is a very challenging and timely problem for many applications, ranging from collaborative design to engineering. Additionally, frequent delays in transmissions call for progressive transmission in order for the end user to receive useful successive refinements of the final mesh. In this paper, we present a novel, fully progressive encoding approach for lossless transmission of triangle meshes with a very fine granularity.
A new valence-driven decimating conquest, combined with patch tiling and an original strategic retriangulation is used to maintain the regularity of valence. We demonstrate that this technique leads to good mesh quality, near-optimal connectivity encoding, and therefore a good rate-distortion ratio throughout the transmission. We also improve upon previous lossless geometry encoding by decorrelating the normal and tangential components of the surface. For typical meshes, our method compresses connectivity down to less than 3.7 bits per vertex, 40% better in average than the best methods previously reported [5, 18]; we further reduce the usual geometry bit rates by 20% in average by exploiting the smoothness of meshes. Concretely, our technique can reduce an ascii VRML 3D model down to 1.7% of its size for a 10-bit quantization (2.3% for a 12-bit quantization) while providing a very progressive reconstruction. --- paper_title: A progressive view-dependent technique for interactive 3-D mesh transmission paper_content: A view-dependent graphics streaming scheme is proposed in this work that facilitates interactive streaming and browsing of three-dimensional (3-D) graphics models. First, a 3-D model is split into several partitions. Second, each partition is simplified and coded independently. Finally, the compressed data is sent in order of relevance to the user's requests to maximize visual quality. Specifically, the server can transmit visible parts in detail, while cutting out invisible parts. Experimental results demonstrate that the proposed algorithm reduces the required transmission bandwidth, and provides an acceptable visual quality even at low bit rates. --- paper_title: Static 3D triangle mesh compression overview paper_content: 3D triangle meshes are extremely used to model discrete surfaces, and almost always represented with two tables: one for geometry and another for connectivity. While the raw size of a triangle mesh is of around 200 bits per vertex, by coding cleverly (and separately) those two distinct kinds of information it is possible to achieve compression ratios of 15:1 or more. Different techniques must be used depending on whether single-rate vs. progressive bitstreams are sought; and, in the latter case, on whether or not hierarchically nested meshes are desirable during reconstruction. --- paper_title: Geometry images paper_content: Surface geometry is often modeled with irregular triangle meshes. The process of remeshing refers to approximating such geometry using a mesh with (semi)-regular connectivity, which has advantages for many graphics applications. However, current techniques for remeshing arbitrary surfaces create only semi-regular meshes. The original mesh is typically decomposed into a set of disk-like charts, onto which the geometry is parametrized and sampled. In this paper, we propose to remesh an arbitrary surface onto a completely regular structure we call a geometry image. It captures geometry as a simple 2D array of quantized points. Surface signals like normals and colors are stored in similar 2D arrays using the same implicit surface parametrization --- texture coordinates are absent. To create a geometry image, we cut an arbitrary mesh along a network of edge paths, and parametrize the resulting single chart onto a square. Geometry images can be encoded using traditional image compression algorithms, such as wavelet-based coders. 
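Several of the coders above quantize vertex coordinates to a fixed bit budget (10 or 12 bits in the lossless progressive coder, locally per partition in the mobile scheme) before prediction and entropy coding. The following sketch shows the basic global variant, quantizing positions uniformly inside the mesh bounding box; the cited papers differ in how the quantization cells are chosen, so this is illustrative only.

```typescript
// Minimal sketch of global, bounding-box quantization of vertex positions to a
// given bit depth. Locally quantized or adaptive variants from the cited papers
// refine this idea but follow the same scale-and-round pattern.

type Vec3 = [number, number, number];

function quantizePositions(verts: Vec3[], bits: number) {
  const min: Vec3 = [Infinity, Infinity, Infinity];
  const max: Vec3 = [-Infinity, -Infinity, -Infinity];
  for (const v of verts) {
    for (let k = 0; k < 3; k++) {
      min[k] = Math.min(min[k], v[k]);
      max[k] = Math.max(max[k], v[k]);
    }
  }
  const levels = (1 << bits) - 1; // largest quantized index per axis
  const quantized = verts.map(v =>
    [0, 1, 2].map(k => {
      const extent = max[k] - min[k] || 1; // guard against degenerate axes
      return Math.round(((v[k] - min[k]) / extent) * levels);
    }) as Vec3
  );
  // min/max must travel with the stream so the decoder can dequantize:
  // x = min + (q / levels) * extent
  return { quantized, min, max };
}
```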
--- paper_title: 3D graphics on the web: A survey paper_content: Abstract In recent years, 3D graphics has become an increasingly important part of the multimedia web experience. Following on from the advent of the X3D standard and the definition of a declarative approach to presenting 3D graphics on the web, the rise of WebGL has allowed lower level access to graphics hardware of ever increasing power. In parallel, remote rendering techniques permit streaming of high-quality 3D graphics onto a wide range of devices, and recent years have also seen much research on methods of content delivery for web-based 3D applications. All this development is reflected in the increasing number of application fields for the 3D web. In this paper, we reflect this activity by presenting the first survey of the state of the art in the field. We review every major approach to produce real-time 3D graphics rendering in the browser, briefly summarise the approaches for remote rendering of 3D graphics, before surveying complementary research on data compression methods, and notable application fields. We conclude by assessing the impact and popularity of the 3D web, reviewing the past and looking to the future. --- paper_title: View-dependent refinement of progressive meshes paper_content: Level-of-detail (LOD) representations are an important tool for realtime rendering of complex geometric environments. The previously introduced progressive mesh representation defines for an arbitrary triangle mesh a sequence of approximating meshes optimized for view-independent LOD. In this paper, we introduce a framework for selectively refining an arbitrary progressive mesh according to changing view parameters. We define efficient refinement criteria based on the view frustum, surface orientation, and screen-space geometric error, and develop a real-time algorithm for incrementally refining and coarsening the mesh according to these criteria. The algorithm exploits view coherence, supports frame rate regulation, and is found to require less than 15% of total frame time on a graphics workstation. Moreover, for continuous motions this work can be amortized over consecutive frames. In addition, smooth visual transitions (geomorphs) can be constructed between any two selectively refined meshes. A number of previous schemes create view-dependent LOD meshes for height fields (e.g. terrains) and parametric surfaces (e.g. NURBS). Our framework also performs well for these special cases. Notably, the absence of a rigid subdivision structure allows more accurate approximations than with existing schemes. We include results for these cases as well as for general meshes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation Display algorithms; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling surfaces and object representations. Additional --- paper_title: Surface simplification using quadric error metrics paper_content: Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. 
By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations --- paper_title: Blast: a binary large structured transmission format for the web paper_content: Recent advances in Web technology, especially real-time 3D content using WebGL, require an efficient way to transfer binary data. Images, audio and video have respective HTML tags and accompanying data formats that transparently handle binary transmission and decompression. 3D data, on the other hand, has to be handled explicitly by the client application. In contrast to images, audio and video, 3D data is inhomogeneous and neither common formats nor compression algorithms have been established for the Web. Despite the many existing formats for binary transmission of 3D data none has been able to provide a general binary format for all kinds of 3D data including meshes, textures, animations, and materials. Existing formats are domain-specific and fixed on a certain set of input data and thus too specific to handle other types of data. Blast is a general container format for structured binary transmission on the Web that can be used for all types of 3D scene data. Instead of defining a fixed set of encodings and compression algorithms Blast exploits the code on demand paradigm to provide a simple yet powerful encoder-agnostic basis to leverage existing domain-specific solutions and compression techniques. Because streaming is of primary importance for a good user experience Blast is designed on the basis of self-contained chunks to enable JavaScript clients to utilize Web Workers for parallel decoding and to provide early feedback to the user. --- paper_title: Enhancing X3DOM declarative 3D with rigid body physics support paper_content: Given that physics can be fundamental for realistic and interactive Web3D applications, a number of JavaScript versions of physics engines have been introduced during the past years. This paper presents the implementation of the rigid body physics component, as defined by the X3D specification, in the X3DOM environment, and the creation of dynamic 3D interactive worlds. We briefly review the state of the art in current technologies for Web3D graphics, including HTML5, WebGL and X3D, and then explore the significance of physics engines in building realistic Web3D worlds. We include a comprehensive review of JavaScript physics engine libraries, and proceed to summarize the significance of our implementation while presenting in detail the methodology followed. The results obtained so far from our cross-browser experiments demonstrate that real-time interactive scenes with hundreds of rigid bodies can be constructed and operate with acceptable frame rates, while the allowing the user to maintain the scene control. --- paper_title: Deformation Sensitive Decimation paper_content: In computer graphics, many automatic methods for simplifying polygonal meshes have been developed. These techniques work very well for static meshes; however, they do not handle the case of deforming objects. Simplifying any single pose of a deforming object may destroy detail required to represent the shape in a different pose. 
We present an automatic technique for simplifying deforming character meshes that takes a set of examples as input and produces a set of simplified examples as output. The method preserves detail required for deformation and maintains connectivity information across the examples. This technique is applicable to many current skinning algorithms including example-driven techniques, fitting techniques and Linear Blend Skinning. --- paper_title: Multi-Layer Level of Detail For Character Animation paper_content: Real-time animation of human-like characters has been an active research area in computer graphics. Nowadays, more and more applications need to render various realistic scenes with human motion in crowds for interactive virtual environments. Animation and level of detail are well explored fields but little has been done to generate level of detail automatically for dynamic articulated meshes. Our approach is based on the combination of three interesting layers for run-time level of detail in character crowd animation: the skeleton, the mesh and the motion. We build a Multiresolution Skeletal Graph to simplify the skeleton topology progressively. In contrast with previous works, we use a Dual-Graph Based Simplification for articulated meshes, where the triangle decimation is driven by triangle compactness, to build a dynamic, continuous, progressive and selective mesh level of detail. We also present Power Skinning to ensure the stability of Linear Smooth Skinning, during the simplification, with an efficient multi-weight update rule. --- paper_title: SRC - a streamable format for generalized web-based 3D data transmission paper_content: A problem that still remains with today's technologies for 3D asset transmission is the lack of progressive streaming of all relevant mesh and texture data, with a minimal number of HTTP requests. Existing solutions, like glTF or X3DOM's geometry formats, either send all data within a single batch, or they introduce an unnecessarily large number of requests. Furthermore, there is still no established format for a joined, interleaved transmission of geometry data and texture data. Within this paper, we propose a new container file format, entitled Shape Resource Container (SRC). Our format is optimized for progressive, Web-based transmission of 3D mesh data with a minimum number of HTTP requests. It is highly configurable, and more powerful and flexible than previous formats, as it enables a truly progressive transmission of geometry data, partial sharing of geometry between meshes, direct GPU uploads, and an interleaved transmission of geometry and texture data. We also demonstrate how our new mesh format, as well as a wide range of other mesh formats, can be conveniently embedded in X3D scenes, using a new, minimalistic X3D ExternalGeometry node. --- paper_title: LPM: lightweight progressive meshes towards smooth transmission of Web3D media over internet paper_content: Transmission of Web3D media over the Internet can be slow, especially when downloading huge 3D models through relatively limited bandwidth. Currently, 3D compression and progressive meshes are used to alleviate the problem, but these schemes do not consider similarity among the 3D components, leaving room for improvement in terms of efficiency. This paper proposes a similarity-aware 3D model reduction method, called Lightweight Progressive Meshes (LPM).
The key idea of LPM is to search similar components in a 3D model, and reuse them through the construction of a Lightweight Scene Graph (LSG). The proposed LPM offers three significant benefits. First, the size of 3D models can be reduced for transmission without almost any precision loss of the original models. Second, when rendering, decompression is not needed to restore the original model, and instanced rendering can be fully exploited. Third, it is extremely efficient under very limited bandwidth, especially when transmitting large 3D scenes. Performance on real data justifies the effectiveness of our LPM, which improves the state-of-the-art in Web3D media transmission. --- paper_title: Surface simplification using quadric error metrics paper_content: Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations --- paper_title: Progressive skinning for video game character animations paper_content: The cartridge is made of plastic material and comprises a shell having an exit slot and end covers mounted at the ends of said shell. The cartridge consists of three substantially semicylindrical members, which by means of two hinges are connected to each other so that two semicylindrical members when folded together constitute a shell which encloses the roll of film and the third semicylindrical member covers said shell and defines a film passage in the shape of an arc of a circle, and the exit slot of the cartridge, and the two outer semicylindrical members are interconnected. --- paper_title: Coarse-grained multiresolution structures for mobile exploration of gigantic surface models paper_content: We discuss our experience in creating scalable systems for distributing and rendering gigantic 3D surfaces on web environments and common handheld devices. Our methods are based on compressed streamable coarse-grained multiresolution structures. By combining CPU and GPU compression technology with our multiresolution data representation, we are able to incrementally transfer, locally store and render with unprecedented performance extremely detailed 3D mesh models on WebGL-enabled browsers, as well as on hardware-constrained mobile devices. --- paper_title: Instant texture transmission using bandwidth-optimized progressive interlacing images paper_content: We present an adaptive bandwidth-optimized approach for progressive image transmission to allow instant textured rendering in web-based 3D applications. Applications utilizing a lot of image data require an adaptive technology to build responsive user interfaces. This applies especially for the use in mobile networks. 
While several approaches provide a progressive geometry streaming, they do not focus on a fast and simple texture streaming for 3D web applications. However, standard 2D image transmission technologies are usually inappropriate within a 3D context. In 2D, image size as well as resolution are often set during the authoring phase, whereas in 3D applications size and displayed resolution of textured 3D objects depend on the virtual camera. An adaptive texture transmission mechanism has to consider this as part of its control function. Our approach thus combines progressive texture streaming with an adaptive number of refinement levels without any pixel retransmission. It enables a fast user feedback and reduces the transmitted data to a minimum without loss of visual quality. Moreover, our GPUII approach (GPU-based Image Interlacing) allows an easy integration into existing applications while also reducing the CPU load. --- paper_title: Spatial data structures for accelerated 3D visibility computation to enable large model visualization on the web paper_content: The visualization of massive 3D models is an intensively examined field of research. Due to their rapidly growing complexity of such models, visualisation them in real-time will never be possible through a higher speed of rasterization alone. Instead, a practical solution has to reduce the amount of data to be processed, using a fast visibility determination. In recent years, the combination of Javascript and WebGL raised attention for the possibility of rendering hardware-accelerated 3D graphics directly in the browser. However, when compared to desktop applications, they are still fighting with their disadvantages of a generally slower execution speed, or a downgraded set of functionality. We demonstrate the integration of spatial data structures, computed on the client side, using latest technology trends to mitigate the shortcomings of the 3D Web environment. We employ comparably small bounding volume hierarchies to accelerate our visibility determination, as well as to enable specific culling techniques. This allows for an interactive visualization of such massive 3D data sets. Our in-depth analysis of different data structures and environments shows which combination of data structure and visibility determination techniques are currently the best fit for the Web. --- paper_title: Progressive Compression of Manifold Polygon Meshes paper_content: This paper presents a new algorithm for the progressive compression of manifold polygon meshes. The input surface is decimated by several traversals that generate successive levels of detail through a specific patch decimation operator which combines vertex removal and local remeshing. The mesh connectivity is encoded by two lists of Boolean error predictions based on the mesh geometry: one for the inserted edges and the other for the faces with a removed center vertex. The mesh geometry is encoded with a barycentric error prediction of the removed vertex coordinates and a local curvature prediction. We also include two methods that improve the rate-distortion performance: a wavelet formulation with a lifting scheme and an adaptive quantization technique. Experimental results demonstrate the effectiveness of our approach in terms of compression rates and rate-distortion performance. 
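Many of the simplification and level-of-detail methods cited in this group build on the quadric error metric introduced in the "Surface simplification using quadric error metrics" abstract above: each vertex accumulates a 4x4 quadric from the planes of its incident triangles, and an edge collapse is priced by evaluating the summed quadric at the merged position. A compact sketch of that bookkeeping follows; it places the merged vertex at the edge midpoint instead of solving for the optimal position, so it is a simplified illustration rather than the full algorithm.

```typescript
// Sketch of quadric error metric (QEM) bookkeeping: per-triangle plane quadrics,
// accumulation into vertex quadrics, and a midpoint collapse cost.

type Vec3 = [number, number, number];
type Quadric = number[]; // 16 entries, row-major 4x4

const zeroQuadric = (): Quadric => new Array(16).fill(0);

function addPlaneQuadric(q: Quadric, a: number, b: number, c: number, d: number): void {
  const p = [a, b, c, d];
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++) q[i * 4 + j] += p[i] * p[j]; // q += p * p^T
}

function triangleQuadric(v0: Vec3, v1: Vec3, v2: Vec3): Quadric {
  // Unit normal of the triangle and plane offset d = -n . v0
  const e1 = v1.map((x, i) => x - v0[i]);
  const e2 = v2.map((x, i) => x - v0[i]);
  const n = [
    e1[1] * e2[2] - e1[2] * e2[1],
    e1[2] * e2[0] - e1[0] * e2[2],
    e1[0] * e2[1] - e1[1] * e2[0],
  ];
  const len = Math.hypot(n[0], n[1], n[2]) || 1;
  const [a, b, c] = n.map(x => x / len);
  const d = -(a * v0[0] + b * v0[1] + c * v0[2]);
  const q = zeroQuadric();
  addPlaneQuadric(q, a, b, c, d);
  return q;
}

function quadricError(q: Quadric, v: Vec3): number {
  const h = [v[0], v[1], v[2], 1]; // homogeneous position
  let err = 0;
  for (let i = 0; i < 4; i++)
    for (let j = 0; j < 4; j++) err += h[i] * q[i * 4 + j] * h[j]; // v^T Q v
  return err;
}

// Collapse cost for an edge (v1, v2): evaluate the summed quadric at the midpoint.
function collapseCost(q1: Quadric, q2: Quadric, p1: Vec3, p2: Vec3): number {
  const q = q1.map((x, i) => x + q2[i]);
  const mid: Vec3 = [(p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2, (p1[2] + p2[2]) / 2];
  return quadricError(q, mid);
}
```

In a full implementation the per-vertex quadrics are summed once from the triangleQuadric outputs and then updated incrementally as collapses are applied, with the cheapest candidate collapse taken from a priority queue.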
--- paper_title: Enhancing X3DOM declarative 3D with rigid body physics support paper_content: Given that physics can be fundamental for realistic and interactive Web3D applications, a number of JavaScript versions of physics engines have been introduced during the past years. This paper presents the implementation of the rigid body physics component, as defined by the X3D specification, in the X3DOM environment, and the creation of dynamic 3D interactive worlds. We briefly review the state of the art in current technologies for Web3D graphics, including HTML5, WebGL and X3D, and then explore the significance of physics engines in building realistic Web3D worlds. We include a comprehensive review of JavaScript physics engine libraries, and proceed to summarize the significance of our implementation while presenting in detail the methodology followed. The results obtained so far from our cross-browser experiments demonstrate that real-time interactive scenes with hundreds of rigid bodies can be constructed and operate with acceptable frame rates, while the allowing the user to maintain the scene control. --- paper_title: 3D graphics on the web: A survey paper_content: Abstract In recent years, 3D graphics has become an increasingly important part of the multimedia web experience. Following on from the advent of the X3D standard and the definition of a declarative approach to presenting 3D graphics on the web, the rise of WebGL has allowed lower level access to graphics hardware of ever increasing power. In parallel, remote rendering techniques permit streaming of high-quality 3D graphics onto a wide range of devices, and recent years have also seen much research on methods of content delivery for web-based 3D applications. All this development is reflected in the increasing number of application fields for the 3D web. In this paper, we reflect this activity by presenting the first survey of the state of the art in the field. We review every major approach to produce real-time 3D graphics rendering in the browser, briefly summarise the approaches for remote rendering of 3D graphics, before surveying complementary research on data compression methods, and notable application fields. We conclude by assessing the impact and popularity of the 3D web, reviewing the past and looking to the future. --- paper_title: An event-based framework for animations in X3D paper_content: Animations are frequently used and beneficial for different scenarios in information visualization [Robertson et al. 2008; Kriglstein et al. 2012]. The X3D format [ISO and IEC 2008] provides a sound basis to create information visualizations and animations, but its concept of routes, interpolators and sensors for animating elements has some weaknesses. A major problem is that X3D has no built-in event-based mechanism to trigger reusable complex animations. The widespread use of the event-based approach in web technology shows, that it is an efficient concept adaptable to different needs. The lack of such a mechanism in X3D makes it difficult in some scenarios to create animated and interactive scenes. This paper will show a concept capable of solving these problems in X3D and the implementation of the concept as a framework. The framework helps to create dynamic information visualization scenes and makes it possible to create and combine reusable animations. Furthermore the framework provides ways to trigger these animations at given times to orchestrate the animations of a scene. 
The triggering mechanism also has the advantage to separate the animations from the static aspects of a scene. --- paper_title: Enhancing X3DOM declarative 3D with rigid body physics support paper_content: Given that physics can be fundamental for realistic and interactive Web3D applications, a number of JavaScript versions of physics engines have been introduced during the past years. This paper presents the implementation of the rigid body physics component, as defined by the X3D specification, in the X3DOM environment, and the creation of dynamic 3D interactive worlds. We briefly review the state of the art in current technologies for Web3D graphics, including HTML5, WebGL and X3D, and then explore the significance of physics engines in building realistic Web3D worlds. We include a comprehensive review of JavaScript physics engine libraries, and proceed to summarize the significance of our implementation while presenting in detail the methodology followed. The results obtained so far from our cross-browser experiments demonstrate that real-time interactive scenes with hundreds of rigid bodies can be constructed and operate with acceptable frame rates, while the allowing the user to maintain the scene control. --- paper_title: A scalable architecture for the HTML5/X3D integration model X3DOM paper_content: We present a scalable architecture, which implements and further evolves the HTML/X3D integration model X3DOM introduced in [Behr et al. 2009]. The goal of this model is to integrate and update declarative X3D content directly in the HTML DOM tree. The model was previously presented in a very abstract and generic way by only suggesting implementation strategies. The available open-source x3dom.js architecture provides concrete solutions to the previously open points and extents the generic model if necessary. The outstanding feature of the architecture is to provide a single declarative interface to application developers and at the same time support of various backends through a powerful fallback-model. This fallback-model does not provide a single implementation strategy for the runtime and rendering module but supports different methods transparently. This includes native browser implementations and X3D-plugins as well as a WebGL-based scene-graph, which allows running the content without the need for installing additional plugins on all browsers that support WebGL. The paper furthermore discusses generic aspects of the architecture like encoding and introspection, but also provides details concerning two backends. It shows how the system interfaces with X3D-plugins and WebGL and also discusses implementation specific features and limitations. --- paper_title: Declarative AR in the Web with XML3D and Xflow paper_content: Augmented Reality is about enhancing video streams from the real world with dynamic, interactive, rich-media content. We propose that HTML5 browsers (the de-facto runtime-engines and viewers for dynamic, interactive, rich-media 2D environments) are becoming ideal AR engines ‐ when extended with support for dynamic and interactive 3D content as well as efficient image and computer vision processing. Early but fully functional prototypes have already been developed using our XML3D and Xflow technology and will be demonstrated. Modern Web technologies already provide native support for hardware-accelerated 3D graphics (WebGL) as well as access to cameras and many other sensors. Recently, JavaScript implementations in browsers became significantly faster (e.g. 
with asm.js) and upcoming extensions like WebCL (Khronos) or Parallel-Javascript/RiverTrail (Intel/Mozilla) promise to make massively parallel computing readily available within the browser as well. This enables efficient image processing, feature detection, and tracking, but also real-time animations and other compute intensive processing ‐ right within the browser. As a result, it is now becoming possible to design Web-based AR applications that completely run in any modern Web browser without extensions or plug-ins. Such AR applications can take full advantage of the browsers rich-media environment and benefit from the Webs huge ecosystem, such as cloud computing, service-oriented architectures, Open Linked Data, and many more. This would enable millions of Web Developers to easily integrate AR technology directly into their Web applications. Web browsers will provide AR at your fingertips: A single click on a URL can fully immerse the user in arbitrary AR experiences even on mobile devices. We show how XML3D, an extension to HTML5 for declarative 3D graphics, and Xflow, a declarative language for dataflow processing, provide a high-level framework to develop AR applications on the Web, while encapsulating and abstracting from low-level image and AR processing. Our approach does not provide functionality as a large, monolithic library, but rather as a collection of smaller building-blocks that can flexibly be combined and reused in many ways depending on the needs of applications and developers. We also show how AR operators can be accelerated in the browser with emerging Web technologies such as Parallel JavaScript (RiverTrail). --- paper_title: Spatial data structures for accelerated 3D visibility computation to enable large model visualization on the web paper_content: The visualization of massive 3D models is an intensively examined field of research. Due to their rapidly growing complexity of such models, visualisation them in real-time will never be possible through a higher speed of rasterization alone. Instead, a practical solution has to reduce the amount of data to be processed, using a fast visibility determination. In recent years, the combination of Javascript and WebGL raised attention for the possibility of rendering hardware-accelerated 3D graphics directly in the browser. However, when compared to desktop applications, they are still fighting with their disadvantages of a generally slower execution speed, or a downgraded set of functionality. We demonstrate the integration of spatial data structures, computed on the client side, using latest technology trends to mitigate the shortcomings of the 3D Web environment. We employ comparably small bounding volume hierarchies to accelerate our visibility determination, as well as to enable specific culling techniques. This allows for an interactive visualization of such massive 3D data sets. Our in-depth analysis of different data structures and environments shows which combination of data structure and visibility determination techniques are currently the best fit for the Web. --- paper_title: Interactive visualization of volumetric data with WebGL in real-time paper_content: This article presents and discusses the implementation of a direct volume rendering system for the Web, which articulates a large portion of the rendering task in the client machine. By placing the rendering emphasis in the local client, our system takes advantage of its power, while at the same time eliminates processing from unreliable bottlenecks (e.g. 
network). The system developed articulates in efficient manner the capabilities of the recently released WebGL standard, which makes available the accelerated graphic pipeline (formerly unusable). The dependency on specially customized hardware is eliminated, and yet efficient rendering rates are achieved. The Web increasingly competes against desktop applications in many scenarios, but the graphical demands of some of the applications (e.g. interactive scientific visualization by volume rendering), have impeded their successful settlement in Web scenarios. Performance, scalability, accuracy, security are some of the many challenges that must be solved before visual Web applications popularize. In this publication we discuss both performance and scalability of the volume rendering by WebGL ray-casting in two different but challenging application domains: medical imaging and radar meteorology. --- paper_title: Enhancing X3DOM declarative 3D with rigid body physics support paper_content: Given that physics can be fundamental for realistic and interactive Web3D applications, a number of JavaScript versions of physics engines have been introduced during the past years. This paper presents the implementation of the rigid body physics component, as defined by the X3D specification, in the X3DOM environment, and the creation of dynamic 3D interactive worlds. We briefly review the state of the art in current technologies for Web3D graphics, including HTML5, WebGL and X3D, and then explore the significance of physics engines in building realistic Web3D worlds. We include a comprehensive review of JavaScript physics engine libraries, and proceed to summarize the significance of our implementation while presenting in detail the methodology followed. The results obtained so far from our cross-browser experiments demonstrate that real-time interactive scenes with hundreds of rigid bodies can be constructed and operate with acceptable frame rates, while the allowing the user to maintain the scene control. --- paper_title: An event-based framework for animations in X3D paper_content: Animations are frequently used and beneficial for different scenarios in information visualization [Robertson et al. 2008; Kriglstein et al. 2012]. The X3D format [ISO and IEC 2008] provides a sound basis to create information visualizations and animations, but its concept of routes, interpolators and sensors for animating elements has some weaknesses. A major problem is that X3D has no built-in event-based mechanism to trigger reusable complex animations. The widespread use of the event-based approach in web technology shows, that it is an efficient concept adaptable to different needs. The lack of such a mechanism in X3D makes it difficult in some scenarios to create animated and interactive scenes. This paper will show a concept capable of solving these problems in X3D and the implementation of the concept as a framework. The framework helps to create dynamic information visualization scenes and makes it possible to create and combine reusable animations. Furthermore the framework provides ways to trigger these animations at given times to orchestrate the animations of a scene. The triggering mechanism also has the advantage to separate the animations from the static aspects of a scene. --- paper_title: Spatial data structures for accelerated 3D visibility computation to enable large model visualization on the web paper_content: The visualization of massive 3D models is an intensively examined field of research. 
Due to the rapidly growing complexity of such models, visualising them in real-time will never be possible through a higher speed of rasterization alone. Instead, a practical solution has to reduce the amount of data to be processed, using a fast visibility determination. In recent years, the combination of Javascript and WebGL raised attention for the possibility of rendering hardware-accelerated 3D graphics directly in the browser. However, when compared to desktop applications, they are still fighting with their disadvantages of a generally slower execution speed, or a downgraded set of functionality. We demonstrate the integration of spatial data structures, computed on the client side, using latest technology trends to mitigate the shortcomings of the 3D Web environment. We employ comparably small bounding volume hierarchies to accelerate our visibility determination, as well as to enable specific culling techniques. This allows for an interactive visualization of such massive 3D data sets. Our in-depth analysis of different data structures and environments shows which combination of data structure and visibility determination techniques are currently the best fit for the Web. ---
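The X3D-based material above drives animation through time sensors, interpolators and routes; in a plain WebGL/JavaScript setting the same pattern reduces to sampling a keyframe track from the browser's animation loop. The sketch below shows that reduction; the track layout, the four-second cycle and the helper names are chosen here purely for illustration (only requestAnimationFrame is a standard browser API).

```typescript
// Minimal sketch: linear keyframe interpolation driven by requestAnimationFrame.
// Track structure and cycle length are illustrative assumptions.

interface PositionTrack {
  keys: number[];                      // key times in [0, 1], sorted ascending
  values: [number, number, number][];  // one position per key
}

function samplePosition(track: PositionTrack, fraction: number): [number, number, number] {
  const { keys, values } = track;
  if (fraction <= keys[0]) return values[0];
  if (fraction >= keys[keys.length - 1]) return values[values.length - 1];
  let i = 1;
  while (keys[i] < fraction) i++;      // find the surrounding pair of keys
  const t = (fraction - keys[i - 1]) / (keys[i] - keys[i - 1]);
  const [a, b] = [values[i - 1], values[i]];
  return [a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t, a[2] + (b[2] - a[2]) * t];
}

// Drive the track from the browser's animation loop (4-second cycle assumed).
const track: PositionTrack = { keys: [0, 0.5, 1], values: [[0, 0, 0], [1, 2, 0], [0, 0, 0]] };
const cycleMs = 4000;
function tick(nowMs: number): void {
  const fraction = (nowMs % cycleMs) / cycleMs; // analogous to a time sensor's output
  const pos = samplePosition(track, fraction);
  // ...apply `pos` to the rendered object here...
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```

In X3D terms, the per-frame fraction roughly plays the role of a TimeSensor's fraction_changed output routed into a PositionInterpolator.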
Title: Animation on the Web: A Survey Section 1: What is Animation? Description 1: Discusses the definition and types of animation, with a focus on real-time rendering and transmission of animation data on the web. Section 2: Animation of Static Objects Description 2: Explores the methods and principles behind animating static objects, primarily using geometric transformations and numerical integration. Section 3: Animation of Deformable Objects Description 3: Discusses the animation of objects that change shape or volume due to external forces, and the different approaches for animating deformable objects. Section 4: Overview Of The Challenges Description 4: Outlines the unique challenges and requirements of web-based animation, including data transmission, compression, and rendering complexities. Section 5: Transmission Of Data Description 5: Covers the technical aspects of transmitting 3D animation data over the internet, emphasizing the issues of latency, bandwidth, and the need for structured formats. Section 6: Compression & Preprocessing Description 6: Describes the techniques for compressing and preprocessing animation data to make it suitable for transmission and real-time rendering. Section 7: Rendering Description 7: Surveys the principal rendering contexts and technologies for web-based animations, including a discussion of WebGL and related APIs. Section 8: Transmission over the Web Description 8: Compares various methods and formats for transmitting animation data over the web, focusing on practical solutions and current technologies. Section 9: How existing formats and/or libraries deal with Animation Description 9: Reviews existing file formats and libraries, assessing their approach to handling animation data on the web. Section 10: Parallelism in JavaScript Description 10: Examines approaches to enhance JavaScript performance for handling large animation data through parallelism and GPU processing. Section 11: Conclusion Description 11: Summarizes the current state and future prospects of animation on the web, identifying open challenges and potential directions for further research.
Seeing the forest through the trees: A review of integrated environmental modelling tools
6
--- paper_title: Uncertainty Management in Integrated Assessment Modeling: Towards a Pluralistic Approach paper_content: Integrated Assessment (IA) is an evolving research community that aims to address complex societal issues through an interdisciplinary process. The most widely used method in Integrated Assessment is modeling. The state of the art in Integrated Assessment modeling is described in this paper in terms of history, general features, classes of models, and in terms of the strengths, weaknesses, dilemmas and challenges modelers face. One of the key challenges is the issue of uncertainty management. The paper outlines the sources and types of uncertainty modelers are confronted with. It then discusses how uncertainties are currently managed in Integrated Assessment modeling; based on this evaluation, it is argued that complementary methods are needed that allow for pluralistic uncertainty management. The paper concludes by discussing pluralistic concepts and approaches that are currently explored in the IA community and that seem promising in view of the challenge of explicitly incorporating more than one hidden perspective in models. --- paper_title: Integrated environmental modeling: A vision and roadmap for the future paper_content: Integrated environmental modeling (IEM) is inspired by modern environmental problems, decisions, and policies and enabled by transdisciplinary science and computer capabilities that allow the environment to be considered in a holistic way. The problems are characterized by the extent of the environmental system involved, dynamic and interdependent nature of stressors and their impacts, diversity of stakeholders, and integration of social, economic, and environmental considerations. IEM provides a science-based structure to develop and organize relevant knowledge and information and apply it to explain, explore, and predict the behavior of environmental systems in response to human and natural sources of stress. During the past several years a number of workshops were held that brought IEM practitioners together to share experiences and discuss future needs and directions. In this paper we organize and present the results of these discussions. IEM is presented as a landscape containing four interdependent elements: applications, science, technology, and community. The elements are described from the perspective of their role in the landscape, current practices, and challenges that must be addressed. Workshop participants envision a global scale IEM community that leverages modern technologies to streamline the movement of science-based knowledge from its sources in research, through its organization into databases and models, to its integration and application for problem solving purposes. Achieving this vision will require that the global community of IEM stakeholders transcend social and organizational boundaries and pursue greater levels of collaboration. Among the highest priorities for community action are the development of standards for publishing IEM data and models in forms suitable for automated discovery, access, and integration; education of the next generation of environmental stakeholders, with a focus on transdisciplinary research, development, and decision making; and providing a web-based platform for community interactions (e.g., continuous virtual workshops). --- paper_title: Again, and Again, and Again … paper_content:
Replication—The confirmation of results and conclusions from one study obtained independently in another—is considered the scientific gold standard. New tools and technologies, massive amounts of data, long-term studies, interdisciplinary approaches, and the complexity of the questions being asked are complicating replication efforts, as are increased pressures on scientists to advance their research. The five Perspectives in this section (and associated News and Careers stories, Readers' Poll, and Editorial) explore some of the issues associated with replicating results across various fields. Ryan (p. 1229) highlights the excitement and challenges that come with field-based research. In particular, observing processes as they occur in nature allows for discovery but makes replication difficult, because the precise conditions surrounding the observations are unique. Further, although laboratory research allows for the specification of experimental conditions, the conclusions may not apply to the real world. Debate about the merits of lab-based and field-based studies has been a persistent theme over time. Tomasello and Call (p. 1227) further contribute to this debate in their discussion of some obvious barriers to replication in primate cognition and behavior research (small numbers of subjects, expense, and ethics issues) as well as more subtle ones, such as the nontrivial challenge of designing tasks that elicit complex cognitive behaviors. New technologies continue to produce a deluge of data of different varieties, raising expectations for new knowledge that will translate into meaningful therapeutics and insights into health. Ioannidis and Khoury (p. 1230) outline multiple steps for validating such large-scale data on the path to clinical utility and make suggestions for incentives (and penalties) that could enhance the availability of reliable data and analyses. Peng (p. 1226) discusses the need for a minimum standard of reproducibility in computer sciences, arguing that enough information about methods and code should be available for independent researchers to reach consistent conclusions using original raw data. Specifically, he describes a model that one journal has used to make this a reality. The need to convince the public that data are replicable has grown as science and public policy-making intersect, an issue that has beset climate change studies. As Santer et al. (p. 1232) describe, having multiple groups examining the same data and generating new data has led to robust conclusions. The importance of replication and reproducibility for scientists is unquestioned. Sometimes attempts to replicate reveal scientific uncertainties. This is one of the main ways that science progresses (see associated News stories of faster-than-light neutrinos and sirtuins, pp. 1200 and 1194). Unfortunately, in rare instances (compared to the body of scientific work), it can also indicate fraud (see the Editorial by Crocker and Cooper, p. 1182). How do we promote the publication of replicable data? The authors in this section come up with possibilities that are targeted at funders, journals, and the research culture itself. In the Readers' Poll, you can make your views known as well.
--- paper_title: Bridging the Gaps Between Design and Use: Developing Tools to Support Environmental Management and Policy paper_content: A method is provided for compensating an output signal of an electronic device having an input end electrically connected to a feedback device and an output end electrically connected to a load. This method includes steps of (a) measuring a standard voltage Vo of the load, (b) determining an input voltage Vi and an input current Ii of the electronic device, and (c) generating a feedback signal based on said input voltage Vi. The method further includes steps of (d) determining a maximum current Imax and a minimum current Imin of the feedback signal passing through the feedback device, (e) defining an estimated output current Io of the electronic device based on the maximum current Imax and the minimum current Imin, and (f) compensating the output signal of the electronic device according to a compensating factor d determined by the standard voltage Vo, the input voltage Vi, the input current Ii, and the estimated output current Io, of the electronic device. The estimated output current Io is calculated according to the maximum current Imax and the minimum current Imin by a first equation of Io=Imax+kImin where k is a constant. --- paper_title: Integrated Assessment and Modelling: Features, Principles and Examples for Catchment Management paper_content: Abstract To meet the challenges of sustainability and catchment management requires an approach that assesses resource usage options and environmental impacts integratively. Assessment must be able to integrate several dimensions: the consideration of multiple issues and stakeholders, the key disciplines within and between the human and natural sciences, multiple scales of system behaviour, cascading effects both spatially and temporally, models of the different system components, and multiple databases. Integrated assessment (IA) is an emerging discipline and process that attempts to address the demands of decision makers for management that has ecological, social and economic values and considerations. This paper summarises the features of IA and argues the role for models and information systems as a prime activity. Given the complex nature of IA problems, the broad objectives of IA modelling should be to understand the directions and magnitudes of change in relation to management interventions so as to be able to differentiate between associated outcome sets. Central to this broad objective is the need for improved techniques of uncertainty and sensitivity analysis that can provide a measure of confidence in the ability to differentiate between different decisions. Three examples of problems treated with an IA approach are presented. The variations in the way that the different dimensions are integrated in the modelling are discussed to highlight the sorts of choices that can be made in model construction. The conclusions stress the importance of IA as a process, not just as a set of outcomes, and define some of the deficiencies to be overcome. --- paper_title: Accessible Reproducible Research paper_content: As use of computation in research grows, new tools are needed to expand recording, reporting, and reproduction of methods and data. --- paper_title: Standards-Based Computing Capabilities for Distributed Geospatial Applications paper_content: Researchers face increasingly large repositories of geospatial data stored in different locations and in various formats. 
To address this problem, the Open Geospatial Consortium and the Open Grid Forum are collaborating to develop standards for distributed geospatial computing. --- paper_title: An IT perspective on integrated environmental modelling: The SIAT case paper_content: Policy makers have a growing interest in integrated assessments of policies. The Integrated Assessment Modelling (IAM) community is reacting to this interest by extending the application of model development from pure scientific analysis towards application in decision making or policy context by giving tools a higher capability for analysis targeted at non-experts, but intelligent users. Many parties are involved in the construction of such tools including modellers, domain experts and tool users, resulting in as many views on the proposed tool. During tool development research continues which leads to advanced understanding of the system and may alter early specifications. Accumulation of changes to the initial design obscures the design, usually vastly increasing the number of defects in the software. The software engineering community uses concepts, methods and practices to deal with ambiguous specifications, changing requirements and incompletely conceived visions, and to design and develop maintainable/extensible quality software. The aim of this paper is to introduce modellers to software engineering concepts and methods which have the potential to improve model and tool development using experiences from the development of the Sustainability Impact Assessment Tool. These range from choosing a software development methodology for planning activities and coordinating people, technical design principles impacting maintainability, quality and reusability of the software to prototyping and user involvement. It is argued that adaptive development methods seem to best fit research projects, that typically have unclear upfront and changing requirements. The break-down of a system into elements that overlap as little as possible in features and behaviour helps to divide the work across teams and to achieve a modular and flexible system. However, this must be accompanied by proper automated testing methods and automated continuous integration of the elements. Prototypes, screen sketches and mock-ups are useful to align the different views, build a shared vision of required functionality and to match expectations. --- paper_title: Taverna , lessons in creating a workflow environment for the life sciences paper_content: Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The myGrid project has developed the Taverna Workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists' experimental context. 
The lessons reflect an evolving understanding of life scientists' requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science. --- paper_title: Comparing modelling frameworks- A workshop approach. paper_content: Abstract Of concern to the environmental modelling community is the proliferation of individual, and individualistic, models and the time associated with common model development tasks such as data transformation, coding of models, and visualisation. One way of addressing this problem is the adoption of modelling frameworks. These frameworks, or environments, support modular model development through provision of libraries of core environmental modelling modules, as well as reusable tools for data manipulation, analysis and visualisation. Such frameworks have a range of features and requirements related to the architecture, protocols and methods of operation, and it is difficult to compare the modelling workload and performance of alternative frameworks without using them to undertake identical, or similar modelling tasks. This paper describes the outcomes of a workshop to compare three frameworks – the Spatial Modelling Environment (SME), Tarsier and the Integrated Component Modelling System (ICMS). A simple environmental problem linking hillslope flow and soil erosion processes with a receiving water store was designed and then implemented in the three frameworks. It was found that the SME and Tarsier contained many components well suited to handling complex spatial and temporal models, with ICMS being an integrated framework tailored for smaller scale problems. Of the three tested frameworks, the SME proved superior in supporting problem description, Tarsier provided more flexibility in linking and validating the model components, and ICMS served as an effective prototyping tool. The test problem, and associated data and parameters, are described in detail to allow others to undertake this test. --- paper_title: An overview of model integration for environmental applications—components, frameworks and semantics paper_content: Abstract In recent years, pressure has increased on environmental scientist/modellers to both undertake good science in an efficient and timely manner, under increasing resource constraints, and also to ensure that the science being performed is immediately relevant to a particular environmental management context. At the same time, environmental management is changing, with increasing requirements for multi-scale and multi-objective assessment and decision making that considers economic and social systems, as well as the ecosystem. Integration of management activities, and also of the modelling undertaken to support management, has become a high priority. To solve the problems of application and integration, knowledge encapsulation in models is being undertaken in a way that both meets the needs for good science, and also provides the conceptual and technical structures required for broader and more integrated application of that knowledge by managers. To support this modelling, tools and technologies from computer science and software engineering are being transferred to applied environmental science fields, and a range of new modelling and software development approaches are being pursued. The papers in this Special Issue provide examples of the integrated modelling concepts and applications that have been, or are being, developed. 
These include the use of object-oriented concepts, component-based modelling techniques and modelling frameworks, as well as the emerging use of integrated modelling platforms and metadata support for modelling semantics. This paper provides an overview of the science and management imperatives underlying recent developments, discusses the technological and conceptual developments that have taken place, and highlights some of the semantic, operational and process requirements that need to be addressed now that the technological aspects of integrated modelling are well advanced. --- paper_title: An overview of the open modelling interface and environment (the OpenMI) paper_content: Abstract The paper reports on a Framework 5 project, HarmonIT, which is developing an open modelling interface and environment (the OpenMI) to simplify the linking of water related simulation models. Its purpose is to support the implementation of the Water Framework Directive. If successful, it will facilitate the simulation of process interactions and make it easier for catchment managers to explore the wider implications of the policy options open to them. The main deliverable of the project is the definition of a standard interface that will allow new and existing models to exchange data at run time. To help users adapt their models to use the interface, and then link and run them with other models, the standard will be supported by software tools for migration, linking, monitoring performance and displaying results. The paper describes the work to date, outlines the key features of the OpenMI's architecture and explains the actions being taken to sustain the OpenMI into the future. --- paper_title: A component-Based Framework for Simulating Agricultural Production and Externalities paper_content: Although existing simulation tools can be used to study the impact of agricultural management on production activities in specific environments, they suffer from several limitations. They are largely specialized for specific production activities: arable crops/cropping systems, grassland, orchards, agro-forestry, livestock etc. Also, they often have a restricted ability to simulate system externalities which may have a negative environmental impact. Furthermore, the structure of such systems neither allows an easy plug-in of modules for other agricultural production activities, nor the use of alternative components for simulating processes. Finally, such systems are proprietary systems of either research groups or projects which inhibits further development by third parties. --- paper_title: Integrated Modelling Frameworks for Environmental Assessment and Decision Support paper_content: Modern management of environmental resources defines problems from a holistic and integrated perspective, imposing strong requirements to Environmental Decision Support Systems (EDSSs) and Integrated Assessment Tools (IATs), which tend to be increasingly complex in terms of software architecture and computational power in order to cope with the type of problems they must solve. Such systems need to support methodologies and techniques ranging from agent-based modelling to participatory decision-making. Sometimes EDSSs and IATs are built from scratch, often with limited resources, by non-programmers. 
The disadvantages of this approach, which can quickly become overly expensive in terms of delivery time and resources required, have been addressed by the development of suites of software engineering tools called Environmental Integrated Modelling Frameworks (EIMFs). EIMFs have typically been designed as a response to the increasing complexity of building and delivering EDSSs and IATs. Modelling and simulation tools and frameworks have been adopted at a large scale in the management science and operations research disciplines, and standards for developing and expanding them have been developed. In contrast, no modelling framework has been universally adopted within the environmental modelling domain, and the number of environmental modelling frameworks is still growing. In this book chapter, we strive to address the above issues and clearly identify the essential characteristics of an EIMF. This book chapter also advocates the development of open standards for the exchange and re-use of modelling knowledge, including data sets, models, and procedures in order to facilitate improved communication among the leading EIMFs --- paper_title: The architecture of the Earth System Modeling Framework paper_content: The Earth System Modeling Framework (ESMF) project is developing a standard software platform for Earth system models. The standard, which defines a component architecture and a support infrastructure, is being developed under open-software practices. Target applications range from operational numerical weather prediction to climate-system change and predictability studies. --- paper_title: The Common Modelling Protocol: A hierarchical framework for simulation of agricultural and environmental systems paper_content: Abstract A modular approach to simulation modelling offers significant advantages for its application to agricultural and environmental questions, including re-use of model equations in different contexts and with different user-interfaces; configuration of model structures that are most appropriate to a given problem; and facilitation of collaboration between modelling teams. This paper describes the Common Modelling Protocol (CMP), a generic, open and platform-independent framework for modular simulation modelling that is in widespread use. The CMP is distinguished from existing simulation frameworks by taking an explicitly hierarchical view of the biophysical system being simulated and by representing continuous and discontinuous processes equally naturally. Modules of model logic are represented in the CMP by entities known as “components”. Each component may possess “properties” that convey the value of the quantities in its equations and “event handlers” that compute model logic. Low-level information-transfers in the CMP are carried out by means of a message-passing system. Co-ordinated sequences of messages carry out tasks such as initialization, exchange of variable values and the control of computation order. Extensible Markup Language (XML) is used in the protocol for tasks such as denoting data types, submitting simulations for execution and describing components to user-interface software. Examples are presented showing how the CMP can be used to couple modules developed by different teams and to configure a complex model structure. The choices and trade-offs encountered when building a framework for modular simulation are analyzed, using the CMP and other simulation frameworks as examples. 
The kinds of scientific issues that arise when the CMP is used to realize collaboration between modelling groups are discussed. --- paper_title: A Component Architecture for High-Performance Scientific Computing paper_content: The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry. --- paper_title: OpenMI: Open modelling interface paper_content: Management issues in many sectors of society demand integrated analysis that can be supported by integrated modelling. Since all-inclusive modelling software is difficult to achieve, and possibly even undesirable, integrated modelling requires the linkage of individual models or model components that address specific domains. Emerging from the water sector, the OpenMI has been developed with the purpose of being the glue that can link together model components from various origins. The OpenMI provides a standardized interface to define, describe and transfer data on a time basis between software components that run simultaneously, thus supporting systems where feedback between the modelled processes is necessary in order to achieve physically sound results. The OpenMI allows the linking of models with different spatial and temporal representations: for example, linking river models and groundwater models, where the river model typically uses a one-dimensional grid and a short timestep and the groundwater model uses a two- or three-dimensional grid and a longer timestep. The OpenMI is designed to accommodate the easy migration of existing modelling systems, since their re-implementation may not be economically feasible due to the large investments that have been put into the development and testing of these systems. --- paper_title: A taxonomy of scientific workflow systems for grid computing paper_content: With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of the state of the art in Grid workflow systems, but also identifies the areas that need further research.
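The OpenMI entry above describes a standardized interface through which independently developed model components exchange values at run time while running side by side. The TypeScript sketch below is loosely inspired by that pull-based exchange pattern; the LinkableComponent and Link interfaces, the runComposition loop and the dummy river/groundwater components are all invented for illustration and do not reproduce the actual OpenMI (or CCA) APIs.

```typescript
// Simplified "linkable component" sketch, loosely inspired by the run-time data
// exchange described in the OpenMI abstract above. All names are illustrative;
// this is not the actual OpenMI (or CCA) API.

interface LinkableComponent {
  readonly id: string;
  getValues(quantity: string, time: number): number[]; // pull values valid at `time`
  setInput?(quantity: string, values: number[]): void; // optional push-side hook
}

interface Link {
  source: LinkableComponent;
  target: LinkableComponent;
  quantity: string;
}

// Pull-driven composition loop: at every exchange time each target asks its source
// for the quantity it needs; components with different internal time steps resolve
// the request by interpolating or advancing their own state as required.
function runComposition(links: Link[], endTime: number, dt: number): void {
  for (let t = 0; t <= endTime; t += dt) {
    for (const { source, target, quantity } of links) {
      const values = source.getValues(quantity, t);
      target.setInput?.(quantity, values);
    }
  }
}

// Tiny dummy components to show the wiring (a "river" feeding a "groundwater" model).
const river: LinkableComponent = {
  id: "river",
  getValues: (_q, time) => [10 + Math.sin(time / 3600)], // fake discharge series
};
const groundwater: LinkableComponent = {
  id: "groundwater",
  getValues: () => [],
  setInput: (q, v) => console.log(`groundwater <- ${q}:`, v),
};

runComposition([{ source: river, target: groundwater, quantity: "discharge" }], 7200, 3600);
```

In a real coupling of this kind the pull call typically triggers the source model to advance its own computation to the requested time, and spatial regridding between the components' different grids happens as part of the exchange.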
--- paper_title: Guest editors' introduction to the special section on scientific workflows paper_content: Business-oriented workflows have been studied since the 70's under various names (office automation, workflow management, business process management) and by different communities, including the database community. Much basic and applied research has been conducted over the years, e.g. theoretical studies of workflow languages and models (based on Petri-nets or process calculi), their properties, transactional behavior, etc. --- paper_title: Workflows and e-Science: An overview of workflow system features and capabilities paper_content: Scientific workflow systems have become a necessary tool for many applications, enabling the composition and execution of complex analysis on distributed resources. Today there are many workflow systems, often with overlapping functionality. A key issue for potential users of workflow systems is the need to be able to compare the capabilities of the various available tools. There can be confusion about system functionality and the tools are often selected without a proper functional analysis. In this paper we extract a taxonomy of features from the way scientists make use of existing workflow systems and we illustrate this feature set by providing some examples taken from existing workflow systems. The taxonomy provides end users with a mechanism by which they can assess the suitability of workflow in general and how they might use these features to make an informed choice about which workflow system would be a good choice for their particular application. --- paper_title: A taxonomy and survey on autonomic management of applications in grid computing environments paper_content: In Grid computing environments, the availability, performance, and state of resources, applications, services, and data undergo continuous changes during the life cycle of an application. Uncertainty is a fact in Grid environments, which is triggered by multiple factors, including: (1) failures, (2) dynamism, (3) incomplete global knowledge, and (4) heterogeneity. Unfortunately, the existing Grid management methods, tools, and application composition techniques are inadequate to handle these resource, application and environment behaviors. The aforementioned characteristics impose serious requirements on the Grid programming and runtime systems if they wish to deliver efficient performance to scientific and commercial applications. To overcome the above challenges, the Grid programming and runtime systems must become autonomic or self-managing in accordance with the high-level behavior specified by system administrators. Autonomic systems are inspired by biological systems that deal with similar challenges of complexity, dynamism, heterogeneity, and uncertainty. To this end, we propose a comprehensive taxonomy that characterizes and classifies different software components and high-level methods that are required for autonomic management of applications in Grids. We also survey several representative Grid computing systems that have been developed by various leading research groups in the academia and industry. The taxonomy not only highlights the similarities and differences of state-of-the-art technologies utilized in autonomic application management from the perspective of Grid computing, but also identifies the areas that require further research initiatives. 
We believe that this taxonomy and its mapping to relevant systems would be highly useful for academic- and industry-based researchers who are engaged in the design of autonomic Grid and, more recently, Cloud computing systems. --- paper_title: Taverna: A tool for the composition and enactment of bioinformatics workflows paper_content: Motivation: In silico experiments in bioinformatics involve the co-ordinated use of computational tools and information repositories. A growing number of these resources are being made available with programmatic access in the form of Web services. Bioinformatics scientists will need to orchestrate these Web services in workflows as part of their analyses. Results: The Taverna project has developed a tool for the composition and enactment of bioinformatics workflows for the life sciences community. The tool includes a workbench application which provides a graphical user interface for the composition of workflows. These workflows are written in a new language called the simple conceptual unified flow language (Scufl), whereby each step within a workflow represents one atomic task. Two examples are used to illustrate the ease with which in silico experiments can be represented as Scufl workflows using the workbench application. Availability: The Taverna workflow system is available as open source and can be downloaded with example Scufl workflows from http://taverna.sourceforge.net --- paper_title: VisMashup: Streamlining the Creation of Custom Visualization Applications paper_content: Visualization is essential for understanding the increasing volumes of digital data. However, the process required to create insightful visualizations is involved and time consuming. Although several visualization tools are available, including tools with sophisticated visual interfaces, they are out of reach for users who have little or no knowledge of visualization techniques and/or who do not have programming expertise. In this paper, we propose VisMashup, a new framework for streamlining the creation of customized visualization applications. Because these applications can be customized for very specific tasks, they can hide much of the complexity in a visualization specification and make it easier for users to explore visualizations by manipulating a small set of parameters. We describe the framework and how it supports the various tasks a designer needs to carry out to develop an application, from mining and exploring a set of visualization specifications (pipelines), to the creation of simplified views of the pipelines, and the automatic generation of the application and its interface. We also describe the implementation of the system and demonstrate its use in two real application scenarios. --- paper_title: Scientific Workflow Management and the Kepler System paper_content: Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery “pipelines”. A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data and computational community infrastructure (a.k.a. “the Grid”). However, this infrastructure is only a means to an end and scientists ideally should be bothered little with its existence.
The goal is for scientists to focus on development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on Kepler, a particular scientific workflow system, currently under development across a number of scientific data management projects. We describe some key features of Kepler and its underlying Ptolemy II system, planned extensions, and areas of future research. Kepler is a community-driven, open source project, and we always welcome related projects and new contributors to join. --- paper_title: Programming Scientific and Distributed Workflow with Triana Services paper_content: In this paper, we discuss a real-world application scenario that uses three distinct types of workflow within the Triana problem-solving environment: serial scientific workflow for the data processing of gravitational wave signals; job submission workflows that execute Triana services on a testbed; and monitoring workflows that examine and modify the behaviour of the executing application. We briefly describe the Triana distribution mechanisms and the underlying architectures that we can support. Our middleware-independent abstraction layer, called the Grid Application Prototype (GAP), enables us to advertise, discover and communicate with Web and peer-to-peer (P2P) services. We show how gravitational wave search algorithms have been implemented to distribute both the search computation and data across the European GridLab testbed, using a combination of Web services, Globus interaction and P2P infrastructures. --- paper_title: Triana Applications within Grid Computing and Peer to Peer Environments paper_content: An overview of the Triana Problem Solving Environment is provided – with a particular focus on the GAP application-level interface, for integration with Grid Computing and Peer-to-Peer infrastructure. GAP is a Java-based subset of the Grid Application Toolkit interface (being implemented in the GridLab project), and an outline of its current functionality, usage and mappings to three supported underlying middleware derivatives: JXTA, Web Services, and P2PS (a simplified Peer-to-Peer platform) are provided. The motivation behind the development of P2PS is given – emphasising its minimal, but effective Peer-to-Peer mechanisms that allow scalable, decentralized discovery and communication amongst cooperating P2PS peers within highly unstable environments. A summary of three application use cases illustrating the range of scenarios that such a system addresses is also provided. --- paper_title: VisTrails: visualization meets data management paper_content: Scientists are now faced with an incredible volume of data to analyze. To successfully analyze and validate various hypotheses, it is necessary to pose several queries, correlate disparate data, and create insightful visualizations of both the simulated processes and observed phenomena. Often, insight comes from comparing the results of multiple visualizations. Unfortunately, today this process is far from interactive and contains many error-prone and time-consuming tasks.
As a result, the generation and maintenance of visualizations is a major bottleneck in the scientific process, hindering both the ability to mine scientific data and the actual use of the data. The VisTrails system represents our initial attempt to improve the scientific discovery process and reduce the time to insight. In VisTrails, we address the problem of visualization from a data management perspective: VisTrails manages the data and metadata of a visualization product. In this demonstration, we show the power and flexibility of our system by presenting actual scenarios in which scientific visualization is used and showing how our system improves usability, enables reproducibility, and greatly reduces the time required to create scientific visualizations. --- paper_title: Virtual research environments in scholarly work and communications paper_content: Purpose – The purpose of this paper is to investigate the implications of the emergence of virtual research environments (VREs) and related e‐research tools for scholarly work and communications processes. Design/methodology/approach – The concepts of VREs and of e‐research more generally are introduced and relevant literature is reviewed. On this basis, the authors discuss the developing role they play in research practices across a number of disciplines and how scholarly communication is beginning to evolve in response to the opportunities these new tools open up and the challenges they raise. Findings – Virtual research environments are beginning to change the ways in which researchers go about their work and how they communicate with each other and with other stakeholders such as publishers and service providers. The changes are driven by the changing landscape of data production, curation and (re‐)use, by new scientific methods, by changes in technology supply and the increasingly interdisciplinary nat... --- paper_title: Why Linked Data is Not Enough for Scientists paper_content: Scientific data represents a significant portion of the linked open data cloud and scientists stand to benefit from the data fusion capability this will afford. Publishing linked data into the cloud, however, does not ensure the required reusability. Publishing has requirements of provenance, quality, credit, attribution and methods to provide the reproducibility that enables validation of results.
In this paper we make the case for a scientific data publication model on top of linked data and introduce the notion of Research Objects as first class citizens for sharing and publishing. Highlights: we identify and characterise different aspects of reuse and reproducibility; we examine requirements for such reuse; and we propose a scientific data publication model that layers on top of linked data publishing. --- paper_title: Galaxy: A platform for interactive large-scale genome analysis paper_content: Accessing and analyzing the exponentially expanding genomic sequence and functional data pose a challenge for biomedical researchers. Here we describe an interactive system, Galaxy, that combines the power of existing genome annotation databases with a simple Web portal to enable users to search remote resources, combine data from independent queries, and visualize the results. The heart of Galaxy is a flexible history system that stores the queries from each user; performs operations such as intersections, unions, and subtractions; and links to other computational tools. Galaxy can be accessed at http://g2.bx.psu.edu. --- paper_title: Ecoinformatics: supporting ecology as a data-intensive science paper_content: Ecology is evolving rapidly and increasingly changing into a more open, accountable, interdisciplinary, collaborative and data-intensive science. Discovering, integrating and analyzing massive amounts of heterogeneous data are central to ecology as researchers address complex questions at scales from the gene to the biosphere. Ecoinformatics offers tools and approaches for managing ecological data and transforming the data into information and knowledge. Here, we review the state of the art and recent advances in ecoinformatics that can benefit ecologists and environmental scientists as they tackle increasingly challenging questions that require voluminous amounts of data across disciplines and scales of space and time. We also highlight the challenges and opportunities that remain. --- paper_title: THE SEEK: A PLATFORM FOR SHARING DATA AND MODELS IN SYSTEMS BIOLOGY paper_content: Abstract Systems biology research is typically performed by multidisciplinary groups of scientists, often in large consortia and in distributed locations. The data generated in these projects tend to be heterogeneous and often involve high-throughput “omics” analyses.
Models are developed iteratively from data generated in the projects and from the literature. Consequently, there is a growing requirement for exchanging experimental data, mathematical models, and scientific protocols between consortium members and a necessity to record and share the outcomes of experiments and the links between data and models. The overall output of a research consortium is also a valuable commodity in its own right. The research and associated data and models should eventually be available to the whole community for reuse and future analysis. The SEEK is an open-source, Web-based platform designed for the management and exchange of systems biology data and models. The SEEK was originally developed for the SysMO (systems biology of microorganisms) consortia, but the principles and objectives are applicable to any systems biology project. The SEEK provides an index of consortium resources and acts as gateway to other tools and services commonly used in the community. For example, the model simulation tool, JWS Online, has been integrated into the SEEK, and a plug-in to PubMed allows publications to be linked to supporting data and author profiles in the SEEK. The SEEK is a pragmatic solution to data management which encourages, but does not force, researchers to share and disseminate their data to community standard formats. It provides tools to assist with management and annotation as well as incentives and added value for following these recommendations. Data exchange and reuse rely on sufficient annotation, consistent metadata descriptions, and the use of standard exchange formats for models, data, and the experiments they are derived from. In this chapter, we present the SEEK platform, its functionalities, and the methods employed for lowering the barriers to adoption of standard formats. As the production of biological data continues to grow, in systems biology and in the life sciences in general, the need to record, manage, and exploit this wealth of information in the future is increasing. We promote the SEEK as a data and model management tool that can be adapted to the specific needs of a particular systems biology project. --- paper_title: A comparative evaluation of technical solutions for long-term data repositories in integrative biodiversity research paper_content: Abstract The current study investigates existing infrastructure, its technical solutions and implemented standards for data repositories related to integrative biodiversity research. The storage and reuse of complex biodiversity data in central databases are becoming increasingly important, particularly in attempts to cope with the impacts of environmental change on biodiversity and ecosystems. From the data side, the main challenge of biodiversity repositories is to deal with the highly interdisciplinary and heterogeneous character of standardized and unstandardized data and metadata covering information from genes to ecosystems. Furthermore, the technical improvements in data acquisition techniques produce ever larger data volumes, which represent a challenge for database structure and proper data exchange. The current study is based on comprehensive in-depth interviews and an online survey addressing IT specialists involved in database development and operation. The results show that metadata are already well established, but that non-meta data still is largely unstandardized across various scientific communities. 
For example, only a third of all repositories in our investigation use internationally unified semantic standard checklists for taxonomy. The study also showed that database developers are mostly occupied with the implementation of state of the art technology and solving operational problems, leaving no time to implement user's requirements. One of the main reasons for this dissatisfying situation is the undersized and unreliable funding situation of most repositories, as reflected by the marginally small number of permanent IT staff members. We conclude that a sustainable data management system that fosters the future use and reuse of these valuable data resources requires the development of fewer, but more permanent data repositories using commonly accepted standards for their long-term data. This can only be accomplished through the consolidation of hitherto widely scattered small and non-permanent repositories. --- paper_title: Digital Earth 2020: towards the vision for the next decade paper_content: Abstract This position paper is the outcome of a brainstorming workshop organised by the International Society for Digital Earth (ISDE) in Beijing in March 2011. It argues that the vision of Digital Earth (DE) put forward by Vice-President Al Gore 13 years ago needs to be re-evaluated in the light of the many developments in the fields of information technology, data infrastructures and earth observation that have taken place since. The paper identifies the main policy, scientific and societal drivers for the development of DE and illustrates the multi-faceted nature of a new vision of DE grounding it with a few examples of potential applications. Because no single organisation can on its own develop all the aspects of DE, it is essential to develop a series of collaborations at the global level to turn the vision outlined in this paper into reality. --- paper_title: Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences paper_content: Increased reliance on computational approaches in the life sciences has revealed grave concerns about how accessible and reproducible computation-reliant results truly are. Galaxy http://usegalaxy.org, an open web-based platform for genomic research, addresses these problems. Galaxy automatically tracks and manages data provenance and provides support for capturing the context and intent of computational methods. Galaxy Pages are interactive, web-based documents that provide users with a medium to communicate a complete computational analysis. --- paper_title: Progress in integrated assessment and modelling paper_content: Environmental processes have been modelled for decades. However. the need for integrated assessment and modeling (IAM) has,town as the extent and severity of environmental problems in the 21st Century worsens. The scale of IAM is not restricted to the global level as in climate change models, but includes local and regional models of environmental problems. This paper discusses various definitions of IAM and identifies five different types of integration that Lire needed for the effective solution of environmental problems. The future is then depicted in the form of two brief scenarios: one optimistic and one pessimistic. The current state of IAM is then briefly reviewed. The issues of complexity and validation in IAM are recognised as more complex than in traditional disciplinary approaches. 
Communication is identified as a central issue both internally among team members and externally with decision-makers, stakeholders and other scientists. Finally, it is concluded that the process of integrated assessment and modelling is considered as important as the product for any particular project. By learning to work together and recognise the contribution of all team members and participants, it is believed that we will have a strong scientific and social basis to address the environmental problems of the 21st Century. --- paper_title: Service chaining architectures for applications implementing distributed geographic information processing paper_content: Service-Oriented Architectures can be used as a framework for enabling distributed geographic information processing (DGIP). The Open Geospatial Consortium (OGC) has published several standards for services. These can be composed into service chains that support the execution of workflows constituting complex DGIP applications. In this paper, we introduce a basic architecture and building blocks for building DGIP applications based on service chains. We investigate the issues arising from the composition of OGC services into such service chains. We study various architectural patterns in order to guide application developers in their task of implementing DGIP applications based on service chains. More specifically, we focus on the control flow and data flow patterns in the execution of a workflow. These issues are illustrated with an example from the domain of risk management: a forest fire risk mapping scenario. --- paper_title: Which Service Interfaces fit the Model Web paper_content: The Model Web has been proposed as a concept for integrating scientific models in an interoperable and collaborative manner. However, four years after the initial idea was formulated, there is still no stable long-term solution. Multiple authors propose Web Service based approaches to model publication and chaining, but current implementations are highly case specific and lack flexibility. This paper discusses the Web Service interfaces that are required for supporting integrated environmental modeling in a sustainable manner. We explore ways to expose environmental models and their components using Web Service interfaces. Our discussions present work in progress for establishing the Web Services technological grounds for simplifying information publication and exchange within the Model Web. As a main outcome, this contribution identifies challenges with respect to the required geoprocessing and relates them to currently available Web Service standards. --- paper_title: Measuring complexity in OGC web services XML schemas: pragmatic use and solutions paper_content: The use of standards in the geospatial domain, such as those defined by the Open Geospatial Consortium (OGC), for exchanging data has brought a great deal of interoperability upon which systems can be built in a reliable way. Unfortunately, these standards are becoming increasingly complex, making their implementation an arduous task.
The use of appropriate software metrics can be very useful to quantify different properties of the standards that ultimately may suggest different solutions to deal with problems related to their complexity. In this regard, we present in this article an attempt to measure the complexity of the schemas associated with the OGC implementation specifications. We use a comprehensive set of metrics to provide a multidimensional view of this complexity. These metrics can be used to evaluate the impact of design decisions, study the evolution of schemas, and so on. We also present and evaluate different solutions that could be applied to overcome some of the problems associated with the complexity of the schemas. --- paper_title: BPELPower-A BPEL execution engine for geospatial web services paper_content: The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms. --- paper_title: Scientific versus Business Workflows paper_content: The formal concept of a workflow has existed in the business world for a long time. An entire industry of tools and technology devoted to workflow management has been developed and marketed to meet the needs of commercial enterprises. The Workflow Management Coalition (WfMC) has existed for over ten years and has developed a large set of reference models, documents, and standards. Why has the scientific community not adopted these existing standards? While it is not uncommon for the scientific community to reinvent technology rather than purchase existing solutions, there are issues involved in the technical applications that are unique to science, and we will attempt to characterize some of these here. There are, however, many core concepts that have been developed in the business workflow community that directly relate to science, and we will outline them below. --- paper_title: Service-Oriented Computing: State of the Art and Research Challenges paper_content: Service-oriented computing promotes the idea of assembling application components into a network of services that can be loosely coupled to create flexible, dynamic business processes and agile applications that span organizations and computing platforms. 
An SOC research road map provides a context for exploring ongoing research activities. --- paper_title: WPS mediation: An approach to process geospatial data on different computing backends paper_content: The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities and various problems emerge when trying to use them in data and computing-intensive domains such as environmental sciences. These problems are usually not or only partially solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) related methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages, and by proposing an extension of WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits, and issues that need to be further investigated to improve performances. --- paper_title: REST-based semantic feature catalogue services paper_content: The exchange of scientific datasets online and their subsequent use by service-centric applications requires semantic description of the data objects, or features, being transacted. This is particularly the case in the Earth Systems Sciences. Semantic repositories provide a partial answer to generating rich content. Ideally these repositories should be founded in a framework that permits cross-referencing between independently established semantic data-stores and which provides for a loose coupling between repositories and the agents or clients that will use them. We investigated the applicability of using an ISO 19110-based Feature Catalogue as a cross-domain, semantic repository for various Earth Systems Science communities of interest. Our aim was to develop a repository and a set of services capable of providing semantic content for consumption by smart clients. The constraint applied throughout the research was to develop a set of tools that would present a very low uptake barrier for programmers and domain specialists alike. To meet this challenge, we used Representational State Transfer (REST)-based services to expose content from an enhanced implementation of an ISO 19110-based Feature Catalogue. This article describes how the ISO 19110 conceptual model was augmented during implementation to cater for the requirements of two large multi-disciplinary science groups: the Scientific Committee on Antarctic Research and the Australian Ocean Data Network. The reasons for opting for a REST-based service pattern are discussed and the three REST service types that were developed are described. --- paper_title: Linked Data: Evolving the Web Into a Global Data Space paper_content: The World Wide Web has enabled the creation of a global information space comprising linked documents. 
As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study. --- paper_title: A RESTful proxy and data model for linked sensor data paper_content: Abstract The vision of a Digital Earth calls for more dynamic information systems, new sources of information, and stronger capabilities for their integration. Sensor networks have been identified as a major information source for the Digital Earth, while Semantic Web technologies have been proposed to facilitate integration. So far, sensor data are stored and published using the Observations & Measurements standard of the Open Geospatial Consortium (OGC) as data model. With the advent of Volunteered Geographic Information and the Semantic Sensor Web, work on an ontological model gained importance within Sensor Web Enablement (SWE). In contrast to data models, an ontological approach abstracts from implementation details by focusing on modeling the physical world from the perspective of a particular domain. Ontologies restrict the interpretation of vocabularies toward their intended meaning. The ongoing paradigm shift to Linked Sensor Data complements this attempt. Two questions have to be addressed: (1) ... --- paper_title: Enhancing integrated environmental modelling by designing resource-oriented interfaces paper_content: Integrated environmental modelling is gaining momentum for addressing grand scientific challenges such as monitoring the environment for change detection and forecasting environmental conditions along with the consequences for society. Such challenges can only be addressed by a multi-disciplinary approach, in which socio-economic, geospatial, and environmental information becomes inter-connected. However, existing solutions cannot be seamlessly integrated and current interaction paradigms prevent mainstream usage of the existing technology. In particular, it is still difficult to access and join harmonized data and processing algorithms that are provided by different environmental information infrastructures. 
In this paper we take a novel approach for integrated environmental modelling based on the notion of inter-linked resources on the Web. We present design practices for creating resource-oriented interfaces, driven by an interaction protocol built on the combination of valid linkages to enhance resource integration, accompanied by associated recommendations for implementation. The suggested resource-oriented approach provides a solution to the problems identified above, but still requires intense prototyping and experimentation. We discuss the central open issues and present a roadmap for future research. Highlights: resource-oriented interfaces for linking of environmental resources and models; design practices for creating resource-oriented interfaces; implementation recommendations for inter-linked resources. --- paper_title: Integrated environmental modeling: A vision and roadmap for the future paper_content: Integrated environmental modeling (IEM) is inspired by modern environmental problems, decisions, and policies and enabled by transdisciplinary science and computer capabilities that allow the environment to be considered in a holistic way. The problems are characterized by the extent of the environmental system involved, dynamic and interdependent nature of stressors and their impacts, diversity of stakeholders, and integration of social, economic, and environmental considerations. IEM provides a science-based structure to develop and organize relevant knowledge and information and apply it to explain, explore, and predict the behavior of environmental systems in response to human and natural sources of stress. During the past several years a number of workshops were held that brought IEM practitioners together to share experiences and discuss future needs and directions. In this paper we organize and present the results of these discussions. IEM is presented as a landscape containing four interdependent elements: applications, science, technology, and community. The elements are described from the perspective of their role in the landscape, current practices, and challenges that must be addressed. Workshop participants envision a global scale IEM community that leverages modern technologies to streamline the movement of science-based knowledge from its sources in research, through its organization into databases and models, to its integration and application for problem solving purposes. Achieving this vision will require that the global community of IEM stakeholders transcend social and organizational boundaries and pursue greater levels of collaboration. Among the highest priorities for community action are the development of standards for publishing IEM data and models in forms suitable for automated discovery, access, and integration; education of the next generation of environmental stakeholders, with a focus on transdisciplinary research, development, and decision making; and providing a web-based platform for community interactions (e.g., continuous virtual workshops). --- paper_title: A scientific workflow environment for Earth system related studies paper_content: Many separate tasks must be performed to configure, run, and analyze Earth system modeling applications. This work is motivated by the complexities of running a large modeling system on a high performance network and the need to reduce those complexities, particularly for the average user.
Scientific workflow systems can be used to simplify these tasks and their relationships, although how to implement such systems is still an open research area. In this paper, we present a methodology to combine a scientific workflow and modeling framework approach to create a standardized work environment and provide a first example of a self-describing Earth system model. We then show the results of an example workflow that is based on the proposed methodology. The example workflow allows running and analyzing a global circulation model on both a grid computing environment and a cluster system, with meaningful abstractions for the model and computing environment. As can be seen through this example, a layered approach to collecting provenance and metadata information has the added benefit of documenting a run in far greater detail than before. This approach facilitates exploration of runs and leads to possible reproducibility. ---
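The OGC service abstracts above (WPS execution, service chaining, BPEL orchestration) all revolve around invoking remote geoprocessing operations over HTTP and wiring their outputs together. The sketch below shows the general shape of such a call and of a two-step chain, using only the Python standard library; the endpoint URL, process identifiers and input names are hypothetical placeholders rather than any real deployment, and real services frequently require an XML Execute document sent via POST instead of the simplified key-value GET request shown here.

```python
# Minimal sketch (not a full client): issue a KVP-style Execute request to an
# OGC Web Processing Service and pass a result onward to a second process,
# which is the basic pattern behind the service-chaining architectures above.
# The endpoint URL and process identifiers are hypothetical placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen

WPS_ENDPOINT = "http://example.org/wps"  # hypothetical WPS endpoint


def wps_execute(process_id: str, data_inputs: dict) -> bytes:
    """Send a simplified WPS 1.0.0 Execute request using key-value-pair encoding."""
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process_id,
        # DataInputs encoded as semicolon-separated key=value pairs.
        "datainputs": ";".join(f"{k}={v}" for k, v in data_inputs.items()),
    }
    with urlopen(f"{WPS_ENDPOINT}?{urlencode(params)}") as response:
        return response.read()  # XML ExecuteResponse (or raw result)


if __name__ == "__main__":
    # Step 1: run a hypothetical NDVI process on a scene referenced by URL.
    ndvi_xml = wps_execute("NDVI", {"scene": "http://example.org/scene.tif"})
    # A real chain would parse the output reference out of ndvi_xml; here a
    # placeholder URL stands in for it before calling the next process.
    wps_execute("Threshold", {"raster": "http://example.org/ndvi_result.tif",
                              "level": "0.3"})
```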
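The Linked Data and linked-sensor-data abstracts describe publishing data as Web resources connected by typed links. A minimal sketch of that idea using the rdflib library (assumed to be installed) follows; the URIs and the small ad-hoc vocabulary are illustrative placeholders, not an official ontology such as SSN or O&M.

```python
# A minimal sketch of publishing an observation as Linked Data with rdflib
# (assumed to be installed). The URIs and the tiny ad-hoc vocabulary are
# illustrative placeholders only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/vocab#")          # hypothetical vocabulary
sensor = URIRef("http://example.org/sensors/ts-42")  # hypothetical sensor URI
obs = URIRef("http://example.org/observations/obs-1")

g = Graph()
g.bind("ex", EX)

# Describe the observation and link it to the sensor that produced it.
g.add((obs, RDF.type, EX.Observation))
g.add((obs, EX.observedProperty, EX.AirTemperature))
g.add((obs, EX.hasResult, Literal("21.5", datatype=XSD.decimal)))
g.add((obs, EX.madeBySensor, sensor))    # typed link between two Web resources

print(g.serialize(format="turtle"))
```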
Title: Seeing the forest through the trees: A review of integrated environmental modelling tools
Section 1: Introduction
Description 1: This section introduces the background of natural hazards, the need for integrated modelling (IM), and the aims and structure of this review.
Section 2: Engineering concepts of reuse
Description 2: This section discusses the challenges and concepts related to importing and reusing existing models or components into IM tools, including white-box and black-box approaches.
Section 3: Methodology and data
Description 3: This section explains the methodology employed in the review, including literature search and selection processes, and the classification of IM tools by viewpoints.
Section 4: Viewpoint-based analysis of IM tools
Description 4: This section analyzes the selected IM tools by different viewpoints, including component-based modelling frameworks, scientific workflow systems, virtual research environments, service-based modelling, and resource-based modelling.
Section 5: Cross-viewpoint discussion
Description 5: This section discusses the potential relationships and connections between various viewpoints in terms of reusability.
Section 6: Concluding remarks
Description 6: This section summarizes the findings, presents concluding remarks on the reusability in IM, and suggests future research directions.
An Overview of 3D Object Grasp Synthesis Algorithms
9
--- paper_title: On the stability of grasped objects paper_content: A grasped object is defined to be in equilibrium if the sum of all forces and moments acting on a body equals zero. An equilibrium grasp may be stable or unstable. Force closed grasps are a well-known subset of equilibrium grasps, and they are known to be stable. However, not all stable grasps are force closed, including many common and easily obtainable grasps. In this paper, we classify the categories of equilibrium grasps and establish a general framework for the determination of the stability of a grasp. In order to analyze the stability of grasps with multiple contacts, we first model the compliance at each contact. We develop expressions for the changes in contact forces as a function of the rigid body relative motion between the fingers and the grasped object. The stability of a grasp is shown to depend on the local curvature of the contacting bodies, as well as the magnitude and arrangement of the contact forces. We then derive results providing simple criteria to determine the stability of a grasped object, including the special but important limiting case of rigid bodies where the contact compliance is zero. --- paper_title: Constructing Force- Closure Grasps paper_content: This paper presents fast and simple algorithms for directly constructing force-closure grasps based on the shape of the grasped object. The synthesis of force-closure grasps finds in dependent regions of contact for the fingertips, such that the motion of the grasped object is totally constrained. A force- closure grasp implies equilibrium grasps exist. In the reverse direction, we show that most nonmarginal equilibrium grasps are force-closure grasps. --- paper_title: The condition for contact grasp stability paper_content: The author distinguishes between two types of grasp stability, called spatial grasp stability and contact grasp stability. The former is the tendency of the grasped object to return to an equilibrium location in space; the latter is the tendency of the points of contact to return to an equilibrium position on the object's surface. It is shown, via examples, that spatial stability cannot capture certain intuitive concepts of grasp stability, and hence that any full understanding of grasp stability must include contact stability. A model of how the positions of the points of contact evolve in time on the surface of the grasped object in the absence of any external force or active feedback is derived. From this model, a condition is obtained which determines whether or not a two-fingered grasp is contact stable. > --- paper_title: On computing three-finger force-closure grasps of polygonal objects paper_content: This paper presents a new approach to the computation of stable grasps of polygonal objects. The authors consider the case of a hand equipped with three hard fingers and assume point contact with friction. They prove a sufficient condition for force-closure grasps that leads to a system of linear constraints in the position of the fingers along the polygonal edges. All regions satisfying these constraints are found by a new projection algorithm based on linear parameter elimination accelerated by simplex techniques. Maximal object segments where fingers can be positioned independently are found by linear optimization within the grasp regions. The approach has been implemented and examples are presented. 
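Many of the force-closure papers collected here rely on the same computational core: linearize each friction cone into a finite set of edge forces, map them to wrenches, and test whether the origin of wrench space lies strictly inside the convex hull of these primitive contact wrenches. The following planar sketch (assuming numpy and scipy) illustrates that test; the contact positions, normals and friction coefficient are toy values, and the returned margin doubles as a crude largest-inscribed-ball quality score.

```python
# Planar force-closure test: build the primitive contact wrenches (fx, fy, torque)
# from the friction cone edges and check that the wrench-space origin lies
# strictly inside their convex hull. Toy values throughout; a degenerate wrench
# set makes ConvexHull raise, which also rules out force closure.
import numpy as np
from scipy.spatial import ConvexHull


def contact_wrenches(p, n, mu):
    """Primitive wrenches of one planar point contact with Coulomb friction."""
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    t = np.array([-n[1], n[0]])              # tangent direction
    wrenches = []
    for edge in (n + mu * t, n - mu * t):    # two edges of the friction cone
        f = edge / np.linalg.norm(edge)      # unit-magnitude edge force
        tau = p[0] * f[1] - p[1] * f[0]      # 2-D moment about the origin
        wrenches.append([f[0], f[1], tau])
    return wrenches


def force_closure(contacts, mu=0.3, tol=1e-9):
    """Return (is_force_closure, margin) for a list of (position, normal) pairs."""
    W = np.array([w for p, n in contacts for w in contact_wrenches(p, n, mu)])
    hull = ConvexHull(W)                     # hull of primitive contact wrenches
    # hull.equations rows are [a, b] with a.x + b <= 0 inside; at the origin the
    # signed distance to each facet is the offset b, so the grasp is force
    # closure when every offset is strictly negative.
    margin = -np.max(hull.equations[:, -1])
    return margin > tol, margin


if __name__ == "__main__":
    # Three frictional contacts on a unit square: left, right and bottom faces.
    square_grasp = [((-1.0, 0.0), (1.0, 0.0)),
                    ((1.0, 0.0), (-1.0, 0.0)),
                    ((0.0, -1.0), (0.0, 1.0))]
    print(force_closure(square_grasp))       # expected: (True, positive margin)
```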
--- paper_title: Distributed impedance control of multiple robot systems paper_content: This paper proposes the distributed impedance approach as a new formulation of multiple robot systems control. In this approach, each cooperating manipulator is provided with its own independent impedance controller. In addition, along selected degrees of freedom, force control is achieved through an external loop, in order to improve control of the object's internal loading. Extensive stability analysis is performed based on a realistic model that includes robots impedance and object dynamics. Experiments are performed using two cooperating industrial robots holding an object through point contacts. Force and position control actions are suitably dispatched to achieve both internal loading control and object position control. The performance of the system is demonstrated for transporting tasks. --- paper_title: On the Closure Properties of Robotic Grasping paper_content: The form-closure and force-closure properties of robotic grasping are investigated. Loosely speaking, these properties are related to the capability of the robot to inhibit motions of the workpiece in spite of externally applied forces. In this article, form-closure is considered as a purely geometric property of a set of unilateral (contact) constraints, such as those applied on a workpiece by a mechanical fixture, while force-closure is related to the capability of the particular robotic device being considered to apply forces through contacts. The concepts of partial form- and force-closure properties are introduced and discussed, and an algorithm is proposed to obtain a synthetic geometric description of partial form-closure constraints. Although the literature abounds with form-closure tests, proposed algorithms for testing force-closure are either approximate or computationally expensive. This article proves the equivalence of force-closure analysis with the study of the equilibria of an ordinary... --- paper_title: On the stability and instantaneous velocity of grasped frictionless objects paper_content: An efficient quantitative test for form closure valid for any number of contact points is formulated as a linear program, the optimal objective value of which provides a measure of how far a grasp is from losing form closure. When the grasp does not have form closure, manipulation planning requires a means for predicting the object's stability and instantaneous velocity, given the joint velocities of the hand. The classical approach to computing these quantities is to solve the systems of kinematic inequalities corresponding to all possible combinations of separating or sliding at the contacts. All combinations resulting in the interpenetration of bodies or the infeasibility of the equilibrium equations are rejected. The remaining combination is consistent with all the constraints and is used to compute the velocity of the manipulated object and the contact forces, which indicate whether or not the object is stable. A linear program whose solution yields the same information as the classical approach, usually without explicit testing of all possible combinations of contact interactions, is formulated. --- paper_title: Computing n-Finger Form-Closure Grasps on Polygonal Objects paper_content: This paper presents an efficient algorithm for computing all n-finger form-closure grasps on a polygonal object based on a new sufficient and necessary condition for form-closure.
With this new condition, it is possible to transfer the problem of computing the form-closure grasp in R3 to one in R1. We demonstrate that the non-form-closure grasps consist of two convex polytopes in the space of n parameters representing grasp points on sides of the polygon. The proposed algorithm works efficiently for n ≤ 3 and takes O(n3n/2) time for n > 3, where n denotes the number of the fingers. The algorithm has been implemented and its efficiency has been confirmed with two examples. --- paper_title: Synthesis of Force-Closure Grasps on 3-D Objects Based on the Q Distance paper_content: The synthesis of force-closure grasps on three-dimensional (3-D) objects is a fundamental issue in robotic grasping and dextrous manipulation. In this paper, a numerical force-closure test is developed based on the concept of Q distance. With some mild and realistic assumptions, the proposed test criterion is differentiable almost everywhere and its derivative can be calculated exactly. On this basis, we present an algorithm for planning force-closure grasps, which is implemented by applying descent search to the proposed numerical test in the grasp configuration space. The algorithm is generally applicable to planning optimal force-closure grasps on 3-D objects with curved surfaces and with arbitrary number of contact points. The effectiveness and efficiency of the algorithm are demonstrated by using simulation examples. --- paper_title: Constructing stable grasps in 3D paper_content: This paper presents fast and simple algorithms for directly constructing stable grasps in 3D. The synthesis of stable grasps constructs virtual springs at the contacts, such that the grasped object is stable, and has a desired stiffness matrix about its stable equilibrium. The paper develops a simple geometric relation between the stiffness of the grasp and the spatial configuration of the virtual springs at the contacts. The stiffness of the grasp also depends on whether the points of contact stick, or slide without friction on the edges of the object. --- paper_title: Generalized stability of compliant grasps paper_content: We develop a geometric framework for the stability analysis of multifingered grasps and propose a measure of grasp stability for arbitrary perturbations and loading conditions. The measure requires a choice of metric on the group of rigid body displacements. We show that although the stability of a grasp itself does not depend on the choice of metric, comparison of the stability of different grasps depends on the metric. Finally, we provide some insight into the choice of metrics for stability analysis. --- paper_title: Liapunov Stability of Force-Controlled Grasps with a Multi-Fingered Hand paper_content: Holding an object stably is a building block for dexterous manipulation with a multi-fingered hand. In recent years a rather large body of literature related to this topic has developed. These works isolate some desired property of a grasp and use this property as the definition of stable grasp. To varying degrees, these approaches ignore the system dynamics. The purpose of this article is to put grasp stability on a more basic and fundamental foundation by defining grasp stability in terms of the well-established stability theory of differential equations. This approach serves to unify the field and to bring a large body of knowledge to bear on the field. Some relationships between the stability concepts used here and the previously used grasp stability concepts are discussed. 
A hierarchy of three levels of approach to the problem is treated. We consider that the grasp force applied is a basic consideration, in terms of ensuring both that there is sufficient force to prevent dropping the object and that the forces are not too large to cause breakage. As a result, we first investigate the use of constant force grasps. It is shown that with the proper combination of finger locations and grasp forces, such grasps can be Liapunov stable, and methods are presented that help find such grasps. It is also shown that such grasps cannot be asymptotically stable. To produce asymptotic stability, one must alter the forces applied to an object when the object deviates from equilibrium, and a linear feedback force law is given for this. It results in local asymptotic stability guaranteeing convergence to the desired grasp equilibrium from all states within a region of attraction in the state space. Some of the results are similar to results obtained previously, but this time they have a stronger meaning in terms of the dynamic response of the system. In the third level, a nonlinear force law is given that, to within certain limitations, produces global asymptotic grasp stability, so that all initial states are guaranteed to converge to the desired grasp equilibrium. The method is robust to large classes of inaccuracies in the implementation. --- paper_title: Robotic grasping and contact: a review paper_content: In this paper, we survey the field of robotic grasping and the work that has been done in this area over the last two decades, with a slight bias toward the development of the theoretical framework and analytical results in this area. --- paper_title: On computing immobilizing grasps of 3-D curved objects paper_content: We propose an algorithm for searching a form-closure grasp on a 3-D discretized curved object. The algorithm first randomly selects an initial set of seven contacts from the large collection of candidate contacts and checks its form-closure property by the test algorithm developed in our earlier work. For the non-form-closure grasp, the candidate contacts are classified into set S where the contact wrenches lie on the same side of the separating facet as the initial wrenches and set D where the contact wrenches lie on the different side. The separating facet can be calculated through the test algorithm. Then the initial seven contacts are iteratively improved by exchanging with the candidate contacts in set D. The best-first motion together with random motion are used in the exchange procedure to ensure the convex hull of the seven contact wrenches will approach the origin step by step until the origin is completely contained. Finally, the algorithm has been implemented and its efficiency has been ascertained by three examples. --- paper_title: On computing robust n-finger force-closure grasps of 3D objects paper_content: The paper deals with the problem of computing frictional force-closure grasps of 3D objects. The key idea of the presented work is the demonstration that wrenches associated with any three non-aligned contact points of 3D objects form a basis of their corresponding wrench space. This result permits the formulation of a new sufficient force-closure test. Our approach works with general objects, modelled with a set of points, and with any number n of contacts (n ≥ 4). A quality criterion is also introduced. A corresponding algorithm for computing robust force-closure grasps has been developed.
Its efficiency is confirmed by comparing it to the classical convex-hull method [26]. --- paper_title: On Computing Four-Finger Equilibrium and Force-Closure Grasps of Polyhedral Objects paper_content: This article addresses the problem of computing stable grasps of three-dimensional polyhedral objects. We consider the case of a hand equipped with four hard fingers and assume point contact with friction. We prove new necessary and sufficient conditions for equilibrium and force closure, and present a geometric characterization of all possible types of four-finger equilibrium grasps. We then focus on concurrent grasps, for which the lines of action of the four contact forces all intersect in a point. In this case, the equilibrium conditions are linear in the unknown grasp parameters, which reduces the problem of computing the stable grasp regions in configuration space to the problem of constructing the eight-dimensional projection of an 11-dimensional polytope. We present two projection methods: the first one uses a simple Gaussian elimination approach, while the second one relies on a novel output-sensitive contour-tracking algorithm. Finally, we use linear optimization within the valid configuration... --- paper_title: Qualitative test and force optimization of 3-D frictional form-closure grasps using linear programming paper_content: This paper formalizes the qualitative test of 3D frictional form-closure grasps of n robotic fingers as a problem of linear programming (LP). It is well-known that a sufficient and necessary condition for form-closure grasps is that the origin of the wrench space lies inside the convex hull of primitive contact wrenches. We demonstrate that the problem of querying whether the origin lies inside the convex hull is equivalent to a ray-shooting problem, which is dual to an LP problem based on the duality between convex hulls and convex polytopes. Furthermore, this paper addresses the problem of minimizing the L1 norm of the grasp forces balancing an external wrench, which can also be transformed to a ray-shooting problem. We have implemented the algorithms and confirmed their real-time efficiency for qualitative test and grasp force optimization. --- paper_title: Constructing stable grasps in 3D paper_content: This paper presents fast and simple algorithms for directly constructing stable grasps in 3D. The synthesis of stable grasps constructs virtual springs at the contacts, such that the grasped object is stable, and has a desired stiffness matrix about its stable equilibrium. The paper develops a simple geometric relation between the stiffness of the grasp and the spatial configuration of the virtual springs at the contacts. The stiffness of the grasp also depends on whether the points of contact stick, or slide without friction on the edges of the object. --- paper_title: On characterizing and computing three- and four-finger force-closure grasps of polyhedral objects paper_content: The problem of characterizing the force-closure grasps of a three-dimensional object by a hand equipped with three or four hard fingers is addressed. Several necessary and several sufficient conditions for force-closure are proved. For polyhedral objects, sufficient conditions that are linear in the unknown parameters are proved. This reduces the problem of computing force-closure grasps of polyhedral objects to the problem of projecting a polytope onto some linear subspace.
An efficient, output-sensitive algorithm is presented for computing the projection, together with an algorithm using linear programming for computing maximal grasp regions. > --- paper_title: Automatic grasp planning using shape primitives paper_content: Automatic grasp planning for robotic hands is a difficult problem because of the huge number of possible hand configurations. However, humans simplify the problem by choosing an appropriate prehensile posture appropriate for the object and task to be performed. By modeling an object as a set of shape primitives, such as spheres, cylinders, cones and boxes, we can use a set of rules to generate a set of grasp starting positions and pregrasp shapes that can then be tested on the object model. Each grasp is tested and evaluated within our grasping simulator "GraspIt!", and the best grasps are presented to the user. The simulator can also plan grasps in a complex environment involving obstacles and the reachability constraints of a robot arm. --- paper_title: Synthesis of Force-Closure Grasps on 3-D Objects Based on the Q Distance paper_content: The synthesis of force-closure grasps on three-dimensional (3-D) objects is a fundamental issue in robotic grasping and dextrous manipulation. In this paper, a numerical force-closure test is developed based on the concept of Q distance. With some mild and realistic assumptions, the proposed test criterion is differentiable almost everywhere and its derivative can be calculated exactly. On this basis, we present an algorithm for planning force-closure grasps, which is implemented by applying descent search to the proposed numerical test in the grasp configuration space. The algorithm is generally applicable to planning optimal force-closure grasps on 3-D objects with curved surfaces and with arbitrary number of contact points. The effectiveness and efficiency of the algorithm are demonstrated by using simulation examples. --- paper_title: Modeling manufacturing grips and correlations with the design of robotic hands paper_content: This paper represents the first part of an effort to codify the knowledge required for manipulation tasks in a small-batch manufacturing cell. The motivation for this work is to pave the way for robots that can independently determine how to grasp and manipulate parts in a limited environment and to facilitate the design of advanced, but cost-effective manufacturing hands. We begin with an examination of grasps used by humans working with tools and metal parts. The grips are compared in terms of power, contact area, friction, damping and tactile sensitivity. The comparison leads to a grip taxonomy in which grasps are mapped against task-related quantities (such as power) and object-related quantities (such as slenderness). The examinations of the task requirements and grasps suggest a number of general principles for the design and control of manufacturing hands. --- paper_title: Examples of 3D grasp quality computations paper_content: Previous grasp quality research is mainly theoretical, and has assumed that contact types and positions are given, in order to preserve the generality of the proposed quality measures. The example results provided by these works either ignore hand geometry and kinematics entirely or involve only the simplest of grippers. 
We present a unique grasp analysis system that, when given a 3D object, hand, and pose for the hand, can accurately determine the types of contacts that will occur between the links of the hand and the object, and compute two measures of quality for the grasp. Using models of two articulated robotic hands, we analyze several grasps of a polyhedral model of a telephone handset, and we use a novel technique to visualize the 6D space used in these computations. In addition, we demonstrate the possibility of using this system for synthesizing high quality grasps by performing a search over a subset of possible hand configurations. --- paper_title: Fast planning of precision grasps for 3D objects paper_content: In the near future, more and more robots will be used for servicing tasks, tasks in hazardous environments or space applications. Dextrous hands are a powerful and flexible tool to interact with these real world environments that are not specially tailored for robots. In order to grasp and manipulate real world objects, grasp planning systems are required. Grasp planning for general 3D objects is quite a complex problem requiring a large amount of computing time. Fast algorithms are required to integrate grasp planners in online planning systems for robots. This paper presents an heuristic approach towards fast planning of precision grasps for realistic, arbitrarily shaped 3D objects. In this approach a number of feasible grasp candidates are generated heuristically. These grasp candidates are qualified using an efficiently computable grasp quality measure and the best candidate is chosen. It is shown that only a relatively small number of grasp candidates has to be generated in order to obtain a good-although not optimal-grasp. --- paper_title: Grasping the dice by dicing the grasp paper_content: Many methods for generating and analyzing grasps have been developed in the recent years. They gave insight and comprehension of grasping with robot hands but many of them are rather complicated to implement and of high computational complexity. In this paper we study if the basic quality criterion for grasps, the force-closure property, is in principle easy or difficult to reach. We show that it is not necessary to generate optimal grasps, due to a certain quality measure, for real robot grasping tasks where an average quality grasp is acceptable. We present statistical data that confirm our opinion that a randomized grasp generation algorithm is fast and suitable for the planning of robot grasping tasks. --- paper_title: Task-oriented quality measures for dextrous grasping paper_content: We propose a new and efficient approach to compute task oriented quality measures for dextrous grasps. Tasks can be specified as a single wrench to be applied, as a rough direction in form of a wrench cone, or as a complex wrench polytope. Based on the linear matrix inequality formalism to treat the friction cone constraints we formulate respective convex optimization problems, whose solutions give the maximal applicable wrench in the task direction together with the needed contact forces. Numerical experiments show that application to complex grasps with many contacts is possible. --- paper_title: Manipulability of Robotic Mechanisms paper_content: This paper discusses the manipulating ability of robotic mechanisms in positioning and orienting end-effectors and proposes a measure of manipulability. 
Some properties of this measure are obtained, the best postures of various types of manipulators are given, and a four-degree-of-freedom finger is considered from the viewpoint of the measure. The postures somewhat resemble those of human arms and fingers. --- paper_title: Task-Oriented Grasping using Hand Preshapes and Task Frames paper_content: In this paper we present a robot that is able to perform daily manipulation tasks in a home environment, such as opening doors and drawers. Taking as input a simplified object model and the task to perform, the robot automatically finds a grasp suitable for the task and performs it. For this, we identify a set of hand preshapes and classify them according to the grasp wrench space they generate. Given a task, the robot selects the most suitable hand preshape and automatically plans a set of actions in order to reach the object and to perform the task, taking continuously into account the task forces. The concept of hand preshape is extended for the inclusion of a task frame, which is a concept from task planning, thus filling the gap between the grasp and the task. --- paper_title: Task oriented optimal grasping by multifingered robot hands paper_content: We discuss the problem of optimal grasping of an object by a multifingered robot hand. We axiomatize using screw theory and elementary differential geometry the concept of a grasp and characterize its stability. Three quality measures for evaluating a grasp are then proposed. The last quality measure is task oriented and needs the development of a procedure for modeling tasks as ellipsoids in the wrench space of the object. Numerical computations of these quality measures and the selection of an optimal grasp are addressed in detail. Several examples are given using these quality measures to show that they are consistent with human grasping experience. --- paper_title: Robot manipulability paper_content: This paper demonstrates fundamental problems with dexterity measures found throughout the robotics literature and offers a methodology for correcting those problems. Measures of robot dexterity derived from eigenvalues, eigenvectors, similarity transformations, singular-value decompositions and the Moore-Penrose inverse of the manipulator Jacobian do not have invariant physical meaning. The paper presents manipulability ellipsoids and manipulability screw-subspaces for both redundant and nonredundant manipulators. --- paper_title: Task Compatibility of Manipulator Postures paper_content: In performing a manipulation task, humans tend to adopt arm postures that most effectively utilize the motion and strength capabilities of the arm. Selecting arm postures that are compatible with the task requirements has become almost instinctive to humans. By mimicking this approach in robotic manipulation, we can exploit the full capability of a manipulator in performing a task. An index is proposed for measuring the compatibility of manipulator postures with respect to a generalized task description. The manipulator is viewed as a mechanical transformer, with joint space velocity and force as input and task space velocity and force as output. Optimization of the index corresponds to matching the velocity and force transmission characteristics to the task requirements. The applications of this index to manipulator redundancy utilization and workspace design are also discussed.
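The task-oriented measures above (task ellipsoids, task wrench cones and task-compatible preshapes) ask how well a grasp can generate the wrenches a task needs. A much-reduced version of that question, namely the largest multiple of a single task wrench that a grasp can produce with bounded total contact force, is a small linear program. The sketch below (assuming numpy and scipy) illustrates it; the primitive wrench matrix and the task wrench are toy values, not taken from any of the cited papers.

```python
# Task-oriented quality sketch: the largest alpha such that the grasp can apply
# alpha times the task wrench with non-negative edge-force coefficients summing
# to at most one. Resisting an external wrench means applying its negative.
import numpy as np
from scipy.optimize import linprog

# Columns are primitive contact wrenches (fx, fy, torque) of a toy planar grasp.
W = np.array([[0.96, 0.96, -0.96, -0.96, 0.29, -0.29],
              [0.29, -0.29, 0.29, -0.29, 0.96, 0.96],
              [-0.29, 0.29, 0.29, -0.29, 0.29, -0.29]])


def task_quality(W, w_task):
    """Largest alpha with W @ lam = alpha * w_task, lam >= 0, sum(lam) <= 1."""
    m = W.shape[1]
    c = np.zeros(m + 1)
    c[-1] = -1.0                                        # maximise alpha
    A_eq = np.hstack([W, -np.asarray(w_task, float).reshape(-1, 1)])
    b_eq = np.zeros(W.shape[0])
    A_ub = np.append(np.ones(m), 0.0).reshape(1, -1)    # sum of lambdas <= 1
    b_ub = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + 1))
    return res.x[-1] if res.success else 0.0


if __name__ == "__main__":
    # How hard can this grasp push the object straight "up" (a pure +fy wrench)?
    print(task_quality(W, [0.0, 1.0, 0.0]))             # roughly 0.96 for the toy data
```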
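The manipulability measure discussed in the abstracts above is straightforward to compute from the manipulator Jacobian as w = sqrt(det(J J^T)). The following worked example (assuming numpy) evaluates it for a planar two-link arm with toy link lengths; for this arm the measure reduces to l1*l2*|sin(q2)|, vanishing at the outstretched singularity and peaking when the elbow is bent at 90 degrees.

```python
# Manipulability of a planar two-link arm: w = sqrt(det(J J^T)).
import numpy as np


def jacobian_2link(q1, q2, l1=1.0, l2=0.8):
    """Position Jacobian of a planar 2-link arm (toy link lengths)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [l1 * c1 + l2 * c12, l2 * c12]])


def manipulability(J):
    """Yoshikawa-style measure sqrt(det(J J^T))."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))


if __name__ == "__main__":
    for q2 in (0.0, np.pi / 4, np.pi / 2):
        w = manipulability(jacobian_2link(q1=0.3, q2=q2))
        print(f"elbow angle {q2:.2f} rad -> manipulability {w:.3f}")
```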
--- paper_title: Imitation in Animals and Artifacts paper_content: The effort to explain the imitative abilities of humans and other animals draws on fields as diverse as animal behavior, artificial intelligence, computer science, comparative psychology, neuroscience, primatology, and linguistics. This volume represents a first step toward integrating research from those studying imitation in humans and other animals, and those studying imitation through the construction of computer software and robots. Imitation is of particular importance in enabling robotic or software agents to share skills without the intervention of a programmer and in the more general context of interaction and collaboration between software agents and humans. Imitation provides a way for the agent -- whether biological or artificial -- to establish a "social relationship" and learn about the demonstrator's actions, in order to include them in its own behavioral repertoire. Building robots and software agents that can imitate other artificial or human agents in an appropriate way involves complex problems of perception, experience, context, and action, solved in nature in various ways by animals that imitate. --- paper_title: A survey of robot learning from demonstration paper_content: We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research. --- paper_title: Computational Approaches to Motor Learning by Imitation paper_content: Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking-indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. 
Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. --- paper_title: Learning techniques in a dataglove based telemanipulation system for the DLR hand paper_content: We present a setup to control a four-finger anthropomorphic robot hand using a dataglove. To be able to accurately use the dataglove we implemented a nonlinear learning calibration using a novel neural network technique. Experiments show that a resulting positioning error not exceeding 1.8 mm, but typically 0.5 mm, per finger can be obtained; this accuracy is sufficiently precise for grasping tasks. Based on the dataglove calibration we present a solution for the mapping of human and artificial hand workspaces that enables an operator to intuitively and easily telemanipulate objects with the artificial hand. --- paper_title: Detection and evaluation of grasping positions for autonomous agents paper_content: The action of grasping an object is very important for a humanoid agent with hand, evolving autonomously in a virtual world. If an agent is given the order "grasp the object", he has to determine the portion of the object to grasp. In order to determine a grasping position, it is necessary to recognize the form of the 3-dimensional geometry, and detect portions which are suitable for grasp. Assuming that the object is grasped by one hand, we propose two techniques for detecting appropriate portions to be grasped on the surface of an object and for generating the grasping postures. In this paper, our main contributions are as follows: the first is the detection of appropriate portions to be grasped on the surface of an object; the second is the formation of the hand shape to grasp various shaped objects. Using the two proposing methods, grasped positions and the hand shapes for each of them are computed. Moreover, these positions are evaluated by their stabilities using the robotics technique --- paper_title: Human-to-Robot Mapping of Grasps paper_content: We are developing a Programming by Demonstration (PbD) system for which recognition of objects and pick-and-place actions represent basic building blocks for task learning. An important capability in this system is automatic isual recognition of human grasps, and methods for mapping the human grasps to the functionally corresponding robot grasps. This paper describes the grasp recognition system, focusing on the human-to-robot mapping. The visual grasp classification and grasp orientation regression is described in our IROS 2008 paper [1]. In contrary to earlier approaches, no articulated 3D reconstruction of the hand over time is taking place. The input data consists of a single image of the human hand. The hand shape is classified as one of six grasps by finding similar hand shapes in a large database of grasp images. From the database, the hand orientation is also estimated. The recognized grasp is then mapped to one of three predefined Barrett hand grasps. Depending on the type of robot grasp, a precomputed grasp strategy is selected. The strategy is further parameterized by the orientation of the hand relative to the environment show purposes. --- paper_title: Robot grasp synthesis from virtual demonstration and topology-preserving environment reconstruction paper_content: Automatic environment modeling is an essential requirement for intelligent robots to execute manipulation tasks. 
Object recognition and workspace reconstruction also enable 3D user interaction and programming of assembly operations. In this paper a novel method for synthesizing robot grasps from demonstration is presented. The system allows learning and classification of human grasps demonstrated in virtual reality as well as teaching of robot grasps and simulation of manipulation tasks. Both virtual grasp demonstration and grasp synthesis take advantage of a topology-preserving approach for automatic workspace modeling with a monocular camera. The method is based on the computation of edge-face graphs. The algorithm works in real-time and shows high scalability in the number of objects thus allowing accurate reconstruction and registration from multiple views. Grasp synthesis is performed mimicking the human hand pre-grasp motion with data smoothing. Experiments reported in the paper have tested the capabilities of both the vision algorithm and the grasp synthesizer. --- paper_title: Learning techniques in a dataglove based telemanipulation system for the DLR hand paper_content: We present a setup to control a four-finger anthropomorphic robot hand using a dataglove. To be able to accurately use the dataglove we implemented a nonlinear learning calibration using a novel neural network technique. Experiments show that a resulting positioning error not exceeding 1.8 mm, but typically 0.5 mm, per finger can be obtained; this accuracy is sufficiently precise for grasping tasks. Based on the dataglove calibration we present a solution for the mapping of human and artificial hand workspaces that enables an operator to intuitively and easily telemanipulate objects with the artificial hand. --- paper_title: Learning and Evaluation of the Approach Vector for Automatic Grasp Generation and Planning paper_content: In this paper, we address the problem of automatic grasp generation for robotic hands where experience and shape primitives are used in synergy so to provide a basis not only for grasp generation but also for a grasp evaluation process when the exact pose of the object is not available. One of the main challenges in automatic grasping is the choice of the object approach vector, which is dependent both on the object shape and pose as well as the grasp type. Using the proposed method, the approach vector is chosen not only based on the sensory input but also on experience that some approach vectors will provide useful tactile information that finally results in stable grasps. A methodology for developing and evaluating grasp controllers is presented where the focus lies on obtaining stable grasps under imperfect vision. The method is used in a teleoperation or a programming by demonstration setting where a human demonstrates to a robot how to grasp an object. The system first recognizes the object and grasp type which can then be used by the robot to perform the same action using a mapped version of the human grasping posture. --- paper_title: Modeling manufacturing grips and correlations with the design of robotic hands paper_content: This paper represents the first part of an effort to codify the knowledge required for manipulation tasks in a small-batch manufacturing cell. The motivation for this work is to pave the way for robots that can independently determine how to grasp and manipulate parts in a limited environment and to facilitate the design of advanced, but cost-effective manufacturing hands. We begin with an examination of grasps used by humans working with tools and metal parts. 
The grips are compared in terms of power, contact area, friction, damping and tactile sensitivity. The comparison leads to a grip taxonomy in which grasps are mapped against task-related quantities (such as power) and object-related quantities (such as slenderness). The examinations of the task requirements and grasps suggest a number of general principles for the design and control of manufacturing hands. --- paper_title: Human-to-Robot Mapping of Grasps paper_content: We are developing a Programming by Demonstration (PbD) system for which recognition of objects and pick-and-place actions represent basic building blocks for task learning. An important capability in this system is automatic isual recognition of human grasps, and methods for mapping the human grasps to the functionally corresponding robot grasps. This paper describes the grasp recognition system, focusing on the human-to-robot mapping. The visual grasp classification and grasp orientation regression is described in our IROS 2008 paper [1]. In contrary to earlier approaches, no articulated 3D reconstruction of the hand over time is taking place. The input data consists of a single image of the human hand. The hand shape is classified as one of six grasps by finding similar hand shapes in a large database of grasp images. From the database, the hand orientation is also estimated. The recognized grasp is then mapped to one of three predefined Barrett hand grasps. Depending on the type of robot grasp, a precomputed grasp strategy is selected. The strategy is further parameterized by the orientation of the hand relative to the environment show purposes. --- paper_title: Imitation in Animals and Artifacts paper_content: The effort to explain the imitative abilities of humans and other animals draws on fields as diverse as animal behavior, artificial intelligence, computer science, comparative psychology, neuroscience, primatology, and linguistics. This volume represents a first step toward integrating research from those studying imitation in humans and other animals, and those studying imitation through the construction of computer software and robots. Imitation is of particular importance in enabling robotic or software agents to share skills without the intervention of a programmer and in the more general context of interaction and collaboration between software agents and humans. Imitation provides a way for the agent -- whether biological or artificial -- to establish a "social relationship" and learn about the demonstrator's actions, in order to include them in its own behavioral repertoire. Building robots and software agents that can imitate other artificial or human agents in an appropriate way involves complex problems of perception, experience, context, and action, solved in nature in various ways by animals that imitate. --- paper_title: Improved Switching among Temporally Abstract Actions paper_content: In robotics and other control applications it is commonplace to have a preexisting set of controllers for solving subtasks, perhaps hand-crafted or previously learned or planned, and still face a difficult problem of how to choose and switch among the controllers to solve an overall task as well as possible. In this paper we present a framework based on Markov decision processes and semi-Markov decision processes for phrasing this problem, a basic theorem regarding the improvement in performance that can be obtained by switching flexibly between given controllers, and example applications of the theorem. 
In particular, we show how an agent can plan with these high-level controllers and then use the results of such planning to find an even better plan, by modifying the existing controllers, with negligible additional cost and no re-planning. In one of our examples, the complexity of the problem is reduced from 24 billion state-action pairs to less than a million state-controller pairs. --- paper_title: Detection and evaluation of grasping positions for autonomous agents paper_content: The action of grasping an object is very important for a humanoid agent with hand, evolving autonomously in a virtual world. If an agent is given the order "grasp the object", he has to determine the portion of the object to grasp. In order to determine a grasping position, it is necessary to recognize the form of the 3-dimensional geometry, and detect portions which are suitable for grasp. Assuming that the object is grasped by one hand, we propose two techniques for detecting appropriate portions to be grasped on the surface of an object and for generating the grasping postures. In this paper, our main contributions are as follows: the first is the detection of appropriate portions to be grasped on the surface of an object; the second is the formation of the hand shape to grasp various shaped objects. Using the two proposing methods, grasped positions and the hand shapes for each of them are computed. Moreover, these positions are evaluated by their stabilities using the robotics technique --- paper_title: An SVM learning approach to robotic grasping paper_content: Finding appropriate stable grasps for a hand (either robotic or human) on an arbitrary object has proved to be a challenging and difficult problem. The space of grasping parameters coupled with the degrees-of-freedom and geometry of the object to be grasped creates a high-dimensional, non-smooth manifold. Traditional search methods applied to this manifold are typically not powerful enough to find appropriate stable grasping solutions, let alone optimal grasps. We address this issue in this paper, which attempts to find optimal grasps of objects using a grasping simulator. Our unique approach to the problem involves a combination of numerical methods to recover parts of the grasp quality surface with any robotic hand, and contemporary machine learning methods to interpolate that surface, in order to find the optimal grasp. --- paper_title: Robotic Grasping of Novel Objects using Vision paper_content: We consider the problem of grasping novel objects, specifically objects that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Furthermore, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires nor tries to build a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained by means of supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. 
Our algorithm successfully grasps a wide variety of objects, such as plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers. --- paper_title: Planning optimal grasps paper_content: The authors address the problem of planning optimal grasps. Two general optimality criteria that consider the total finger force and the maximum finger force are introduced and discussed. Their formalization using various metrics on a space of generalized forces is detailed. The geometric interpretation of the two criteria leads to an efficient planning algorithm. An example of its use in a robotic environment equipped with two-jaw and three-jaw grippers is described. --- paper_title: Learning Grasping Points with Shape Context paper_content: This paper presents work on vision based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects. --- paper_title: Functional Object Class Detection Based on Learned Affordance Cues paper_content: Current approaches to visual object class detection mainly focus on the recognition of basic level categories, such as cars, motorbikes, mugs and bottles. Although these approaches have demonstrated impressive performance in terms of recognition, their restriction to these categories seems inadequate in the context of embodied, cognitive agents. Here, distinguishing objects according to functional aspects based on object affordances is important in order to enable manipulation of and interaction between physical objects and the cognitive agent. In this paper, we propose a system for the detection of functional object classes, based on a representation of visually distinct hints on object affordances (affordance cues). It spans the complete range from tutor-driven acquisition of affordance cues and learning of corresponding object models to detecting novel instances of functional object classes in real images. --- paper_title: Data-Driven Grasp Synthesis Using Shape Matching and Task-Based Pruning paper_content: Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals.
This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments. --- paper_title: A hybrid approach for grasping 3D objects paper_content: The paper presents a novel strategy that learns to associate a grasp to an unknown object/task. A hybrid approach combining empirical and analytical methods is proposed. The empirical step ensures task-compatibility by learning to identify the object graspable part in accordance with humans choice. The analytical step permits contact points generation guaranteeing the grasp stability. The robotic hand kinematics are also taken into account. The corresponding results are illustrated using GraspIt interface [1]. --- paper_title: A new strategy combining empirical and analytical approaches for grasping unknown 3D objects paper_content: This paper proposes a novel strategy for grasping 3D unknown objects in accordance with their corresponding task. We define the handle or the natural grasping component of an object as the part chosen by humans to pick up this object. When humans reach out to grasp an object, it is generally in the aim of accomplishing a task. Thus, the chosen grasp is quite related to the object task. Our approach learns to identify object handles by imitating humans. In this paper, a new sufficient condition for computing force-closure grasps on the obtained handle is also proposed. Several experiments were conducted to test the ability of the algorithm to generalize to new objects. They also show the adaptability of our strategy to the hand kinematics. ---
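Several of the grasp-synthesis abstracts above (the optimal-grasp planning, SVM-based, and hybrid approaches) build on a wrench-space notion of grasp quality: a grasp is force-closure when the convex hull of the contact wrenches contains the origin, and its quality can be taken as the radius of the largest origin-centred ball inside that hull. The sketch below illustrates one common variant of that test only; it assumes 3D point contacts with Coulomb friction linearised into a friction pyramid and contact positions expressed relative to the object's reference point, and the names `contact_wrenches` and `grasp_quality` are illustrative rather than taken from any cited system.

```python
# Minimal wrench-space grasp quality sketch (illustrative; not from any cited paper).
# Assumes 3D point contacts with Coulomb friction, linearised as a friction pyramid,
# and contact positions given relative to the object's reference point.
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(p, n, mu=0.5, num_edges=8, torque_scale=1.0):
    """Primitive 6D wrenches (force, torque) for one contact at point p with inward normal n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    # Two tangent directions spanning the plane orthogonal to n.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    wrenches = []
    for k in range(num_edges):
        phi = 2.0 * np.pi * k / num_edges
        f = n + mu * (np.cos(phi) * t1 + np.sin(phi) * t2)   # edge of the friction pyramid
        tau = np.cross(p, f) / torque_scale                  # torque about the object origin
        wrenches.append(np.hstack([f, tau]))
    return np.array(wrenches)

def grasp_quality(points, normals, mu=0.5):
    """Return (force_closure, epsilon): epsilon is the radius of the largest
    origin-centred ball contained in the convex hull of all contact wrenches."""
    W = np.vstack([contact_wrenches(p, n, mu) for p, n in zip(points, normals)])
    hull = ConvexHull(W)                 # raises QhullError if the wrench set is degenerate
    offsets = hull.equations[:, -1]      # facet planes satisfy normal . x + offset <= 0 inside
    force_closure = bool(np.all(offsets < -1e-12))
    epsilon = float(-offsets.max()) if force_closure else 0.0
    return force_closure, epsilon
```

If the wrench set is degenerate (too few contacts to span the 6D wrench space), the hull construction fails; in this sketch that case can simply be treated as "no force closure".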
Title: An Overview of 3D Object Grasp Synthesis Algorithms
Section 1: Introduction
Description 1: Provide an introduction to the field of robotic grasping, outlining the goals and challenges of grasp synthesis algorithms.
Section 2: Background and Terminology
Description 2: Explain the basic functions and essential properties of a gripper, including terminology such as equilibrium, stability, force-closure, and form-closure.
Section 3: Analytical Approaches
Description 3: Review grasp synthesis algorithms that are based on analytical approaches involving geometric, kinematic, and dynamic formulations.
Section 4: Force-Closure Grasps
Description 4: Present techniques for finding force-closure grasps for 3D objects, including both force-closure grasp synthesis for 3D objects and optimal force-closure grasp synthesis.
Section 5: Task Compatibility
Description 5: Discuss the importance of task-oriented grasping and the criteria for ensuring that a grasp is suitable for the task at hand.
Section 6: Discussion on Analytical Approaches
Description 6: Analyze the effectiveness and limitations of analytical approaches for grasp synthesis, particularly regarding computational complexity and task-specific adaptability.
Section 7: Empirical Approaches
Description 7: Examine empirical grasping approaches based on classification and learning methods, including those focused on human observation and object observation.
Section 8: Discussion on Empirical Approaches
Description 8: Evaluate the benefits and drawbacks of empirical approaches, and discuss the challenges in achieving natural grasps.
Section 9: Conclusion
Description 9: Summarize the findings of the survey, comparing analytical and empirical approaches, and highlight open problems and future research directions in grasp synthesis.
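The learning-based references above (shape-context grasp points, vision-based grasping of novel objects, SVM grasp learning) share a common pipeline: extract a descriptor per image patch, train a supervised classifier on labelled, often synthetic, examples, then score patches of a new image and keep the top candidates. The sketch below is a toy version of that pipeline under strong simplifying assumptions: a plain intensity histogram stands in for shape-context or filter-bank features, and `train_grasp_classifier` and `score_grasp_points` are illustrative names, not APIs from the cited papers.

```python
# Toy version of the learning-based grasp-point pipeline (illustrative names only).
# The cited systems use shape-context or filter-bank descriptors and large synthetic
# training sets; here a plain intensity histogram stands in for both.
import numpy as np
from sklearn.svm import SVC

def patch_features(patch):
    """Stand-in descriptor: a normalised 16-bin intensity histogram of the patch."""
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def train_grasp_classifier(patches, labels):
    """Fit a non-linear classifier on labelled patches (label 1 = graspable point)."""
    X = np.array([patch_features(p) for p in patches])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, np.array(labels))
    return clf

def score_grasp_points(clf, image, patch_size=16, stride=8):
    """Slide a window over a greyscale image in [0,1]; return (row, col, P(graspable)) sorted by score."""
    scores = []
    h, w = image.shape
    for r in range(0, h - patch_size, stride):
        for c in range(0, w - patch_size, stride):
            f = patch_features(image[r:r + patch_size, c:c + patch_size])
            prob = clf.predict_proba([f])[0, 1]
            scores.append((r + patch_size // 2, c + patch_size // 2, prob))
    return sorted(scores, key=lambda s: -s[2])
```

The non-linear kernel mirrors the observation in the shape-context abstract that non-linear classifiers gave more stable grasp-point detection than linear ones; everything else here is deliberately minimal.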
The Complexity of Surjective Homomorphism Problems -- a Survey
9
--- paper_title: Computing Role Assignments of Chordal Graphs paper_content: In social network theory, a simple graph G is called k-role assignable if there is a surjective mapping that assigns a number from {1,...,k} called a role to each vertex of G such that any two vertices with the same role have the same sets of roles assigned to their neighbors. The decision problem whether such a mapping exists is called the k-ROLE ASSIGNMENT problem. This problem is known to be NP-complete for any fixed k ≥ 2. In this paper we classify the computational complexity of the k-Role Assignment problem for the class of chordal graphs. We show that for this class the problem becomes polynomially solvable for k = 2, but remains NP-complete for any k ≥ 3. This generalizes results of Sheng and answers his open problem. --- paper_title: The stubborn problem is stubborn no more: a polynomial algorithm for 3-compatible colouring and the stubborn list partition problem paper_content: One of the driving problems in the CSP area is the Dichotomy Conjecture, formulated in 1993 by Feder and Vardi [STOC'93], stating that for any fixed relational structure G the Constraint Satisfaction Problem CSP(G) is either NP--complete or polynomial time solvable. A large amount of research has gone into checking various specific cases of this conjecture. One such variant which attracted a lot of attention in the recent years is the LIST MATRIX PARTITION problem. In 2004 Cameron et al. [SODA'04] classified almost all LIST MATRIX PARTITION variants for matrices of size at most four. The only case which resisted the classification became known as the STUBBORN PROBLEM. In this paper we show a result which enables us to finish the classification - thus solving a problem which resisted attacks for the last six years. Our approach is based on a combinatorial problem known to be at least as hard as the STUBBORN PROBLEM - the 3-COMPATIBLE COLOURING problem. In this problem we are given a complete graph with each edge assigned one of 3 possible colours and we want to assign one of those 3 colours to each vertex in such a way that no edge has the same colour as both of its endpoints. The tractability of the 3-COMPATIBLE COLOURING problem has been open for several years and the best known algorithm prior to this paper is due to Feder et al. [SODA'05] - a quasipolynomial algorithm with a n^O(log n / log log n) time complexity. In this paper we present a polynomial-time algorithm for the 3-COMPATIBLE COLOURING problem and consequently we prove a dichotomy for the k-COMPATIBLE COLOURING problem. --- paper_title: The Complexity of the List Partition Problem for Graphs paper_content: The $k$-partition problem is as follows: Given a graph $G$ and a positive integer $k$, partition the vertices of $G$ into at most $k$ parts $A_1, A_2, \ldots , A_k$, where it may be specified that $A_i$ induces a stable set, a clique, or an arbitrary subgraph, and pairs $A_i, A_j (i \neq j)$ be completely nonadjacent, completely adjacent, or arbitrarily adjacent. The list $k$-partition problem generalizes the $k$-partition problem by specifying for each vertex $x$, a list $L(x)$ of parts in which it is allowed to be placed. Many well-known graph problems can be formulated as list $k$-partition problems: e.g., 3-colorability, clique cutset, stable cutset, homogeneous set, skew partition, and 2-clique cutset. We classify, with the exception of two polynomially equivalent problems, each list 4-partition problem as either solvable in polynomial time or NP-complete. 
In doing so, we provide polynomial-time algorithms for many problems whose polynomial-time solvability was open, including the list 2-clique cutset problem. This also allows us to classify each list generalized 2-clique cutset problem and list generalized skew partition problem as solvable in polynomial time or NP-complete. --- paper_title: List Partitions paper_content: List partitions generalize list colorings and list homomorphisms. (We argue that they may be called list "semihomomorphisms.") Each symmetric matrix M over 0,1,* defines a list partition problem. Different choices of the matrix M lead to many well-known graph theoretic problems, often related to graph perfection, including the problem of recognizing split graphs, finding homogeneous sets, clique cutsets, stable cutsets, and so on. The recent proof of the strong perfect graph theorem employs three kinds of decompositions that can be viewed as list partitions. ::: We develop tools which allow us to classify the complexity of many list partition problems and, in particular, yield the complete classification for small matrices M. Along the way, we obtain a variety of specific results, including generalizations of Lovasz's communication bound on the number of clique-versus-stable-set separators, polynomial time algorithms to recognize generalized split graphs, a polynomial algorithm for the list version of the clique cutset problem, and the first subexponential algorithm for the skew cutset problem of Chvatal. We also show that the dichotomy (NP-complete versus polynomial time solvable), conjectured for certain graph homomorphism problems, would, if true, imply a slightly weaker dichotomy (NP-complete versus quasi-polynomial) for our list partition problems. --- paper_title: A complete complexity classification of the role assignment problem paper_content: In social network theory a society is often represented by a simple graph G, where vertices stand for individuals and edges represent relationships between those individuals. The description of the social network is tried to be simplified by assigning roles to the individuals, such that the neighborhood relation is preserved. Formally, for a fixed graph R we ask for a vertex mapping r: VG → VR, such that r(NG(u)) = NR(r(u)) for all u ∈ VG.If such a mapping exists the graph G is called R-role assignable and the corresponding decision problem is called the R-role assignment problem. Kristiansen and Telle conjectured that the R-role assignment problem is an NP-complete problem for any simple connected graph R on at least three vertices. In this paper we prove their conjecture. In addition, we determine the computational complexity of the role assignment problem for nonsimple and disconnected role graphs, as these are considered in social network theory as well. --- paper_title: Tractable conservative constraint satisfaction problems paper_content: In a constraint satisfaction problem (CSP), the aim is to find an assignment of values to a given set of variables, subject to specified constraints. The CSP is known to be NP-complete in general. However, certain restrictions on the form of the allowed constraints can lead to problems solvable in polynomial time. Such restrictions are usually imposed by specifying a constraint language. The principal research direction aims to distinguish those constraint languages, which give rise to tractable CSPs from those which do not. 
We achieve this goal for the widely used variant of the CSP, in which the set of values for each individual variable can be restricted arbitrarily. Restrictions of this type can be expressed by including in a constraint language all possible unary constraints. Constraint languages containing all unary constraints will be called conservative. We completely characterize conservative constraint languages that give rise to CSP classes solvable in polynomial time. In particular, this result allows us to obtain a complete description of those (directed) graphs H for which the List H-Coloring problem is polynomial time solvable. --- paper_title: Tractable cases of the extended global cardinality constraint paper_content: We study the consistency and domain consistency problem for extended global cardinality (EGC) constraints. An EGC constraint consists of a set X of variables, a set D of values, a domain $D(x) \subseteq D$ for each variable x, and a "cardinality set" K(d) of non-negative integers for each value d. The problem is to instantiate each variable x with a value in D(x) such that for each value d, the number of variables instantiated with d belongs to the cardinality set K(d). It is known that this problem is NP-complete in general, but solvable in polynomial time if all cardinality sets are intervals. First we pinpoint connections between EGC constraints and general factors in graphs. This allows us to extend the known polynomial-time case to certain non-interval cardinality sets. Second we consider EGC constraints under restrictions in terms of the treewidth of the value graph (the bipartite graph representing variable-value pairs) and the cardinality-width (the largest integer occurring in the cardinality sets). We show that EGC constraints can be solved in polynomial time for instances of bounded treewidth, where the order of the polynomial depends on the treewidth. We show that (subject to the complexity-theoretic assumption FPT ≠ W[1]) this dependency cannot be avoided without imposing additional restrictions. If, however, also the cardinality-width is bounded, this dependency gets removed and EGC constraints can be solved in linear time. --- paper_title: On the Algebraic Structure of Combinatorial Problems paper_content: We describe a general algebraic formulation for a wide range of combinatorial problems including Satisfiability, Graph Colorability and Graph Isomorphism. In this formulation each problem instance is represented by a pair of relational structures, and the solutions to a given instance are homomorphisms between these relational structures. The corresponding decision problem consists of deciding whether or not any such homomorphisms exist. We then demonstrate that the complexity of solving this decision problem is determined in many cases by simple algebraic properties of the relational structures involved. This result is used to identify tractable subproblems of Satisfiability, and to provide a simple test to establish whether a given set of Boolean relations gives rise to one of these tractable subproblems. --- paper_title: The Complexity of Rooted Phylogeny Problems paper_content: Several computational problems in phylogenetic reconstruction can be formulated as restrictions of the following general problem: given a formula in conjunctive normal form where the literals are rooted triples, is there a rooted binary tree that satisfies the formula?
If the formulas do not contain disjunctions, the problem becomes the famous rooted triple consistency problem, which can be solved in polynomial time by an algorithm of Aho, Sagiv, Szymanski, and Ullman. If the clauses in the formulas are restricted to disjunctions of negated triples, Ng, Steel, and Wormald showed that the problem remains NP-complete. We systematically study the computational complexity of the problem for all such restrictions of the clauses in the input formula. For certain restricted disjunctions of triples we present an algorithm that has sub-quadratic running time and is asymptotically as fast as the fastest known algorithm for the rooted triple consistency problem. We also show that any restriction of the general rooted phylogeny problem that does not fall into our tractable class is NP-complete, using known results about the complexity of Boolean constraint satisfaction problems. Finally, we present a pebble game argument that shows that the rooted triple consistency problem (and also all generalizations studied in this paper) cannot be solved by Datalog. --- paper_title: The Computational Structure of Monotone Monadic SNP and Constraint Satisfaction: A Study through Datalog and Group Theory paper_content: This paper starts with the project of finding a large subclass of NP which exhibits a dichotomy. The approach is to find this subclass via syntactic prescriptions. While the paper does not achieve this goal, it does isolate a class (of problems specified by) "monotone monadic SNP without inequality" which may exhibit this dichotomy. We justify the placing of all these restrictions by showing, essentially using Ladner's theorem, that classes obtained by using only two of the above three restrictions do not show this dichotomy. We then explore the structure of this class. We show that all problems in this class reduce to the seemingly simpler class CSP. We divide CSP into subclasses and try to unify the collection of all known polytime algorithms for CSP problems and extract properties that make CSP problems NP-hard. This is where the second part of the title, "a study through Datalog and group theory," comes in. We present conjectures about this class which would end in showing the dichotomy. --- paper_title: CLASSIFYING THE COMPLEXITY OF CONSTRAINTS USING FINITE ALGEBRAS∗ paper_content: Many natural combinatorial problems can be expressed as constraint satisfaction problems. This class of problems is known to be NP-complete in general, but certain restrictions on the form of the constraints can ensure tractability. Here we show that any set of relations used to specify the allowed forms of constraints can be associated with a finite universal algebra and we explore how the computational complexity of the corresponding constraint satisfaction problem is connected to the properties of this algebra. Hence, we completely translate the problem of classifying the complexity of restricted constraint satisfaction problems into the language of universal algebra. ::: We introduce a notion of "tractable algebra," and investigate how the tractability of an algebra relates to the tractability of the smaller algebras which may be derived from it, including its subalgebras and homomorphic images. This allows us to reduce significantly the types of algebras which need to be classified. Using our results we also show that if the decision problem associated with a given collection of constraint types can be solved efficiently, then so can the corresponding search problem. 
We then classify all finite strictly simple surjective algebras with respect to tractability, obtaining a dichotomy theorem which generalizes Schaefer's dichotomy for the generalized satisfiability problem. Finally, we suggest a possible general algebraic criterion for distinguishing the tractable and intractable cases of the constraint satisfaction problem. --- paper_title: Tractable conservative constraint satisfaction problems paper_content: In a constraint satisfaction problem (CSP), the aim is to find an assignment of values to a given set of variables, subject to specified constraints. The CSP is known to be NP-complete in general. However, certain restrictions on the form of the allowed constraints can lead to problems solvable in polynomial time. Such restrictions are usually imposed by specifying a constraint language. The principal research direction aims to distinguish those constraint languages, which give rise to tractable CSPs from those which do not. We achieve this goal for the widely used variant of the CSP, in which the set of values for each individual variable can be restricted arbitrarily. Restrictions of this type can be expressed by including in a constraint language all possible unary constraints. Constraint languages containing all unary constraints will be called conservative. We completely characterize conservative constraint languages that give rise to CSP classes solvable in polynomial time. In particular, this result allows us to obtain a complete description of those (directed) graphs H for which the List H-Coloring problem is polynomial time solvable. --- paper_title: Complexity Classifications of Boolean Constraint Satisfaction Problems paper_content: Preface 1. Introduction 2. Complexity Classes 3. Boolean Constraint Satisfaction Problems 4. Characterizations of Constraint Functions 5. Implementation of Functions and Reductions 6. Classification Theorems for Decision, Counting and Quantified Problems 7. Classification Theorems for Optimization Problems 8. Input-Restricted Constrained Satisfaction Problems 9. The Complexity of the Meta-Problems 10. Concluding Remarks Bibliography Index. --- paper_title: List Homomorphisms and Circular Arc Graphs paper_content: Given graphs G and H, and lists L(v) ⊆ V(H) for each v ∈ V(G), a list homomorphism of G to H with respect to the lists L is a mapping f : V(G) → V(H) such that f(v) ∈ L(v) for all v ∈ V(G), and f(u)f(v) ∈ E(H) for all uv ∈ E(G). The list homomorphism problem for a fixed graph H asks whether or not an input graph G together with lists L(v), v ∈ V(G), admits a list homomorphism with respect to L. We have introduced the list homomorphism problem in an earlier paper, and proved there that for reflexive graphs H (that is, for graphs H in which every vertex has a loop), the problem is polynomial time solvable if H is an interval graph, and is NP-complete otherwise. Here we consider graphs H without loops, and find that the problem is closely related to circular arc graphs. We show that the list homomorphism problem is polynomial time solvable if the complement of H is a circular arc graph of clique covering number two, and is NP-complete otherwise. For the purposes of the proof we give a new characterization of circular arc graphs of clique covering number two, by the absence of a structure analogous to Gallai's asteroids. Both results point to a surprising similarity between interval graphs and the complements of circular arc graphs of clique covering number two.
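The list-homomorphism results cited above concern the decision problem, but the tractable cases are typically organised around a simple propagation step: repeatedly delete a value a from L(u) whenever some neighbour v of u has no value left in L(v) that is adjacent to a in H. The sketch below shows that pruning step only; it is the standard arc-consistency subroutine and not, by itself, a decision procedure for List H-Colouring.

```python
# Standard arc-consistency pruning used as a subroutine in list-homomorphism algorithms.
# It only narrows the lists; emptying a list proves that no list homomorphism exists,
# but surviving lists do not by themselves prove that one exists.
def prune_lists(G_adj, H_adj, lists):
    """G_adj, H_adj: dicts mapping a vertex to its set of neighbours (loops included);
    lists: dict mapping each vertex u of G to its set L(u) of allowed vertices of H."""
    L = {u: set(vals) for u, vals in lists.items()}
    changed = True
    while changed:
        changed = False
        for u in G_adj:
            for a in list(L[u]):
                # Remove a from L(u) if some neighbour v of u has no b in L(v) adjacent to a in H.
                if any(all(b not in H_adj[a] for b in L[v]) for v in G_adj[u]):
                    L[u].discard(a)
                    changed = True
    if any(not L[u] for u in G_adj):
        return None          # an empty list: no list homomorphism of G to H exists
    return L
```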
--- paper_title: RETRACTIONS TO PSEUDOFORESTS paper_content: For a fixed graph $H$, let $\textsc{Ret}(H)$ denote the problem of deciding whether a given input graph is retractable to $H$. We classify the complexity of $\textsc{Ret}(H)$ when $H$ is a graph (with loops allowed) where each connected component has at most one cycle, i.e., a pseudoforest. In particular, this result extends the known complexity classifications of $\textsc{Ret}(H)$ for reflexive and irreflexive cycles to general cycles. Our approach is based mainly on algebraic techniques from universal algebra that previously have been used for analyzing the complexity of constraint satisfaction problems. --- paper_title: Compaction, Retraction, and Constraint Satisfaction paper_content: In this paper, we show a very close relationship among the compaction, retraction, and constraint satisfaction problems in the context of reflexive and bipartite graphs. The compaction and retraction problems are special graph coloring problems, and the constraint satisfaction problem is well known to have an important role in artificial intelligence. The relationships we present provide evidence that, similar to %as for the retraction problem, it is likely to be difficult to determine whether for every fixed reflexive or bipartite graph, the compaction problem is polynomial time solvable or NP-complete. In particular, the relationships that we present relate to a long-standing open problem concerning the equivalence of the compaction and retraction problems. --- paper_title: Undirected connectivity in log-space paper_content: We present a deterministic, log-space algorithm that solves st-connectivity in undirected graphs. The previous bound on the space complexity of undirected st-connectivity was log4/3(ṡ) obtained by Armoni, Ta-Shma, Wigderson and Zhou (JACM 2000). As undirected st-connectivity is complete for the class of problems solvable by symmetric, nondeterministic, log-space computations (the class SL), this algorithm implies that SL = L (where L is the class of problems solvable by deterministic log-space computations). Independent of our work (and using different techniques), Trifonov (STOC 2005) has presented an O(log n log log n)-space, deterministic algorithm for undirected st-connectivity. ::: Our algorithm also implies a way to construct in log-space a fixed sequence of directions that guides a deterministic walk through all of the vertices of any connected graph. Specifically, we give log-space constructible universal-traversal sequences for graphs with restricted labeling and log-space constructible universal-exploration sequences for general graphs. --- paper_title: CLASSIFYING THE COMPLEXITY OF CONSTRAINTS USING FINITE ALGEBRAS∗ paper_content: Many natural combinatorial problems can be expressed as constraint satisfaction problems. This class of problems is known to be NP-complete in general, but certain restrictions on the form of the constraints can ensure tractability. Here we show that any set of relations used to specify the allowed forms of constraints can be associated with a finite universal algebra and we explore how the computational complexity of the corresponding constraint satisfaction problem is connected to the properties of this algebra. Hence, we completely translate the problem of classifying the complexity of restricted constraint satisfaction problems into the language of universal algebra. 
::: We introduce a notion of "tractable algebra," and investigate how the tractability of an algebra relates to the tractability of the smaller algebras which may be derived from it, including its subalgebras and homomorphic images. This allows us to reduce significantly the types of algebras which need to be classified. Using our results we also show that if the decision problem associated with a given collection of constraint types can be solved efficiently, then so can the corresponding search problem. We then classify all finite strictly simple surjective algebras with respect to tractability, obtaining a dichotomy theorem which generalizes Schaefer's dichotomy for the generalized satisfiability problem. Finally, we suggest a possible general algebraic criterion for distinguishing the tractable and intractable cases of the constraint satisfaction problem. --- paper_title: The complexity of colouring by semicomplete digraphs paper_content: The following problem, known as the H-colouring problem, is studied. An H-colouring of a directed graph D is a mapping $f:V( D ) \to V( H )$ such that $( f( x ),f( y ) )$ is an edge of H whenever $( x,y )$ is an edge of D. The H-colouring problem is the following. Instance: A directed graph D. Question: Does there exist an H-colouring of D? In this paper it is shown that for semicomplete digraphs T the T-colouring problem is NP-complete when T has more than one directed cycle, and polynomially decidable otherwise. --- paper_title: Compaction, Retraction, and Constraint Satisfaction paper_content: In this paper, we show a very close relationship among the compaction, retraction, and constraint satisfaction problems in the context of reflexive and bipartite graphs. The compaction and retraction problems are special graph coloring problems, and the constraint satisfaction problem is well known to have an important role in artificial intelligence. The relationships we present provide evidence that, similar to %as for the retraction problem, it is likely to be difficult to determine whether for every fixed reflexive or bipartite graph, the compaction problem is polynomial time solvable or NP-complete. In particular, the relationships that we present relate to a long-standing open problem concerning the equivalence of the compaction and retraction problems. --- paper_title: Tractable conservative constraint satisfaction problems paper_content: In a constraint satisfaction problem (CSP), the aim is to find an assignment of values to a given set of variables, subject to specified constraints. The CSP is known to be NP-complete in general. However, certain restrictions on the form of the allowed constraints can lead to problems solvable in polynomial time. Such restrictions are usually imposed by specifying a constraint language. The principal research direction aims to distinguish those constraint languages, which give rise to tractable CSPs from those which do not. We achieve this goal for the widely used variant of the CSP, in which the set of values for each individual variable can be restricted arbitrarily. Restrictions of this type can be expressed by including in a constraint language all possible unary constraints. Constraint languages containing all unary constraints will be called conservative. We completely characterize conservative constraint languages that give rise to CSP classes solvable in polynomial time. 
In particular, this result allows us to obtain a complete description of those (directed) graphs H for which the List H-Coloring problem is polynomial time solvable. --- paper_title: List Homomorphisms and Circular Arc Graphs paper_content: Given graphs G and H, and lists L(v) ⊆ V(H) for each v ∈ V(G), a list homomorphism of G to H with respect to the lists L is a mapping f : V(G) → V(H) such that f(v) ∈ L(v) for all v ∈ V(G), and f(u)f(v) ∈ E(H) for all uv ∈ E(G). The list homomorphism problem for a fixed graph H asks whether or not an input graph G together with lists L(v), v ∈ V(G), admits a list homomorphism with respect to L. We have introduced the list homomorphism problem in an earlier paper, and proved there that for reflexive graphs H (that is, for graphs H in which every vertex has a loop), the problem is polynomial time solvable if H is an interval graph, and is NP-complete otherwise. Here we consider graphs H without loops, and find that the problem is closely related to circular arc graphs. We show that the list homomorphism problem is polynomial time solvable if the complement of H is a circular arc graph of clique covering number two, and is NP-complete otherwise. For the purposes of the proof we give a new characterization of circular arc graphs of clique covering number two, by the absence of a structure analogous to Gallai's asteroids. Both results point to a surprising similarity between interval graphs and the complements of circular arc graphs of clique covering number two. --- paper_title: On disconnected cuts and separators paper_content: For a connected graph G = (V, E), a subset U ⊆ V is called a disconnected cut if U disconnects the graph, and the subgraph induced by U is disconnected as well. A natural condition is to impose that for any u ∈ U, the subgraph induced by (V ∖ U) ∪ {u} is connected. In that case, U is called a minimal disconnected cut. We show that the problem of testing whether a graph has a minimal disconnected cut is NP-complete. We also show that the problem of testing whether a graph has a disconnected cut separating two specified vertices, s and t, is NP-complete. --- paper_title: FINDING H-PARTITIONS EFFICIENTLY ∗ paper_content: We study the concept of an H-partition of the vertex set of a graph G, which includes all vertex partitioning problems into four parts which we require to be nonempty with only external constraints according to the structure of a model graph H, with the exception of two cases, one that has already been classified as polynomial, and the other one remains unclassified. In the context of more general vertex-partition problems, the problems addressed in this paper have these properties: non-list, 4-part, external constraints only (no internal constraints), each part non-empty. We describe tools that yield for each problem considered in this paper a simple and low complexity polynomial-time algorithm. Mathematics Subject Classification. 05C85, 68R10. --- paper_title: Covering graphs with few complete bipartite subgraphs paper_content: We consider computational problems on covering graphs with bicliques (complete bipartite subgraphs). Given a graph and an integer k, the biclique cover problem asks whether the edge-set of the graph can be covered with at most k bicliques; the biclique partition problem is defined similarly with the additional condition that the bicliques are required to be mutually edge-disjoint.
The biclique vertex-cover problem asks whether the vertex-set of the given graph can be covered with at most k bicliques; the biclique vertex-partition problem is defined similarly with the additional condition that the bicliques are required to be mutually vertex-disjoint. All these four problems are known to be NP-complete even if the given graph is bipartite. In this paper, we investigate them in the framework of parameterized complexity: do the problems become easier if k is assumed to be small? We show that, considering k as the parameter, the first two problems are fixed-parameter tractable, while the latter two problems are not fixed-parameter tractable unless P=NP. --- paper_title: The external constraint 4 nonempty part sandwich problem paper_content: List partitions generalize list colourings. Sandwich problems generalize recognition problems. The polynomial dichotomy (NP-complete versus polynomial) of list partition problems is solved for 4-dimensional partitions with the exception of one problem (the list stubborn problem) for which the complexity is known to be quasipolynomial. Every partition problem for 4 nonempty parts and only external constraints is known to be polynomial with the exception of one problem (the 2K2-partition problem) for which the complexity of the corresponding list problem is known to be NP-complete. The present paper considers external constraint 4 nonempty part sandwich problems. We extend the tools developed for polynomial solutions of recognition problems obtaining polynomial solutions for most corresponding sandwich versions. We extend the tools developed for NP-complete reductions of sandwich partition problems obtaining the classification into NP-complete for some external constraint 4 nonempty part sandwich problems. On the other hand and additionally, we propose a general strategy for defining polynomial reductions from the 2K2-partition problem to several external constraint 4 nonempty part sandwich problems, defining a class of 2K2-hard problems. Finally, we discuss the complexity of the Skew Partition Sandwich Problem. --- paper_title: Tractable conservative constraint satisfaction problems paper_content: In a constraint satisfaction problem (CSP), the aim is to find an assignment of values to a given set of variables, subject to specified constraints. The CSP is known to be NP-complete in general. However, certain restrictions on the form of the allowed constraints can lead to problems solvable in polynomial time. Such restrictions are usually imposed by specifying a constraint language. The principal research direction aims to distinguish those constraint languages, which give rise to tractable CSPs from those which do not. We achieve this goal for the widely used variant of the CSP, in which the set of values for each individual variable can be restricted arbitrarily. Restrictions of this type can be expressed by including in a constraint language all possible unary constraints. Constraint languages containing all unary constraints will be called conservative. We completely characterize conservative constraint languages that give rise to CSP classes solvable in polynomial time.
--- paper_title: Parameterizing cut sets in a graph by the number of their components paper_content: For a connected graph G=(V,E), a subset U ⊆ V is a disconnected cut if U disconnects G and the subgraph G[U] induced by U is disconnected as well. A cut U is a k-cut if G[U] contains exactly k (≥ 1) components. More specifically, a k-cut U is a (k,ℓ)-cut if V ∖ U induces a subgraph with exactly ℓ (≥ 2) components. The Disconnected Cut problem is to test whether a graph has a disconnected cut and is known to be NP-complete. The problems k-Cut and (k,ℓ)-Cut are to test whether a graph has a k-cut or (k,ℓ)-cut, respectively. By pinpointing a close relationship to graph contractibility problems we show that (k,ℓ)-Cut is in P for k=1 and any fixed constant ℓ ≥ 2, while it is NP-complete for any fixed pair k,ℓ ≥ 2. We then prove that k-Cut is in P for k=1 and NP-complete for any fixed k ≥ 2. On the other hand, for every fixed integer g ≥ 0, we present an FPT algorithm that solves (k,ℓ)-Cut on graphs of Euler genus at most g when parameterized by k+ℓ. By modifying this algorithm we can also show that k-Cut is in FPT for this graph class when parameterized by k. Finally, we show that Disconnected Cut is solvable in polynomial time for minor-closed classes of graphs excluding some apex graph. --- paper_title: The Computational Complexity of Disconnected Cut and 2K2-Partition paper_content: For a connected graph G=(V,E), a subset U of V is called a disconnected cut if U disconnects the graph and the subgraph induced by U is disconnected as well. We show that the problem to test whether a graph has a disconnected cut is NP-complete. This problem is polynomially equivalent to the following problems: testing if a graph has a 2K2-partition, testing if a graph allows a vertex-surjective homomorphism to the reflexive 4-cycle and testing if a graph has a spanning subgraph that consists of at most two bicliques. Hence, as an immediate consequence, these three decision problems are NP-complete as well. This settles an open problem frequently posed in each of the four settings. --- paper_title: Coloring Mixed Hypertrees paper_content: A mixed hypergraph is a hypergraph with edges classified as of type 1 or type 2. A vertex coloring is strict if no edge of type 1 is totally multicolored, and no edge of type 2 monochromatic. The chromatic spectrum of a mixed hypergraph is the set of integers k for which there exists a strict coloring using exactly k different colors. A mixed hypertree is a mixed hypergraph in which every hyperedge induces a subtree of the given underlying tree. We prove that mixed hypertrees have continuous spectra (unlike general hypergraphs, whose spectra may contain gaps [cf. Jiang et al.: The chromatic spectrum of mixed hypergraphs, submitted]). We prove that determining the upper chromatic number (the maximum of the spectrum) of mixed hypertrees is NP-hard, and we identify several polynomially solvable classes of instances of the problem. --- paper_title: A dichotomy theorem for constraint satisfaction problems on a 3-element set paper_content: The Constraint Satisfaction Problem (CSP) provides a common framework for many combinatorial problems. The general CSP is known to be NP-complete; however, certain restrictions on a possible form of constraints may affect the complexity and lead to tractable problem classes.
There is, therefore, a fundamental research direction, aiming to separate those subclasses of the CSP that are tractable and those which remain NP-complete.Schaefer gave an exhaustive solution of this problem for the CSP on a 2-element domain. In this article, we generalise this result to a classification of the complexity of the CSP on a 3-element domain. The main result states that every subproblem of the CSP is either tractable or NP-complete, and the criterion separating them is that conjectured in Bulatov et al. [2005] and Bulatov and Jeavons [2001b]. We also characterize those subproblems for which standard constraint propagation techniques provide a decision procedure. Finally, we exhibit a polynomial time algorithm which, for a given set of allowed constraints, outputs if this set gives rise to a tractable problem class. To obtain the main result and the algorithm, we extensively use the algebraic technique for the CSP developed in Jeavons [1998b], Bulatov et al.[2005], and Bulatov and Jeavons [2001b]. --- paper_title: Algorithms for partition of some class of graphs under compaction paper_content: The compaction problem is to partition the vertices of an input graph G onto the vertices of a fixed target graph H, such that adjacent vertices of G remain adjacent in H, and every vertex and nonloop edge of H is covered by some vertex and edge of G respectively, i.e., the partition is a homomorphism of G onto H (except the loop edges). Various computational complexity results, including both NPcompleteness and polynomial time solvability, have been presented earlier for this problem for various class of target graphs H. In this paper, we pay attention to the input graphs G, and present polynomial time algorithms for the problem for some class of input graphs, keeping the target graph H general as any reflexive or irreflexive graph. Our algorithms also give insight as for which instances of the input graphs, the problem could possibly be NP-complete for certain target graphs. With the help of our results, we are able to further refine the structure of the input graph that would be necessary for the problem to be possibly NP-complete, when the target graph is a cycle. Thus, when the target graph is a cycle, we enhance the class of input graphs for which the problem is polynomial time solvable. --- paper_title: The approximability of three-valued MAX CSP paper_content: In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP. It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. 
Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3. --- paper_title: Complexity Classifications of Boolean Constraint Satisfaction Problems paper_content: Preface 1. Introduction 2. Complexity Classes 3. Boolean Constraint Satisfaction Problems 4. Characterizations of Constraint Functions 5. Implementation of Functions and Reductions 6. Classification Theorems for Decision, Counting and Quantified Problems 7. Classification Theorems for Optimization Problems 8. Input-Restricted Constrained Satisfaction Problems 9. The Complexity of the Meta-Problems 10. Concluding Remarks Bibliography Index. --- paper_title: Approximation for Maximum Surjective Constraint Satisfaction Problems paper_content: Maximum surjective constraint satisfaction problems (Max-Sur-CSPs) are computational problems where we are given a set of variables denoting values from a finite domain B and a set of constraints on the variables. A solution to such a problem is a surjective mapping from the set of variables to B such that the number of satisfied constraints is maximized. We study the approximation performance that can be achieved by algorithms for these problems, mainly by investigating their relation with Max-CSPs (which are the corresponding problems without the surjectivity requirement). Our work gives a complexity dichotomy for Max-Sur-CSP(B) between PTAS and APX-complete, under the assumption that there is a complexity dichotomy for Max-CSP(B) between PO and APX-complete, which has already been proved on the Boolean domain and 3-element domains. ---
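As a concrete anchor for the definitions recurring in the references above, the following brute-force check spells out what a vertex-surjective homomorphism is: a mapping of V(G) onto V(H) that sends every edge of G to an edge of H. It enumerates all |V(H)|^|V(G)| mappings, so it is only usable on toy instances; the cited results are precisely about when this exponential search can or cannot be avoided. Graphs are assumed undirected, given as vertex and edge lists, with loops allowed.

```python
# Brute-force reference check for the central notion in the references above:
# does G admit a vertex-surjective homomorphism onto H? Exponential in |V(G)|,
# so only meant to make the definition concrete on tiny (undirected) instances.
from itertools import product

def surjective_homomorphism_exists(G_vertices, G_edges, H_vertices, H_edges):
    H_edge_set = {frozenset(e) for e in H_edges}          # loops (u, u) become singleton sets
    for values in product(H_vertices, repeat=len(G_vertices)):
        if set(values) != set(H_vertices):                # not surjective onto V(H)
            continue
        phi = dict(zip(G_vertices, values))
        if all(frozenset((phi[u], phi[v])) in H_edge_set for u, v in G_edges):
            return True
    return False

# Example: an irreflexive 4-cycle maps onto a single edge K2 (a surjective 2-colouring).
print(surjective_homomorphism_exists([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)],
                                     ["a", "b"], [("a", "b")]))   # True
```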
Title: The Complexity of Surjective Homomorphism Problems -- a Survey
Section 1: Introduction
Description 1: Provide an overview of surjective homomorphism problems, their significance, related problems, and the main goals of the survey.
Section 2: Preliminaries
Description 2: Introduce basic concepts, definitions, and notations related to homomorphisms and surjective homomorphisms in mathematics and computer science.
Section 3: Relationship between the problems
Description 3: Explain the interconnections and reductions among different types of homomorphism problems, including surjective homomorphisms, retraction, and compaction.
Section 4: Further simple classifications
Description 4: Discuss specific classifications and reductions that help determine the complexity of various surjective homomorphism problems.
Section 5: Difficulty of a full classification
Description 5: Explore the challenges and reasons why a full complexity classification of surjective homomorphisms is difficult to achieve.
Section 6: A renaissance in foresting
Description 6: Discuss recent advancements and results in the classification of surjective homomorphism problems for specific graph types.
Section 7: Recent work on Sur-Hom(C_Ref)
Description 7: Present recent studies and results on specific surjective homomorphism problems, particularly Sur-Hom(C_Ref).
Section 8: The no-rainbow-colouring problem
Description 8: Introduce and analyze the no-rainbow-colouring problem and its relation to surjective homomorphism problems.
Section 9: Final Remarks
Description 9: Conclude the survey with an overview of ongoing research, open questions, and future directions in the study of surjective homomorphism problems.
A Survey of Media Processing Approaches
6
--- paper_title: AMD's 3DNow!/sup TM/ vectorization for signal processing applications paper_content: AMD's 3DNow!/sup TM/ technology provides substantial speedup for digital signal processing applications. A set of DSP routines is vectorized with the 3DNow!/sup TM/ technology. The simplicity of the vector unit makes it easier to convert the conventional DSP programs into vector operations, and thus reduces the learning curve. The performance gain from typical DSP routines such as FIR, IIR and FFT indicates that the speedup can reach up to 1.5 compared to the conventional host-based signal processing units. 3D games and multimedia applications benefit from the technology. The vectorization can be integrated into compilers for the ease of use in increasing the performance of the signal processing applications. --- paper_title: The M-PIRE MPEG-4 codec DSP and its macroblock engine paper_content: M-PIRE is a programmable MPEG-4 multimedia codec VLSI for mobile and stationary applications. It integrates a RISC core, two separate DSPs, a 64-bit dual-issue VLIW macroblock engine, and an autonomous I/O processor on a single chip to cope with the high flexibility and processing demands of the MPEG-4 standard. The first M-PIRE implementation will consume 90 mm/sup 2/ in 0.25 /spl mu/ CMOS technology. It will support real-time video and audio processing of MPEG-4 simple profile or ITU H.26x standards; future designs of M-PIRE will add support for higher MPEG-4 profiles. This paper focuses on the architecture, instruction set, and performance of M-PIRE's macroblock engine, which carries most of the workload in MPEG-4 video processing. --- paper_title: A 0.8 /spl mu/ 100-MHz 2-D DCT core processor paper_content: The discrete cosine transform (DCT) has been commonly adopted in many transformation applications such as image, video, and facsimile. A VLSI architecture and implementation of a high speed 2-dimensional DCT core processor with 0.8 /spl mu/ technology is presented. This architecture applies a fast DCT algorithm and multiplier-accumulator based on the distributed algorithm, which has contributed to reduce the hardware requirement and to achieve high speed operation. The transpose memory inserted between each dimension of DCT is partitioned in order to reduce further hardware overhead. Furthermore, this 2-dimensional DCT scheme satisfies the accuracy specification of CCITT recommendation MPEG. > --- paper_title: AltiVec extension to PowerPC accelerates media processing paper_content: There is a clear trend in personal computing toward multimedia-rich applications. These applications will incorporate a wide variety of multimedia technologies, including audio and video compression, 2D image processing, 3D graphics, speech and handwriting recognition, media mining, and narrow/broadband signal processing for communication. In response to this demand, major microprocessor vendors have announced architectural extensions to their general-purpose processors in an effort to improve their multimedia performance. Intel extended IA-32 with MMX and SSE (alias KNI), Sun enhanced Sparc with VIS, Hewlett-Packard added MAX to its PA-RISC architecture, Silicon Graphics extended the MIPS architecture with MDMX, and Digital (now Compaq) added MVI to Alpha. 
This article describes the most recent, and what we believe to be the most comprehensive, addition to this list: PowerPC's AltiVec, AltiVec speeds not only media processing but also nearly any application in which data parallelism exists, as demonstrated by a cycle-accurate simulation of Motorola's MPC 7400, the heart of Apple G4 systems. --- paper_title: The Garp architecture and C compiler paper_content: Various projects and products have been built using off-the-shelf field-programmable gate arrays (FPGAs) as computation accelerators for specific tasks. Such systems typically connect one or more FPGAs to the host computer via an I/O bus. Some have shown remarkable speedups, albeit limited to specific application domains. Many factors limit the general usefulness of such systems. Long reconfiguration times prevent the acceleration of applications that spread their time over many different tasks. Low-bandwidth paths for data transfer limit the usefulness of such systems to tasks that have a high computation-to-memory-bandwidth ratio. In addition, standard FPGA tools require hardware design expertise which is beyond the knowledge of most programmers. To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it. They are also investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications. They present their results in this article. --- paper_title: A 600 MHz 2D-DCT processor for MPEG applications paper_content: In this paper we present the design of a 2D discrete cosine transform (2D-DCT) processor and its implementation using 0.6 /spl mu/m GaAs technology. The architecture of the processor, that resembles an FCT-MMM (fast cosine transform-matrix matrix multiplication) architecture, was development using distributed arithmetic (DA) in order to reduce the area required. The processor has about 50k transistors and occupies an area of 31.8 mm/sup 2/. It is able to process 400 Mpixels per second and at a clock frequency of 600 MHz, which is far beyond the requirements for real time high definition moving pictures in the MPEG-2 standard. Special consideration is given to the implementation of a transposition RAM which constitutes the bottleneck of the algorithm. A 64 word/spl times/12 bit, 1 ns access time transposition RAM was developed using a new dynamic RAM cell. --- paper_title: Video Image Processing with the Sonic Architecture paper_content: Current industrial video-processing systems use a mixture of high-performance workstations and application-specific integrated circuits. However, video image processing in the professional broadcast environment requires more computational power and data throughput than most of today's general-purpose computers can provide. In addition, using ASICs for video image processing is both inflexible and expensive. Configurable computing offers an appropriate alternative for broadcast video image editing and manipulation by combining the flexibility, programmability, and economy of general-purpose processors with the performance of dedicated ASICs. Sonic is a configurable computing system that performs real-time video image processing. The authors describe how it implements algorithms for two-dimensional linear transforms, fractal image generation, filters, and other video effects. 
Sonic's flexible and scalable architecture contains configurable processing elements that accelerate software applications and support the use of plug-in software. --- paper_title: AMD 3DNow! technology: architecture and implementations paper_content: The AMD-K6-2 microprocessor is the first implementation of AMD 3DNow!, a technology innovation for the x86 architecture that drives today's personal computers. 3DNow! technology is a set of 21 new instructions designed to open the traditional processing bottlenecks for floating-point-intensive and multimedia applications. Using these instructions, applications can implement more powerful solutions to create a more entertaining and productive PC platform. Examples of the type of improvements that 3DNow! technology enables are faster frame rates on high-resolution scenes, better physical modeling of real-world environments, sharper and more detailed 3D imaging, smoother video playback, and near theater-quality audio. Future AMD processors such as the AMD-K7, designed to operate at frequencies greater than 500 MHz, should provide even higher performance implementations of 3DNow! technology. --- paper_title: UltraSparc I: a four-issue processor supporting multimedia paper_content: UItraSpare I is a second-generation superscalar processor. It is a high performance, highly integrated, four issue superscalar processor based on the Spare Version 9 64-bit RISC architecture. We have extended the core instruction set to include graphics instructions that provide the most common operations related to two dimensional image processing; two- and three-dimensional graphics and image compression algorithms; and parallel operations on pixel data with 8-, 16-, and 32-bit components. Additional, new memory access instructions support the very high bandwidth requirements typical of graphics and multimedia applications. --- paper_title: A 0.8 /spl mu/ 100-MHz 2-D DCT core processor paper_content: The discrete cosine transform (DCT) has been commonly adopted in many transformation applications such as image, video, and facsimile. A VLSI architecture and implementation of a high speed 2-dimensional DCT core processor with 0.8 /spl mu/ technology is presented. This architecture applies a fast DCT algorithm and multiplier-accumulator based on the distributed algorithm, which has contributed to reduce the hardware requirement and to achieve high speed operation. The transpose memory inserted between each dimension of DCT is partitioned in order to reduce further hardware overhead. Furthermore, this 2-dimensional DCT scheme satisfies the accuracy specification of CCITT recommendation MPEG. > --- paper_title: The MAP1000A VLIM mediaprocessor paper_content: Presents the MAP1000A, an alternative to using custom ASICs for each multimedia-processing task. It is a single-chip, programmable mediaprocessor that also makes use of general-purpose RISC processing and a view framework. It provides a new programmable infrastructure with the cost, performance, and power characteristics suitable for replacing RISCs and ASICs in consumer electronics, communications, and imaging applications while, retaining a completely high-level-language programming approach. This single-chip mediaprocessor handles all digital functions in high-level-language software with significantly improved performance and without increased system cost or development complexity. 
--- paper_title: Use IRAM for rasterization paper_content: A new intelligent RAM (IRAM) architecture based on embedding DRAM memory and many simple processors on a chip is introduced. The architecture has the advantage of being able to perform massive parallel computing and of being usable as plain DRAM. Some key parts of the architecture are described. We also show that the architecture can be used in graphics applications where high rasterizer performance is required. --- paper_title: Multimedia Enhanced General-Purpose Processors paper_content: This paper gives an overview of the multimedia instructions that have been added to the instruction set architectures of general-purpose microprocessors to accelerate media processing. Examples are MAX, MMX and VIS, the multimedia extensions for PA-RISC, ix86, and SPARC processor architectures. We describe subword parallelism, a low overhead form of SIMD parallelism, and the classes of instructions needed to support subword parallel computations efficiently. Features described include arithmetic operations with saturation, averaging, multiply alternatives, data rearrangement primitives like Permute and Mix, formatting instructions, conditional execution, and complex instructions. --- paper_title: UltraSPARC-III: a 3rd generation 64 b SPARC microprocessor paper_content: UltraSPARC-III (US-III) is a 64 b 800 MHz 4-instruction-issue superscalar microprocessor for high-performance desktop workstation, work group server, and enterprise server platforms. On-chip caches include a 64 kB 4-way associative for data, 32 kB 4-way associative for instructions, a 2 k B 4-way associative data prefetch cache, and a 2 kB 4-way associative write. A 90 kB on-chip tag array supports the off-chip 8 MB unified second-level cache. The 23 M-transistor chip in a 0.15 /spl mu/m, 7-layer metal process consumes 60 W from a 1.5 V supply. --- paper_title: 64-bit and multimedia extensions in the PA-RISC 2.0 architecture paper_content: This paper describes the architectural extensions to the PA-RISC 1.1 architecture to enable 64-bit processing of integers and pointers. It also describes MAX, the Multi-media Acceleration eXtensions which speed up the processing of multimedia and other applications with parallelism at the intra instruction, or subword, level. Other additions to the PA-RISC 2.0 architecture include performance enhancements with respect to memory hierarchy management, branch penalty reduction, and floating-point performance. --- paper_title: An X86 microprocessor with multimedia extensions paper_content: This sixth-generation X86 instruction-set compatible microprocessor implements a set of multimedia extensions. Instruction predecoding to identify instruction boundaries begins during filling of the 32 kB two-way set associative instruction cache after which the predecode bits are stored in the 20 kB predecode cache. The processor decodes up to two X86 instructions per clock, most of which are decoded by hardware into one to four RISC-like operations, called RISC86 Ops, whereas the uncommon instructions are mapped into ROM-resident RISC sequences. The instruction scheduler buffers up to 24 RISC86 operations, using register renaming with a total of 48 registers. Up to six RISC86 instructions are issued out-of-order to seven parallel execution units, speculatively executed and retired in order. The branch algorithm uses two-level branch prediction based on an 8192-entry branch history table, a 16-entry branch target cache and a 16-entry return address stack. 
The 10.18/spl times/15.38 mm/sup 2/ die contains 8.8M transistors. The chip is in 0.35 /spl mu/m CMOS using five layers of metal, shallow trench isolation, and tungsten local interconnect. --- paper_title: VIS speeds new media processing paper_content: UltraSparc's Visual Instruction Set, described here in detail, accelerates some widely used media-processing algorithms by as much as seven times. Today's new media, increasingly sophisticated 3D graphics environments, videoconferencing, MPEG video playback, 3D visualization, image processing, and so on, demand enhancements to conventional RISC instruction sets, which were not originally designed to handle such applications. The Visual Instruction Set (VIS) is a comprehensive set of RISC-style instructions targeted at accelerating this new media processing. --- paper_title: UltraSparc I: a four-issue processor supporting multimedia paper_content: UltraSparc I is a second-generation superscalar processor. It is a high performance, highly integrated, four issue superscalar processor based on the Sparc Version 9 64-bit RISC architecture. We have extended the core instruction set to include graphics instructions that provide the most common operations related to two dimensional image processing; two- and three-dimensional graphics and image compression algorithms; and parallel operations on pixel data with 8-, 16-, and 32-bit components. Additionally, new memory access instructions support the very high bandwidth requirements typical of graphics and multimedia applications. --- paper_title: The Internet Streaming SIMD Extensions paper_content: Because floating-point computation is the heart of 3D geometry, speeding up floating-point computation is vital to overall 3D performance. To produce a visually perceptible difference in graphics applications, Intel's 32-bit processors, based on the IA-32 architecture, required an increase of 1.5 to 2 times the native floating-point performance. One path to better performance involves studying how the system uses data. Today's 3D applications can execute a lot faster by differentiating between data used repeatedly and streaming data, i.e., data used only once and then discarded. The Pentium III's new floating-point extension lets programmers designate data as streaming and provides instructions that handle this data efficiently. The authors designed the Internet Streaming SIMD Extensions (ISSE) to enable a new level of visual computing on the volume PC platform. They discuss their results in terms of boosting the performance of 3D and video applications. --- paper_title: A 2.2 GOPS video DSP with 2-RISC MIMD, 6-PE SIMD architecture for real-time MPEG2 video coding/decoding paper_content: In multimedia applications, various video encoding/decoding standards such as MPEG2, MPEG1 and emerging algorithms call for a DSP solution of the extremely computation-intensive tasks. Several DSPs have been developed based on intensive pipeline processing at the macro-block level. In these DSPs, macroblock-based pipeline memory slices are needed for each pipeline stage. Programmability is limited by the hard-wired macros to be incorporated such as DCT and Quantizer. A microprocessor or a media-processor with multimedia-enhanced instructions has not yet been applied to MPEG2 encoding.
This DSP for real-time codec applications has the following features: (a) extensive use of data parallelism inside the macro-block data structure, (b) flexible data path for coding algorithms to enhance gate utilization and to reduce the use of macro-block pipeline memory, (c) data path design suitable for (but not limited to) fast DCT/IDCT algorithms. --- paper_title: A 60-MHz 240-mW MPEG-4 videophone LSI with 16-Mb embedded DRAM paper_content: A 240-mW single-chip MPEG-4 videophone LSI with a 16-Mb embedded DRAM is fabricated utilizing a 0.25-/spl mu/m CMOS triple-well quad-metal technology. The videophone LSI is applied to the 3GPP 3G-324M video-telephony standard for IMT-2000, and implements the MPEG-4 video SPL1 codec, the AMR speech codec, and the ITU-T H.223 Annex B multiplexing/demultiplexing at the same time. Three 16-bit multimedia-extended RISC processors, dedicated hardware accelerators, and a 16-Mb embedded DRAM are integrated on a 10.84 mm/spl times/10.84 mm die. It also integrates camera, display, audio, and network interfaces required for a mobile video-phone terminal. In addition to conventional low-power techniques, such as clock gating and parallel operation, some new low-power techniques are also employed. These include an embedded DRAM with optimized configuration, a low-power motion estimator, and the adoption of the variable-threshold voltage CMOS (VT-CMOS). The MPEG-4 videophone LSI consumes 240 mW at 60 MHz, which is only 22% of that for a conventional multichip design. Variable threshold voltage CMOS reduces standby leakage current to 26 /spl mu/A, which is only 17% of that for the conventional CMOS design. --- paper_title: The MAP1000A VLIM mediaprocessor paper_content: Presents the MAP1000A, an alternative to using custom ASICs for each multimedia-processing task. It is a single-chip, programmable mediaprocessor that also makes use of general-purpose RISC processing and a view framework. It provides a new programmable infrastructure with the cost, performance, and power characteristics suitable for replacing RISCs and ASICs in consumer electronics, communications, and imaging applications while, retaining a completely high-level-language programming approach. This single-chip mediaprocessor handles all digital functions in high-level-language software with significantly improved performance and without increased system cost or development complexity. --- paper_title: SH4 RISC multimedia microprocessor paper_content: Unique, floating-point length-4 vector instructions prove more effective than conventional SIMD architecture for 3D graphics processing. --- paper_title: A high performance DSP architecture "MSPM" for digital image processing using embedded DRAM ASIC technologies paper_content: This paper describes the architecture of the "MSPM: Multimedia Signal Processor with embedded Memory". It has been developed to evaluate the architectural effects of 0.35-micron embedded DRAM technologies on the performance. The chip has achieved 24 times the performance of an ordinary RISC processor at the same frequency, and also ten times the performance of an ordinary 16 bit DSP. The improvement has been made possible by the following architectural characteristics: direct-memory-referring instruction sets; SIMD-type-parallel-executing features; byte-aligned-word-access features; and multi-instruction migrating features. The MSPM achieved 800 MOPS@66 MHz with 1.1 Gbyte/s bandwidth. Integration of a 16 Mb DRAM with 128 bit data bus is also reported. 
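The ISSE and SH4 entries above both target the four-wide single-precision arithmetic that dominates 3D geometry. The sketch below is only an illustration of that style of computation, written with the standard x86 SSE compiler intrinsics; it assumes an SSE-capable x86 compiler and is not code from any of the cited processors.

```c
/*
 * Illustrative sketch: packed single-precision computation of the kind
 * ISSE/SSE- and SH4-class vector FP units accelerate, expressed with the
 * standard x86 SSE intrinsics. Computes y[i] = a*x[i] + y[i], four floats
 * per packed instruction.
 */
#include <xmmintrin.h>
#include <stdio.h>

static void saxpy4(float a, const float *x, float *y, int n)
{
    __m128 va = _mm_set1_ps(a);                /* broadcast a into 4 lanes */
    for (int i = 0; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);       /* load 4 floats            */
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
        _mm_storeu_ps(y + i, vy);              /* store 4 results          */
    }
}

int main(void)
{
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[8] = {0};
    saxpy4(2.0f, x, y, 8);
    for (int i = 0; i < 8; ++i) printf("%.1f ", y[i]);
    printf("\n");
    return 0;
}
```

Each intrinsic maps to one packed instruction, so four floats are loaded, multiplied, accumulated, and stored per instruction rather than per scalar loop iteration.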
--- paper_title: An area efficient video/audio codec for portable multimedia application paper_content: In this paper, we present an area efficient video and audio single chip encoder/decoder for portable multimedia application. The single-chip called as VASP (Video Audio Signal Processor) consists of a video signal processing block and an audio signal processing block. This chip has a mixed hardware/software architecture to combine performance and flexibility. The video signal processing block was designed to implement hardwired solution of pixel input/output, full pixel motion estimation, half pixel motion estimation, discrete cosine transform, quantization, run length coding, host interface, and 16 bit RISC type internal controller. The audio signal processing block is implemented with a software solution using 16 bit fixed point DSP. This chip contains 142,300 gates, 22 kbits FIFO, 107 kbits SRAM, and 556 kbits ROM, and the chip size was 9.02 mm/spl times/9.06 mm which was fabricated using 0.5 micron 3-layers metal CMOS technology. --- paper_title: A media processor for multimedia signal processing applications paper_content: A multimedia system requires processing capabilities that include controlling functions as well as high throughput. For the consumer market, this processor must also satisfy low cost constraints. An efficient solution is the use of a dual-issue RISC architecture with key enhancements to target the high computation needs of multimedia applications. The RISC design methodology ensures its ease of programming and efficient controlling functions while its dual-issue approach lead to a high resource utilization. Key enhancements to this architecture include flexibility in the issuing control of the dual instructions; special instructions to implement SIMD parallelism; and full flexibility of half-word precision and double-word precision arithmetic instructions. --- paper_title: A 2000-MOPS embedded RISC processor with a Rambus DRAM controller paper_content: We have developed a 0.25-/spl mu/m, 200-MHz embedded RISC processor for multimedia applications. This processor has a dual-issue superscalar datapath that consists of a 32-bit integer unit and a 64-bit single-instruction multiple-data (SIMD) function unit that together have a total of five multiply-adders. An on-chip concurrent Rambus DRAM (C-RDRAM) controller uses interleaved transactions to increase the memory bandwidth of the Rambus channel to 533 Mb/s. The controller also reduces latency by using the transaction interleaving and instruction prefetching. A 64-bit, 200-MHz internal bus transfers data among the CPU core, the C-RDRAM, and the peripherals. These high-data-rate channels improve CPU performance because they eliminate a bottleneck in the data supply. The datapath part of this chip was designed using a functional macrocell library that included placement information for leaf cells and resulted in the SIMD function unit of this chip's having 68000 transistors per square millimeter. --- paper_title: The D30V/MPEG multimedia processor paper_content: MPEG-2 decoding and encoding are important applications for multimedia systems. Real-time capability and low-cost implementation are the main design considerations for these systems. Due to the high computational requirements of real-time applications, multimedia systems typically use special-purpose processors to handle data. 
However, due to the inherent inflexibility of their designs, these dedicated processors are of little use in various application environments-digital videocassette recorders, for example. This article introduces Mitsubishi's D30V/MPEG multimedia processor, which integrates a dual-issue RISC with minimal hardware support for a real-time MPEG-2 decoder. This approach is advantageous because of the small chip area it requires and the flexibility of the easy-to-program RISC processor for multimedia applications. --- paper_title: High-speed and low-power real-time programmable video multi-processor for MPEG-2 multimedia chip on 0.6 /spl mu/m TLM CMOS technology paper_content: We developed a Video Multi Processor (VMP) for image compression and decompression schemes of MPEG (especially MPEG-2) in this study. The VMP would apply to programmable architecture, various flexibilities to implement real-time image compression algorithm, and other many applications such as DVD-CD ROM authoring tool and videophone/teleconferencing systems. IO architecture of the VMP is designed for the multi-processor functionality in which uses many VMPs according to required arithmetic quantities of the system. Further, the architecture of the VMP system is simplified by processing the necessary peripheral IO system operations within the processor. --- paper_title: Design space exploration for future TriMedia CPUs paper_content: It is widely recognized that fine-grain parallelism can greatly enhance a processor's performance for signal processing applications. For this reason, future generation TriMedias will combine VLIW and subword parallelism in a single CPU. We present a snapshot of the new CPUs design process: the outlines are clear but fine tuning is still ongoing. We present the design flow and 'workbench' that the designers use for further tuning. --- paper_title: The MAP1000A VLIM mediaprocessor paper_content: Presents the MAP1000A, an alternative to using custom ASICs for each multimedia-processing task. It is a single-chip, programmable mediaprocessor that also makes use of general-purpose RISC processing and a view framework. It provides a new programmable infrastructure with the cost, performance, and power characteristics suitable for replacing RISCs and ASICs in consumer electronics, communications, and imaging applications while, retaining a completely high-level-language programming approach. This single-chip mediaprocessor handles all digital functions in high-level-language software with significantly improved performance and without increased system cost or development complexity. --- paper_title: A 600 MHz 2D-DCT processor for MPEG applications paper_content: In this paper we present the design of a 2D discrete cosine transform (2D-DCT) processor and its implementation using 0.6 /spl mu/m GaAs technology. The architecture of the processor, that resembles an FCT-MMM (fast cosine transform-matrix matrix multiplication) architecture, was development using distributed arithmetic (DA) in order to reduce the area required. The processor has about 50k transistors and occupies an area of 31.8 mm/sup 2/. It is able to process 400 Mpixels per second and at a clock frequency of 600 MHz, which is far beyond the requirements for real time high definition moving pictures in the MPEG-2 standard. Special consideration is given to the implementation of a transposition RAM which constitutes the bottleneck of the algorithm. 
A 64 word/spl times/12 bit, 1 ns access time transposition RAM was developed using a new dynamic RAM cell. --- paper_title: Architecture and design of a Talisman-compatible multimedia processor paper_content: This paper describes the architecture, functionality, and design of a Talisman-compatible multimedia processor (TM-PC) from Philips Semiconductors. "Talisman" is the code name of a new graphics and multimedia hardware architecture (from Microsoft Corp.) that aims at achieving the performance of high-end three-dimensional graphics workstations at consumer price points. The TM-PC is a programmable processor with a high-performance, very long instruction word central processing unit (CPU) core. The CPU core, aided by an array of peripheral devices (multimedia coprocessors and input-output units), facilitates concurrent processing of audio, video, graphics, and communication data. Designed specifically for the Microsoft Talisman project, TM-PC is a derivative of Philips' TM-1 media processor and is tailored to be used in a variety of PC-based functions as a plug-in board on the peripheral component interconnect (PCI) bus. In the design of the TM-PC, the functionality of most of the blocks from the TM-1 has been kept unchanged; the primary changes in the existing blocks have been in the main memory and the PCI interfaces, and a new block, called VPB, has been added to support virtual frame buffer functionality as well as video graphics adapter and Soundblaster emulation capability. The major emphasis of this paper is on the design details of the new VPB module and an explanation of how it fits with the rest of the TM-1 design. --- paper_title: The D30V/MPEG multimedia processor paper_content: MPEG-2 decoding and encoding are important applications for multimedia systems. Real-time capability and low-cost implementation are the main design considerations for these systems. Due to the high computational requirements of real-time applications, multimedia systems typically use special-purpose processors to handle data. However, due to the inherent inflexibility of their designs, these dedicated processors are of little use in various application environments-digital videocassette recorders, for example. This article introduces Mitsubishi's D30V/MPEG multimedia processor, which integrates a dual-issue RISC with minimal hardware support for a real-time MPEG-2 decoder. This approach is advantageous because of the small chip area it requires and the flexibility of the easy-to-program RISC processor for multimedia applications. --- paper_title: Exploiting Java instruction/thread level parallelism with horizontal multithreading paper_content: Java bytecodes can be executed with the following three methods: a Java interpreter running on a particular machine interprets bytecodes; a Just-in-Time (JIT) compiler translates bytecodes to the native primitives of the particular machine and the machine executes the translated codes; and a Java processor executes bytecodes directly. The first two methods require no special hardware support for the execution of Java bytecodes and are widely used currently. The last method requires an embedded Java processor, picoJavaI or picoJavaII for instance. The picoJavaI and picoJavaII are simple pipelined processors with no ILP (instruction level parallelism) and TLP (thread level parallelism) supports. 
A so-called MAJC (microprocessor architecture for Java computing) design can exploit ILP and TLP by using a modified VLIW (very long instruction word) architecture and vertical multithreading technique, but it has its own instruction set and cannot execute Java bytecodes directly. In this paper, we investigate a processor architecture which can directly execute Java bytecodes meanwhile can exploit Java ILP and TLP simultaneously. The proposed processor consists of multiple slots implementing horizontal multithreading and multiple functional units shared by all threads executed in parallel. Our architectural simulation results show that the Java processor could achieve an average 20 IPC (instructions per cycle), or 7.33 EIPC (effective IPC), with 8 slots and a 4-instruction scheduling window for each slot. We also check other configurations and give the utilization of functional units as well as the performance improvement with various kinds of working loads. --- paper_title: A 0.8 /spl mu/ 100-MHz 2-D DCT core processor paper_content: The discrete cosine transform (DCT) has been commonly adopted in many transformation applications such as image, video, and facsimile. A VLSI architecture and implementation of a high speed 2-dimensional DCT core processor with 0.8 /spl mu/ technology is presented. This architecture applies a fast DCT algorithm and multiplier-accumulator based on the distributed algorithm, which has contributed to reduce the hardware requirement and to achieve high speed operation. The transpose memory inserted between each dimension of DCT is partitioned in order to reduce further hardware overhead. Furthermore, this 2-dimensional DCT scheme satisfies the accuracy specification of CCITT recommendation MPEG. > --- paper_title: A 600 MHz 2D-DCT processor for MPEG applications paper_content: In this paper we present the design of a 2D discrete cosine transform (2D-DCT) processor and its implementation using 0.6 /spl mu/m GaAs technology. The architecture of the processor, that resembles an FCT-MMM (fast cosine transform-matrix matrix multiplication) architecture, was development using distributed arithmetic (DA) in order to reduce the area required. The processor has about 50k transistors and occupies an area of 31.8 mm/sup 2/. It is able to process 400 Mpixels per second and at a clock frequency of 600 MHz, which is far beyond the requirements for real time high definition moving pictures in the MPEG-2 standard. Special consideration is given to the implementation of a transposition RAM which constitutes the bottleneck of the algorithm. A 64 word/spl times/12 bit, 1 ns access time transposition RAM was developed using a new dynamic RAM cell. --- paper_title: The D30V/MPEG multimedia processor paper_content: MPEG-2 decoding and encoding are important applications for multimedia systems. Real-time capability and low-cost implementation are the main design considerations for these systems. Due to the high computational requirements of real-time applications, multimedia systems typically use special-purpose processors to handle data. However, due to the inherent inflexibility of their designs, these dedicated processors are of little use in various application environments-digital videocassette recorders, for example. This article introduces Mitsubishi's D30V/MPEG multimedia processor, which integrates a dual-issue RISC with minimal hardware support for a real-time MPEG-2 decoder. 
This approach is advantageous because of the small chip area it requires and the flexibility of the easy-to-program RISC processor for multimedia applications. --- paper_title: A flexible processor architecture for MPEG-4 image compositing paper_content: This paper proposes a new array architecture for MPEG-4 image compositing. The emerging MPEG4 standard for multimedia applications allows script-based compositing of audiovisual scenes from multiple audio and visual objects. MPEG-4 supports both, natural (video) and synthetic (3D) visual objects or a combination of both. Objects can be manipulated, e.g. positioned, rotated, warped or duplicated by user interaction. A coprocessor architecture is presented, that works in parallel to an MPEG-4 video- and audio-decoder, and performs computation and bandwidth intensive low-level tasks for image compositing. The processor consists of an SIMD array of 16 DSPs to reach the required processing power for real-time image warping, alpha-blending and 3D rendering tasks. A programmable architecture allows one to adapt the processing resources to the specific needs of different tasks and applications. The processor has an object-oriented cache architecture with 2D virtual address space (e.g. textures), that allows concurrent and conflict-free access to shared data objects for all 16 DSPs. Especially I/O intensive tasks like texture-mapping, alpha-blending, image warping, z-buffer and shading algorithms benefit from shared memory caches and the possibility to preload data before it is accessed. --- paper_title: The D30V/MPEG multimedia processor paper_content: MPEG-2 decoding and encoding are important applications for multimedia systems. Real-time capability and low-cost implementation are the main design considerations for these systems. Due to the high computational requirements of real-time applications, multimedia systems typically use special-purpose processors to handle data. However, due to the inherent inflexibility of their designs, these dedicated processors are of little use in various application environments-digital videocassette recorders, for example. This article introduces Mitsubishi's D30V/MPEG multimedia processor, which integrates a dual-issue RISC with minimal hardware support for a real-time MPEG-2 decoder. This approach is advantageous because of the small chip area it requires and the flexibility of the easy-to-program RISC processor for multimedia applications. --- paper_title: Reconfigurable media processing paper_content: Multimedia processing is becoming increasingly important with a range of applications. Existing approaches for processing multimedia data can be broadly classified into two categories, namely: (i) microprocessors with extended media processing capabilities; and (ii) dedicated implementations (ASICs). The complexity, variety of techniques and tools associated with multimedia processing points to the opportunities for reconfigurable computing devices which will be able to adapt the underlying hardware dynamically in response to changes in the input data or processing environment. The paper proposes a novel approach to design a dynamically reconfigurable processor by performing hardware software co-design for a media processing application. As an example, the analysis of the shape coding module of MPEG-4 is chosen to demonstrate the potential for reconfigurability. ---
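Several of the DCT core-processor entries above share the same structure: a 1-D DCT pass over rows, a transposition memory, and a second 1-D pass over columns. The plain-C sketch below shows that row-column decomposition for an 8x8 block; it is a software reference only, not any of the cited hardware designs, and the tmp buffer simply stands in for the transposition RAM.

```c
/*
 * Reference-style sketch of the row-column 8x8 2-D DCT-II. The tmp[][]
 * buffer plays the role of the transposition RAM placed between the two
 * 1-D passes in the cited DCT core processors.
 */
#include <math.h>
#include <stdio.h>

#define N 8
static const double PI = 3.14159265358979323846;

/* 1-D DCT-II of an N-point sequence with orthonormal scaling. */
static void dct1d(const double in[N], double out[N])
{
    for (int k = 0; k < N; ++k) {
        double s = 0.0;
        for (int n = 0; n < N; ++n)
            s += in[n] * cos(PI * (2 * n + 1) * k / (2.0 * N));
        out[k] = s * ((k == 0) ? sqrt(1.0 / N) : sqrt(2.0 / N));
    }
}

static void dct2d(const double in[N][N], double out[N][N])
{
    double tmp[N][N], col[N], res[N];

    for (int i = 0; i < N; ++i)           /* first pass: DCT of every row     */
        dct1d(in[i], tmp[i]);

    for (int j = 0; j < N; ++j) {         /* second pass: DCT of every column */
        for (int i = 0; i < N; ++i) col[i] = tmp[i][j];
        dct1d(col, res);
        for (int i = 0; i < N; ++i) out[i][j] = res[i];
    }
}

int main(void)
{
    double block[N][N], coeff[N][N];
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            block[i][j] = 128.0;          /* flat block: only the DC term     */
    dct2d(block, coeff);
    printf("DC coefficient: %.1f\n", coeff[0][0]);  /* expect 128 * 8 = 1024  */
    return 0;
}
```

Hardware versions replace the inner products with distributed-arithmetic lookup tables and pipeline the two passes around the transposition memory, which is why that memory is singled out as the bottleneck in the 600 MHz design.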
Title: A Survey of Media Processing Approaches
Section 1: INTRODUCTION
Description 1: Provide an overview of the various media processing techniques, highlight the challenges presented by multimedia computing, and introduce the need for reconfigurable processing in multimedia environments.
Section 2: MEDIA PROCESSING APPROACHES
Description 2: Discuss the factors influencing the design of cost-effective media processing solutions and provide a classification of existing media processing strategies and solutions.
Section 3: GENERAL-PURPOSE PROGRAMMABLE PROCESSORS
Description 3: Detail the architecture and features of CISC- and RISC-based processors, explore their implementation of media processing functionalities, and compare their effectiveness in handling multimedia applications.
Section 4: SPECIAL-PURPOSE PROGRAMMABLE PROCESSORS
Description 4: Examine the architectures and functionalities of special-purpose programmable processors, focusing on how they exploit redundancies and parallelism at various levels in media processing algorithms.
Section 5: DEDICATED IMPLEMENTATIONS
Description 5: Analyze the dedicated hardware implementations for media processing, including monolithic and distributed approaches, and discuss their application in real-world scenarios.
Section 6: CONCLUSION
Description 6: Summarize the findings of the survey, highlight the gaps in existing media processing solutions, and justify the need to explore reconfigurable computing for mobile multimedia applications.
Sensors for Robotic Hands: A Survey of State of the Art
5
--- paper_title: A Method for the Control of Multigrasp Myoelectric Prosthetic Hands paper_content: This paper presents the design and preliminary experimental validation of a multigrasp myoelectric controller. The described method enables direct and proportional control of multigrasp prosthetic hand motion among nine characteristic postures using two surface electromyography electrodes. To assess the efficacy of the control method, five nonamputee subjects utilized the multigrasp myoelectric controller to command the motion of a virtual prosthesis between random sequences of target hand postures in a series of experimental trials. For comparison, the same subjects also utilized a data glove, worn on their native hand, to command the motion of the virtual prosthesis for similar sequences of target postures during each trial. The time required to transition from posture to posture and the percentage of correctly completed transitions were evaluated to characterize the ability to control the virtual prosthesis using each method. The average overall transition times across all subjects were found to be 1.49 and 0.81 s for the multigrasp myoelectric controller and the native hand, respectively. The average transition completion rates for both were found to be the same (99.2%). Supplemental videos demonstrate the virtual prosthesis experiments, as well as a preliminary hardware implementation. --- paper_title: Performance characteristics of anthropomorphic prosthetic hands paper_content: In this paper we set forth a review of performance characteristics for both common commercial prosthetics as well as anthropomorphic research devices. Based on these specifications as well as surveyed results from prosthetic users, ranges of hand attributes are evaluated and discussed. End user information is used to describe the performance requirements for prosthetic hands for clinical use. --- paper_title: Human Hand Function paper_content: 1. Historical Overview and general introduction 2. Evolutionary development and anatomy of the hand 3. Sensory neurophysiology 4. Tactile sensing 5. Active haptic sensing 6. Prehension 7. Non-prehensile skilled movements 8. End-effector constraints 9. Hand function across the lifespan 10. Applications 11. Summary, conclusions and future directions --- paper_title: The MANUS-HAND Dextrous Robotics Upper Limb Prosthesis: Mechanical and Manipulation Aspects paper_content: Dextrous artificial hand design and manipulation is an active research topic. A very interesting practical application is the field of upper limb prosthetics. This paper presents the mechanical design and manipulation aspects of the MANUS-HAND project to develop a multifunctional upper limb prosthesis. The kinematics of our design makes use of the so-called underactuated principle and leads to an innovative design that triples the performance of currently existing commercial hand prosthesis. In addition, the thumb design allows its positioning both in flexion and opposition. As a consequence, up to four grasping modes (cylindrical, precision, hook and lateral) are available with just two actuators. The proposed impedance control approach allows the fingers to behave as virtual springs. Given the difficulty of including the user in the control loop, this approach is based on an autonomous coordination and control of the grasp. As a consequence, the requirements on the Human Machine interface are reduced.
At the end of the paper, we briefly describe the clinical trials that were set up for evaluation purposes. --- paper_title: The evolution of functional hand replacement: From iron prostheses to hand transplantation. paper_content: The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation. --- paper_title: Functional Restoration of Adults and Children with Upper Extremity Amputation paper_content: History of Arm Amputation, Prosthetic Restoration, and Arm Amputation Rehabilitation Surgical Aspects of Arm Amputation. Amputation Levels and Surgical Techniques Upper Extremity Salvage and Reconstruction for Trauma and Tumors Upper Extremity Amputation Revision and Reconstruction Surgical Options for Brachial Plexus Injury Comprehensive Management for the Arm Amputee. Rehabilitation Planning for the Upper Extremity Amputee Self-Determination of the Person with an Upper Extremity Amputation Integrating Psychological and Medical Care: Practice Recommendations for Amputation Pain Management for Upper Extremity Amputation Evaluation of the Adolescent and Adult with Upper Extremity Amputation The Prosthetist's Evaluation and Planning Process with the Upper Extremity Amputee Foot Skills and Other Alternatives to Hand-Use Postoperative and Preprosthetic Preparation Functional Skills Training with Body-Powered and Externally Powered Prostheses Prosthetic Restoration in Arm Amputation. Prosthetic Prescription Aesthetic Restorations for the Upper Limb Amputee Overview of Body-Powered Upper Extremity Prostheses Externally Powered Prostheses for the Adult Transradial and Wrist Disarticulation Amputee External-Power for the Transhumeral Amputee of Powered Upper Extremity Prostheses Creative Prosthetic Solutions for Bilateral Upper Extremity Amputation Prosthetic Rehabilitation of Glenohumeral Level Deficiencies Recreation and Sports Adaptations Case Studies of Upper Extremity Amputations and Prosthetic Restoration Pediatric Upper Extremity Amputation. Evaluation of a Child with Congenital Upper Extremity Limb Deficiency Training the Child with a Unilateral Upper-Extremity Prosthesis Pediatric Case Studies of Upper Extremity Limb Deficiencies Outcomes in Upper Extremity Amputation. Follow-up, Outcomes, and Long-term Experiences in Adults with Upper Extremity Amputation Return to Work Issues for the Upper Extremity Amputee Historical Trends and the Future. Research Trends for the 21st Century. 
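The MANUS-HAND entry above describes an impedance control approach in which the fingers behave as virtual springs. The fragment below is a generic sketch of that idea only; the gains, units, and example values are invented for illustration and are not the project's controller.

```c
/*
 * Generic virtual-spring (impedance) law for one finger joint. All
 * parameter values are hypothetical; a real prosthesis would read the
 * joint angle from an encoder and send the torque to a motor driver.
 */
#include <stdio.h>

typedef struct {
    double k;         /* virtual stiffness  [N*mm/rad]   */
    double b;         /* virtual damping    [N*mm*s/rad] */
    double theta_ref; /* commanded rest angle [rad]      */
} joint_impedance_t;

/* Torque command that makes the joint feel like a spring-damper. */
static double impedance_torque(const joint_impedance_t *p,
                               double theta, double theta_dot)
{
    return p->k * (p->theta_ref - theta) - p->b * theta_dot;
}

int main(void)
{
    joint_impedance_t j = { .k = 50.0, .b = 2.0, .theta_ref = 0.8 };
    /* Example: joint held back at 0.5 rad by an object, still closing at 0.1 rad/s. */
    double tau = impedance_torque(&j, 0.5, 0.1);
    printf("commanded torque: %.2f N*mm\n", tau);  /* positive: keeps squeezing */
    return 0;
}
```

Because the restoring torque grows with deflection, contact with an object yields a grip force that scales with how far the commanded posture penetrates it, without a dedicated force sensor in the loop.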
--- paper_title: Android Hands: A State-of-the-Art Report paper_content: Humans have adjusted their space, their actions, and their performed tasks according to their morphology, abilities, and limitations. Thus, the properties of a social robot should fit within these predetermined boundaries when, and if it is beneficial for the user, and the notion of the task. On such occasions, android and humanoid hand models should have similar structure, functions, and performance as the human hand. In this paper we present the anatomy, and the key functionalities of the human hand followed by a literature review on android/humanoid hands for grasping and manipulating objects, as well as prosthetic hands, in order to inform roboticists about the latest available technology, and assist their efforts to describe the state-of-the-art in this field.Copyright © 2014 by ASME --- paper_title: Results of an Internet survey of myoelectric prosthetic hand users paper_content: The results of a survey of 54 persons with upper limb amputations who anonymously completed a questionnaire on an Internet homepage are presented. The survey ran for four years and the participants were divided into groups of females, males, and children. It was found that the most individuals employ their myoelectric hand prosthesis for 8 hours or more. However, the survey also revealed a high level of dissatisfaction with the weight and the grasping speed of the devices. Activities for which prostheses should be useful were stated to include handicrafts, personal hygiene, using cutlery, operation of electronic and domestic devices, and dressing/undressing. Moreover, additional functions, e.g., a force feedback system, independent movements of the thumb, the index finger, and the wrist, and a better glove material are priorities that were identified by the users as being important improvements the users would like to see in myoelectric prostheses. --- paper_title: Paré and prosthetics: the early history of artificial limbs. paper_content: There is evidence for the use of prostheses from the times of the ancient Egyptians. Prostheses were developed for function, cosmetic appearance and a psycho-spiritual sense of wholeness. Amputation was often feared more than death in some cultures. It was believed that it not only affected the amputee on earth, but also in the afterlife. The ablated limbs were buried and then disinterred and reburied at the time of the amputee's death so the amputee could be whole for eternal life. One of the earliest examples comes from the 18th dynasty of ancient Egypt in the reign of Amenhotep II in the fifteenth century bc. A mummy in the Cairo Museum has clearly had the great toe of the right foot amputated and replaced with a prosthesis manufactured from leather and wood. The first true rehabilitation aids that could be recognized as prostheses were made during the civilizations of Greece and Rome. During the Dark Ages prostheses for battle and hiding deformity were heavy, crude devices made of available materials - wood, metal and leather. Such were the materials available to Ambroise Paré who invented both upper-limb and lower-limb prostheses. His 'Le Petit Lorrain', a mechanical hand operated by catches and springs, was worn by a French Army captain in battle. Subsequent refinements in medicine, surgery and prosthetic science greatly improved amputation surgery and the function of prostheses. 
What began as a modified crutch with a wooden or leather cup and progressed through many metamorphoses has now developed into a highly sophisticated prosthetic limb made of space-age materials. --- paper_title: The Sensory Somatotopic Map of the Human Hand Demonstrated at 4 Tesla paper_content: Abstract Recent attempts at high-resolution sensory-stimulated fMRI performed at 1.5 T have had very limited success at demonstrating a somatotopic organization for individual digits. Our purpose was to determine if functional MRI at 4 T can demonstrate the sensory somatotopic map of the human hand. Sensory functional MRI was performed at 4 T in five normal volunteers using a low-frequency vibratory stimulus on the pad of each finger of the left hand. A simple motor control task was also performed. The data were normalized to a standard atlas, and individual and group statistical parametric maps (SPMs) were computed for each task. Volume of activation and distribution of cluster maxima were compared for each task. For three of the subjects, the SPMs demonstrated a somatotopic organization of the sensory cortex. The group SPMs demonstrated a clear somatotopic organization of the sensory cortex. The thumb to fifth finger were organized, in general, with a lateral to medial, inferior to superior, and anterior to posterior relationship. There was overlap in the individual SPMs between fingers. The sensory activation spanned a space of 12–18 mm (thumb to fifth finger) on the primary sensory cortex. The motor activation occurred consistently at the superior-most extent of the sensory activation within and across subjects. The sensory somatotopic map of the human hand can be identified at 4 T. High-resolution imaging at 4 T can be useful for detailed functional imaging studies. --- paper_title: Robot Evolution: The Development of Anthrobotics paper_content: From the Publisher: ::: Since the creation of the first modern robots in the 1950s, robotics has developed rapidly and in diverse directions; the term robot (from the Czech word for drudgery) now applies to a spectrum of creations, from mechanical limbs bolted to factory floors to computer-driven bipeds with human-like capabilities. But the urge to create "mechanical men" to perform mundane, repetitive, and even complex human tasks is nearly as old as civilization itself. The ancient Greeks built automata, as did the Egyptians and the Japanese. Leonardo da Vinci designed mechanical men, and entertainment robots were all the rage in eighteenth-century Europe. Robot Evolution is unique in robotics literature, at once a comprehensive pictorial history of robots and a technical guide to robot designs, devices, and systems. Author and robot expert Mark E. Rosheim reviews and describes the gamut of robot mechanisms, from ancient to state-of-the-art, from subcomponents such as joints, grippers, and actuators to completely integrated systems equipped with artificial intelligence, sensors, and autonomous mobility. Rosheim chronicles the development and increasing complexity of these systems, using the kinesiology of human body parts as a framework for evaluating the kinematics of robotic components and explaining how these components are used to emulate human motion. Particular emphasis is placed on the most advanced current devices and promising experimental designs. Supplemented with hundreds of photographs, drawings, and illustrated tables, Robot Evolution is written in a clear, forthright style and organized to provide quick and easy access to information. 
Separate chapters are devoted to robot arms, wrists, hands, and legs, and each chapter contains examples of several different design approaches to the same problem or component. The advantages and disadvantages of each design are discussed in detail along with preferred applications and specific functions of each device. An annotated bib --- paper_title: Tactile Sensing—From Humans to Humanoids paper_content: Starting from human "sense of touch," this paper reviews the state of tactile sensing in robotics. The physiology, coding, and transferring tactile data and perceptual importance of the "sense of touch" in humans are discussed. Following this, a number of design hints derived for robotic tactile sensing are presented. Various technologies and transduction methods used to improve the touch sense capability of robots are presented. Tactile sensing, focused to fingertips and hands until past decade or so, has now been extended to whole body, even though many issues remain open. Trend and methods to develop tactile sensing arrays for various body sites are presented. Finally, various system issues that keep tactile sensing away from widespread utility are discussed. --- paper_title: The selection of mechanical actuators based on performance indices paper_content: A method is presented for selecting the type of actuator best suited to a given task, in the early stages of engineering design. The selection is based on matching performance characteristics of the actuator, such as force and displacement, to the requirements of the given task. The performance characteristics are estimated from manufacturers' data and from simple models of performance limitation such as heat generation and resonance. Characteristics are presented in a graphical form which allows for a direct and systematic comparison of widely different systems of actuation. The actuators considered include man-made actuators (such as hydraulic, solenoid and shape memory alloy) and naturally occurring actuators (such as the muscles of animals and plants). --- paper_title: Hands for Dexterous Manipulation and Robust Grasping: A Difficult Road Towards Simplicity paper_content: In this paper, an attempt at summarizing the evolution and the state of the art in the field of robot hands is made.
In such exposition, a critical evaluation of what in the author's view are the leading ideas and emerging trends is privileged with respect to exhaustiveness of citations. The survey is focused mainly on three types of functional requirements a machine hand can be assigned in an artificial system, namely, manipulative dexterity, grasp robustness, and human operability. A basic distinction is made between hands designed for mimicking the human anatomy and physiology, and hands designed to meet restricted, practical requirements. In the latter domain, arguments are presented in favor of a "minimalistic" attitude in the design of hands for practical applications, i.e., use the least number of actuators, the simplest set of sensors, etc., for a given task. To achieve this rather obvious engineering goal is a challenge to our community. The paper illustrates some of the new, sometimes difficult, problems that are brought about by building and controlling simpler, more practical devices. --- paper_title: Sensory motor systems of artificial and natural hands paper_content: The surgeon Ambroise Paré designed an anthropomorphic hand for wounded soldiers in the 16th century. Since that time, there have been advances in technology through the use of computer-aided design, modern materials, electronic controllers and sensors to realise artificial hands which have good functionality and reliability. Data from touch, object slip, finger position and temperature sensors, mounted in the fingers and on the palm, can be used in feedback loops to automatically hold objects. A study of the natural neuromuscular systems reveals a complexity which can only in part be realised today with technology. Highlights of the parallels and differences between natural and artificial hands are discussed with reference to the Southampton Hand. The anatomical structure of parts of the natural systems can be made artificially such as the antagonist muscles using tendons. These solutions look promising as they are based on the natural form but in practice lack the desired physical specification. However, concepts of the lower spinal loops can be mimicked in principle. Some future devices will require greater skills from the surgeon to create the interface between the natural system and an artificial device. Such developments may offer a more natural control with ease of use for the limb deficient person. --- paper_title: Effective emotional expressions with expression humanoid robot WE-4RII: integration of humanoid robot hand RCH-1 paper_content: The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. We considered that human hands play an important role in communication because human hands have grasping, sensing and emotional expression abilities. Then, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-1 (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. Furthermore, we confirmed that RCH-1 and WE-4RII had effective emotional expression ability because the correct recognition rate of WE-4RII's emotional expressions was higher than the WE-4R's one. In this paper, we describe the mechanical features of WE-4RII.
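The Southampton Hand entry above notes that signals from touch, slip, and position sensors can be used in feedback loops to automatically hold objects. The following is a hedged sketch of such an automatic-hold loop; every threshold, step size, and sensor stub here is invented for illustration and none of it is the published Southampton controller.

```c
/*
 * Hedged sketch of an automatic-hold grip loop: grip force is raised
 * whenever the slip sensor fires and is otherwise kept within fixed
 * limits. The sensor/actuator functions are stubs standing in for real
 * drivers, and the numbers are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define FORCE_MIN   1.0   /* lightest grip that still holds the object [N] */
#define FORCE_MAX  15.0   /* actuator/safety limit [N]                     */
#define FORCE_STEP  0.5   /* increment applied on each detected slip [N]   */

static double grip_force = 2.0;   /* stand-in for the real actuator state  */

/* Stubs standing in for the real sensor and actuator drivers. */
static bool   slip_detected(void)          { return true; }  /* pretend the object keeps slipping */
static double measured_grip_force(void)    { return grip_force; }
static void   command_grip_force(double f) { grip_force = f; }

/* One cycle of the automatic-hold loop. */
static void grip_hold_step(void)
{
    double f = measured_grip_force();
    if (slip_detected())
        f += FORCE_STEP;              /* tighten until the slip stops       */
    if (f < FORCE_MIN) f = FORCE_MIN;
    if (f > FORCE_MAX) f = FORCE_MAX; /* never exceed the safety limit      */
    command_grip_force(f);
}

int main(void)
{
    for (int i = 0; i < 4; ++i) {     /* four control cycles of simulated slip */
        grip_hold_step();
        printf("cycle %d: grip force = %.1f N\n", i, grip_force);
    }
    return 0;
}
```

Running a loop like this once per control cycle is what relieves the user from monitoring grip pressure visually, the burden that several of the myoelectric-control entries in this list point to.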
--- paper_title: Experimental analysis and performance comparison of three different prosthetic hands designed according to a biomechatronic approach paper_content: Three different artificial hands have been designed in the framework of collaboration between the INAIL/RTR Centre and the ARTS Lab of Scuola Superiore Sant'Anna (SSSA) with the aim of developing a novel generation of prosthetic hands with increasing performance in restoring grasping functionalities in case of hand amputation. After having illustrated the philosophy and the methods of the biomechatronic approach to the design of prosthetic devices, the three different artificial hand prototypes are presented and described. A novel experimental protocol to assess hand performance has been conceived in order to provide quantitative parameters to analyze and compare various hands. The most important physical quantities to be rigorously measured and compared are the forces exerted on the object in different locations on the hand and in different grasping conditions. In particular, the test protocol is composed of six different experimental tests intended to measure the grasping force during the precision grip, the power grasp and the lateral pinch. The results obtained by applying the test protocol to the three prototypes are presented and discussed. In conclusion the paper presents the specifications of a novel prosthetic hand coming out from analysis presented in this paper. --- paper_title: The MANUS-HAND Dextrous Robotics Upper Limb Prosthesis: Mechanical and Manipulation Aspects paper_content: Dextrous artificial hand design and manipulation is an active research topic. A very interesting practical application is the field of upper limb prosthetics. This paper presents the mechanical design and manipulation aspects of the MANUS-HAND project to develop a multifunctional upper limb prosthesis. The kinematics of our design makes use of the so-called underactuated principle and leads to an innovative design that triples the performance of currently existing commercial hand prosthesis. In addition, the thumb design allows its positioning both in flexion and opposition. As a consequence, up to four grasping modes (cylindrical, precision, hook and lateral) are available with just two actuators. The proposed impedance control approach allows the fingers to behave as virtual springs. Given the difficulty of including the user in the control loop, this approach is based on an autonomous coordination and control of the grasp. As a consequence, the requirements on the Human Machine interface are reduced. At the end of the paper, we briefly describe the clinical trials that were set up for evaluation purposes. --- paper_title: The Cyberhand: on the design of a cybernetic prosthetic hand intended to be interfaced to the peripheral nervous system paper_content: The objective of the project described in this paper is the development of a cybernetic prosthesis, replicating as much as possible the sensory-motor capabilities of the natural hand. The human hand is not only an effective tool but also an ideal instrument to acquire information from the external environment. The development of a truly human-like artificial hand is probably the most widely known paradigm of "bionics". The Cyberhand Project aims to obtain a cybernetic prosthetic hand interfaced to the peripheral nervous system.
In particular this paper is focused on the hand mechanisms design and it presents preliminary results in developing the three fingered anthropomorphic hand prototype and its sensory system. --- paper_title: Development and Control of a Multifunctional Prosthetic Hand with Shape Memory Alloy Actuators paper_content: In this research paper, non-conventional actuation technology, based on shape memory alloys, is employed for the development of an innovative low-cost five-fingered prosthetic hand. By exploiting the unique properties of these alloys, a compact, silent and modular actuation system is implemented and integrated in a lightweight and anthropomorphic rapid-prototyped hand chassis. A tendon-driven underactuated mechanism provides the necessary dexterity while keeping the mechanical and control complexity of the device low. Tactile sensors are integrated in the fingertips improving the overall hand control. Embedded custom-made electronics for hand interfacing and control are also presented and analyzed. For the position control of each digit, a novel resistance feedback control scheme is devised and implemented. The functionality and performance of the developed hand is demonstrated in grasp experiments with common objects.When compared to the current most advanced commercial devices, the technology applied in this prototype provides a series of improvements in terms of size, weight, and noise, which will enable upper limb amputees to carry out their basic daily tasks more comfortably. --- paper_title: Thick-film force and slip sensors for a prosthetic hand paper_content: In an attempt to improve the functionality of a prosthetic hand device, a new fingertip has been developed that incorporates sensors to measure temperature and grip force, and to detect the onset of object slip from the hand. The sensors have been implemented using thick film printing technology and exploit the piezoresistive characteristics of commercially available screen printing resistive pastes and the piezoelectric properties of proprietary lead-zirconate-titanate (PZT) formulated pastes. This paper describes the design and production of these different types of sensor and presents some results from initial investigations. --- paper_title: The SPRING Hand: Development of a Self-Adaptive Prosthesis for Restoring Natural Grasping paper_content: Commercially available prosthetic hands are simple grippers with one or two degrees of freedom; these pinch type devices have two rigid fingers in opposition to a rigid thumb. This paper focuses on an innovative approach for the design of a myoelectric prosthetic hand. The new prosthesis features underactuated mechanisms in order to achieve a natural grasping behavior and a good distribution of pinching forces. In this paper it is shown that underactuation allows reproducing most of the grasping behaviors of the human hand, without augmenting the mechanical and control complexity. --- paper_title: Artificial Redirection of Sensation From Prosthetic Fingers to the Phantom Hand Map on Transradial Amputees: Vibrotactile Versus Mechanotactile Sensory Feedback paper_content: This work assesses the ability of transradial amputees to discriminate multi-site tactile stimuli in sensory discrimination tasks. 
It compares different sensory feedback modalities using an artificial hand prosthesis in: 1) a modality matched paradigm where pressure recorded on the five fingertips of the hand was fed back as pressure stimulation on five target points on the residual limb; and 2) a modality mismatched paradigm where the pressures were transformed into mechanical vibrations and fed back. Eight transradial amputees took part in the study and were divided in two groups based on the integrity of their phantom map; group A had a complete phantom map on the residual limb whereas group B had an incomplete or nonexisting map. The ability in localizing stimuli was compared with that of 10 healthy subjects using the vibration feedback and 11 healthy subjects using the pressure feedback (in a previous study), on their forearms, in similar experiments. Results demonstrate that pressure stimulation surpassed vibrotactile stimulation in multi-site sensory feedback discrimination. Furthermore, we demonstrate that subjects with a detailed phantom map had the best discrimination performance and even surpassed healthy participants for both feedback paradigms whereas group B had the worst performance overall. Finally, we show that placement of feedback devices on a complete phantom map improves multi-site sensory feedback discrimination, independently of the feedback modality. --- paper_title: Intelligent multifunction myoelectric control of hand prostheses. paper_content: Intuitive myoelectric prosthesis control is difficult to achieve due to the absence of proprioceptive feedback, which forces the user to monitor grip pressure by visual information. Existing myoelectric hand prostheses form a single degree of freedom pincer motion that inhibits the stable prehension of a range of objects. Multi-axis hands may address this lack of functionality, but as with multifunction devices in general, serve to increase the cognitive burden on the user. Intelligent hierarchical control of multiple degree-of-freedom hand prostheses has been used to reduce the need for visual feedback by automating the grasping process. This paper presents a hybrid controller that has been developed to enable different prehensile functions to be initiated directly from the user's myoelectric signal. A digital signal processor (DSP) regulates the grip pressure of a new six-degree-of-freedom hand prosthesis thereby ensuring secure prehension without continuous visual feedback. --- paper_title: Design and development of an underactuated prosthetic hand paper_content: Current prosthetic hands are basically simple grippers with one or two degrees of freedom, which barely restore the capability of the thumb-index pinch. Although most amputees consider this performance as acceptable for usual tasks, there is ample room for improvement by exploiting recent progresses in mechatronic design and technology. This paper focus on an innovative approach for the design and development of prosthetic hands based on underactuated mechanisms. Furthermore, it describes the development and a preliminary analysis of a first prototype of an underactuated prosthetic hand. --- paper_title: A Flexible High Resolution Tactile Imager with Video Signal Output. paper_content: A high resolution and sheetlike form imaging tactile sensor with video signal output has been developed. The sensor has a 64×64 array of sensing elements on a flexible PC board with 1mm spatial resolution. 
Since the sensor outputs the pressure distribution as a video signal, a real-time tactile image can be observed on a TV monitor. Moreover, since the same hardware and software as a vision system can be used for handling the measured data, tactile image processing and recording are very simple. As the sensor is made up of a sheet of pressure conductive rubber and stripe electrodes, there are undesirable current paths between sensing elements. These undesirable current paths must therefore be cut off within a short scanning period (500 ns for each element). We used a ground potential method and proved it useful under such a high-speed scanning condition. The properties of pressure conductive rubber, including hysteresis and creep effects, are presented. A spatial filtering effect of an elastic cover for a tactile sensor is also analyzed, and this effect is shown to be very important. The final section shows measured data and image processing examples. --- paper_title: Multiple finger, passive adaptive grasp prosthetic hand paper_content: Abstract This paper describes the mechanical features of an experimental, multiple finger, prosthetic hand which has been designed for children in the 7–11 year age group. Conventional prosthetic hands exist for this age group, but they have limited mechanical function. The experimental hand presented is able to perform passive adaptive grasp, that is, the ability of the fingers to conform to the shape of an object held within the hand. During grasping, the four fingers and thumb are able to flex inwards independently, to conform to the shape of the object. This passive design is simple and effective, not requiring sensors or electronic processing. The adaptive grasp system developed here results in a hand with reduced size and weight compared to other experimental hands, and has increased mechanical function and cosmetic appearance compared to conventional prosthetic hands. --- paper_title: The DEKA Arm: Its features, functionality, and evolution during the Veterans Affairs Study to optimize the DEKA Arm paper_content: BACKGROUND AND AIM ::: DEKA Integrated Solutions Corp. (DEKA) was charged by the Defense Advanced Research Project Agency to design a prosthetic arm system that would be a dramatic improvement compared with the existing state of the art. The purpose of this article is to describe the two DEKA Arm prototypes (Gen 2 and Gen 3) used in the Veterans Affairs Study to optimize the DEKA Arm. ::: TECHNIQUE ::: This article reports on the features and functionality of the Gen 2 and Gen 3 prototypes, discussing weight, cosmesis, grips, powered movements, Endpoint, prosthetic controls, prosthetist interface, power sources, user notifications, troubleshooting, and specialized socket features; pointing out changes made during the optimization efforts. ::: DISCUSSION ::: The DEKA Arm is available in three configurations: radial configuration, humeral configuration, and shoulder configuration. All configurations have six preprogrammed grip patterns and four wrist movements. The humeral configuration has four powered elbow movements. The shoulder configuration uses Endpoint Control to perform simultaneous multi-joint movements. Three versions of foot controls were used as inputs. The Gen 3 incorporated major design changes, including a compound wrist that combined radial deviation with wrist flexion and ulnar deviation with wrist extension, an internal battery for the humeral configuration and shoulder configuration, and an embedded wrist display.
::: CLINICAL RELEVANCE ::: The DEKA Arm is an advanced upper limb prosthesis, not yet available for commercial use. It has functionality that surpasses currently available technology. This manuscript describes the features and functionality of two prototypes of the DEKA Arm, the Gen 2 and the Gen 3. --- paper_title: Restoring Natural Sensory Feedback in Real-Time Bidirectional Hand Prostheses paper_content: Hand loss is a highly disabling event that markedly affects the quality of life. To achieve a close to natural replacement for the lost hand, the user should be provided with the rich sensations that we naturally perceive when grasping or manipulating an object. Ideal bidirectional hand prostheses should involve both a reliable decoding of the user's intentions and the delivery of nearly "natural" sensory feedback through remnant afferent pathways, simultaneously and in real time. However, current hand prostheses fail to achieve these requirements, particularly because they lack any sensory feedback. We show that by stimulating the median and ulnar nerve fascicles using transversal multichannel intrafascicular electrodes, according to the information provided by the artificial sensors from a hand prosthesis, physiologically appropriate (near-natural) sensory information can be provided to an amputee during the real-time decoding of different grasping tasks to control a dexterous hand prosthesis. This feedback enabled the participant to effectively modulate the grasping force of the prosthesis with no visual or auditory feedback. Three different force levels were distinguished and consistently used by the subject. The results also demonstrate that a high complexity of perception can be obtained, allowing the subject to identify the stiffness and shape of three different objects by exploiting different characteristics of the elicited sensations. This approach could improve the efficacy and "life-like" quality of hand prostheses, resulting in a keystone strategy for the near-natural replacement of missing hands. --- paper_title: A control system for multi-fingered robotic hand with distributed touch sensor paper_content: A robotic hand and its control system are developed. This hand has five fingers and 22 DOF (16 for the fingers and 6 for the arm). The surface of the hand is covered with a distributed touch sensor that has more than 500 measuring points. This system can control the position, orientation, velocity and force of multiple points on the hand simultaneously. The effectiveness of this system is shown by an experiment in which the robotic hand holds a pair of scissors and achieves a paper cutting task. An event-driven task execution system is also developed and used for the experiment. This system watches the event which signals the change of the constrained state of the hand and switches the control points and their behaviors on the hand dynamically according to the detected event. --- paper_title: Design of the TUAT/Karlsruhe humanoid hand paper_content: The increasing demand for robotic applications in dynamic unstructured environments is motivating the need for dextrous end-effectors which can cope with the wide variety of tasks and objects encountered in these environments. The human hand is a very complex grasping tool that can handle objects of different sizes and shapes. Many research activities have been carried out to develop artificial robot hands with capabilities similar to the human hand.
In this paper the mechanism and design of a new humanoid-type hand (called TUAT/Karlsruhe Humanoid Hand) with human-like manipulation abilities is discussed. The new hand is designed for the humanoid robot ARMAR which has to work autonomously or interactively in cooperation with humans and for an artificial lightweight arm for handicapped persons. The arm is developed as close as possible to the human arm and is driven by spherical ultrasonic motors. The ideal end-effector for such an artificial arm or a humanoid would be able to use the tools and objects that a person uses when working in the same environment. Therefore a new hand is designed for anatomical consistency with the human hand. This includes the number of fingers and the placement and motion of the thumb, the proportions of the link lengths and the shape of the palm. It can also perform most part of human grasping types. The TUAT/Karlsruhe Humanoid Hand possesses 20 DOF and is driven by one actuator which can be placed into or around the hand. --- paper_title: Development of a lightweight and adaptable multiple-axis hand prosthesis. paper_content: The last few decades have produced significant improvements in the design of upper limb prostheses through the increasing use of technology. However the limited function exhibited by these devices remains rooted in their single degree of freedom format. Commercial myoelectric hand prostheses warrant high grip forces to ensure stable prehension due to a planar pincer movement. Hence precise and conscious effort is required on the part of the user to ensure optimum grip. Consumers have shown dissatisfaction with the status quo due to the excessive weight and poor function of existing artificial hands. Increasing the number of grasping patterns and improving the visual feedback from an object in the hand are cited as key objectives. This paper outlines the development of the six-axis Southampton-Remedi hand prosthesis that addresses these design issues by maintaining stable prehension with minimal grip force. Constraints such as modularity, anthropomorphism, and low weight and power consumption are factors that have been adhered to throughout the design process. --- paper_title: The development of soft gripper for the versatile robot hand paper_content: Abstract This paper deals with a new type of soft gripper which can softly and gently conform to objects of any shape and hold them with uniform pressure. This gripping function is realized by means of a mechanism consisting of multi-links and series of pulleys which can be simply actuated by a pair of wires. The possibilities of this gripper are demonstrated by a pair of mechanical model. --- paper_title: The SmartHand transradial prosthesis paper_content: Background ::: Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception would be available, due to either their limited (or inexistent) sensorization or limited dexterity. SmartHand tackles this issue as is meant to be clinically experimented in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on bench evaluation of the SmartHand. 
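Several of the entries above (the soft gripper, the TUAT/Karlsruhe hand, and the SPRING hand earlier in this list) rely on underactuation: a single actuator pulls a tendon through many joints, and each joint simply stops flexing once its phalanx meets the object, so the finger wraps around arbitrary shapes without per-joint sensing or control. The following is only an illustrative simulation of that behaviour; the joint limits, step size and contact test are hypothetical placeholders, not parameters from any of the cited hands.

```python
# Illustrative sketch (not from any cited paper): a single-tendon,
# underactuated finger keeps flexing every joint that is neither at its
# mechanical limit nor blocked by contact, which is how adaptive grasping
# emerges without per-joint control. All values are hypothetical.

def close_underactuated_finger(joint_angles, in_contact, step=0.5, limit=90.0):
    """Advance every joint (degrees) that is not blocked; return True if anything moved."""
    moved = False
    for i, angle in enumerate(joint_angles):
        if angle < limit and not in_contact(i):
            joint_angles[i] = min(limit, angle + step)  # the tendon keeps pulling this joint
            moved = True
    return moved


if __name__ == "__main__":
    # Toy object: it blocks the proximal joint at 30 deg and the middle joint
    # at 45 deg; the distal joint closes all the way to its limit.
    blocked_at = [30.0, 45.0, 90.0]
    angles = [0.0, 0.0, 0.0]
    contact = lambda i: angles[i] >= blocked_at[i]
    while close_underactuated_finger(angles, contact):
        pass
    print(angles)  # -> [30.0, 45.0, 90.0]: the finger has wrapped around the object
```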
--- paper_title: Design of a Multigrasp Transradial Prosthesis paper_content: This paper describes the design and performance of a new prosthetic hand capable of multiple grasp configurations, and capable of fingertip forces and speeds comparable to those used by healthy subjects in typical activities of daily living. The hand incorporates four motor units within the palm, which together drive sixteen joints through tendon actuation. Each motor unit consists of a brushless motor that drives one or more tendons through a custom two-way clutch and pulley assembly. After presenting the design of the prosthesis, the paper presents a characterization of the hand's performance. This includes its ability to provide eight grasp postures, as well as its ability to provide fingertip forces and finger speeds comparable to those described in the biomechanics literature corresponding to activities of daily living. --- paper_title: UBH 3: an anthropomorphic hand with simplified endo-skeletal structure and soft continuous fingerpads paper_content: The paper describes work in progress at the University of Bologna concerning the design of a new anthropomorphic robot hand. The hand is based on the modular assembly of articulated fingers that adopt an original configuration of their structure, made with rigid links connected by elastic hinges that are coaxially crossed by flexible tendons. This innovative design is suitable to host distributed sensory equipment and continuous compliant cover, allowing a high level of anthropomorphism together with great structural simplification, reliability enhancement and cost reduction. Furthermore, the proposed solution is very flexible, as it can be adapted to many different hand configurations and is not dependent on a particular type of actuation, being compatible with future availability of any kind of artificial muscles. --- paper_title: Developments of new anthropomorphic robot hand and its master slave system paper_content: This paper presents a newly developed anthropomorphic robot hand called KH Hand type S, which has high potential of dexterous manipulation and displaying hand shape, and its master slave system using the bilateral controller for five-fingers robot hand. The robot hand is improved by reducing the weight, the backlash of transmission, and the friction between gears by using elastic body. Expression of Japanese finger alphabet is shown. In order to demonstrate the dexterous grasping and manipulating an object, the experiment of peg-in-hole task controlled by bilateral controller is shown. These results denote that the KH Hand type S has a high potential to perform dexterous object manipulation like the human hand. --- paper_title: Optoelectronic joint angular sensor for robotic fingers paper_content: The present paper reports the results of the development of a novel joint angular sensor conceived for integration in tendon-driven robotic hands and in data gloves used in virtual reality systems. The sensor is based on a couple LED/photodiode, mounted to two contiguous phalanges of a University of Bologna (UB) hand finger. When the joint between the considered phalanges flexes, the photocurrent measured by the photodetector changes with the angular displacement. An experimental model of the sensor is set up in order to select the optimal positioning of the components over the phalanges and an optical motion capture system is used to calibrate the sensor. 
The complete characterization of the sensor in terms of repeatability, linearity and noise presented in the paper together with its low cost confirm that the sensor can be effectively exploited both in feedback control loops for robotic systems as well as in data acquisition systems for virtual reality applications. --- paper_title: Shape classification in rotation manipulation by universal robot hand paper_content: We propose a method for shape classification in continuous rotation manipulation by a multi-fingered robot hand. Our robot hand has five fingers equipped with tactile sensors. Each tactile sensor can measure a pressure distribution once every 10(ms) while the robot hand rotates the object continuously. Our proposed classification method consists of the following processes: A kurtosis is calculated from each pressure distribution, and it quantifies shape of the current contact surface. By rotating an object and measuring a time-series pressure distribution, the hand obtains a time-series kurtosis. Finally, a evaluated value is calculated between the time-series kurtosis and reference patterns through a continuous dynamic programming (CDP) matching scheme. The contact shape is classified if the evaluated value is lower than a threshold. We show the effectiveness of our method through experiments. --- paper_title: DLR hand II: experiments and experience with an anthropomorphic hand paper_content: At our institute, two generations of antropomorphic hands have been designed. In quite a few experiments and demonstrations we could show the abilities of our hands and gain a lot of experience in what artificial hands can do, what abilities they need and where their limitations lie. In this paper, we would like to give an overview over the experiments performed with the DLR hands, our hands abilities and the things that need to be done in the near future. --- paper_title: Door opening control using the multi-fingered robotic hand for the indoor service robot paper_content: KIST service robot is composed of a mobile platform, a 6 DOF robotic manipulator, and a multi-fingered robotic hand. We discuss motion control and coordination strategy in order to deal with uncertainty problem in practical applications. A door opening is our target task. Since the environment is not prepared for a service robot, it is essential to deal with various uncertainties due to a robot positioning error, sensing error, as well as manipulation errors. In this paper, practical parameter estimation schemes are proposed from the viewpoint of coordinative motion control of a hand, a manipulator and a mobile robot. Analysis of physical properties of each component provides a methodology of appropriate role assignment for each component. In order to carrying out compliance control, an external force is computed using fingertip force information of the three fingered robot hand, instead of using force torque sensor at the wrist. Presented experimental result clearly shows the effectiveness of the proposed scheme. --- paper_title: Force/tactile sensor for robotic applications paper_content: Abstract The paper describes the detailed design and the prototype characterization of a novel tactile sensor 1 for robotic applications. The sensor is based on a two-layer structure, i.e. a printed circuit board with optoelectronic components below a deformable silicon layer with a suitably designed geometry. 
The mechanical structure of the sensor has been optimized in terms of geometry and material physical properties to provide the sensor with different capabilities. The first capability is to work as a six-axis force/torque sensor; additionally, the sensor can be used as a tactile sensor providing a spatially distributed information exploited to estimate the geometry of the contact with a stiff external object. An analytical physical model and a complete experimental characterization of the sensor are presented. --- paper_title: Development of a high-speed multifingered hand system and its application to catching paper_content: In this paper we introduce a newly developed high-speed multi-fingered robotic hand. The hand has 8-joints and 3-fingers. A newly developed small harmonic drive gear and a high-power mini actuator are fitted in each finger link, and a strain gauge sensor is in each joint. The weight of the hand module is only 0.8 kg, but high-speed motion and high-power grasping are possible. The hand can close its joints at 180 deg per 0.1 s, and the fingertips have an output force of about 28 N. The hand system is controlled by a massively parallel vision system. Experimental results are shown in which a falling object was caught by the high-speed hand. --- paper_title: The HIT/DLR dexterous hand: work in progress paper_content: This paper presents the current work progress of HIT/DLR Dexterous Hand. Based on the technology of DLR Hand II, HIT and DLR are jointly developing a smaller and easier manufactured robot hand. The prototype of one finger has been successfully built. The finger has three DOF and four joints, the last two joints are mechanically coupled by a rigid linkage. All the actuators are commercial brushless DC motors with integrated analog Hall sensors. DSP based control system is implemented in PCI bus architecture and the serial communication between the hand and DSP needs only 6 lines(4 lines power supply and 2 lines communication interface). The fingertip force can reach 10N. --- paper_title: Dexterous anthropomorphic robot hand with distributed tactile sensor: Gifu hand II paper_content: Presents an anthropomorphic robot hand called Gifu hand II that is a revised version of the Gifu hand I. The Gifu hand II has five fingers in which all joints are driven by servomotors built in the fingers and the palm. The thumb has 4 joints with 4 degrees of freedom (DOF), the other fingers have 4 joints with 3 DOF respectively, and two axes of joints near the palm cross orthogonally at one point like the human finger. It can be equipped with a 6 axes force sensor at each fingertip and a developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at object grasping are shown. These show that the Gifu hand II has a high potential to perform dexterous object manipulation like a human hand. --- paper_title: A highly-underactuated robotic hand with force and joint angle sensors paper_content: This paper describes a novel underactuated robotic hand design. The hand is highly underactuated as it contains three fingers with three joints each controlled by a single motor. One of the fingers (“thumb”) can also be rotated about the base of the hand, yielding a total of two controllable degrees-of-freedom. 
A key component of the design is the addition of position and tactile sensors which provide precise angle feedback and binary force feedback. Our mechanical design can be analyzed theoretically to predict contact forces as well as hand position given a particular object shape --- paper_title: Optical sensor for angular position measurements embedded in robotic finger joints paper_content: Abstract This paper presents the modeling and the design of an innovative angular position sensor able to be easily integrated into miniaturized robotic joints (a joint is mm large with a diameter of mm). The sensor working principle is based on the modulation of the light radiant power flux that goes from an InfraRed light emitting diode (IR LED) to a photodiode (PD) through a thickness-varying canal integrated into the joint itself. The LED and the PD are fixed on one of the links that compose the robotic joint, while the canal is integrated into the other link. A model for the interaction between the optoelectronic components is presented, validated, and used to design a robotic joint with the embedded angular sensor. The robotic joint has been characterized and use to implement a complete robotic finger with four embedded angular sensors. The tendon-driven robotic finger has been calibrated and experimentally tested. --- paper_title: Force sensor based on discrete optoelectronic components and compliant frames paper_content: Abstract In this paper, a novel force sensor based on commercial discrete optoelectronic components mounted on a compliant frame is described. The compliant frame has been designed through an optimization procedure to achieve a desired relation between the applied force and the angular displacement of the optical axes of the optoelectronic components. The narrow-angle characteristics of Light Emitting Diode (LED) and PhotoDetector (PD) couples have been exploited for the generation of a signal proportional to very limited deformation of the compliant frame caused by the external traction force. This sensor is suitable for applications in the field of tendon driven robots, and in particular the use of this sensor for the measurement of the actuator side tendon force in a robotic hand is reported. The design procedure of the sensor is presented together with the sensor prototype, the experimental verification of the calibration curve and of the frame deformation and the testing in a force feedback control system. The main advantages of this sensor are the simplified conditioning electronics, the very high noise-to-signal ratio and the immunity to electromagnetic fields. --- paper_title: A compliant, underactuated hand for robust manipulation paper_content: This paper introduces the i-HY Hand, an underactuated hand driven by 5 actuators that is capable of performing a wide range of grasping and in-hand manipulation tasks. This hand was designed to address the need for a durable, inexpensive, moderately dexterous hand suitable for use on mobile robots. The primary focus of this paper will be on the novel minimalistic design of i-HY, which was developed by choosing a set of target tasks around which the design of the hand was optimized. Particular emphasis is placed on the development of underactuated fingers that are capable of both firm power grasps and low- stiffness fingertip grasps using only the passive mechanics of the finger mechanism. 
Experimental results demonstrate successful grasping of a wide range of target objects, the stability of fingertip grasping, as well as the ability to adjust the force exerted on grasped objects using the passive finger mechanics. --- paper_title: Fingertip force and position control using force sensor and tactile sensor for Universal Robot Hand II paper_content: Various humanoid robots and multi-fingered robot hands are used in research and development. As these robot hands grasp and manipulate an object, the control phase is divided into an “approach phase” and a “manipulation phase.” In the approach phase, a position control method is necessary to control the posture of the robot hand. In the manipulation phase, a force control method is necessary to control the fingertip force of the robot hand. However, it is difficult to control both the force and position of these hands at the same time. In this paper, we propose a grasping force control method based on position control for manipulation. In this proposed method, the finger position is controlled in the direction of the force vector. With this control method, any external force is cancelled and the initial force is kept constant, or the setting force is applied to an object. --- paper_title: DLR's multisensory articulated hand. II. The parallel torque/position control system paper_content: Gives a brief description of feedback control systems engaged in DLR's recently developed multisensory 4-finger robot hand. The work is concentrated on constructing the dynamic model and the control strategy for one joint of the fingers. One goal is to make the hand follow a dataglove for fine manipulation tasks. Our proposed strategy for this task is parallel torque/position control; sliding mode control is realized for the robust trajectory tracking in free space; while impedance control is provided for compliance control in the constrained environment; and an easily-designed parallel observer is used for the switch between these two control modes during the transition from or to contact motion. Some experimental results show the effectiveness of proposed strategy for the pure position control, torque control, and the transition control. --- paper_title: DLR's multisensory articulated hand. I. Hard- and software architecture paper_content: The main features of DLR's dextrous robot hand as a modular component of a complete robotics system are outlined in this paper. The application of robotics systems in unstructured servicing environments requires dextrous manipulation abilities and facilities to perform complex remote operations in a very flexible way. Therefore we have developed a multisensory articulated four finger hand, where all actuators are integrated in the hand's palm or the fingers directly. It is an integrated part of a complex light-weight manipulation system aiming at the development of robonauts for space. After a brief description of the hand and it's sensorial equipment the hard- and software architecture is outlined with particular emphasis on flexibility and performance issues. The hand is typically controlled through a data glove for telemanipulation and skill-transfer purposes. Autonomous grasping and manipulation capabilities are currently under development. --- paper_title: Design of a fully modular and backdrivable dexterous hand paper_content: This paper presents the mechatronic design of a new anthropomorphic hand. 
It has been developed in the context of a multidisciplinary project which aims at understanding how humans perform the manipulation of objects in order to replicate grasping and in-hand movements with an artificial hand. This has required the development of a new hand with a high level of both anthropomorphism and dexterity. The hand must exactly replicate the kinematics of the human hand, adding up to 24 degrees of mobility and 20 degrees of freedom, which is a design challenge if a high level of dexterity must be guaranteed. Three key concepts have guided the mechanical design: modularity, backdrivability and mechanical simplicity. A modular approach simplifies the complex hand assembly and permits us to concentrate our efforts on one basic unit which will be replicated throughout the hand. Mechanical simplicity and backdrivability ensure a good natural mechanical behavior, essential for stable control of contact forces. Likewise, a better controllability will enhance the dexterity of the hand. A thorough mechanical design assures backdrivability through the whole mechanism, including actuators and transmission of movement to the joints. Experimental results confirm the validity of the design approach and will open new lines of research into robotic hands. --- paper_title: Development of a low cost anthropomorphic robot hand with high capability paper_content: This paper presents a development of an anthropomorphic robot hand, ‘KITECH Hand’ that has 4 full-actuated fingers. Most robot hands have small size simultaneously many joints as compared with robot manipulators. Components of actuator, gear, and sensors used for building robots are not small and are expensive, and those make it difficult to build a small sized robot hand. Differently from conventional development of robot hands, KITECH hand adopts a RC servo module that is cheap, easily obtainable, and easy to handle. The RC servo module that have been already used for several small sized humanoid can be new solution of building small sized robot hand with many joints. The feasibility of KITECH hand in object manipulation is shown through various experimental results. It is verified that the modified RC servo module is one of effective solutions in the development of a robot hand. --- paper_title: Anthropomorphic Robot Hand : Gifu Hand III paper_content: This paper presents an anthropomorphic robot hand called Gifu Hand III, which is a modified version of Gifu Hand II. The Gifu Hand is aimed to be used as a platform of robot hands for robotics research. The Gifu Hand III is improved on the points of backlash of transmission, opposability of the thumb, and mobility space of fingertips. The opposability of the thumb is evaluated by a cubature of opposable space. An explicit kinematical relation between the fourth joint and the third one is shown, where the fourth joint is engaged with the third joint by a planar four-bars linkage mechanism. The distributed tactile sensor, which has grid pattern electrodes and uses conductive ink, is mounted on the hand surface. The sensor presents 235 points expansion relatively to that of the Gifu Hand II. To reduce insensitive area, the electrodes width and pitch are expanded and narrowed, respectively. In consequence, the insensitive area is reduced to 49.1%. Experiments of grasping several objects are shown. With these improvements and experiments, the Gifu hand III has a higher potential to perform dexterous object manipulations like the human hand. 
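The Universal Robot Hand II entry above describes grasping force control built on top of a position controller: the fingertip set-point is shifted along the measured force vector so that external disturbances are cancelled and the desired grasp force is maintained, while the DLR hand entries take a related parallel torque/position approach. The fragment below is a minimal admittance-style sketch of that idea under assumed interfaces; the gain, units and sensor access are illustrative and are not taken from either paper.

```python
# Minimal admittance-style sketch of "force regulation on top of position
# control": the commanded fingertip position is moved along the measured
# contact force direction until the sensed force magnitude matches the
# desired grasp force. Gains and interfaces are hypothetical.
import numpy as np

def force_regulating_position_step(x_cmd, f_meas, f_des, k_adm=1e-4):
    """Return the updated Cartesian position command for one control cycle.

    x_cmd  : commanded fingertip position (3-vector, metres)
    f_meas : measured contact force vector from the fingertip sensor (N)
    f_des  : desired grasp force magnitude (N)
    k_adm  : admittance gain (metres per Newton per cycle)
    """
    x_cmd = np.asarray(x_cmd, dtype=float)
    f_meas = np.asarray(f_meas, dtype=float)
    f_norm = np.linalg.norm(f_meas)
    if f_norm < 1e-6:
        return x_cmd  # no contact yet: leave the position command unchanged
    direction = f_meas / f_norm   # estimate of the contact normal
    error = f_des - f_norm        # positive -> press harder
    return x_cmd + k_adm * error * direction
```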
--- paper_title: THE BARRETTHAND GRASPER - PROGRAMMABLY FLEXIBLE PART HANDLING AND ASSEMBLY paper_content: This paper details the design and operation of the BarrettHand BH8-250, an intelligent, highly flexible eight-axis gripper that reconfigures itself in real time to conform securely to a wide variety of part shapes without tool-change interruptions. The grasper brings enormous value to factory automation because it: reduces the required number and size of robotic work cells (which average US$90,000 each - not including the high cost of footprint) while boosting factory throughput; consolidates the hodgepodge proliferation of customized gripper-jaw shapes onto a common programmable platform; and enables incremental process improvement and accommodates frequent new-product introductions, capabilities deployed instantly via software across international networks of factories. --- paper_title: Development of the NAIST-Hand with Vision-based Tactile Fingertip Sensor paper_content: This paper introduces a multifingered robotic hand, the "NAIST-Hand", and a grip force control by slip margin feedback. The developed prototype finger of the NAIST-Hand has a new mechanism by which all 3 motors can be placed inside the palm without using wire-driven mechanisms. A method of grip force control is proposed using an incipient slip estimation. A new tactile sensor is designed to enable the proposed control method on the NAIST-Hand. This sensor consists of a transparent semispherical gel, an embedded small camera, and a force sensor in order to implement the direct slip margin estimation. The structure and the principle of sensing are described. --- paper_title: Development of tactile sensor for detecting contact force and slip paper_content: In this paper, a fingertip tactile sensor is presented which can detect contact normal force as well as incipient slip. The sensor, based on polyvinylidene fluoride (PVDF) and pressure variable resistor ink, is physically flexible enough to be deformed into any three-dimensional geometry. In order to detect incipient slip, a PVDF strip is arranged along the direction normal to the surface of the finger of the robot hand. Also, a thin flexible sensor to sense the static force as well as the contact location is fabricated into an arrayed type using pressure variable resistor ink. In addition, a tactile sensing system is developed with miniaturized electronic hardware such as a charge amplifier, signal processing unit etc., and its feasibility is validated experimentally. --- paper_title: Design of a Compliant and Force Sensing Hand for a Humanoid Robot paper_content: Abstract: Robot manipulation tasks in unknown and unstructured environments can often be better addressed with hands that are capable of force-sensing and passive compliance. We describe the design of a compact four degree-of-freedom (DOF) hand that exhibits these properties. This hand is being developed for a new humanoid robot platform. Our hand contains four modular Force Sensing Compliant (FSC) actuators acting on three fingers. One actuator controls the spread between two fingers. Three actuators independently control the top knuckle of each finger. The lower knuckles of the finger are passively coupled to the top knuckle. We place a pair of torsion springs between the motor housing and the hand chassis. By measuring the deflection of these springs, we can determine the acting force of the actuator.
The springs also provide compliance in the finger and protect the motor gearbox from high impact shocks. Our novel actuators, combined with embedded control electrics, allow for a compact and dexterous hand design that is well suited to humanoid manipulation research. --- paper_title: A tactile sensor sheet using pressure conductive rubber with electrical-wires stitched method paper_content: A new type of tactile sensor using pressure-conductive rubber with stitched electrical wires is presented. The sensor is thin and flexible and can cover three-dimensional objects. Since the sensor adopts a single-layer composite structure, the sensor is durable with respect to external force. In order to verify the effectiveness of this tactile sensor, we performed an experiment in which a four-fingered robot hand equipped with tactile sensors grasped sphere and column. The sensor structure, electrical circuit, and characteristics are described. The sensor control system and experimental results are also described. --- paper_title: Reconstructing the Shape of a Deformable Membrane from Image Data paper_content: In this paper, we study the problem of determining a mathematical description of the surface defined by the shape of a membrane based on an image of it and present an algorithm for reconstructing the surface when the membrane is deformed by unknown external elements. The given data are the projection on an image plane of markings on the surface of the membrane, the undeformed configuration of the membrane, and a model for the membrane mechanics. The method of re construction is based on the principle that the shape assumed by the membrane will minimize the elastic energy stored in the membrane subject to the constraints implied by the measurements. Energy minimization leads to a set of nonlinear partial differential equations. An approximate solution is found using linearization. The initial motivation, and our first application of these ideas, comes from tactile sensing. Experimental results affirm that this approach can be very effective in this context. --- paper_title: Sensing the texture of surfaces by anthropomorphic soft fingertips with multi-modal sensors paper_content: This paper describes the development of a human-like multi-modal soft finger and its ability to sense the texture of objects. This fingertip has two silicon rubber layers of different hardness; strain gauges and PVDF films are randomly distributed as tactile sensors. Owing to the dynamics of the silicon between sensors, the fingertip is supposed to have several sensor modalities. Preliminary experiments show that the fingertip can detect the difference between the textures of objects (paper and wood). --- paper_title: Sensing characteristics of an optical three-axis tactile sensor mounted on a multi-fingered robotic hand paper_content: To develop a new three-axis tactile sensor for mounting on multi-fingered robotic hands, in this work we optimize sensing elements on the basis of our previous works concerning optical three-axis tactile sensors with a flat sensing surface. The present tactile sensor is based on the principle of an optical waveguide-type tactile sensor, which is composed of an acrylic hemispherical dome, a light source, an array of rubber sensing elements, and a CCD camera. The sensing element of the present tactile sensor comprises one columnar feeler and eight conical feelers. 
The contact areas of the conical feelers, which maintain contact with the acrylic dome, detect the three-axis force applied to the tip of the sensing element. Normal and shearing forces are then calculated from integration and centroid displacement of the gray-scale value derived from the conical feeler's contacts. To evaluate the present tactile sensor, we have conducted a series of experiments using a y-z stage, a rotational stage and a force gauge, and have found that although the relationship between integrated gray-scale value and normal force depends on the latitude on the hemispherical surface, it is easy to modify the sensitivity according to the latitude, and that the centroid displacement of the gray-scale value is proportional to the shearing force. Finally, to verify the present tactile sensor, we performed a series of scanning tests using a robotic manipulator equipped with the present tactile sensor to have the manipulator scan surfaces of fine abrasive papers. Results show that the obtained shearing force increased with an increase in the particle diameter of aluminium dioxide contained in the abrasive paper, and decreased with an increase in the scanning velocity of the manipulator over the abrasive paper. Because these results are consistent with tribology, we conclude that the present tactile sensor has sufficient dynamic sensing capability to detect normal and shearing forces. --- paper_title: Electroactive polymeric sensors in hand prostheses: bending response of an ionic polymer metal composite. paper_content: In stark contrast to the inspiring functionality of the natural hand, limitations of current upper limb prostheses stemming from marginal feedback control, challenges of mechanical design, and lack of sensory capacity are well-established. This paper provides a critical review of current sensory systems and the potential of a selection of electroactive polymers for sensory applications in hand prostheses. Candidate electroactive polymers are reviewed in terms of their relevant advantages and disadvantages, together with their current implementation in related applications. Empirical analysis of one of the most novel electroactive polymers, ionic polymer metal composites (IPMC), was conducted to demonstrate its potential for prosthetic applications. With linear responses within the operating range typical of hand prostheses, bending angles and bending rates were accurately measured with 4.4+/-2.5 and 4.8+/-3.5% error, respectively, using the IPMC sensors. With these comparable error rates to traditional resistive bend sensors and a wide range of sensitivities and responses, electroactive polymers offer a promising alternative to more traditional sensory approaches. Their potential role in prosthetics is further heightened by their flexible and formable structure, and their ability to act as both sensors and actuators. --- paper_title: Tactile sensing in intelligent robotic manipulation – a review paper_content: Purpose - When designing hardware and algorithms for robotic manipulation and grasping, sensory information is typically needed to control the grasping process. This paper presents an overview of t ... --- paper_title: Development of the NAIST-Hand with Vision-based Tactile Fingertip Sensor paper_content: This paper introduces a multifingered robotic hand, the "NAIST-Hand", and a grip force control by slip margin feedback.
The developed prototype finger of the NAIST-Hand has a new mechanism by which all 3 motors can be placed inside the palm without using wire-driven mechanisms. A method of grip force control is proposed using an incipient slip estimation. A new tactile sensor is designed to enable the proposed control method on the NAIST-Hand. This sensor consists of a transparent semispherical gel, an embedded small camera, and a force sensor in order to implement the direct slip margin estimation. The structure and the principle of sensing are described. --- paper_title: Direct neural sensory feedback and control of a prosthetic arm paper_content: Evidence indicates that user acceptance of modern artificial limbs by amputees would be significantly enhanced by a system that provides appropriate, graded, distally referred sensations of touch and joint movement, and that the functionality of limb prostheses would be improved by a more natural control mechanism. We have recently demonstrated that it is possible to implant electrodes within individual fascicles of peripheral nerve stumps in amputees, that stimulation through these electrodes can produce graded, discrete sensations of touch or movement referred to the amputee's phantom hand, and that recordings of motor neuron activity associated with attempted movements of the phantom limb through these electrodes can be used as graded control signals. We report here that this approach allows amputees to both judge and set grip force and joint position in an artificial arm, in the absence of visual input, thus providing a substrate for better integration of the artificial limb into the amputee's body image. We believe this to be the first demonstration of direct neural feedback from and direct neural control of an artificial arm in amputees. --- paper_title: Development of tactile sensor for detecting contact force and slip paper_content: In this paper, a fingertip tactile sensor is presented which can detect contact normal force as well as incipient slip. The sensor, based on polyvinylidene fluoride (PVDF) and pressure variable resistor ink, is physically flexible enough to be deformed into any three-dimensional geometry. In order to detect incipient slip, a PVDF strip is arranged along the direction normal to the surface of the finger of the robot hand. Also, a thin flexible sensor to sense the static force as well as the contact location is fabricated into an arrayed type using pressure variable resistor ink. In addition, a tactile sensing system is developed with miniaturized electronic hardware such as a charge amplifier, signal processing unit etc., and its feasibility is validated experimentally. --- paper_title: A tactile sensor sheet using pressure conductive rubber with electrical-wires stitched method paper_content: A new type of tactile sensor using pressure-conductive rubber with stitched electrical wires is presented. The sensor is thin and flexible and can cover three-dimensional objects. Since the sensor adopts a single-layer composite structure, the sensor is durable with respect to external force. In order to verify the effectiveness of this tactile sensor, we performed an experiment in which a four-fingered robot hand equipped with tactile sensors grasped sphere and column. The sensor structure, electrical circuit, and characteristics are described. The sensor control system and experimental results are also described.
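The NAIST-Hand and PVDF entries above both regulate grip through incipient-slip information: when the tactile signal starts to show slip-related activity, the grip force set-point is raised before the object actually drops. The function below sketches such a reflex in the simplest possible form, using the high-frequency content of a single tactile channel as the slip cue; the window, threshold and force step are invented for illustration and are not the detection methods used in the cited work.

```python
# Illustrative grip-force reflex: flag incipient slip when the
# high-frequency content of a tactile channel exceeds a threshold and
# increment the grasp force set-point in response. Thresholds, gains and
# the sensor interface are assumptions, not values from the cited papers.

def slip_reflex(tactile_samples, f_des, slip_threshold=0.05, f_step=0.2, f_max=10.0):
    """Raise the desired grip force if the tactile window shows slip-like vibration."""
    if len(tactile_samples) < 2:
        return f_des, False
    # crude high-pass measure: mean absolute sample-to-sample difference
    hf_activity = sum(abs(b - a) for a, b in zip(tactile_samples, tactile_samples[1:])) / (
        len(tactile_samples) - 1
    )
    slipping = hf_activity > slip_threshold
    if slipping:
        f_des = min(f_max, f_des + f_step)  # squeeze a little harder, up to a cap
    return f_des, slipping
```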
--- paper_title: Tactile Sensing for Robotic Manipulation paper_content: In several fields of robotics, tactile and force sensors represent a basic tool for achieving an enhanced interaction with the environment. As a matter of fact, areas such as advanced manipulation, telemanipulation, haptic devices, legged robots and so on are intrinsically based on an advanced sensorial equipment and on proper techniques for the exploitation of their information. These types of sensors give information such as the presence of a contact, its size and shape, the exchanged forces/torques. More advanced sensors can also provide additional information, such as mechanical properties of the bodies in contact (e.g. friction coefficient, roughness, . . . ) or the slippage. In this chapter, an overview on tactile and force sensors and their specific use in robotic manipulation is presented. In particular, after an illustration of the state of the art and of the main recent technological developments, results concerning the detection and control of the relative motion (slippage) of two bodies in contact are discussed. This problem may be of relevance, e.g. in advanced manipulation by robotic systems in which, depending on the task to be executed, it might be desirable either to avoid or to exploit the slippage of the manipulated object. Similar problems can be found with legged robots, or more in general in any case where a robotic device has to interact with its environment in a controlled manner. These and other applications have justified an increasing research effort in this field in the last years, generating several interesting prototypes and techniques for data analysis. --- paper_title: Anatomically correct testbed hand control: Muscle and joint control strategies paper_content: Human hands are capable of many dexterous grasping and manipulation tasks. To understand human levels of dexterity and to achieve it with robotic hands, we constructed an anatomically correct testbed (ACT) hand which allows for the investigation of the biomechanical features and neural control strategies of the human hand. This paper focuses on developing control strategies for the index finger motion of the ACT Hand. A direct muscle position control and a force-optimized joint control are implemented as building blocks and tools for comparisons with future biological control approaches. We show how Gaussian process regression techniques can be used to determine the relationships between the muscle and joint motions in both controllers. Our experiments demonstrate that the direct muscle position controller allows for accurate and fast position tracking, while the force-optimized joint controller allows for exploitation of actuation redundancy in the finger critical for this redundant system. Furthermore, a comparison between Gaussian processes and least squares regression method shows that Gaussian processes provide better parameter estimation and tracking performance. This first control investigation on the ACT hand opens doors to implement biological strategies observed in humans and achieve the ultimate human-level dexterity. --- paper_title: Design and Evaluation of a Low-Cost Force Feedback System for Myoelectric Prosthetic Hands paper_content: Myoelectrically powered prosthetic hands lack sensory feedback relating to the force exerted by the artificial hand on a grasped object. The degree of control is imprecise, and often much more force than necessary is applied. 
The aim of this study was to develop and evaluate a force feedback system considering design constraints, providing the user with closed-loop control. Different methods and design criteria for providing myoelectric prosthetic hands with force feedback were analyzed, with stimulation by vibration being preferred. A new feedback system was designed, consisting of a miniature vibration motor, a piezoresistive force sensor, and control electronics. Grasping forces with and without feedback were recorded and compared from five habitual myoelectric hand users when grasping a hand dynamometer with different weights attached to it. All five patients rapidly improved their ability to regulate the grasping force without the help of vision when feedback was applied. An average force reduction of 37% was found when vibration was applied indirectly to the hand, and a decrease of 54% was found when feedback was applied directly to the skin of the residual limb. Constraints for a prosthetic force feedback system such as low power consumption, compactness, and being imperceptible to others are included in the design. General acceptance of vibration as a feedback signal was good, especially when applied indirectly. The results indicate that the new system is of potential value for myoelectric prosthetic hand users. More precise control is possible, and redundant grasping force can be diminished with a feedback system. (J Prosthet Orthot. 2006;18:1‐1.) KEY INDEXING TERMS: closed-loop control, force feedback, prosthetic hand, sensor, vibration --- paper_title: Underactuated five-finger prosthetic hand inspired by grasping force distribution of humans paper_content: Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands. --- paper_title: Design of a cybernetic hand for perception and action paper_content: Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. 
Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control. --- paper_title: Upper limb amputees can be induced to experience a rubber hand as their own paper_content: We describe how upper limb amputees can be made to experience a rubber hand as part of their own body. This was accomplished by applying synchronous touches to the stump, which was out of view, and to the index finger of a rubber hand, placed in full view (26 cm medial to the stump). This elicited an illusion of sensing touch on the artificial hand, rather than on the stump and a feeling of ownership of the rubber hand developed. This effect was supported by quantitative subjective reports in the form of questionnaires, behavioural data in the form of misreaching in a pointing task when asked to localize the position of the touch, and physiological evidence obtained by skin conductance responses when threatening the hand prosthesis. Our findings outline a simple method for transferring tactile sensations from the stump to a prosthetic limb by tricking the brain, thereby making an important contribution to the field of neuroprosthetics where a major goal is to develop artificial limbs that feel like a real parts of the body. --- paper_title: Referral of sensation to an advanced humanoid robotic hand prosthesis paper_content: Hand prostheses that are currently available on the market are used by amputees to only a limited extent, partly because of lack of sensory feedback from the artificial hand. We report a pilot study that showed how amputees can experience a robot-like advanced hand prosthesis as part of their own body. We induced a perceptual illusion by which touch applied to the stump of the arm was experienced from the artificial hand. This illusion was elicited by applying synchronous tactile stimulation to the hidden amputation stump and the robotic hand prosthesis in full view. In five people who had had upper limb amputations this stimulation caused referral touch sensation from the stump to the artificial hand, and the prosthesis was experienced more like a real hand. We also showed that this illusion can work when the amputee controls the movements of the artificial hand by recordings of the arm muscle activity with electromyograms. These observations indicate that the previously described “rubber hand illusion” ... 
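The low-cost force feedback entry earlier in this list closes the loop through the user: grip force measured by a piezoresistive sensor is presented back to the amputee as vibration on the residual limb, which reduced the applied grasping force by roughly 37 to 54 percent in that study. One possible shape for the force-to-stimulus mapping is sketched below; the linear map, saturation limits and PWM-style output are assumptions made for illustration rather than the mapping reported in the paper.

```python
# A minimal sketch of force-to-vibration sensory substitution under
# assumed parameters: below f_min the motor stays off, between f_min and
# f_max the duty cycle grows linearly, above f_max it saturates.

def force_to_vibration_duty(force_n, f_min=0.5, f_max=30.0):
    """Map a measured grip force (N) to a vibration-motor duty cycle in [0, 1]."""
    if force_n <= f_min:
        return 0.0
    return min(1.0, (force_n - f_min) / (f_max - f_min))
```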
--- paper_title: Design and control of a shape memory alloy based dexterous robot hand paper_content: Modern externally powered upper-body prostheses are conventionally actuated by electric servomotors. Although these motors achieve reasonable kinematic performance, they are voluminous and heavy. Deterring factors such as these lead to a substantial proportion of upper extremity amputees avoiding the use of their prostheses. Therefore, it is apparent that there exists a need for functional prosthetic devices that are compact and lightweight. The realization of such a device requires an alternative actuation technology, and biological inspiration suggests that tendon based systems are advantageous. Shape memory alloys are a type of smart material that exhibit an actuation mechanism resembling the biological equivalent. As such, shape memory alloy enabled devices promise to be of major importance in the future of dexterous robotics, and of prosthetics in particular. This paper investigates the design, instrumentation, and control issues surrounding the practical application of shape memory alloys as artificial muscles in a three-fingered robot hand. --- paper_title: Multi-fingered robotic hand employing strings transmission named “Twist Drive” — Video contribution paper_content: A goal of our research is to produce a light-weight, low-cost five fingered robotic hand that has similar degrees of freedom as a human hand. The joints in the fingers of the developed robotic hand are powered by a newly proposed strings transmission named “Twist Drive”. The transmission converts torque into a pulling force by using a pair of strings that twist on each other. The basic characteristics of the transmission are given in the paper. A robotic hand prototype with 18 joints of which 14 are independently powered by Twist Drives was produced. The size of the hand is equal to the size of an adult human's hand and its weight including the power circuits is approximately 800 grams. The mechanical and the control systems of the hand are presented in the paper. --- paper_title: Development of intelligent robot hand using proximity, contact and slip sensing paper_content: To achieve the skillful task like the human, many researchers have been working on robot hand. An interaction with vision and tactile information are indispensable for realization of skillful tasks. In the existing research, the method using a camera to get the vision information is often found. But, in the boundary area of a non-contact phase and a contact phase, there are problem that lack of sensor information because the influence of occlusion comes up to surface. We devise to introduce the proximity sensor in this area. And we call the robot hand which is equipped with proximity, tactile and slip sensor “intelligent robot hand”. In this research, we show the constitution example of the intelligent robot hand and propose the method to realize Pick&Place as concrete task. --- paper_title: A multi-sensor system applied to control an intelligent robotic hand for underwater environment paper_content: We present a multi-sensors system adopted in an intelligent robotic hand that works in underwater environment. Four kinds of different sensors, force sensor, image sensor, ultrasonic sensor and hall sensor, are adopted in this system. Among them, information gained by force sensors formed a closed loop together with the control variables in the system, thus, the accuracy and stability of the whole system is improved remarkably. 
A host computer merges the information gathered by these sensors and sends commands to the actuators. To ensure that the system operates stably, several protective measures were implemented in the software. With this system, the intelligent robotic hand can be controlled manually in real time or allowed to work autonomously. --- paper_title: Grasping Force Control of Multi-fingered Robot Hand based on Slip Detection Using Tactile Sensor paper_content: To achieve human-like grasping with a multi-fingered robot hand, the grasping force should be controlled without prior information about the grasped object, such as its weight and friction coefficient. In this study, we propose a method for detecting slip of the grasped object from the force output of a Center of Pressure (CoP) tactile sensor. The CoP sensor can measure the center position of a distributed load and the total load applied to its surface within 1 ms. The sensor is mounted on a finger of the robot hand, and its effectiveness as a slip-detection sensor is confirmed in grasping experiments. Finally, we propose a method for controlling the grasping force to resist the tangential force applied to the grasped object through feedback control of the CoP sensor force output. --- paper_title: Development of a high-speed multifingered hand system and its application to catching paper_content: In this paper we introduce a newly developed high-speed multi-fingered robotic hand. The hand has 8 joints and 3 fingers. A newly developed small harmonic drive gear and a high-power mini actuator are fitted in each finger link, and a strain gauge sensor is in each joint. The weight of the hand module is only 0.8 kg, but high-speed motion and high-power grasping are possible. The hand can close its joints at 180 deg per 0.1 s, and the fingertips have an output force of about 28 N. The hand system is controlled by a massively parallel vision system. Experimental results are shown in which a falling object was caught by the high-speed hand. --- paper_title: The HIT/DLR dexterous hand: work in progress paper_content: This paper presents the current progress of the HIT/DLR Dexterous Hand. Based on the technology of the DLR Hand II, HIT and DLR are jointly developing a smaller and more easily manufactured robot hand. The prototype of one finger has been successfully built. The finger has three DOFs and four joints; the last two joints are mechanically coupled by a rigid linkage. All the actuators are commercial brushless DC motors with integrated analog Hall sensors. A DSP-based control system is implemented on a PCI bus architecture, and the serial communication between the hand and the DSP needs only 6 lines (4 for power supply and 2 for the communication interface). The fingertip force can reach 10 N. --- paper_title: One-handed knotting of a flexible rope with a high-speed multifingered hand having tactile sensors paper_content: This paper proposes a new strategy for making knots with a high-speed multifingered robot hand having tactile sensors. The strategy is divided into three skills: loop production, rope permutation, and rope pulling. Through these three skills, a knot can be made with a single multifingered robot hand. The dynamics of the rope permutation are analyzed in order to improve the success rate, and an effective tactile feedback control method is proposed based on the analysis. Finally, experimental results are shown.
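The CoP-based grip-force control described in the slip-detection abstract above comes down to keeping the measured tangential load inside the friction cone by raising the normal force when slip becomes imminent. The sketch below shows only that generic idea, not the controller reported in the paper; the friction coefficient, safety margin, gain, and simulated loads are assumed values.

def grip_force_update(f_normal, f_tangential, mu=0.5, margin=1.2, gain=0.8):
    """Return an updated normal-force setpoint that keeps the tangential load
    inside the friction cone |f_t| <= mu * f_n, with a safety margin."""
    required = margin * abs(f_tangential) / mu    # minimum normal force to avoid slip
    if f_normal < required:
        f_normal += gain * (required - f_normal)  # tighten the grip when slip is imminent
    return f_normal

# Example: the tangential load ramps up while the object is being lifted.
f_n = 1.0
for f_t in [0.1, 0.3, 0.6, 1.0, 1.5]:
    f_n = grip_force_update(f_n, f_t)
    print(f"f_t={f_t:.1f} N -> grip {f_n:.2f} N")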
--- paper_title: Multisensory five-finger dexterous hand: The DLR/HIT Hand II paper_content: This paper presents a newly developed multisensory five-fingered dexterous robot hand: the DLR/HIT Hand II. The hand has an independent palm and five identical modular fingers; each finger has three DOFs and four joints. All the actuators and electronics are integrated in the finger body and the palm. By using powerful super-flat brushless DC motors, tiny harmonic drives and BGA-form DSPs and FPGAs, the whole finger is about one third smaller than the former finger of the DLR/HIT Hand I. By using a steel coupling mechanism, the transmission ratio of the distal phalanx is exactly 1:1 over the whole movement range. At the same time, the multisensory dexterous hand integrates position, force/torque and temperature sensors. The hierarchical hardware structure of the hand consists of the finger DSPs, the finger FPGAs, the palm FPGA and the PCI-based DSP/FPGA board. The hand can communicate externally via PPSeCo, CAN and Internet. Instead of an extra cover, the packaging of the hand is implemented directly in the finger body and palm to make the hand smaller and more human-like. The whole weight of the hand is about 1.5 kg and the fingertip force can reach 10 N. --- paper_title: Modularly designed lightweight anthropomorphic robot hand paper_content: In this paper, the modular design of artificial hands is presented. The modular concept is introduced based on the example of an artificial anthropomorphic hand prototype, which is part of a project for the development of a human assistive robot. Particular attention is dedicated to details of the modular construction and servicing of the hand prototype. Prototype components, functional activities, technical characteristics, and the control system of the hand are introduced as well. Additionally, two hand prototypes were attached to a humanoid service robot, and first experience gained from their operation is presented in conclusion. --- paper_title: Bio-inspired sensorization of a biomechatronic robot hand for the grasp-and-lift task paper_content: It has been concluded from numerous neurophysiological studies that humans rely on detecting discrete mechanical events that occur when grasping, lifting and replacing an object, i.e., during a prototypical manipulation task. Such events represent transitions between phases of the evolving manipulation task, such as object contact, lift-off, etc., and appear to provide critical information required for the sequential control of the task as well as for corrections and parameterization of the task. We have sensorized a biomechatronic anthropomorphic hand with the goal of detecting such mechanical transients. The developed sensors were designed to specifically provide information about task-relevant discrete events rather than to mimic their biological counterparts. To accomplish this we have developed (1) a contact sensor that can be applied to the surface of the robotic fingers and that shows a sensitivity to indentation and a spatial resolution comparable to those of the human glabrous skin, and (2) a sensitive low-noise three-axial force sensor that was embedded in the robotic fingertips and showed a frequency response covering the range observed in biological tactile sensors. We describe the design and fabrication of these sensors and their sensory properties, and show representative recordings from the sensors during grasp-and-lift tasks.
We show how the combined use of the two sensors is able to provide information about crucial mechanical events during such tasks. We discuss the importance of the sensorized hand as a test bed for low-level grasp controllers and for the development of functional sensory feedback from prosthetic devices. --- paper_title: Design of anthropomorphic dexterous hand with passive joints and sensitive soft skins paper_content: Installation of passive elements on the skin (whole-part covered soft skin) and inside joints of the robotic hand becomes a key-technology to remarkably enhance the stability of object handling and manipulation and adaptability to external forces. Based on this idea, in this paper a sophisticated mechanism design of TWENDY-ONE hands with mechanical springs in DIP and MP joints and whole-part covered soft skins is presented. In addition, we present a design method of tactile sensors for highly dexterous robotic hand, whish are necessary for recognition of volume and softness of objects grasped as well as improvement of handling and manipulation more stably. Evaluation experiments focusing on kitchen supports using TWENDY-ONE hands indicate that this new robot has high dexterity due to the hand and will be extremely useful to enhance the quality of life for the elderly in the near future where human and robot co-exist. --- paper_title: The modular multisensory DLR-HIT-Hand paper_content: Abstract The paper presents hardware and software architecture of the new developed compact multisensory DLR-HIT hand. The hand has four identical fingers and an extra degree of freedom for palm. In each finger there is a Field Programmable Gate Array (FPGA) for data collection, brushless DC motors control and communication with palm’s FPGA by Point-to-Point Serial Communication (PPSeCo). The kernel of the hardware system is a PCI-based high speed floating-point Digital Signal Processor (DSP) for data processing, and FPGA for high speed (up to 25 Mbps) real-time serial communication with the palm’s FPGA. In order to achieve high modularity and reliability of the hand, a fully mechatronic integration and analog signals in situ digitalization philosophy are implemented to minimize the dimension, number of the cables (five cables including power supply) and protect data communication from outside disturbances. Furthermore, according to the hardware structure of the hand, a hierarchical software structure has been established to perform all data processing and the control of the hand. It provides basic API functions and skills to access all hardware resources for data acquisition, computation and tele-operation. With the nice design of the hand’s envelop, the hand looks more like humanoid. --- paper_title: Dynamic Pen Spinning Using a High-speed Multifingered Hand with High-speed Tactile Sensor paper_content: We propose a tactile feedback system in real time using a high-speed multifingered robot hand and high-speed tactile sensor. The system is respectively capable of high-speed finger motion up to 180 deg per 0.1 s and high-speed tactile feedback with a sampling rate higher than 1 kHz. In this paper, we describe dynamic pen spinning as an example of a skillful manipulation task using a high-speed multifingered hand equipped with tactile sensors. The paper describes the tactile feedback control strategies and experimental results. 
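The grasp-and-lift sensorization work described above aims at detecting discrete mechanical transients (contact, lift-off, release) rather than reconstructing complete force profiles. A minimal way to flag such transients is to threshold the time derivative of the fingertip force, as in the sketch below; the sampling period, noise level, and rate threshold are illustrative assumptions.

import numpy as np

def detect_transients(force, dt, rate_threshold=50.0):
    """Return sample indices where |dF/dt| exceeds a threshold, i.e. candidate
    contact / lift-off events in a grasp-and-lift trial."""
    dfdt = np.gradient(force, dt)
    return np.flatnonzero(np.abs(dfdt) > rate_threshold)

# Synthetic fingertip-force trace: contact at 0.5 s, release at 1.5 s.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
force = np.where((t > 0.5) & (t < 1.5), 2.0, 0.0)
force += 0.001 * np.random.randn(t.size)          # small sensor noise
events = detect_transients(force, dt)
print(np.round(t[events[[0, -1]]], 3))            # approximate event times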
--- paper_title: Biomimetic Tactile Sensor Array paper_content: The performance of robotic and prosthetic hands in unstructured environments is severely limited by their having little or no tactile information compared to the rich tactile feedback of the human ... --- paper_title: High-Resolution Thin-Film Device to Sense Texture by Touch paper_content: Touch (or tactile) sensors are gaining renewed interest as the level of sophistication in the application of minimum invasive surgery and humanoid robots increases. The spatial resolution of current large-area (greater than 1 cm 2 ) tactile sensor lags by more than an order of magnitude compared with the human finger. By using metal and semiconducting nanoparticles, a ∼100-nm-thick, large-area thin-film device is self-assembled such that the change in current density through the film and the electroluminescent light intensity are linearly proportional to the local stress. A stress image is obtained by pressing a copper grid and a United States 1-cent coin on the device and focusing the resulting electroluminescent light directly on the charge-coupled device. Both the lateral and height resolution of texture are comparable to the human finger at similar stress levels of ∼10 kilopascals. --- paper_title: Tactile sensing for an anthropomorphic robotic hand: Hardware and signal processing paper_content: In this paper, a tactile sensing system for an anthropomorphic robot hand is presented. The tactile sensing system is designed as a construction kit making it very versatile. The sensor data preprocessing is embedded into the hand's hardware structure and is fully integrated. The sensor system is able to gather tactile pressure profiles and to measure vibrations in the sensor's cover. Additionally to the introduction of the hardware, the signal processing and the classification of the acquired sensor data will be explained in detail. These algorithms make the tactile sensing system capable to detect contact points, to classify contact patterns and to detect slip conditions during object manipulation and grasping. --- paper_title: Grip Control Using Biomimetic Tactile Sensing Systems paper_content: We present a proof-of-concept for controlling the grasp of an anthropomorphic mechatronic prosthetic hand by using a biomimetic tactile sensor, Bayesian inference, and simple algorithms for estimation and control. The sensor takes advantage of its compliant mechanics to provide a triaxial force sensing end-effector for grasp control. By calculating normal and shear forces at the fingertip, the prosthetic hand is able to maintain perturbed objects within the force cone to prevent slip. A Kalman filter is used as a noise-robust method to calculate tangential forces. Biologically inspired algorithms and heuristics are presented that can be implemented online to support rapid, reflexive adjustments of grip. --- paper_title: A sensor for dynamic tactile information with applications in human-robot interaction and object exploration paper_content: We present a novel tactile sensor, which is applied for dextrous grasping with a simple robot gripper. The hardware novelty consists of an array of capacitive sensors, which couple to the object by means of little brushes of fibers. These sensor elements are very sensitive (with a threshold of about 5 mN) but robust enough not to be damaged during grasping. They yield two types of dynamical tactile information corresponding roughly to two types of tactile sensor in the human skin. 
The complete sensor consists of a foil-based static force sensor, which yields the total force and the center of the two-dimensional force distribution and is surrounded by an array of the dynamical sensor elements. One such sensor has been mounted on each of the two gripper jaws of our humanoid robot and equipped with the necessary read-out electronics and a CAN bus interface. We describe applications to guiding a robot arm on a desired trajectory with negligible force, reflective grip improvement, and tactile exploration of objects to create a shape representation and find stable grips, which are applied autonomously on the basis of visual recognition. --- paper_title: A tactile sensor for the fingertips of the humanoid robot iCub paper_content: In order to successfully perform object manipulation, humanoid robots must be equipped with tactile sensors. However, the limited space that is available in robotic fingers imposes severe design constraints. In [1] we presented a small prototype fingertip which incorporates a capacitive pressure system. This paper shows an improved version, which has been integrated on the hand of the humanoid robot iCub. The fingertip is 14.5 mm long and 13 mm wide. The capacitive pressure sensor system has 12 sensitive zones and includes the electronics to send the 12 measurements over a serial bus with only 4 wires. Each synthetic fingertip is shaped approximately like a human fingertip. Furthermore, an integral part of the capacitive sensor is soft silicone foam, and therefore the fingertip is compliant. We describe the structure of the fingertips, their integration on the humanoid robot iCub, and present test results to show the characteristics of the sensor. --- paper_title: A robust micro-vibration sensor for biomimetic fingertips paper_content: Controlling grip force in a prosthetic or robotic hand requires detailed sensory feedback information about microslips between the artificial fingertips and the object. In the biological hand this is accomplished with neural transducers capable of measuring micro-vibrations in the skin due to sliding friction. For prosthetic tactile sensors, emulating these biological transducers is a difficult challenge due to the fragility associated with highly sensitive devices. Incorporating a pressure sensor into a fluid-filled fingertip provides a novel solution to this problem by effectively creating a device similar to a hydrophone, capable of recording vibrations from lateral movements. The fluid conducts these acoustic signals well and with little attenuation, permitting the pressure sensing elements to be located in a protected region inside the core of the sensor and removing them from harm's way. Preliminary studies demonstrate that high frequency vibrations (50-400 Hz) can be readily detected when such a fingertip slides across a ridged surface. --- paper_title: Piezoelectric Vibration-Type Tactile Sensor Using Elasticity and Viscosity Change of Structure paper_content: We propose a new tactile sensor utilizing piezoelectric vibration. This tactile sensor has high sensitivity, a wide measurement range, pressure resistance, flexibility, and a self-sensing function. The sensor comprises two piezoelectric materials. One is used for the vibration of the sensor element and the other is used for the measurement of the change in mechanical impedance induced by an external force. We achieved the wide measurement range by implementing two ideas.
One was to apply the external force to the sensor element through an elastic body and the other was to use two or more modes of vibration. Moreover, for the elastic body, it is preferable to use a material whose elasticity and viscosity are easily changed by an external force, such as a gel. In this study, first, this tactile sensor was analyzed, and then its characteristics were derived. The analytical results qualitatively corresponded to the experimental results. Next, a prototype tactile sensor was fabricated and evaluated. The evaluation results showed that this tactile sensor can measure a pressure of 2.5 Pa or less and a pressure of 10 kPa or more and its pressure resistance is 1 MPa or more. --- paper_title: Signal processing and fabrication of a biomimetic tactile sensor array with thermal, force and microvibration modalities paper_content: We have developed a finger-shaped sensor array that provides simultaneous information about the contact forces, microvibrations and thermal fluxes induced by contact with external objects. In this paper, we describe a microprocessor-based signal conditioning and digitizing system for these sensing modalities and its embodiment on a flex-circuit that facilitates efficient assembly of the entire system via injection molding. Thermal energy from the embedded electronics is used to heat the finger above ambient temperature, similar to the biological finger. This enables the material properties of contacted objects to be inferred from thermal transients measured by a thermistor in the sensor array. Combining sensor modalities provides synergistic benefits. For example, the contact forces for exploratory movements can be calibrated so that thermal and microvibration data can be interpreted more definitively. --- paper_title: An embedded artificial skin for humanoid robots paper_content: A novel artificial skin for covering the whole body of a humanoid robot is presented. It provides pressure measurements and shape information about the contact surfaces between the robot and the environment. The system is based on a mesh of sensors interconnected in order to form a networked structure. Each sensor has 12 capacitive taxels, has a triangular shape and is supported by a flexible substrate in order to conform to smooth curved surfaces. Three communications ports placed along the sides of each sensor sides allow communications with adjacent sensors. The tactile measurements are sent to embed microcontroller boards using serial bus communication links. The system can adaptively reduce its spatial resolution, improving the response time. This feature is very useful for detecting the first contact very rapidly, at a lower spatial resolution, and then increase the spatial resolution in the region of contact for accurate reconstruction of the contact pressure distribution. --- paper_title: Electric Field Servoing for robotic manipulation paper_content: This paper presents two experiments with electric field servoing for robotic manipulation. In the first, a robot hand pre-shapes to the geometry and pose of objects to be grasped, by servoing each finger according to the values on EF sensors built in to each finger. In the second, a 7 degree of freedom arm aligns itself in 2 dimensions with a target object using electric field measurements as the error signal. This system also allows the end effector to dynamically track the target object as it is moved. 
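Electric field servoing, as described in the abstract above, drives each finger so that its EF sensor reading converges to a value associated with a good pre-grasp pose. The sketch below shows a proportional version of that idea against a toy sensor model; the gain, target reading, and the monotonic distance-to-reading model are illustrative assumptions, not the system described in the paper.

def ef_servo_step(reading, target, gain=0.005):
    """Proportional servo: move the fingertip so its electric-field reading
    approaches the value recorded for a good pre-grasp pose."""
    return gain * (target - reading)     # incremental position command

def ef_reading(distance):
    """Toy model: the EF reading rises monotonically as the fingertip nears the object."""
    return 1.0 / (distance + 0.01)

object_distance, finger_pos, target = 0.2, 0.0, 10.0
for _ in range(300):
    d = max(0.001, object_distance - finger_pos)
    finger_pos += ef_servo_step(ef_reading(d), target)
print(round(ef_reading(object_distance - finger_pos), 2))   # settles near the target reading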
--- paper_title: An Electric Field Pretouch system for grasping and co-manipulation paper_content: Pretouch sensing is longer range than contact, but shorter range than vision. The hypothesis motivating this work is that closed loop feedback based on short range but non-contact measurements can improve the reliability of manipulation. This paper presents a grasping system that is guided at short range by Electric Field (EF) Pretouch. We describe two sets of experiments. The first set of experiments involves human-to-robot and robot-to-human handoff, including the use of EF Pretouch to detect whether or not a human is also touching an object that the robot is holding, which we call the “co-manipulation state.” In the second set of experiments, the robot picks up standalone objects. We describe a number of techniques that servo the arm and fingers in order to both collect relevant geometrical information, and to actually perform the manipulation task. --- paper_title: A robust, low-cost and low-noise artificial skin for human-friendly robots paper_content: As robots and humans move towards sharing the same environment, the need for safety in robotic systems is of growing importance. Towards this goal of human-friendly robotics, a robust, low-cost, low-noise capacitive force sensing array is presented with application as a whole body artificial skin covering. This highly scalable design provides excellent noise immunity, low-hysteresis, and has the potential to be made flexible and formable. Noise immunity is accomplished through the use of shielding and local sensor processing. A small and low-cost multivibrator circuit is replicated locally at each taxel, minimizing stray capacitance and noise coupling. Each circuit has a digital pulse train output, which allows robust signal transmission in noisy electrical environments. Wire count is minimized through serial or row-column addressing schemes, and the use of an open-drain output on each taxel allows hundreds of sensors to require only a single output wire. With a small set of interface wires, large arrays can be scanned hundreds of times per second and dynamic response remains flat over a broad frequency range. Sensor performance is evaluated on a bench-top version of a 4×4 taxel array in quasi-static and dynamic cases. --- paper_title: Unknown Object Grasping Strategy Imitating Human Grasping Reflex for Anthropomorphic Robot Hand paper_content: This paper presents a grasping strategy for unknown objects that imitating human grasping reflex for the anthropomorphic robot hands. A 10 months baby may bend his/her thumb and 4 fingers trying to grasp an object when it is in contact with the palm. After grasping it, if the object is plucked from the baby's hand, the baby holds the object more strongly. In addition, the hand approaches the object by only touching the palm lightly. The reaction of the hand is called a grasping reflex. In the proposed grasping strategy, each joint of the thumb and the fingers is controlled independently using the contact force affecting its adjacent fingertip side link to imitate the grasping reflex. By setting a suitable contact force, both fingertip grasping and enveloped grasping with uniform grasping force are executable. Experimental results of grasping three dimensional unknown objects by using an anthropomorphic robot hand called Gifu Hand III are shown. 
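The grasping-reflex strategy above controls each joint independently from the contact force acting on its fingertip-side link: keep flexing until contact appears, then regulate toward a hold force so that disturbances of the grasped object produce a corrective tightening. The per-joint rule sketched below is a hypothetical illustration of that behaviour; the threshold, closing speed, hold force, and gain are made-up values.

def reflex_joint_command(contact_force, closing_speed=0.02,
                         contact_threshold=0.3, hold_force=1.0, gain=0.05):
    """Per-joint grasping-reflex rule (illustrative): flex at a constant rate until
    the distal-link contact force appears, then regulate that force toward a hold
    value; a drop in contact force (e.g. the object being tugged away) yields a
    positive error, so the joint flexes further."""
    if contact_force < contact_threshold:         # no contact yet: keep closing
        return closing_speed
    return gain * (hold_force - contact_force)    # in contact: regulate the force

# Example joint commands for three different contact states.
for f in [0.0, 0.5, 1.6]:
    print(round(reflex_joint_command(f), 3))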
--- paper_title: Electric field imaging pretouch for robotic graspers paper_content: This paper proposes the use of electric field sensors to implement "pretouch" for robotic grasping. Weakly electric fish use this perceptual channel, but it has not received much attention in robotics. This paper describes a series of manipulators each of which incorporates electric field sensors in a different fashion. In each case, the paper presents techniques for using the sensors, and experimental data collected. First, a simple dynamic-object avoidance technique is presented. Next, a 1-D alignment task for grasping is described. Then linear and rotary electrode scanning techniques are presented. It is shown that these techniques can distinguish a small object at close range from a large object farther away, a capability that may be important for grasping. --- paper_title: Repetitive grasping with anthropomorphic skin-covered hand enables robust haptic recognition paper_content: Skin is an essential component of artificial hands. It enables the use of object affordance for recognition and control, but due to its intrinsic locality and low density of current tactile sensors, stable and proper manual contacts with the objects are indispensable. Recently, design of hand structure have shown to be effective for adaptive grasping. However, such adaptive design are only introduced to the fingers in existing works of haptics and their role in recognition remains unclear. This paper introduces the design of the Bionic Hand; an anthropomorphic hand with adaptive design introduced to the whole hand and fully covered with sensitive skin. The experiment shows that anthropomorphic design of hand structure enables robust haptic recognition by convergence of object contact conditions into stable representative states through repetitive grasping. The structure of the human hand is found to solve the issue of narrowing down the sensor space for haptic object recognition by morphological computation. --- paper_title: Dual-Mode Capacitive Proximity Sensor for Robot Application: Implementation of Tactile and Proximity Sensing Capability on a Single Polymer Platform Using Shared Electrodes paper_content: In this paper, we report a flexible dual-mode capacitive sensor for robot applications which has two sensing capabilities in a single platform; tactile and proximity sensing capability. The sensor consists of a mechanical structure based on PDMS (Polydimethylsiloxane) and a mesh of multiple copper electrode strips. The mesh is composed of 16 top and 16 bottom copper strips crossed each other to form a 16 times 16 capacitor array. The proposed sensor is able to switch its function from tactile sensing to proximity sensing or vice versa by reconfiguring the connection of electrodes. The tactile sensing capability has been demonstrated already and reported in our previous paper (Lee et al.,, 2006); therefore, in this paper, we will demonstrate the feasibility of the proximity sensing capability and the dual-mode operation of the proposed sensor in detail. The capacitance change caused by an approaching object has been estimated through simulation of multiple two-dimensional models as an initial study. The measured data have shown similar trends with the simulation results. We tested various materials from conducting metals to a human hand for proximity measurement. The fabricated sensor could detect a human hand at a distance up to 17 cm away from the sensor. 
We also have successfully demonstrated the feasibility of dual-mode operation of the proposed sensor in real-time exploiting a custom designed PCB, a data acquisition pad, and Labview software. --- paper_title: Robust sensor-based grasp primitive for a three-finger robot hand paper_content: This paper addresses the problem of robot grasping in conditions of uncertainty. We propose a grasp controller that deals robustly with this uncertainty using feedback from different contact-based sensors. This controller assumes a description of grasp consisting of a primitive that only determines the initial configuration of the hand and the control law to be used. We exhaustively validate the controller by carrying out a large number of tests with different degrees of inaccuracy in the pose of the target objects and by comparing it with results of a naive grasp controller. --- paper_title: Adaptive grasping by multi fingered hand with tactile sensor based on robust force and position control paper_content: In this paper we propose a new robust force and position control method for property-unknown objects grasping. The proposed control method is capable of selecting the force control or position control, and smooth and quick switching according to the amount of the external force. The proposed method was applied to adaptive grasping by three-fingered hand which has 12 DOF, and the experimental results revealed that the smooth collision process and the stable grasping is realized even if the precise surface position, the mass and the stiffness are unknown. In addition a new algorithm determines the grasp force according to the "slip" measured with the tactile sensor and the viscoelastic media on the fingertip. This algorithm works at starting and stationary state, so the friction and mass unknown object grasping is realized by the effectual force. --- paper_title: Anthropomorphic Robot Hand : Gifu Hand III paper_content: This paper presents an anthropomorphic robot hand called Gifu Hand III, which is a modified version of Gifu Hand II. The Gifu Hand is aimed to be used as a platform of robot hands for robotics research. The Gifu Hand III is improved on the points of backlash of transmission, opposability of the thumb, and mobility space of fingertips. The opposability of the thumb is evaluated by a cubature of opposable space. An explicit kinematical relation between the fourth joint and the third one is shown, where the fourth joint is engaged with the third joint by a planar four-bars linkage mechanism. The distributed tactile sensor, which has grid pattern electrodes and uses conductive ink, is mounted on the hand surface. The sensor presents 235 points expansion relatively to that of the Gifu Hand II. To reduce insensitive area, the electrodes width and pitch are expanded and narrowed, respectively. In consequence, the insensitive area is reduced to 49.1%. Experiments of grasping several objects are shown. With these improvements and experiments, the Gifu hand III has a higher potential to perform dexterous object manipulations like the human hand. --- paper_title: Tactile sensing for dexterous in-hand manipulation in robotics-A review paper_content: Abstract As the field of robotics is expanding from the fixed environment of a production line to complex human environments, robots are required to perform increasingly human-like manipulation tasks, moving the state-of-the-art in robotics from grasping to advanced in-hand manipulation tasks such as regrasping, rotation and translation. 
To achieve advanced in-hand manipulation tasks, robotic hands are required to be equipped with distributed tactile sensing that can continuously provide information about the magnitude and direction of forces at all contact points between them and the objects they are interacting with. This paper reviews the state-of-the-art in force and tactile sensing technologies that can be suitable within the specific context of dexterous in-hand manipulation. In previous reviews of tactile sensing for robotic manipulation, the specific functional and technical requirements of dexterous in-hand manipulation, as compared to grasping, are in general not taken into account. This paper provides a review of models describing human hand activity and movements, and a set of functional and technical specifications for in-hand manipulation is defined. The paper proceeds to review the current state-of-the-art tactile sensor solutions that fulfil or can fulfil these criteria. An analytical comparison of the reviewed solutions is presented, and the advantages and disadvantages of different sensing technologies are compared. --- paper_title: Skin-inspired electronic devices paper_content: Electronic devices that mimic the properties of skin have potential important applications in advanced robotics, prosthetics, and health monitoring technologies. Methods for measuring tactile and temperature signals have progressed rapidly due to innovations in materials and processing methods. Imparting skin-like stretchability to electronic devices can be accomplished by patterning traditional electronic materials or developing new materials that are intrinsically stretchable. The incorporation of sensing methods with transistors facilitates large-area sensor arrays. While sensor arrays have surpassed the properties of human skin in terms of sensitivity, time response, and device density, many opportunities remain for future development. --- paper_title: Development of a Flexible 3-D Tactile Sensor System for Anthropomorphic Artificial Hand paper_content: In this paper, we report a novel flexible tactile sensor array for an anthropomorphic artificial hand with the capability of measuring both normal and shear force distributions using quantum tunneling composite as a base material. There are four fan-shaped electrodes in a cell that decompose the contact force into normal and shear components. The sensor has been realized in a 2ntn 6 array of unit sensors, and each unit sensor responds to normal and shear stresses in all three axes. By applying separated drops of conductive polymer instead of a full layer, cross-talk between the sensor cells is decreased. Furthermore, the voltage mirror method is used in this circuit to avoid crosstalk effect, which is based on a programmable system-on-chip. The measurement of a single sensor shows that the full-scale range of detectable forces are about 20, 8, and 8 N for the x-, y-, and z-directions, respectively. The sensitivities of a cell measured with a current setup are 0.47, 0.45, and 0.16 mV/mN for the x-, y-, and y-directions, respectively. The sensor showed a high repeatability, low hysteresis, and minimum tactile crosstalk. The proposed flexible three-axial tactile sensor array can be applied in a curved or compliant surface that requires slip detection and flexibility, such as a robotic finger. 
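For a three-axis tactile cell built from four quadrant electrodes, such as the fan-shaped-electrode cell described above, a common decomposition takes the sum of the four readings as the normal component and the two opposing differences as the shear components. The sketch below shows only this generic mapping; the calibration constants are placeholders, not the cited sensor's calibration.

def decompose_cell(q_n, q_e, q_s, q_w, k_normal=1.0, k_shear=1.0):
    """Estimate (fx, fy, fz) from the four quadrant readings (north, east,
    south, west) of one tactile cell: the sum gives the normal load and the
    opposing differences give the shear; k_* are per-cell calibration constants."""
    fz = k_normal * (q_n + q_e + q_s + q_w)
    fx = k_shear * (q_e - q_w)
    fy = k_shear * (q_n - q_s)
    return fx, fy, fz

print(decompose_cell(0.2, 0.5, 0.2, 0.1))   # shear mostly along +x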
--- paper_title: Bayesian Exploration for Intelligent Identification of Textures paper_content: In order to endow robots with humanlike abilities to characterize and identify objects, they must be provided with tactile sensors and intelligent algorithms to select, control and interpret data from useful exploratory movements. Humans make informed decisions on the sequence of exploratory movements that would yield the most information for the task, depending on what the object may be and prior knowledge of what to expect from possible exploratory movements. This study is focused on texture discrimination, a subset of a much larger group of exploratory movements and percepts that humans use to discriminate, characterize, and identify objects. Using a testbed equipped with a biologically inspired tactile sensor (the BioTac®) we produced sliding movements similar to those that humans make when exploring textures. Measurement of tactile vibrations and reaction forces when exploring textures were used to extract measures of textural properties inspired from psychophysical literature (traction, roughness, and fineness). Different combinations of normal force and velocity were identified to be useful for each of these three properties. A total of 117 textures were explored with these three movements to create a database of “prior experience” to use for identifying these same textures in future encounters. When exploring a texture, the discrimination algorithm adaptively selects the optimal movement to make and property to measure based on previous experience to differentiate the texture from a set of plausible candidates, a process we call Bayesian exploration. Performance of 99.6% in correctly discriminating pairs of similar textures was found to exceed human capabilities. Absolute classification from the entire set of 117 textures generally required a small number of well-chosen exploratory movements (median=5) and yielded a 95.4% success rate. The method of “Bayesian exploration” developed and tested in this paper may generalize well to other cognitive problems. --- paper_title: Highly sensitive sensor for detection of initial slip and its application in a multi-fingered robot hand paper_content: Tactile sensors for slip detection are essential for implementing human-like gripping in a robot hand. In previous studies, we proposed flexible, thin and lightweight slip detection sensors utilizing the characteristics of pressure-sensitive conductive rubber. This was achieved by using the high-frequency vibration component generated in the process of slipping of the gripped object in order to distinguish between slipping of the object and changes in the normal force. In this paper, we design a slip detection sensor for a multi-fingered robot hand and examine the influence of noise caused by the operation of such a hand. Finally, we describe an experiment focusing on the adjustment of the gripping force of a multi-fingered robot hand equipped with the developed sensors --- paper_title: A soft, amorphous skin that can sense and localize textures paper_content: We present a soft, amorphous skin that can sense and localize textures. The skin consists of a series of sensing and computing elements that are networked with their local neighbors and mimic the function of the Pacinian corpuscle in human skin. Each sensor node samples a vibration signal at 1 KHz, transforms the signal into the frequency domain, and classifies up to 15 textures using logistic regression. 
By measuring the power spectrum of the signal and comparing it with its local neighbors, computing elements can then collaboratively estimate the location of the stimulus. The resulting low-bandwidth information, consisting of the texture probability distribution and its location are then routed to a sink anywhere in the skin in a multi-hop fashion. We describe the design, manufacturing, classification, localization and networking algorithms and experimentally validate the proposed approach. In particular, we demonstrate texture classification with 71% accuracy and centimeter accuracy in localization over an area of approximately three square feet using ten networked sensor nodes. --- paper_title: Grasping Force Control of Multi-fingered Robot Hand based on Slip Detection Using Tactile Sensor paper_content: To achieve a human like grasping by the multi-fingered robot hand, grasping force should be controlled without information of the grasping object such as the weight and the friction coefficient. In this study, we propose a method for detecting the slip of grasping object by force output of the Center of Pressure (CoP) tactile sensor. CoP sensor can measure center position of distributed load and total load which is applied on the surface of the sensor within 1 [ms] . This sensor is arranged on finger of the robot hand, and the effectiveness as slip detecting sensor is confirmed by experiment of slip detection on grasping. Finally, we propose a method for controlling grasping force resist the tangential force added to the grasping object by feedback control system of the CoP sensor force output. --- paper_title: Tactile identification of objects using Bayesian exploration paper_content: In order to endow robots with human-like tactile sensory abilities, they must be provided with tactile sensors and intelligent algorithms to select and control useful exploratory movements and interpret data from all available sensors. Current robotic systems do not possess such sensors or algorithms. In this study we integrate multimodal tactile sensing (force, vibration and temperature) from the BioTac® with a Shadow Dexterous Hand and program the robot to make exploratory movements similar to those humans make when identifying objects by their compliance, texture, and thermal properties. Signal processing strategies were developed to provide measures of these perceptual properties. When identifying an object, exploratory movements are intelligently selected using a process we have previously developed called Bayesian exploration [1], whereby exploratory movements that provide the most disambiguation between likely candidates of objects are automatically selected. The exploration algorithm was augmented with reinforcement learning whereby its internal representations of objects evolved according to its cumulative experience with them. This allowed the algorithm to compensate for drift in the performance of the anthropomorphic robot hand and the ambient conditions of testing, improving accuracy while reducing the number of exploratory movements required to identify an object. The robot correctly identified 10 different objects on 99 out of 100 presentations. --- paper_title: An ultra-sensitive resistive pressure sensor based on hollow-sphere microstructure induced elasticity in conducting polymer film. paper_content: Pressure sensing is an important function of electronic skin devices. 
The development of pressure sensors that can mimic and surpass the subtle pressure sensing properties of natural skin requires the rational design of materials and devices. Here we present an ultra-sensitive resistive pressure sensor based on an elastic, microstructured conducting polymer thin film. The elastic microstructured film is prepared from a polypyrrole hydrogel using a multiphase reaction that produced a hollow-sphere microstructure that endows polypyrrole with structure-derived elasticity and a low effective elastic modulus. The contact area between the microstructured thin film and the electrodes increases with the application of pressure, enabling the device to detect low pressures with ultra-high sensitivity. Our pressure sensor based on an elastic microstructured thin film enables the detection of pressures of less than 1Pa and exhibits a short response time, good reproducibility, excellent cycling stability and temperature-stable sensing. --- paper_title: Online in-hand object localization paper_content: Robotic hands are a key component of humanoids. Initially more fragile and larger than their human counterparts, the technology has evolved and the latest generation is close to the human hand in size and robustness. However, it is still disappointing to see how little robotic hands are able to do once the grasp is acquired due to the difficulty to obtain a reliable pose of the object within the palm. This paper presents a novel method based on a particle filter used to estimate online the object pose. It is shown that the method is robust, accurate and handles many realistic scenario without hand crafted rules. It combines an efficient collision checker with a few very simple ideas, that require only a basic knowledge of the geometry of the objects. It is shown, by experiments and simulations, that the algorithm is able to deal with inaccurate finger position measurements and can integrate tactile measurements. The method greatly enhances the performance of common manipulation operations, such as a pick and place tasks, and boosts the sensing capabilities of the robot. --- paper_title: Finger-shaped thermal sensor using thermo-sensitive paint and camera for telexistence paper_content: A thermal change on a fingertip is essential for haptic perception. We have proposed a vision-based thermal sensor using thermo-sensitive paint and a CCD camera for telexistence. The thermo-sensitive paint is employed to measure thermal information on the basis of its color, which changes according to its temperature. The proposed sensor can simulate the physical interaction between a human fingertip and an object in order to measure surface thermal information correctly. Furthermore, because the proposed sensor can be easily integrated with our vision-based force sensor, a comprehensive measurement device for measuring haptic information can be realized. In this study, we constructed a prototype of the proposed thermal sensor and experimentally confirmed that this sensor could measure surface thermal information. --- paper_title: Bioinspired Sinusoidal Finger Joint Synergies for a Dexterous Robotic Hand to Screw and Unscrew Objects With Different Diameters paper_content: This paper addresses the complex task of unscrewing and screwing objects with a dexterous anthropomorphic robotic hand in two cases: with the first finger and thumb and also with the little finger and thumb. 
To develop an anthropomorphic solution, human finger synergies from nine test subjects were recorded while unscrewing and screwing a threaded cap. Human results showed that the periodic motions exhibited by the finger joints shared a common frequency for each subject, but differed in amplitude and phase. From the gathered data, a set of sinusoidal trajectories were developed to approximate this motion for application to a robotic hand. Because the joint trajectories exhibited the same frequency, a family of sinusoids that share a common time vector can be used in the path planning of the robotic hand to unscrew and screw objects. Additionally, the human unscrewing data are highly similar to the mirror image of the screwing data. This chiastic trait enables screwing to be performed by decreasing the time vector; increasing the time vector produces unscrewing. These factors significantly reduce the computational cost and complexity of the task. Cartesian and joint space error analyses show that the developed sinusoidal trajectories closely mimic the motion profiles seen in the human experiments. Furthermore, this bioinspired sinusoidal solution is extended to objects with wide variations in diameters by relating joint angle offsets of the robotic hand to object diameter size through the forward kinematics equations. The sinusoidal trajectories are all implemented within a PID sliding mode controller for a dexterous artificial hand to ensure overall system stability. Using the bioinspired sinusoidal joint angle trajectories, the robotic hand successfully unscrewed and screwed four different objects in all trials conducted with each object diameter size. --- paper_title: Adaptive Sliding Mode Control for Prosthetic Hands to Simultaneously Prevent Slip and Minimize Deformation of Grasped Objects paper_content: Adaptive sliding mode and integral sliding mode grasped object slip prevention controllers are implemented for a prosthetic hand and compared to a proportional derivative shear force feedback slip prevention controller as well as a sliding mode controller without slip prevention capabilities. Slip of grasped objects is detected by band-pass filtering the shear force derivative to amplify high frequency vibrations that occur as the grasped object slides relative to the fingers. The integral sliding mode slip prevention controller provides a robust design framework for slip prevention while addressing the issue of reducing the amount of deformation that the grasped object experiences to prevent slip. Averaged results from bench top experiments show that the integral sliding mode slip prevention controller produces the least amount of deformation to the grasped object while simultaneously preventing the object from being dropped. --- paper_title: Majority Voting: Material Classification by Tactile Sensing Using Surface Texture paper_content: In this paper, we present an application of machine learning to distinguish between different materials based on their surface texture. Such a system can be used for the estimation of surface friction during manipulation tasks; quality assurance in the textile, cosmetics, and harvesting industries; and other applications requiring tactile sensing. Several machine learning algorithms, such as naive Bayes, decision trees, and naive Bayes trees, have been trained to distinguish textures sensed by a biologically inspired artificial finger. The finger has randomly distributed strain gauges and polyvinylidene fluoride (PVDF) films embedded in silicone. 
Different textures induce different intensities of vibrations in the silicone. Consequently, textures can be distinguished by the presence of different frequencies in the signal. The data from the finger are preprocessed, and the Fourier coefficients of the sensor outputs are used to train classifiers. We show that the classifiers generalize well for unseen datasets with performance exceeding previously reported algorithms. Our classifiers can distinguish between different materials, such as carpet, flooring vinyls, tiles, sponge, wood, and polyvinyl-chloride (PVC) woven mesh with an accuracy of on unseen test data. --- paper_title: Design of a flexible tactile sensor for classification of rigid and deformable objects paper_content: For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, it is easy to mount, and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k nearest neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits. --- paper_title: Using robotic exploratory procedures to learn the meaning of haptic adjectives paper_content: Delivering on the promise of real-world robotics will require robots that can communicate with humans through natural language by learning new words and concepts through their daily experiences. Our research strives to create a robot that can learn the meaning of haptic adjectives by directly touching objects. By equipping the PR2 humanoid robot with state-of-the-art biomimetic tactile sensors that measure temperature, pressure, and fingertip deformations, we created a platform uniquely capable of feeling the physical properties of everyday objects. The robot used five exploratory procedures to touch 51 objects that were annotated by human participants with 34 binary adjective labels. We present both static and dynamic learning methods to discover the meaning of these adjectives from the labeled objects, achieving average F1 scores of 0.57 and 0.79 on a set of eight previously unfelt items. --- paper_title: Semi-anthropomorphic 3D printed multigrasp hand for industrial and service robots paper_content: This paper presents the preliminary prototype design and implementation of the Nazarbayev University (NU) Hand, a new semi-anthropomorphic multigrasp robotic hand. The hand is designed to be an end effector for industrial and service robots. 
The main objective is to develop a low-cost, low-weight and easily manufacturable robotic hand with a sensor module allowing acquisition of data for autonomous intelligent object manipulation. 3D printing technologies were extensively used in the implementation of the hand. Specifically, the structure of the hand is printed using a 3D printer as a complete assembly voiding the need of using fasteners and bearings for the assembly of the hand and decreasing the total weight. The hand also incorporates a sensor module containing a LIDAR, digital camera and non-contact infrared temperature sensor for intelligent automation. As an alternative to teach pendants for the industrial manipulators, a teaching glove was developed, which acts as the primary human machine interface between the user and the NU Hand. The paper presents an extensive performance characterization of the robotic hand including finger forces, weight, audible noise level during operation and sensor data acquisition. --- paper_title: A stretchable carbon nanotube strain sensor for human-motion detection paper_content: Thin films of single-wall carbon nanotube have been used to create stretchable devices that can be incorporated into clothes and used to detect human motions. --- paper_title: A Soft Strain Sensor Based on Ionic and Metal Liquids paper_content: A novel soft strain sensor capable of withstanding strains of up to 100% is described. The sensor is made of a hyperelastic silicone elastomer that contains embedded microchannels filled with conductive liquids. This is an effort of improving the previously reported soft sensors that uses a single liquid conductor. The proposed sensor employs a hybrid approach involving two liquid conductors: an ionic solution and an eutectic gallium-indium alloy. This hybrid method reduces the sensitivity to noise that may be caused by variations in electrical resistance of the wire interface and undesired stress applied to signal routing areas. The bridge between these two liquids is made conductive by doping the elastomer locally with nickel nanoparticles. The design, fabrication, and characterization of the sensor are presented. --- paper_title: Using Near-Field Stereo Vision for Robotic Grasping in Cluttered Environments paper_content: Robotic grasping in unstructured environments requires the ability to adjust and recover when a pre-planned grasp faces imminent failure. Even for a single object, modeling uncertainties due to occluded surfaces, sensor noise and calibration errors can cause grasp failure; cluttered environments exacerbate the problem. In this work, we propose a simple but robust approach to both pre-touch grasp adjustment and grasp planning for unknown objects in clutter, using a small-baseline stereo camera attached to the gripper of the robot. By employing a 3D sensor from the perspective of the gripper we gain information about the object and nearby obstacles immediately prior to grasping that is not available during head-sensor-based grasp planning. We use a feature-based cost function on local 3D data to evaluate the feasibility of a proposed grasp. In cases where only minor adjustments are needed, our algorithm uses gradient descent on a cost function based on local features to find optimal grasps near the original grasp. In cases where no suitable grasp is found, the robot can search for a significantly different grasp pose rather than blindly attempting a doomed grasp. 
We present experimental results to validate our approach by grasping a wide range of unknown objects in cluttered scenes. Our results show that reactive pre-touch adjustment can correct for a fair amount of uncertainty in the measured position and shape of the objects, or the presence of nearby obstacles. --- paper_title: Methods for safe human-robot-interaction using capacitive tactile proximity sensors paper_content: In this paper we base upon capacitive tactile proximity sensor modules developed in a previous work to demonstrate applications for safe human-robot-interaction. Arranged as a matrix, the modules can be used to model events in the near proximity of the robot surface, closing the near field perception gap in robotics. The central application investigated here is object tracking. Several results are shown: the tracking of two human hands as well as the handling of occlusions and the prediction of collision for object trajectories. These results are important for novel pretouch- and touch-based humanrobot interaction strategies and for assessing and implementing safety capabilities with these sensor systems. --- paper_title: Using depth and appearance features for informed robot grasping of highly wrinkled clothes paper_content: Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields to a desired configuration. In this paper, by contrast, we circumvent the need for multiple re-graspings by building a robust detector that identifies the grasping points, generally in one single step, even when clothes are highly wrinkled. --- paper_title: Seashell effect pretouch sensing for robotic grasping paper_content: This paper introduces seashell effect pretouch sensing, and demonstrates application of this new sensing modality to robot grasp control, and also to robot grasp planning. --- paper_title: Pre-shaping for various objects by the robot hand equipped with resistor network structure proximity sensors paper_content: In this paper, we demonstrate a preliminary motion before grasping by a robot hand, for adjusting the object-fingertip distance and 2-axis postures simultaneously, using a Resistor Network Structure Proximity sensor (RNSP sensor). Through this motion (called “pre-shaping”) and the grasping of an object, the surface of each fingertip is brought into contact with the object surface so that in the next stage grasping can be undertaken. In the next stage, a force can be applied from the fingertips onto the object surface directly. The pre-shaping enhances the reliability of the feedback control for the after-contact tactile sensors. To realize the pre-shaping, we use fingertips equipped with RNSP sensors, which can detect the distance between the fingertip and the object, to determine the relative position between fingertips and an object. The RNSP sensor has a fast response (<;1 [ms]) and simple connectivity (only 6 wires), and can be mounted easily. Additionally, a characteristics of the RNSP sensor output can be designed by the arrangement of the sensor elements. To perform the pre-shaping by simple sensor feedback control based on the configuration between the fingertip and object, we designed the RNSP sensor so that it had the appropriate characteristics for the pre-shaping. 
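Pre-shaping with the resistor-network proximity sensors described above adjusts both the fingertip-to-object distance and the fingertip posture before contact is made. A minimal feedback rule that captures this uses the mean proximity reading to drive the approach and the left/right imbalance to drive the tilt; the target stand-off and gain below are assumptions for illustration.

import numpy as np

def preshape_step(distances, target=0.03, gain=0.4):
    """One pre-shaping update for a fingertip carrying several proximity elements:
    the mean reading drives the approach distance and the left/right imbalance
    drives the tilt, so the fingertip surface ends up roughly parallel to the
    object at the target stand-off."""
    d = np.asarray(distances, dtype=float)
    approach_cmd = gain * (d.mean() - target)     # move closer or back off
    tilt_cmd = gain * (d[0] - d[-1])              # rotate to equalize the edge readings
    return approach_cmd, tilt_cmd

approach, tilt = preshape_step([0.06, 0.05, 0.04])
print(round(approach, 4), round(tilt, 4))         # approach and rotate toward the surface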
--- paper_title: Fusion of stereo vision, force-torque, and joint sensors for estimation of in-hand object location paper_content: This paper develops a method to fuse stereo vision, force-torque sensor, and joint angle encoder measurements to estimate and track the location of a grasped object within the hand. We pose the problem as a hybrid systems estimation problem, where the continuous states are the object 6D pose, finger contact location, wrist-to-camera transform and the discrete states are the finger contact modes with the object. This paper develops the key measurement equations that govern the fusion process. Experiments with a Barrett Hand, Bumblebee 2 stereo camera, and an ATI omega force-torque sensor validate and demonstrate the method. --- paper_title: Finger-shaped thermal sensor using thermo-sensitive paint and camera for telexistence paper_content: A thermal change on a fingertip is essential for haptic perception. We have proposed a vision-based thermal sensor using thermo-sensitive paint and a CCD camera for telexistence. The thermo-sensitive paint is employed to measure thermal information on the basis of its color, which changes according to its temperature. The proposed sensor can simulate the physical interaction between a human fingertip and an object in order to measure surface thermal information correctly. Furthermore, because the proposed sensor can be easily integrated with our vision-based force sensor, a comprehensive measurement device for measuring haptic information can be realized. In this study, we constructed a prototype of the proposed thermal sensor and experimentally confirmed that this sensor could measure surface thermal information. ---
Title: Sensors for Robotic Hands: A Survey of State of the Art
Section 1: INTRODUCTION
Description 1: Summarize the evolution, functionality, and importance of the human hand, and outline the origins and development of artificial hands, including their design challenges and multidisciplinary nature.
Section 2: 2000 - 2005
Description 2: Provide a comprehensive review of the advancements in artificial hand development from 2000 to 2005, focusing on prosthetic hands, research platforms, developments in tactile and force sensors, and significant projects.
Section 3: 2005 - 2010
Description 3: Discuss the progress in artificial hand technologies between 2005 and 2010, highlighting innovations in prosthetic hands, research platforms, actuation and transmission mechanisms affecting sensors, and tactile sensing.
Section 4: 2010 - 2015
Description 4: Outline the key trends and developments in artificial hand sensors from 2010 to 2015, emphasizing multimodal sensing, artificial skin, and advancements in tactile sensor technologies.
Section 5: CONCLUSION
Description 5: Summarize the survey findings, project future trends in artificial hand sensor development, and discuss the continuing technical challenges and opportunities in the field.
A Survey of auditory display in image-guided interventions
11
--- paper_title: Auditory support for resection guidance in navigated liver surgery paper_content: Background ::: ::: An alternative mode of interaction with navigation systems for open liver surgery was requested. Surgeons who use such systems are impeded by having to constantly switch between viewing the navigation system screen and the patient during an operation. ::: ::: ::: ::: Methods ::: ::: To this end, an auditory display system for open liver surgery is introduced with support for guiding the tracked instrument towards and remaining on a predefined resection line. To evaluate the method, a clinically orientated user study with 12 surgeons was conducted. ::: ::: ::: ::: Results ::: ::: It is shown in qualitative results from the user study that the proposed auditory display is recognized as a useful addition to the current visual mode of interaction. It was revealed in a statistical analysis that participants spent less time looking on the screen (10% vs. 96%). Accuracy for resection guidance was significantly improved when using auditory display as an additional information channel (0.6 vs. 1.4 mm); however, the overall time for the resection task was shorter without auditory display (47 vs. 24 s). ::: ::: ::: ::: Conclusions ::: ::: By reducing dependence on the visual modality during resection guidance, the auditory display is well suited to become integrated in navigation systems for liver surgery. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Touchless interaction with software in interventional radiology and surgery: a systematic literature review paper_content: PURPOSE ::: In this article, we systematically examine the current state of research of systems that focus on touchless human-computer interaction in operating rooms and interventional radiology suites. We further discuss the drawbacks of current solutions and underline promising technologies for future development. ::: ::: ::: METHODS ::: A systematic literature search of scientific papers that deal with touchless control of medical software in the immediate environment of the operation room and interventional radiology suite was performed. This includes methods for touchless gesture interaction, voice control and eye tracking. ::: ::: ::: RESULTS ::: Fifty-five research papers were identified and analyzed in detail including 33 journal publications. Most of the identified literature (62 %) deals with the control of medical image viewers. The others present interaction techniques for laparoscopic assistance (13 %), telerobotic assistance and operating room control (9 % each) as well as for robotic operating room assistance and intraoperative registration (3.5 % each). Only 8 systems (14.5 %) were tested in a real clinical environment, and 7 (12.7 %) were not evaluated at all. ::: ::: ::: CONCLUSION ::: In the last 10 years, many advancements have led to robust touchless interaction approaches. However, only a few have been systematically evaluated in real operating room settings. Further research is required to cope with current limitations of touchless software interfaces in clinical environments. The main challenges for future research are the improvement and evaluation of usability and intuitiveness of touchless human-computer interaction and the full integration into productive systems as well as the reduction of necessary interaction steps and further development of hands-free interaction. 
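The resection-guidance abstract above couples the distance between the tracked instrument and the planned resection line to a continuously updated sound. The sketch below illustrates one simple way such a coupling can be expressed, a clamped linear distance-to-pitch mapping; the frequency range, clamping distance, and function names are assumptions for illustration only, not the mapping used in the cited system.

```python
# Illustrative distance-to-pitch mapping for instrument guidance sonification.
# The mapping, frequency range, and clamping distance are assumptions for the
# sketch; they are not taken from the cited resection-guidance system.

def distance_to_frequency(distance_mm: float,
                          max_distance_mm: float = 20.0,
                          f_near_hz: float = 880.0,
                          f_far_hz: float = 220.0) -> float:
    """Map instrument-to-target distance to a tone frequency.

    Close to the target the tone is high (f_near_hz); at or beyond
    max_distance_mm it settles at a low reference tone (f_far_hz).
    """
    d = max(0.0, min(distance_mm, max_distance_mm))
    t = 1.0 - d / max_distance_mm          # 1.0 on target, 0.0 far away
    return f_far_hz + t * (f_near_hz - f_far_hz)


if __name__ == "__main__":
    for d in (0.0, 0.6, 1.4, 5.0, 20.0, 35.0):
        print(f"distance {d:5.1f} mm -> tone {distance_to_frequency(d):6.1f} Hz")
```

Whether pitch rises or falls toward the target, and whether the mapping is linear or logarithmic, is a design decision; the sonification-design references later in this section discuss how such choices affect aesthetics and interpretability.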
--- paper_title: Auditory and Visio-Temporal Distance Coding for 3-Dimensional Perception in Medical Augmented Reality paper_content: Image-guided medical interventions more frequently rely on Augmented Reality (AR) visualization to enable surgical navigation. Current systems use 2-D monitors to present the view from external cameras, which does not provide an ideal perception of the 3-D position of the region of interest. Despite this problem, most research targets the direct overlay of diagnostic imaging data, and only few studies attempt to improve the perception of occluded structures in external camera views. The focus of this paper lies on improving the 3-D perception of an augmented external camera view by combining both auditory and visual stimuli in a dynamic multi-sensory AR environment for medical applications. Our approach is based on Temporal Distance Coding (TDC) and an active surgical tool to interact with occluded virtual objects of interest in the scene in order to gain an improved perception of their 3-D location. Users performed a simulated needle biopsy by targeting virtual lesions rendered inside a patient phantom. Experimental results demonstrate that our TDC-based visualization technique significantly improves the localization accuracy, while the addition of auditory feedback results in increased intuitiveness and faster completion of the task.
--- paper_title: Image-Guided Interventions: Technology Review and Clinical Applications paper_content: Image-guided interventions are medical procedures that use computer-based systems to provide virtual image overlays to help the physician precisely visualize and target the surgical site. This field has been greatly expanded by the advances in medical imaging and computing power over the past 20 years. This review begins with a historical overview and then describes the component technologies of tracking, registration, visualization, and software. Clinical applications in neurosurgery, orthopedics, and the cardiac and thoracoabdominal areas are discussed, together with a description of an evolving technology named Natural Orifice Transluminal Endoscopic Surgery (NOTES). As the trend toward minimally invasive procedures continues, image-guided interventions will play an important role in enabling new procedures, while improving the accuracy and success of existing approaches. Despite this promise, the role of image-guided systems must be validated by clinical trials facilitated by partnerships between scie...
--- paper_title: Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery paper_content: OBJECTIVES/HYPOTHESIS ::: Image-guided surgery (IGS) systems are frequently utilized during cranial base surgery to aid in orientation and facilitate targeted surgery. We wished to assess the performance of our recently developed localized intraoperative virtual endoscopy (LIVE)-IGS prototype in a preclinical setting prior to deployment in the operating room. This system combines real-time ablative instrument tracking, critical structure proximity alerts, three-dimensional virtual endoscopic views, and intraoperative cone-beam computed tomographic image updates. ::: ::: ::: STUDY DESIGN ::: Randomized-controlled trial plus qualitative analysis. ::: ::: ::: METHODS ::: Skull base procedures were performed on 14 cadaver specimens by seven fellowship-trained skull base surgeons. Each subject performed two endoscopic transclival approaches; one with LIVE-IGS and one using a conventional IGS system in random order. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores were documented for each dissection, and a semistructured interview was recorded for qualitative assessment. ::: ::: ::: RESULTS ::: The NASA-TLX scores for mental demand, effort, and frustration were significantly reduced with the LIVE-IGS system in comparison to conventional navigation (P < .05). The system interface was judged to be intuitive and most useful when there was a combination of high spatial demand, reduced or absent surface landmarks, and proximity to critical structures. The development of auditory icons for proximity alerts during the trial better informed the surgeon while limiting distraction. ::: ::: ::: CONCLUSIONS ::: The LIVE-IGS system provided accurate, intuitive, and dynamic feedback to the operating surgeon. Further refinements to proximity alerts and visualization settings will enhance orientation while limiting distraction. The system is currently being deployed in a prospective clinical trial in skull base surgery.
--- paper_title: Auditory feedback during frameless image-guided surgery in a phantom model and initial clinical experience. paper_content: In this study the authors measured the effect of auditory feedback during image-guided surgery (IGS) in a phantom model and in a clinical setting. In the phantom setup, advanced IGS with complementary auditory feedback was compared with results obtained with 2 routine forms of IGS, either with an on-screen image display or with image injection via a microscope. The effect was measured by means of volumetric resection assessments. The authors also present their first clinical data concerning the effects of complementary auditory feedback on instrument handling during image-guided neurosurgery. When using image-injection through the microscope for navigation, however, resection quality was significantly worse.
In the clinical portion of the study, the authors performed resections of cerebral mass lesions in 6 patients with the aid of auditory feedback. Instrument tip speeds were slightly (although significantly) influenced by this feedback during resection. Overall, the participating neurosurgeons reported that the auditory feedback helped in decision-making during resection without negatively influencing instrument use. Postoperative volumetric imaging studies revealed resection rates of > or = 95% when IGS with auditory feedback was used. There was only a minor amount of brain shift, and postoperative resection volumes corresponded well with the preoperative intentions of the neurosurgeon. Although the results of phantom surgery with auditory feedback revealed no significant effect on resection quality or extent, auditory cues may help prevent damage to eloquent brain structures. --- paper_title: Less is sometimes more: a comparison of distance-control and navigated-control concepts of image-guided navigation support for surgeons. paper_content: Image-guided navigation (IGN) systems provide automation support of intra-operative information analysis and decision-making for surgeons. Previous research showed that navigated-control (NC) systems which represent high levels of decision-support and directly intervene in surgeons' workflow provide benefits with respect to patient safety and surgeons' physiological stress but also involve several cost effects (e.g. prolonged surgery duration, reduced secondary-task performance). It was hypothesised that less automated distance-control (DC) systems would provide a better solution in terms of human performance consequences. N = 18 surgeons performed a simulated mastoidectomy with NC, DC and without IGN assistance. Effects on surgical performance, physiological effort, workload and situation awareness (SA) were compared. As expected, DC technology had the same benefits as the NC system but also led to less unwanted side effects on surgery duration, subjective workload and SA. This suggests that IGN systems ... --- paper_title: A Surgical Navigation System for Guiding Exact Cochleostomy Using Auditory Feedback: A Clinical Feasibility Study paper_content: In cochlear implantation (CI), the insertion of the electrode array into the appropriate compartment of the cochlea, the scala tympani, is important for an optimal hearing outcome. The current surgical technique for CI depends primarily on the surgeon's skills and experience level to achieve the correct placement of the electrode array, and the surgeon needs to confirm that the exact placement is achieved prior to completing the procedure. Thus, a surgical navigation system can help the surgeon to access the scala tympani without injuring important organs in the complex structure of the temporal bone. However, the use of a surgical microscope has restricted the effectiveness of the surgical navigation because it has been difficult to deliver the navigational information to the surgeon from outside of the surgeon's visual attention. We herein present a clinical feasibility study of an auditory feedback function developed as a computer-surgeon interface that can guide the surgeon to the preset cochleostomy location. As a result, the surgeon could confirm that the drilling point was correct, while keeping his or her eyes focused on the microscope. The proposed interface reduced the common frustration that surgeons experience when using surgical navigation during otologic surgeries. 
--- paper_title: Warning navigation system using real-time safe region monitoring for otologic surgery paper_content: Purpose We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. --- paper_title: Auditory feedback to support image-guided medical needle placement paper_content: Purpose ::: During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician’s visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or to support visual feedback for placing the navigated medical instrument, in this case a needle. --- paper_title: Effect of sensory substitution on suture-manipulation forces for robotic surgical systems. paper_content: OBJECTIVES ::: Direct haptic (force or tactile) feedback is not yet available in commercial robotic surgical systems. Previous work by our group and others suggests that haptic feedback might significantly enhance the execution of surgical tasks requiring fine suture manipulation, specifically those encountered in cardiothoracic surgery. We studied the effects of substituting direct haptic feedback with visual and auditory cues to provide the operating surgeon with a representation of the forces he or she is applying with robotic telemanipulators. ::: ::: ::: METHODS ::: Using the robotic da Vinci surgical system (Intuitive Surgical, Inc, Sunnyvale, Calif), we compared applied forces during a standardized surgical knot-tying task under 4 different sensory-substitution scenarios: no feedback, auditory feedback, visual feedback, and combined auditory-visual feedback. ::: ::: ::: RESULTS ::: The forces applied with these sensory-substitution modes more closely approximate suture tensions achieved under ideal haptic conditions (ie, hand ties) than forces applied without such sensory feedback. The consistency of applied forces during robot-assisted suture tying aided by visual feedback or combined auditory-visual feedback sensory substitution is superior to that achieved with hand ties. Robot-assisted ties aided with auditory feedback revealed levels of consistency that were generally equivalent or superior to those attained with hand ties. Visual feedback and auditory feedback improve the consistency of robotically applied forces. ::: ::: ::: CONCLUSIONS ::: Sensory substitution, in the form of visual feedback, auditory feedback, or both, confers quantifiable advantages in applied force accuracy and consistency during the performance of a simple surgical task. --- paper_title: Surgical navigation using audio feedback. paper_content: Current medical visualization technology intended for positional guidance in surgical applications may only ever have limited utility in the operating room due to the preexisting visual requirements of surgical practice. Additionally, visual systems impose limits as a result of their high latency, poor image resolution, problems with stereopsis and physical strain upon the user. Audio technology is relatively unexamined in the broad range of available methodologies for medical devices. The potential to translate surgical instrument position into audio feedback presents a novel solution to the human factors and engineering problems faced by visual display technology because audio technology employs a rich and as yet unburdened sensory modality. 
We describe an experimental system we have developed for investigating this new interface design approach using commercially available hardware.
--- paper_title: Auditory support for navigated radiofrequency ablation paper_content: Radiofrequency ablation is applied to treat a lesion using a needle inserted into the patient, which delivers local radiofrequency energy. Guided surgical methods allow surgeons to view the placement of the needle in relation to the patient to aid in guiding the tip of the needle to the target point. Unfortunately, such methods require that surgeons remove attention from the patient in order to receive guidance information from a screen. We introduce a novel method to align and insert an ablation needle using auditory display, allowing the surgeon to retain attention on the patient. First evaluation results show that novice users can successfully guide a needle towards a target point using primarily auditory display. We hypothesize that successful auditory display will lead to increased attention on the patient and reduce unnecessary operator head and neck movements.
--- paper_title: Validation of Exposure Visualization and Audible Distance Emission for Navigated Temporal Bone Drilling in Phantoms paper_content: BACKGROUND ::: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling.
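The EVADE abstract above describes continuously computing the distance from the drill tip to segmented critical structures and raising audiovisual warnings when the drill comes too close. The following sketch shows a threshold-based alert policy in that spirit; the thresholds, beep rates, structure names, and point-based distance computation are assumptions for illustration, not values or methods from the cited system.

```python
# Sketch of a threshold-based proximity alert: the closest critical structure
# determines whether a warning is raised and how urgently it repeats.
# Distances, thresholds, and repetition rates are illustrative assumptions.

import math


def closest_structure(tip_xyz, structures):
    """Return (name, distance) of the nearest critical structure.

    `structures` maps a structure name to a list of surface points (x, y, z);
    in a real system these would come from segmented imaging data.
    """
    best_name, best_dist = None, math.inf
    for name, points in structures.items():
        for p in points:
            dist = math.dist(tip_xyz, p)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name, best_dist


def alert_policy(distance_mm: float, warn_mm: float = 3.0, critical_mm: float = 1.0):
    """Map distance to an alert level and a beep repetition rate (Hz)."""
    if distance_mm <= critical_mm:
        return "critical", 8.0
    if distance_mm <= warn_mm:
        # repetition rate rises linearly as the safety margin shrinks
        t = (warn_mm - distance_mm) / (warn_mm - critical_mm)
        return "warning", 2.0 + 4.0 * t
    return "safe", 0.0


if __name__ == "__main__":
    structures = {
        "facial_nerve": [(10.0, 2.0, 1.0), (11.0, 2.5, 1.2)],
        "sigmoid_sinus": [(4.0, 8.0, 3.0)],
    }
    for tip in [(0.0, 0.0, 0.0), (3.5, 7.0, 2.5), (9.5, 2.0, 1.0)]:
        name, dist = closest_structure(tip, structures)
        level, rate = alert_policy(dist)
        print(f"tip {tip}: nearest {name} at {dist:4.1f} mm -> {level} (beep {rate:.1f} Hz)")
```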
--- paper_title: The process of sonification design for guidance tasks paper_content: This article deals with the process of sonification design for guidance tasks. It presents several studies that aim at overcoming two major problems of sonification: the aesthetics of sound design and the lack of a general method. On the basis of these studies, it proposes some guidelines for the generalization of the sonification process. First, it introduces the need to disassociate data and display dimensions; then, it proposes a method to classify and evaluate sound strategies; finally, it introduces a method for the customization of the sound design. The whole process is based on the identification and the manipulation of particular sound morphologies.
--- paper_title: Using an auditory display to manage attention in a dual task, multiscreen environment paper_content: Spatialized sound technology is under consideration for use in future U. S. Navy watchstation systems as a technique to manage attention. In this study, we looked at whether spatialized sound would reduce head movements. The subjects used a simulated watchstation that had three displays, one forward and one on each side. A dual task paradigm was used that included a continuous tracking task in one window and an intermittent task in another window. These two windows were presented adjacent to each other in the center display or opposite each other on the side displays. Subjects performed the dual task with and without sound. Head turns were recorded manually and were found to be significantly fewer in number when sound was present. Further, when sound was present, subjects used its cessation as an indication of a successfully entered response. This aural feedback reduced head movements that would normally be made to confirm the successful data entry. Together with other results on reaction time and accuracy, these results provide persuasive support for the use of spatialized sound to direct attention.
--- paper_title: A Model for Interaction in Exploratory Sonification Displays paper_content: This paper presents a general model for sonification of large spatial data sets (e.g. seismic data, medical data) based on ideas from ecological acoustics. The model incorporates not only what we hear (the sounds), but also how we listen (the interaction). Metaphorically speaking the interpreter is walking along paths in areas of the data set, listening to locally and globally defined sound objects. The time aspects of sonification are given special attention, introducing the notion of temporalization. Some features of a preliminary Windows NT implementation are summarized.
The current surgical technique for CI depends primarily on the surgeon's skills and experience level to achieve the correct placement of the electrode array, and the surgeon needs to confirm that the exact placement is achieved prior to completing the procedure. Thus, a surgical navigation system can help the surgeon to access the scala tympani without injuring important organs in the complex structure of the temporal bone. However, the use of a surgical microscope has restricted the effectiveness of the surgical navigation because it has been difficult to deliver the navigational information to the surgeon from outside of the surgeon's visual attention. We herein present a clinical feasibility study of an auditory feedback function developed as a computer-surgeon interface that can guide the surgeon to the preset cochleostomy location. As a result, the surgeon could confirm that the drilling point was correct, while keeping his or her eyes focused on the microscope. The proposed interface reduced the common frustration that surgeons experience when using surgical navigation during otologic surgeries. --- paper_title: Warning navigation system using real-time safe region monitoring for otologic surgery paper_content: Purpose We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. --- paper_title: Effect of sensory substitution on suture-manipulation forces for robotic surgical systems. paper_content: OBJECTIVES ::: Direct haptic (force or tactile) feedback is not yet available in commercial robotic surgical systems. Previous work by our group and others suggests that haptic feedback might significantly enhance the execution of surgical tasks requiring fine suture manipulation, specifically those encountered in cardiothoracic surgery. We studied the effects of substituting direct haptic feedback with visual and auditory cues to provide the operating surgeon with a representation of the forces he or she is applying with robotic telemanipulators. ::: ::: ::: METHODS ::: Using the robotic da Vinci surgical system (Intuitive Surgical, Inc, Sunnyvale, Calif), we compared applied forces during a standardized surgical knot-tying task under 4 different sensory-substitution scenarios: no feedback, auditory feedback, visual feedback, and combined auditory-visual feedback. ::: ::: ::: RESULTS ::: The forces applied with these sensory-substitution modes more closely approximate suture tensions achieved under ideal haptic conditions (ie, hand ties) than forces applied without such sensory feedback. The consistency of applied forces during robot-assisted suture tying aided by visual feedback or combined auditory-visual feedback sensory substitution is superior to that achieved with hand ties. Robot-assisted ties aided with auditory feedback revealed levels of consistency that were generally equivalent or superior to those attained with hand ties. Visual feedback and auditory feedback improve the consistency of robotically applied forces. ::: ::: ::: CONCLUSIONS ::: Sensory substitution, in the form of visual feedback, auditory feedback, or both, confers quantifiable advantages in applied force accuracy and consistency during the performance of a simple surgical task. 
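Editorial sketch (not from the cited study): the sensory-substitution entry above reports that applied suture forces were presented to the operating surgeon as auditory and/or visual cues, but the abstract does not specify the mapping. A plausible three-zone coding, with hypothetical thresholds and names, could look like this:

from dataclasses import dataclass

@dataclass
class ForceCue:
    zone: str       # "low", "ideal" or "excessive"
    tone_hz: float  # auditory cue: one pitch per zone
    colour: str     # visual cue: one colour per zone

# Hypothetical ideal suture-tension band (newtons); a real system would be
# calibrated against hand-tie tension for the suture material in use.
IDEAL_MIN_N = 0.6
IDEAL_MAX_N = 1.2

def force_to_cue(applied_force_n: float) -> ForceCue:
    # Discretise the measured force into a feedback cue the surgeon can
    # perceive without direct haptic feedback from the telemanipulator.
    if applied_force_n < IDEAL_MIN_N:
        return ForceCue("low", tone_hz=300.0, colour="yellow")
    if applied_force_n <= IDEAL_MAX_N:
        return ForceCue("ideal", tone_hz=600.0, colour="green")
    return ForceCue("excessive", tone_hz=1200.0, colour="red")

if __name__ == "__main__":
    for f in (0.3, 0.9, 1.8):
        print(f, force_to_cue(f))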
--- paper_title: Validation of Exposure Visualization and Audible Distance Emission for Navigated Temporal Bone Drilling in Phantoms paper_content: BACKGROUND ::: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling. --- paper_title: Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery paper_content: OBJECTIVES/HYPOTHESIS ::: Image-guided surgery (IGS) systems are frequently utilized during cranial base surgery to aid in orientation and facilitate targeted surgery. We wished to assess the performance of our recently developed localized intraoperative virtual endoscopy (LIVE)-IGS prototype in a preclinical setting prior to deployment in the operating room. This system combines real-time ablative instrument tracking, critical structure proximity alerts, three-dimensional virtual endoscopic views, and intraoperative cone-beam computed tomographic image updates. ::: ::: ::: STUDY DESIGN ::: Randomized-controlled trial plus qualitative analysis. ::: ::: ::: METHODS ::: Skull base procedures were performed on 14 cadaver specimens by seven fellowship-trained skull base surgeons. Each subject performed two endoscopic transclival approaches; one with LIVE-IGS and one using a conventional IGS system in random order. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores were documented for each dissection, and a semistructured interview was recorded for qualitative assessment. ::: ::: ::: RESULTS ::: The NASA-TLX scores for mental demand, effort, and frustration were significantly reduced with the LIVE-IGS system in comparison to conventional navigation (P < .05). The system interface was judged to be intuitive and most useful when there was a combination of high spatial demand, reduced or absent surface landmarks, and proximity to critical structures. 
The development of auditory icons for proximity alerts during the trial better informed the surgeon while limiting distraction. ::: ::: ::: CONCLUSIONS ::: The LIVE-IGS system provided accurate, intuitive, and dynamic feedback to the operating surgeon. Further refinements to proximity alerts and visualization settings will enhance orientation while limiting distraction. The system is currently being deployed in a prospective clinical trial in skull base surgery. --- paper_title: Auditory support for resection guidance in navigated liver surgery paper_content: Background ::: ::: An alternative mode of interaction with navigation systems for open liver surgery was requested. Surgeons who use such systems are impeded by having to constantly switch between viewing the navigation system screen and the patient during an operation. ::: ::: ::: ::: Methods ::: ::: To this end, an auditory display system for open liver surgery is introduced with support for guiding the tracked instrument towards and remaining on a predefined resection line. To evaluate the method, a clinically orientated user study with 12 surgeons was conducted. ::: ::: ::: ::: Results ::: ::: It is shown in qualitative results from the user study that the proposed auditory display is recognized as a useful addition to the current visual mode of interaction. It was revealed in a statistical analysis that participants spent less time looking on the screen (10% vs. 96%). Accuracy for resection guidance was significantly improved when using auditory display as an additional information channel (0.6 vs. 1.4 mm); however, the overall time for the resection task was shorter without auditory display (47 vs. 24 s). ::: ::: ::: ::: Conclusions ::: ::: By reducing dependence on the visual modality during resection guidance, the auditory display is well suited to become integrated in navigation systems for liver surgery. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Auditory feedback during frameless image-guided surgery in a phantom model and initial clinical experience. paper_content: In this study the authors measured the effect of auditory feedback during image-guided surgery (IGS) in a phantom model and in a clinical setting. In the phantom setup, advanced IGS with complementary auditory feedback was compared with results obtained with 2 routine forms of IGS, either with an on-screen image display or with image injection via a microscope. The effect was measured by means of volumetric resection assessments. The authors also present their first clinical data concerning the effects of complementary auditory feedback on instrument handling during image-guided neurosurgery. When using image-injection through the microscope for navigation, however, resection quality was significantly worse. In the clinical portion of the study, the authors performed resections of cerebral mass lesions in 6 patients with the aid of auditory feedback. Instrument tip speeds were slightly (although significantly) influenced by this feedback during resection. Overall, the participating neurosurgeons reported that the auditory feedback helped in decision-making during resection without negatively influencing instrument use. Postoperative volumetric imaging studies revealed resection rates of > or = 95% when IGS with auditory feedback was used. There was only a minor amount of brain shift, and postoperative resection volumes corresponded well with the preoperative intentions of the neurosurgeon. 
Although the results of phantom surgery with auditory feedback revealed no significant effect on resection quality or extent, auditory cues may help prevent damage to eloquent brain structures. --- paper_title: Touchless interaction with software in interventional radiology and surgery: a systematic literature review paper_content: PURPOSE ::: In this article, we systematically examine the current state of research of systems that focus on touchless human-computer interaction in operating rooms and interventional radiology suites. We further discuss the drawbacks of current solutions and underline promising technologies for future development. ::: ::: ::: METHODS ::: A systematic literature search of scientific papers that deal with touchless control of medical software in the immediate environment of the operation room and interventional radiology suite was performed. This includes methods for touchless gesture interaction, voice control and eye tracking. ::: ::: ::: RESULTS ::: Fifty-five research papers were identified and analyzed in detail including 33 journal publications. Most of the identified literature (62 %) deals with the control of medical image viewers. The others present interaction techniques for laparoscopic assistance (13 %), telerobotic assistance and operating room control (9 % each) as well as for robotic operating room assistance and intraoperative registration (3.5 % each). Only 8 systems (14.5 %) were tested in a real clinical environment, and 7 (12.7 %) were not evaluated at all. ::: ::: ::: CONCLUSION ::: In the last 10 years, many advancements have led to robust touchless interaction approaches. However, only a few have been systematically evaluated in real operating room settings. Further research is required to cope with current limitations of touchless software interfaces in clinical environments. The main challenges for future research are the improvement and evaluation of usability and intuitiveness of touchless human-computer interaction and the full integration into productive systems as well as the reduction of necessary interaction steps and further development of hands-free interaction. --- paper_title: Auditory feedback to support image-guided medical needle placement paper_content: Purpose ::: During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician’s visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or to support visual feedback for placing the navigated medical instrument, in this case a needle. --- paper_title: Surgical navigation using audio feedback. paper_content: Current medical visualization technology intended for positional guidance in surgical applications may only ever have limited utility in the operating room due to the preexisting visual requirements of surgical practice. Additionally, visual systems impose limits as a result of their high latency, poor image resolution, problems with stereopsis and physical strain upon the user. Audio technology is relatively unexamined in the broad range of available methodologies for medical devices. The potential to translate surgical instrument position into audio feedback presents a novel solution to the human factors and engineering problems faced by visual display technology because audio technology employs a rich and as yet unburdened sensory modality. 
We describe an experimental system we have developed for investigating this new interface design approach using commercially available hardware. --- paper_title: Auditory and Visio-Temporal Distance Coding for 3-Dimensional Perception in Medical Augmented Reality paper_content: Image-guided medical interventions more frequently rely on Augmented Reality (AR) visualization to enable surgical navigation. Current systems use 2-D monitors to present the view from external cameras, which does not provide an ideal perception of the 3-D position of the region of interest. Despite this problem, most research targets the direct overlay of diagnostic imaging data, and only few studies attempt to improve the perception of occluded structures in external camera views. The focus of this paper lies on improving the 3-D perception of an augmented external camera view by combining both auditory and visual stimuli in a dynamic multi-sensory AR environment for medical applications. Our approach is based on Temporal Distance Coding (TDC) and an active surgical tool to interact with occluded virtual objects of interest in the scene in order to gain an improved perception of their 3-D location. Users performed a simulated needle biopsy by targeting virtual lesions rendered inside a patient phantom. Experimental results demonstrate that our TDC-based visualization technique significantly improves the localization accuracy, while the addition of auditory feedback results in increased intuitiveness and faster completion of the task. --- paper_title: Auditory support for navigated radiofrequency ablation paper_content: Radiofrequency ablation is applied to treat a lesion using a needle inserted into the patient, which delivers local radiofrequency energy. Guided surgical methods allow surgeons to view the placement of the needle in relation to the patient to aid in guiding the tip of the needle to the target point. Unfortunately, such methods require that surgeons remove attention from the patient in order to receive guidance information from a screen. We introduce a novel method to align and insert an ablation needle using auditory display, allowing the surgeon to retain attention on the patient. First evaluation results show that novice users can successfully guide a needle towards a target point using primarily auditory display. We hypothesize that successful auditory display will lead to increased attention on the patient and reduce unnecessary operator head and neck movements. --- paper_title: Augmented-reality visualizations guided by cognition: perceptual heuristics for combining visible and obscured information paper_content: One unique feature of mixed and augmented reality (MR/AR) systems is that hidden and occluded objects can be readily visualized. We call this specialized use of MR/AR, obscured information visualization (OIV). In this paper, we describe the beginning of a research program designed to develop such visualizations through the use of principles derived from perceptual psychology and cognitive science. In this paper we surveyed the cognitive science literature as it applies to such visualization tasks, described experimental questions derived from these cognitive principles, and generated general guidelines that can be used in designing future OIV systems (as well as improving AR displays more generally).
We also report the results from an experiment that utilized a functioning AR-OIV system: we found that in relative depth judgment, subjects reported rendered objects as being in front of real-world objects, except when additional occlusion and motion cues were presented together. --- paper_title: Validation of Exposure Visualization and Audible Distance Emission for Navigated Temporal Bone Drilling in Phantoms paper_content: BACKGROUND ::: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling. --- paper_title: Surgical navigation using audio feedback. paper_content: Current medical visualization technology intended for positional guidance in surgical applications may only ever have limited utility in the operating room due to the preexisting visual requirements of surgical practice. Additionally, visual systems impose limits as a result of their high latency, poor image resolution, problems with stereopsis and physical strain upon the user. Audio technology is relatively unexamined in the broad range of available methodologies for medical devices. The potential to translate surgical instrument position into audio feedback presents a novel solution to the human factors and engineering problems faced by visual display technology because audio technology employs a rich and as yet unburdened sensory modality. We describe an experimental system we have developed for investigating this new interface design approach using commercially available hardware. --- paper_title: Auditory support for navigated radiofrequency ablation paper_content: Radiofrequency ablation is applied to treat a lesion using a needle inserted into the patient, which delivers local radiofrequency energy. Guided surgical methods allow surgeons to view the placement of the needle in relation to the patient to aid in guiding the tip of the needle to the target point. 
Unfortunately, such methods require that surgeons remove attention from the patient in order to receive guidance information from a screen. We introduce a novel method to align and insert an ablation needle using auditory display, allowing the surgeon to retain attention on the patient. First evaluation results show that novice users can successfully guide a needle towards a target point using primarily auditory display. We hypothesize that successful auditory display will lead to increased attention on the patient and reduce unnecessary operator head and neck movements. --- paper_title: Auditory support for resection guidance in navigated liver surgery paper_content: Background ::: ::: An alternative mode of interaction with navigation systems for open liver surgery was requested. Surgeons who use such systems are impeded by having to constantly switch between viewing the navigation system screen and the patient during an operation. ::: ::: ::: ::: Methods ::: ::: To this end, an auditory display system for open liver surgery is introduced with support for guiding the tracked instrument towards and remaining on a predefined resection line. To evaluate the method, a clinically orientated user study with 12 surgeons was conducted. ::: ::: ::: ::: Results ::: ::: It is shown in qualitative results from the user study that the proposed auditory display is recognized as a useful addition to the current visual mode of interaction. It was revealed in a statistical analysis that participants spent less time looking on the screen (10% vs. 96%). Accuracy for resection guidance was significantly improved when using auditory display as an additional information channel (0.6 vs. 1.4 mm); however, the overall time for the resection task was shorter without auditory display (47 vs. 24 s). ::: ::: ::: ::: Conclusions ::: ::: By reducing dependence on the visual modality during resection guidance, the auditory display is well suited to become integrated in navigation systems for liver surgery. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery paper_content: OBJECTIVES/HYPOTHESIS ::: Image-guided surgery (IGS) systems are frequently utilized during cranial base surgery to aid in orientation and facilitate targeted surgery. We wished to assess the performance of our recently developed localized intraoperative virtual endoscopy (LIVE)-IGS prototype in a preclinical setting prior to deployment in the operating room. This system combines real-time ablative instrument tracking, critical structure proximity alerts, three-dimensional virtual endoscopic views, and intraoperative cone-beam computed tomographic image updates. ::: ::: ::: STUDY DESIGN ::: Randomized-controlled trial plus qualitative analysis. ::: ::: ::: METHODS ::: Skull base procedures were performed on 14 cadaver specimens by seven fellowship-trained skull base surgeons. Each subject performed two endoscopic transclival approaches; one with LIVE-IGS and one using a conventional IGS system in random order. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores were documented for each dissection, and a semistructured interview was recorded for qualitative assessment. ::: ::: ::: RESULTS ::: The NASA-TLX scores for mental demand, effort, and frustration were significantly reduced with the LIVE-IGS system in comparison to conventional navigation (P < .05). 
The system interface was judged to be intuitive and most useful when there was a combination of high spatial demand, reduced or absent surface landmarks, and proximity to critical structures. The development of auditory icons for proximity alerts during the trial better informed the surgeon while limiting distraction. ::: ::: ::: CONCLUSIONS ::: The LIVE-IGS system provided accurate, intuitive, and dynamic feedback to the operating surgeon. Further refinements to proximity alerts and visualization settings will enhance orientation while limiting distraction. The system is currently being deployed in a prospective clinical trial in skull base surgery. --- paper_title: Auditory feedback during frameless image-guided surgery in a phantom model and initial clinical experience. paper_content: In this study the authors measured the effect of auditory feedback during image-guided surgery (IGS) in a phantom model and in a clinical setting. In the phantom setup, advanced IGS with complementary auditory feedback was compared with results obtained with 2 routine forms of IGS, either with an on-screen image display or with image injection via a microscope. The effect was measured by means of volumetric resection assessments. The authors also present their first clinical data concerning the effects of complementary auditory feedback on instrument handling during image-guided neurosurgery. When using image-injection through the microscope for navigation, however, resection quality was significantly worse. In the clinical portion of the study, the authors performed resections of cerebral mass lesions in 6 patients with the aid of auditory feedback. Instrument tip speeds were slightly (although significantly) influenced by this feedback during resection. Overall, the participating neurosurgeons reported that the auditory feedback helped in decision-making during resection without negatively influencing instrument use. Postoperative volumetric imaging studies revealed resection rates of > or = 95% when IGS with auditory feedback was used. There was only a minor amount of brain shift, and postoperative resection volumes corresponded well with the preoperative intentions of the neurosurgeon. Although the results of phantom surgery with auditory feedback revealed no significant effect on resection quality or extent, auditory cues may help prevent damage to eloquent brain structures. --- paper_title: NASA-Task Load Index (NASA-TLX); 20 years later paper_content: NASA-TLX is a multi-dimensional scale designed to obtain workload estimates from one or more operators while they are performing a task or immediately afterwards. The years of research that precede... --- paper_title: Less is sometimes more: a comparison of distance-control and navigated-control concepts of image-guided navigation support for surgeons. paper_content: Image-guided navigation (IGN) systems provide automation support of intra-operative information analysis and decision-making for surgeons. Previous research showed that navigated-control (NC) systems which represent high levels of decision-support and directly intervene in surgeons' workflow provide benefits with respect to patient safety and surgeons' physiological stress but also involve several cost effects (e.g. prolonged surgery duration, reduced secondary-task performance). It was hypothesised that less automated distance-control (DC) systems would provide a better solution in terms of human performance consequences. 
N = 18 surgeons performed a simulated mastoidectomy with NC, DC and without IGN assistance. Effects on surgical performance, physiological effort, workload and situation awareness (SA) were compared. As expected, DC technology had the same benefits as the NC system but also led to less unwanted side effects on surgery duration, subjective workload and SA. This suggests that IGN systems ... --- paper_title: Warning navigation system using real-time safe region monitoring for otologic surgery paper_content: Purpose We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. --- paper_title: Auditory feedback to support image-guided medical needle placement paper_content: Purpose ::: During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician’s visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or to support visual feedback for placing the navigated medical instrument, in this case a needle. --- paper_title: Effect of sensory substitution on suture-manipulation forces for robotic surgical systems. paper_content: OBJECTIVES ::: Direct haptic (force or tactile) feedback is not yet available in commercial robotic surgical systems. Previous work by our group and others suggests that haptic feedback might significantly enhance the execution of surgical tasks requiring fine suture manipulation, specifically those encountered in cardiothoracic surgery. We studied the effects of substituting direct haptic feedback with visual and auditory cues to provide the operating surgeon with a representation of the forces he or she is applying with robotic telemanipulators. ::: ::: ::: METHODS ::: Using the robotic da Vinci surgical system (Intuitive Surgical, Inc, Sunnyvale, Calif), we compared applied forces during a standardized surgical knot-tying task under 4 different sensory-substitution scenarios: no feedback, auditory feedback, visual feedback, and combined auditory-visual feedback. ::: ::: ::: RESULTS ::: The forces applied with these sensory-substitution modes more closely approximate suture tensions achieved under ideal haptic conditions (ie, hand ties) than forces applied without such sensory feedback. The consistency of applied forces during robot-assisted suture tying aided by visual feedback or combined auditory-visual feedback sensory substitution is superior to that achieved with hand ties. Robot-assisted ties aided with auditory feedback revealed levels of consistency that were generally equivalent or superior to those attained with hand ties. Visual feedback and auditory feedback improve the consistency of robotically applied forces. ::: ::: ::: CONCLUSIONS ::: Sensory substitution, in the form of visual feedback, auditory feedback, or both, confers quantifiable advantages in applied force accuracy and consistency during the performance of a simple surgical task. --- paper_title: Auditory and Visio-Temporal Distance Coding for 3-Dimensional Perception in Medical Augmented Reality paper_content: Image-guided medical interventions more frequently rely on Augmented Reality (AR) visualization to enable surgical navigation. Current systems use 2-D monitors to present the view from external cameras, which does not provide an ideal perception of the 3-D position of the region of interest. 
Despite this problem, most research targets the direct overlay of diagnostic imaging data, and only few studies attempt to improve the perception of occluded structures in external camera views. The focus of this paper lies on improving the 3-D perception of an augmented external camera view by combining both auditory and visual stimuli in a dynamic multi-sensory AR environment for medical applications. Our approach is based on Temporal Distance Coding (TDC) and an active surgical tool to interact with occluded virtual objects of interest in the scene in order to gain an improved perception of their 3-D location. Users performed a simulated needle biopsy by targeting virtual lesions rendered inside a patient phantom. Experimental results demonstrate that our TDC-based visualization technique significantly improves the localization accuracy, while the addition of auditory feedback results in increased intuitiveness and faster completion of the task. --- paper_title: Validation of Exposure Visualization and Audible Distance Emission for Navigated Temporal Bone Drilling in Phantoms paper_content: BACKGROUND ::: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. ::: ::: ::: METHODOLOGY/PRINCIPAL FINDINGS ::: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. ::: ::: ::: CONCLUSIONS/SIGNIFICANCE ::: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling. --- paper_title: Auditory feedback during frameless image-guided surgery in a phantom model and initial clinical experience. paper_content: In this study the authors measured the effect of auditory feedback during image-guided surgery (IGS) in a phantom model and in a clinical setting. In the phantom setup, advanced IGS with complementary auditory feedback was compared with results obtained with 2 routine forms of IGS, either with an on-screen image display or with image injection via a microscope. The effect was measured by means of volumetric resection assessments. 
The authors also present their first clinical data concerning the effects of complementary auditory feedback on instrument handling during image-guided neurosurgery. When using image-injection through the microscope for navigation, however, resection quality was significantly worse. In the clinical portion of the study, the authors performed resections of cerebral mass lesions in 6 patients with the aid of auditory feedback. Instrument tip speeds were slightly (although significantly) influenced by this feedback during resection. Overall, the participating neurosurgeons reported that the auditory feedback helped in decision-making during resection without negatively influencing instrument use. Postoperative volumetric imaging studies revealed resection rates of > or = 95% when IGS with auditory feedback was used. There was only a minor amount of brain shift, and postoperative resection volumes corresponded well with the preoperative intentions of the neurosurgeon. Although the results of phantom surgery with auditory feedback revealed no significant effect on resection quality or extent, auditory cues may help prevent damage to eloquent brain structures. --- paper_title: A Surgical Navigation System for Guiding Exact Cochleostomy Using Auditory Feedback: A Clinical Feasibility Study paper_content: In cochlear implantation (CI), the insertion of the electrode array into the appropriate compartment of the cochlea, the scala tympani, is important for an optimal hearing outcome. The current surgical technique for CI depends primarily on the surgeon's skills and experience level to achieve the correct placement of the electrode array, and the surgeon needs to confirm that the exact placement is achieved prior to completing the procedure. Thus, a surgical navigation system can help the surgeon to access the scala tympani without injuring important organs in the complex structure of the temporal bone. However, the use of a surgical microscope has restricted the effectiveness of the surgical navigation because it has been difficult to deliver the navigational information to the surgeon from outside of the surgeon's visual attention. We herein present a clinical feasibility study of an auditory feedback function developed as a computer-surgeon interface that can guide the surgeon to the preset cochleostomy location. As a result, the surgeon could confirm that the drilling point was correct, while keeping his or her eyes focused on the microscope. The proposed interface reduced the common frustration that surgeons experience when using surgical navigation during otologic surgeries. --- paper_title: Warning navigation system using real-time safe region monitoring for otologic surgery paper_content: Purpose We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. --- paper_title: Auditory support for resection guidance in navigated liver surgery paper_content: Background ::: ::: An alternative mode of interaction with navigation systems for open liver surgery was requested. Surgeons who use such systems are impeded by having to constantly switch between viewing the navigation system screen and the patient during an operation. ::: ::: ::: ::: Methods ::: ::: To this end, an auditory display system for open liver surgery is introduced with support for guiding the tracked instrument towards and remaining on a predefined resection line. 
To evaluate the method, a clinically orientated user study with 12 surgeons was conducted. ::: ::: ::: ::: Results ::: ::: It is shown in qualitative results from the user study that the proposed auditory display is recognized as a useful addition to the current visual mode of interaction. It was revealed in a statistical analysis that participants spent less time looking on the screen (10% vs. 96%). Accuracy for resection guidance was significantly improved when using auditory display as an additional information channel (0.6 vs. 1.4 mm); however, the overall time for the resection task was shorter without auditory display (47 vs. 24 s). ::: ::: ::: ::: Conclusions ::: ::: By reducing dependence on the visual modality during resection guidance, the auditory display is well suited to become integrated in navigation systems for liver surgery. Copyright © 2012 John Wiley & Sons, Ltd. --- paper_title: Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery paper_content: OBJECTIVES/HYPOTHESIS ::: Image-guided surgery (IGS) systems are frequently utilized during cranial base surgery to aid in orientation and facilitate targeted surgery. We wished to assess the performance of our recently developed localized intraoperative virtual endoscopy (LIVE)-IGS prototype in a preclinical setting prior to deployment in the operating room. This system combines real-time ablative instrument tracking, critical structure proximity alerts, three-dimensional virtual endoscopic views, and intraoperative cone-beam computed tomographic image updates. ::: ::: ::: STUDY DESIGN ::: Randomized-controlled trial plus qualitative analysis. ::: ::: ::: METHODS ::: Skull base procedures were performed on 14 cadaver specimens by seven fellowship-trained skull base surgeons. Each subject performed two endoscopic transclival approaches; one with LIVE-IGS and one using a conventional IGS system in random order. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores were documented for each dissection, and a semistructured interview was recorded for qualitative assessment. ::: ::: ::: RESULTS ::: The NASA-TLX scores for mental demand, effort, and frustration were significantly reduced with the LIVE-IGS system in comparison to conventional navigation (P < .05). The system interface was judged to be intuitive and most useful when there was a combination of high spatial demand, reduced or absent surface landmarks, and proximity to critical structures. The development of auditory icons for proximity alerts during the trial better informed the surgeon while limiting distraction. ::: ::: ::: CONCLUSIONS ::: The LIVE-IGS system provided accurate, intuitive, and dynamic feedback to the operating surgeon. Further refinements to proximity alerts and visualization settings will enhance orientation while limiting distraction. The system is currently being deployed in a prospective clinical trial in skull base surgery. --- paper_title: A Telerobotic System for Transnasal Surgery paper_content: Mechanics-based models of concentric tube continuum robots have recently achieved a level of sophistication that makes it possible to begin to apply these robots to a variety of real-world clinical scenarios. Endonasal skull base surgery is one such application, where their small diameter and tentacle like dexterity are particularly advantageous. 
In this paper we provide the medical motivation for an endonasal surgical robot featuring concentric tube manipulators, and describe our model-based design and teleoperation methods, as well as a complete system incorporating image-guidance. Experimental demonstrations using a laparoscopic training task, a cadaver reachability study, and a phantom tumor resection experiment illustrate that both novice and expert users can effectively teleoperate the system, and that skull base surgeons can use the robot to achieve their objectives in a realistic surgical scenario. --- paper_title: The perceived urgency of auditory warning alarms used in the hospital operating room is inappropriate paper_content: PURPOSE ::: To examine the perceived urgency of 13 auditory warning alarms commonly occurring in the hospital operating room. ::: ::: ::: METHODS ::: Undergraduate students, who were naïve with respect to the clinical situation associated with the alarms, judged perceived urgency of each alarm on a ten-point scale. ::: ::: ::: RESULTS ::: The perceived urgency of the alarms was not consistent with the actual urgency of the clinical situation that triggers it. In addition, those alarms indicating patient condition were generally perceived as less urgent than those alarms indicating the operation of equipment. Of particular interest were three sets of alarms designed by equipment manufacturers to indicate specific priorities for action. Listeners did not perceive any differences in the urgency of the 'information only', 'medium' and 'high' priority alarms of two of the monitors with all judged as low to moderate in urgency. In contrast, the high priority alarm of the third monitor was judged as significantly more urgent than its low and medium urgency counterparts. ::: ::: ::: CONCLUSION ::: The alarms currently in use do not convey the intended sense of urgency to naïve listeners, and this holds even for two sets of alarms designed specifically by manufacturers to convey different levels of urgency. --- paper_title: Intraoperative image-guided navigation system: development and applicability in 65 patients undergoing liver surgery paper_content: BACKGROUND ::: Image-guided systems have recently been introduced for their application in liver surgery. We aimed to identify and propose suitable indications for image-guided navigation systems in the domain of open oncologic liver surgery and, more specifically, in the setting of liver resection with and without microwave ablation. ::: ::: ::: METHOD ::: Retrospective analysis was conducted in patients undergoing liver resection with and without microwave ablation using an intraoperative image-guided stereotactic system during three stages of technological development (accuracy: 8.4 ± 4.4 mm in phase I and 8.4 ± 6.5 mm in phase II versus 4.5 ± 3.6 mm in phase III). It was evaluated, in which indications image-guided surgery was used according to the different stages of technical development. ::: ::: ::: RESULTS ::: Between 2009 and 2013, 65 patients underwent image-guided surgical treatment, resection alone (n = 38), ablation alone (n = 11), or a combination thereof (n = 16). With increasing accuracy of the system, image guidance was progressively used for atypical resections and combined microwave ablation and resection instead of formal liver resection (p < 0.0001). ::: ::: ::: CONCLUSION ::: Clinical application of image guidance is feasible, while its efficacy is subject to accuracy. 
The concept of image guidance has been shown to be increasingly efficient for selected indications in liver surgery. While accuracy of available technology is increasing pertaining to technological advancements, more and more previously untreatable scenarios such as multiple small, bilobar lesions and so-called vanishing lesions come within reach. --- paper_title: Localized Intraoperative Virtual Endoscopy (LIVE) for Surgical Guidance in 16 Skull Base Patients paper_content: IMPORTANCE ::: Previous preclinical studies of localized intraoperative virtual endoscopy-image-guided surgery (LIVE-IGS) for skull base surgery suggest a potential clinical benefit. ::: ::: ::: OBJECTIVE ::: The first aim was to evaluate the registration accuracy of virtual endoscopy based on high-resolution magnetic resonance imaging under clinical conditions. The second aim was to implement and assess real-time proximity alerts for critical structures during skull base drilling. ::: ::: ::: DESIGN AND SETTING ::: Patients consecutively referred for sinus and skull base surgery were enrolled in this prospective case series. ::: ::: ::: PARTICIPANTS ::: Five patients were used to check registration accuracy and feasibility with the subsequent 11 patients being treated under LIVE-IGS conditions with presentation to the operating surgeon (phase 2). ::: ::: ::: INTERVENTION ::: Sixteen skull base patients were endoscopically operated on by using image-based navigation while LIVE-IGS was tested in a clinical setting. ::: ::: ::: MAIN OUTCOME AND MEASURES ::: Workload was quantitatively assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. ::: ::: ::: RESULTS ::: Real-time localization of the surgical drill was accurate to ~1 to 2 mm in all cases. The use of 3-mm proximity alert zones around the carotid arteries and optic nerve found regular clinical use, as the median minimum distance between the tracked drill and these structures was 1 mm (0.2-3.1 mm) and 0.6 mm (0.2-2.5 mm), respectively. No statistical differences were found in the NASA-TLX indicators for this experienced surgical cohort. ::: ::: ::: CONCLUSIONS AND RELEVANCE ::: Real-time proximity alerts with virtual endoscopic guidance was sufficiently accurate under clinical conditions. Further clinical evaluation is required to evaluate the potential surgical benefits, particularly for less experienced surgeons or for teaching purposes. --- paper_title: Influence of music on operation theatre staff. paper_content: Background and Objective: The purpose of the study was to evaluate the perception of influence of music among surgeons, anesthesiologist and nurses in our hospital as well as to critically evaluate whether music can be used as an aid in improving the work efficiency of medical personnel in the operation theatre (OT). Materials and Methods: A prospective, questionnaire-based cross-sectional study was conducted. A total of 100 randomly selected subjects were interviewed, which included 44 surgeons, 25 anesthesiologists and 31 nurses. Statistical package for social sciences (SPSS) Windows Version 16 software was used for statistical evaluation. Results: Most of the OT medical personnel were found to be aware of the beneficial effects of music, with 87% consenting to the playing of music in the OT. It was also found that most participants agreed to have heard music on a regular basis in the OT, while 17% had heard it whenever they have been to the OT. 
Conclusions: The majority of respondents preferred playing music in the OT, which helped them relax. It improved the listeners' cognitive function, created a sense of well-being, and elevated their mood. Music helped reduce the autonomic reactivity of theatre personnel during stressful surgeries, allowing them to approach these surgeries in a more thoughtful and relaxed manner. The qualitative, objective and comprehensive effects of specific music types varied between individuals. Music can aid in improving the work efficiency of medical personnel in the OT. The study reinforced that the beneficial effects of playing music in the OT outweigh its deleterious outcomes. --- paper_title: Comparison and Evaluation of Sonification Strategies for Guidance Tasks paper_content: This article aims to reveal the efficiency of sonification strategies in terms of rapidity, precision and overshooting in the case of a one-dimensional guidance task. The sonification strategies are based on the four main perceptual attributes of a sound (i.e. pitch, loudness, duration/tempo and timbre) and classified with respect to the presence or not of one or several auditory references. Perceptual evaluations are used to display the strategies in a precision/rapidity space and enable prediction of user behavior for a chosen sonification strategy. The evaluation of sonification strategies constitutes a first step toward general guidelines for sound design in interactive multimedia systems that involve guidance issues. --- paper_title: The process of sonification design for guidance tasks paper_content: This article deals with the process of sonification design for guidance tasks. It presents several studies that aim at overcoming two major problems of sonification: the aesthetics of sound design and the lack of a general method. On the basis of these studies, it proposes some guidelines for the generalization of the sonification process. First, it introduces the need to disassociate data and display dimensions; then, it proposes a method to classify and evaluate sound strategies; finally, it introduces a method for the customization of the sound design. The whole process is based on the identification and the manipulation of particular sound morphologies. --- paper_title: Warning navigation system using real-time safe region monitoring for otologic surgery paper_content: Purpose We developed a surgical navigation system that warns the surgeon with auditory and visual feedback to protect the facial nerve with real-time monitoring of the safe region during drilling. --- paper_title: An overview of systems for CT- and MRI-guided percutaneous needle placement in the thorax and abdomen paper_content: BACKGROUND ::: Minimally invasive biopsies, drainages and therapies in the soft tissue organs of the thorax and abdomen are typically performed through a needle, which is inserted percutaneously to reach the target area. The conventional workflow for needle placement employs an iterative freehand technique. This article provides an overview of needle-placement systems developed to improve this method. ::: ::: ::: METHODS ::: An overview of systems for needle placement was assembled, including those found in scientific publications and patents, as well as those that are commercially available. The systems are categorized by function and tabulated. ::: ::: ::: RESULTS ::: Over 40 systems were identified, ranging from simple passive aids to fully actuated robots.
::: ::: ::: CONCLUSIONS ::: The overview shows a wide variety of developed systems with growing complexity. However, given that only a few systems have reached commercial availability, it is clear that the technical community is struggling to develop solutions that are adopted clinically. Copyright © 2014 John Wiley & Sons, Ltd. --- paper_title: Improving Auditory Warning Design: Relationship between Warning Sound Parameters and Perceived Urgency paper_content: This paper presents an experimental study of the effects of individual sound parameters on perceived (psychoacoustic) urgency. Experimental Series 1 showed that fundamental frequency, harmonic series, amplitude envelope shape, and delayed harmonics all have clear and consistent effects on perceived urgency. Experimental Series 2 showed that temporal and melodic parameters such as speed, rhythm, pitch range, and melodic structure also have clear and consistent effects on perceived urgency. The final experiment tested a set of 13 auditory warnings generated by an application of the earlier experimental findings. The urgency rank ordering of this warning set was predicted, and the correlation between the predicted and the obtained order was highly significant. The results of these experiments have a widespread application in the improvement of existing auditory warning systems and the design of new systems, where the psychoacoustic and psychological appropriateness of warnings could be enhanced. --- paper_title: Auditory feedback to support image-guided medical needle placement paper_content: Purpose ::: During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician’s visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or to support visual feedback for placing the navigated medical instrument, in this case a needle. --- paper_title: Objective evaluation of the effect of noise on the performance of a complex laparoscopic task. paper_content: BACKGROUND ::: Noise in operating rooms has been found to be much higher than the recommended level of 45 dB. The aim of this study was to objectively evaluate the effect of noise and music on the performance of a complex surgical task. ::: ::: ::: METHODS ::: Twelve surgeons with varying experience in laparoscopic suturing undertook 3 sutures in a laparoscopic trainer under 3 conditions: quiet, noise at 80 to 85 dB, and music. Other than the test conditions, all other conditions were standardized. A validated motion analysis system was used to assess performance. The tasks were recorded by video and played back to 2 blinded observers who rated the surgeons' performance on a global rating scale by observing the tasks for accuracy, knot quality, and number of nonpurposeful movements. ::: ::: ::: RESULTS ::: Time taken for the tasks (P=.78), total number of movements (P=.78), total path length (P=.47), global score (P=.54), accuracy, and knot quality remained unchanged across the 3 conditions. The main study measures had a high test-retest reliability and internal consistency. No learning effect was seen across the 3 conditions. ::: ::: ::: CONCLUSIONS ::: Surgeons can effectively "block out" noise and music. This is probably due to the high levels of concentration required for the performance of a complex surgical task. Future research should focus on the effect of these conditions on communication in the operating room. 
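Editorial sketch (not the validated motion-analysis system used in the noise study above): several evaluations in this section score performance with kinematic metrics such as total path length and number of movements of the tracked instrument. Such metrics can be derived from a stream of tracked tip positions as in the sketch below; the 5 mm/s speed threshold used to segment movements is an illustrative assumption, not a published value.

import math

def total_path_length(positions):
    # Sum of Euclidean distances (mm) between consecutive tip positions.
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def movement_count(positions, sample_rate_hz, speed_threshold_mm_s=5.0):
    # Count movement episodes: each contiguous run of samples whose speed
    # exceeds the threshold counts as one movement.
    dt = 1.0 / sample_rate_hz
    moving = [math.dist(a, b) / dt > speed_threshold_mm_s
              for a, b in zip(positions, positions[1:])]
    return sum(1 for prev, cur in zip([False] + moving, moving) if cur and not prev)

if __name__ == "__main__":
    track = [(0, 0, 0), (0, 0, 0), (1, 0, 0), (3, 0, 0), (3, 0, 0), (3, 2, 0)]
    print(total_path_length(track), movement_count(track, sample_rate_hz=20.0))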
--- paper_title: Effect of sensory substitution on suture-manipulation forces for robotic surgical systems. paper_content: OBJECTIVES: Direct haptic (force or tactile) feedback is not yet available in commercial robotic surgical systems. Previous work by our group and others suggests that haptic feedback might significantly enhance the execution of surgical tasks requiring fine suture manipulation, specifically those encountered in cardiothoracic surgery. We studied the effects of substituting direct haptic feedback with visual and auditory cues to provide the operating surgeon with a representation of the forces he or she is applying with robotic telemanipulators. METHODS: Using the robotic da Vinci surgical system (Intuitive Surgical, Inc, Sunnyvale, Calif), we compared applied forces during a standardized surgical knot-tying task under 4 different sensory-substitution scenarios: no feedback, auditory feedback, visual feedback, and combined auditory-visual feedback. RESULTS: The forces applied with these sensory-substitution modes more closely approximate suture tensions achieved under ideal haptic conditions (ie, hand ties) than forces applied without such sensory feedback. The consistency of applied forces during robot-assisted suture tying aided by visual feedback or combined auditory-visual feedback sensory substitution is superior to that achieved with hand ties. Robot-assisted ties aided with auditory feedback revealed levels of consistency that were generally equivalent or superior to those attained with hand ties. Visual feedback and auditory feedback improve the consistency of robotically applied forces. CONCLUSIONS: Sensory substitution, in the form of visual feedback, auditory feedback, or both, confers quantifiable advantages in applied force accuracy and consistency during the performance of a simple surgical task. --- paper_title: Surgical navigation using audio feedback. paper_content: Current medical visualization technology intended for positional guidance in surgical applications may only ever have limited utility in the operating room due to the preexisting visual requirements of surgical practice. Additionally, visual systems impose limits as a result of their high latency, poor image resolution, problems with stereopsis and physical strain upon the user. Audio technology is relatively unexamined in the broad range of available methodologies for medical devices. The potential to translate surgical instrument position into audio feedback presents a novel solution to the human factors and engineering problems faced by visual display technology because audio technology employs a rich and as yet unburdened sensory modality. We describe an experimental system we have developed for investigating this new interface design approach using commercially available hardware. --- paper_title: Auditory and Visio-Temporal Distance Coding for 3-Dimensional Perception in Medical Augmented Reality paper_content: Image-guided medical interventions more frequently rely on Augmented Reality (AR) visualization to enable surgical navigation. Current systems use 2-D monitors to present the view from external cameras, which does not provide an ideal perception of the 3-D position of the region of interest. Despite this problem, most research targets the direct overlay of diagnostic imaging data, and only a few studies attempt to improve the perception of occluded structures in external camera views.
The focus of this paper lies on improving the 3-D perception of an augmented external camera view by combining both auditory and visual stimuli in a dynamic multi-sensory AR environment for medical applications. Our approach is based on Temporal Distance Coding (TDC) and an active surgical tool to interact with occluded virtual objects of interest in the scene in order to gain an improved perception of their 3-D location. Users performed a simulated needle biopsy by targeting virtual lesions rendered inside a patient phantom. Experimental results demonstrate that our TDC-based visualization technique significantly improves the localization accuracy, while the addition of auditory feedback results in increased intuitiveness and faster completion of the task. --- paper_title: Validation of Exposure Visualization and Audible Distance Emission for Navigated Temporal Bone Drilling in Phantoms paper_content: BACKGROUND: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g. the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. METHODOLOGY/PRINCIPAL FINDINGS: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. CONCLUSIONS/SIGNIFICANCE: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling. --- paper_title: Image-Guided Interventions: Technology Review and Clinical Applications paper_content: Image-guided interventions are medical procedures that use computer-based systems to provide virtual image overlays to help the physician precisely visualize and target the surgical site. This field has been greatly expanded by the advances in medical imaging and computing power over the past 20 years. This review begins with a historical overview and then describes the component technologies of tracking, registration, visualization, and software. Clinical applications in neurosurgery, orthopedics, and the cardiac and thoracoabdominal areas are discussed, together with a description of an evolving technology named Natural Orifice Transluminal Endoscopic Surgery (NOTES).
As the trend toward minimally invasive procedures continues, image-guided interventions will play an important role in enabling new procedures, while improving the accuracy and success of existing approaches. Despite this promise, the role of image-guided systems must be validated by clinical trials facilitated by partnerships between scie... ---
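Several of the systems above (the safe-region warning navigator and EVADE) continuously compare the tracked tool-tip position against segmented critical structures and raise an audiovisual alarm when the distance falls below a safety margin. A minimal sketch of that distance check is given below; it is not the published EVADE implementation, and the point cloud, thresholds and warning labels are hypothetical placeholders:

# Illustrative distance-to-critical-structure check (not the EVADE code).
# The critical structure is approximated by a sampled surface point cloud.
import math

def nearest_distance(tip, structure_points):
    """Euclidean distance (mm) from the tool tip to the closest structure sample."""
    return min(math.dist(tip, p) for p in structure_points)

def warning_level(distance_mm, warn_at=3.0, stop_at=1.0):
    """Map a distance to a coarse warning level for audiovisual feedback."""
    if distance_mm <= stop_at:
        return "STOP"      # e.g. continuous alarm tone plus red overlay
    if distance_mm <= warn_at:
        return "CAUTION"   # e.g. pulsing tone whose rate grows as distance shrinks
    return "SAFE"

if __name__ == "__main__":
    facial_nerve = [(10.0, 2.0, 5.0), (11.0, 2.5, 5.2), (12.0, 3.0, 5.5)]  # fake samples
    drill_tip = (11.5, 4.0, 5.0)
    d = nearest_distance(drill_tip, facial_nerve)
    print(f"distance = {d:.2f} mm -> {warning_level(d)}")

In a real navigation system the distance query would run against a surface mesh or distance field at the tracker update rate, and the thresholds would be chosen with the registration error (reported above as roughly 1-3 mm) in mind.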
Title: A Survey of Auditory Display in Image-Guided Interventions
Section 1: Introduction
Description 1: Introduce the topic of image-guided interventions and the growing importance of auditory display in this field.
Section 2: Literature search
Description 2: Explain the methodology used to search for relevant literature, including search terms and databases.
Section 3: Eligibility criteria
Description 3: Describe the criteria used to include or exclude literature from the review.
Section 4: Data extraction
Description 4: Detail the process and categories used for extracting data from the selected articles.
Section 5: Results
Description 5: Summarize the findings of the literature review, including the number and type of articles reviewed.
Section 6: Interventional tasks supported by auditory display
Description 6: Discuss the various interventional tasks that are supported by auditory display as found in the literature.
Section 7: Clinical motivations for exploring auditory display
Description 7: Explore the motivations for developing auditory display systems for clinical use.
Section 8: Methods of auditory display for image-guided interventions
Description 8: Outline the different methods of auditory display used in image-guided interventions, such as alerts, auditory icons, and parameter-mapping models.
Section 9: Experimental designs and findings
Description 9: Describe the different experimental designs used in the evaluation of auditory display systems and summarize the findings.
Section 10: Discussion
Description 10: Provide a discussion on the state of the art, including benefits, drawbacks, and future research directions.
Section 11: Conclusion
Description 11: Conclude the review by summarizing the key points and potential for future development in auditory display for image-guided interventions.
Application of Data Warehouse in Real Life: State-of-the-art Survey from User Preferences' Perspective
6
--- paper_title: Dimensional issues in agricultural data warehouse designs paper_content: Recently, the government of India embarked on an ambitious project of designing and deploying the Integrated National Agricultural Resources Information System (INARIS) data warehouse for the agricultural sector. The system's purpose is to support macro level planning. This paper presents some of the challenges faced in designing the data warehouse, specifically dimensional and deployment challenges of the warehouse. We also present some early user evaluations of the warehouse. Governmental data warehouse implementations are rare, especially at the national level. Furthermore, the motivations are significantly different from private sectors. Designing the INARIS agricultural data warehouse posed unique and significant challenges because, traditionally, the collection and dissemination of information are localized. --- paper_title: Building the Data Warehouse paper_content: From the Publisher: The data warehouse solves the problem of getting information out of legacy systems quickly and efficiently. If designed and built right, data warehouses can provide significant freedom of access to data, thereby delivering enormous benefits to any organization. In this unique handbook, W. H. Inmon, "the father of the data warehouse," provides detailed discussion and analysis of all major issues related to the design and construction of the data warehouse, including granularity of data, partitioning data, metadata, lack of creditability of decision support systems (DSS) data, the system of record, migration and more. This Second Edition of Building the Data Warehouse is revised and expanded to include new techniques and applications of data warehouse technology and update existing topics to reflect the latest thinking. It includes a useful review checklist to help evaluate the effectiveness of the design. --- paper_title: Critical factors influencing the adoption of data warehouse technology: a study of the banking industry in Taiwan paper_content: Previous literature suggests that various factors play crucial roles in the adoption of an information system; however, there is little empirical research about the factors affecting adoption of data warehouse technology, particularly in a single information technology intensive industry. In this study, we used a survey to investigate the factors influencing adoption of data warehouse technology in the banking industry in Taiwan. A total of 50 questionnaires were mailed to CIOs in domestic banks. The response rate was 60%. Discriminant analysis was employed to test hypotheses. The results revealed that factors such as support from the top management, size of the bank, effect of champion, internal needs, and competitive pressure would affect the adoption of data warehouse technology. The results and conclusions from this study may be a good reference for global banks in these aforementioned countries to establish and develop operational strategies, which in turn will facilitate the implementation in overseas branches. --- paper_title: Key organizational factors in data warehouse architecture selection paper_content: Even though data warehousing has been in existence for over a decade, companies are still uncertain about a critical decision - which data warehouse architecture to implement?
Based on the existing literature, theory, and interviews with experts, a research model was created that identifies the various contextual factors that affect the selection decision. The results from the field survey and multinomial logistic regression suggest that various combinations of organizational factors influence data warehouse architecture selection. The strategic view of the data warehouse prior to implementation emerged as a key determinant. The research suggests an overall model for predicting the data warehouse architecture selection decision. --- paper_title: A Data Warehouse Architecture for Clinical Data Warehousing paper_content: Data warehousing methodologies share a common set of tasks, including business requirements analysis, data design, architectural design, implementation and deployment. Reviewing a series of patient records in a clinical data warehouse is complex and time consuming; however, the clinical data warehouse is one of the most efficient data repositories available for delivering quality patient care. Data integration for medical data stores is a challenging scenario when designing a clinical data warehouse architecture. The presented data warehouse architectures are practicable solutions to tackle data integration issues and could be adopted by small to large clinical data warehouse applications. --- paper_title: Dimensional issues in agricultural data warehouse designs paper_content: Recently, the government of India embarked on an ambitious project of designing and deploying the Integrated National Agricultural Resources Information System (INARIS) data warehouse for the agricultural sector. The system's purpose is to support macro level planning. This paper presents some of the challenges faced in designing the data warehouse, specifically dimensional and deployment challenges of the warehouse. We also present some early user evaluations of the warehouse. Governmental data warehouse implementations are rare, especially at the national level. Furthermore, the motivations are significantly different from private sectors. Designing the INARIS agricultural data warehouse posed unique and significant challenges because, traditionally, the collection and dissemination of information are localized. --- paper_title: A comparison of data warehousing methodologies paper_content: Using a common set of attributes to determine which methodology to use in a particular data warehousing project. --- paper_title: An architecture for a business and information system paper_content: The transaction-processing environment in which companies maintain their operational databases was the original target for computerization and is now well understood. On the other hand, access to company information on a large scale by an end user for reporting and data analysis is relatively new. Within IBM, the computerization of informational systems is progressing, driven by business needs and by the availability of improved tools for accessing the company data. It is now apparent that an architecture is needed to draw together the various strands of informational system activity within the company. IBM Europe, Middle East, and Africa (E/ME/A) has adopted an architecture called the E/ME/A Business Information System (EBIS) architecture as the strategic direction for informational systems. EBIS proposes an integrated warehouse of company data based firmly in the relational database environment.
End-user access to this warehouse is simplified by a consistent set of tools provided by an end-user interface and supported by a business data directory that describes the information available in user terms. This paper describes the background and components of the architecture of EBIS. --- paper_title: Building the Data Warehouse paper_content: From the Publisher: The data warehouse solves the problem of getting information out of legacy systems quickly and efficiently. If designed and built right, data warehouses can provide significant freedom of access to data, thereby delivering enormous benefits to any organization. In this unique handbook, W. H. Inmon, "the father of the data warehouse," provides detailed discussion and analysis of all major issues related to the design and construction of the data warehouse, including granularity of data, partitioning data, metadata, lack of creditability of decision support systems (DSS) data, the system of record, migration and more. This Second Edition of Building the Data Warehouse is revised and expanded to include new techniques and applications of data warehouse technology and update existing topics to reflect the latest thinking. It includes a useful review checklist to help evaluate the effectiveness of the design. --- paper_title: A conceptual model of data warehousing for medical device manufacturers paper_content: Required information for management decisions in an organization can span across many internal functions and external sources, particularly for decisions with strategic ramifications. In recent years, increasing numbers of organizations have employed data warehouses to meet their needs for accurate and timely information. A literature review has been performed to understand data warehousing applications in the healthcare and pharmaceutical industry. With lessons learned from these applications and situation analyses of medical device organizations, a conceptual model of data warehousing for medical device manufacturers is proposed. This model proposes an enterprise-wide data warehouse, which will link information generated or obtained from different local systems. This data warehouse is the focal point of an integrated system where data from all sources are stored in predefined formats and will provide information of interest to various users in the organization. Some practical applications of such an integrated data warehouse in areas of product portfolio planning, sales/marketing, and clinical/regulatory are discussed. The future trends of data warehousing applications for medical device manufacturers are also envisioned. --- paper_title: Critical factors influencing the adoption of data warehouse technology: a study of the banking industry in Taiwan paper_content: Previous literature suggests that various factors play crucial roles in the adoption of an information system; however, there is little empirical research about the factors affecting adoption of data warehouse technology, particularly in a single information technology intensive industry. In this study, we used a survey to investigate the factors influencing adoption of data warehouse technology in the banking industry in Taiwan. A total of 50 questionnaires were mailed to CIOs in domestic banks. The response rate was 60%. Discriminant analysis was employed to test hypotheses.
The results revealed that factors such as support from the top management, size of the bank, effect of champion, internal needs, and competitive pressure would affect the adoption of data warehouse technology. The results and conclusions from this study may be a good reference for global banks in these aforementioned countries to establish and develop operational strategies, which in turn will facilitate the implementation in overseas branches. --- paper_title: TO OUTSOURCE OR NOT TO OUTSOURCE paper_content: SUBTITLE: THIRD-PARTY LOGISTICS PROVIDERS CAN OFFER EXPERTISE AND IMPRESSIVE SAVINGS, BUT THERE'S A BIG RISK IN GIVING UP CONTROL OVER SOMETHING AS IMPORTANT AS LOGISTICS OPERATIONS: WHAT'S A SHIPPER TO DO? --- paper_title: A Comprehensive Analysis of Materialized Views in a Data Warehouse Environment paper_content: Data in a warehouse can be perceived as a collection of materialized views that are generated as per the user requirements specified in the queries being generated against the information contained in the warehouse. User requirements and constraints frequently change over time, which may evolve data and view definitions stored in a data warehouse dynamically. The current requirements are modified and some novel and innovative requirements are added in order to deal with the latest business scenarios. In fact, data preserved in a warehouse along with these materialized views must also be updated and maintained so that they can deal with the changes in data sources as well as the requirements stated by the users. Selection and maintenance of these views is one of the vital tasks in a data warehousing environment in order to provide optimal efficiency by reducing the query response time, query processing and maintenance costs as well. Another major issue related to materialized views is whether these views should be recomputed for every change in the definition or base relations, or they should be adapted incrementally from existing views. In this paper, we have examined several ways of performing changes in materialized views, as well as their selection and maintenance, in data warehousing environments. We have also provided a comprehensive study on research works of different authors on various parameters and presented the same in a tabular manner. --- paper_title: Dimensional issues in agricultural data warehouse designs paper_content: Recently, the government of India embarked on an ambitious project of designing and deploying the Integrated National Agricultural Resources Information System (INARIS) data warehouse for the agricultural sector. The system's purpose is to support macro level planning. This paper presents some of the challenges faced in designing the data warehouse, specifically dimensional and deployment challenges of the warehouse. We also present some early user evaluations of the warehouse. Governmental data warehouse implementations are rare, especially at the national level. Furthermore, the motivations are significantly different from private sectors. Designing the INARIS agricultural data warehouse posed unique and significant challenges because, traditionally, the collection and dissemination of information are localized.
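The materialized-view abstract above contrasts recomputing a view from its base relations with adapting it incrementally when the base data change. The toy Python sketch below illustrates that trade-off for a simple SUM-per-key view; it is only an illustration of the general idea, with invented data, and is not an implementation from the cited work:

# Toy illustration of materialized view maintenance for a SUM view.
from collections import defaultdict

sales = [("north", 100), ("south", 250), ("north", 50)]   # base relation (region, amount)

def recompute(base):
    """Full recomputation: rebuild the view by scanning the whole base relation."""
    v = defaultdict(float)
    for region, amount in base:
        v[region] += amount
    return v

def apply_delta(view, inserted=(), deleted=()):
    """Incremental maintenance: fold only the changed tuples into the existing view."""
    for region, amount in inserted:
        view[region] += amount
    for region, amount in deleted:
        view[region] -= amount
    return view

view = recompute(sales)                              # initial load of the view
sales.append(("south", 75))                          # a new base tuple arrives
view = apply_delta(view, inserted=[("south", 75)])   # cheaper than a full rescan
print(dict(view))                                    # {'north': 150.0, 'south': 325.0}

For self-maintainable aggregates such as SUM or COUNT the incremental path only touches the changed tuples, which is why view maintenance cost, alongside query response time, drives the view-selection decisions discussed in the abstract.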
--- paper_title: Actions for data warehouse success paper_content: Problem statement: The data warehouse is a database dedicated to the storage of all data used in decision analysis; it must meet customer requirements and ensure, over time, that the data warehouse complies with its construction rules and manages the necessary evolutions of the information system (IS). Results: According to the studies carried out, a system based on a data warehouse governed by the best practices of the Information Technology Infrastructure Library (ITIL) and equipped with a multi-agent system enables management to ensure governance aimed at optimizing the exploitation of the data warehouse. --- paper_title: Building the Data Warehouse paper_content: From the Publisher: The data warehouse solves the problem of getting information out of legacy systems quickly and efficiently. If designed and built right, data warehouses can provide significant freedom of access to data, thereby delivering enormous benefits to any organization. In this unique handbook, W. H. Inmon, "the father of the data warehouse," provides detailed discussion and analysis of all major issues related to the design and construction of the data warehouse, including granularity of data, partitioning data, metadata, lack of creditability of decision support systems (DSS) data, the system of record, migration and more. This Second Edition of Building the Data Warehouse is revised and expanded to include new techniques and applications of data warehouse technology and update existing topics to reflect the latest thinking. It includes a useful review checklist to help evaluate the effectiveness of the design. --- paper_title: Critical factors influencing the adoption of data warehouse technology: a study of the banking industry in Taiwan paper_content: Previous literature suggests that various factors play crucial roles in the adoption of an information system; however, there is little empirical research about the factors affecting adoption of data warehouse technology, particularly in a single information technology intensive industry. In this study, we used a survey to investigate the factors influencing adoption of data warehouse technology in the banking industry in Taiwan. A total of 50 questionnaires were mailed to CIOs in domestic banks. The response rate was 60%. Discriminant analysis was employed to test hypotheses. The results revealed that factors such as support from the top management, size of the bank, effect of champion, internal needs, and competitive pressure would affect the adoption of data warehouse technology. The results and conclusions from this study may be a good reference for global banks in these aforementioned countries to establish and develop operational strategies, which in turn will facilitate the implementation in overseas branches. --- paper_title: Key organizational factors in data warehouse architecture selection paper_content: Even though data warehousing has been in existence for over a decade, companies are still uncertain about a critical decision - which data warehouse architecture to implement? Based on the existing literature, theory, and interviews with experts, a research model was created that identifies the various contextual factors that affect the selection decision. The results from the field survey and multinomial logistic regression suggest that various combinations of organizational factors influence data warehouse architecture selection.
The strategic view of the data warehouse prior to implementation emerged as a key determinant. The research suggests an overall model for predicting the data warehouse architecture selection decision. --- paper_title: An overview of data warehousing and OLAP technology paper_content: Data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. Many commercial products and services are now available, and all of the principal database management system vendors now have offerings in these areas. Decision support places some rather different requirements on database technology compared to traditional on-line transaction processing applications. This paper provides an overview of data warehousing and OLAP technologies, with an emphasis on their new requirements. We describe back end tools for extracting, cleaning and loading data into a data warehouse; multidimensional data models typical of OLAP; front end client tools for querying and data analysis; server extensions for efficient query processing; and tools for metadata management and for managing the warehouse. In addition to surveying the state of the art, this paper also identifies some promising research issues, some of which are related to problems that the database research community has worked on for years, but others are only just beginning to be addressed. This overview is based on a tutorial that the authors presented at the VLDB Conference, 1996. --- paper_title: Data warehousing and analytics infrastructure at facebook paper_content: Scalable analysis on large data sets has been core to the functions of a number of teams at Facebook - both engineering and non-engineering. Apart from ad hoc analysis of data and creation of business intelligence dashboards by analysts across the company, a number of Facebook's site features are also based on analyzing large data sets. These features range from simple reporting applications like Insights for the Facebook Advertisers, to more advanced kinds such as friend recommendations. In order to support this diversity of use cases on the ever increasing amount of data, a flexible infrastructure that scales up in a cost-effective manner is critical. We have leveraged, authored and contributed to a number of open source technologies in order to address these requirements at Facebook. These include Scribe, Hadoop and Hive which together form the cornerstones of the log collection, storage and analytics infrastructure at Facebook. In this paper we will present how these systems have come together and enabled us to implement a data warehouse that stores more than 15PB of data (2.5PB after compression) and loads more than 60TB of new data (10TB after compression) every day. We discuss the motivations behind our design choices, the capabilities of this solution, the challenges that we face in day-to-day operations and future capabilities and improvements that we are working on. --- paper_title: A data warehouse-based decision support system for sewer infrastructure management paper_content: Since the inception of the Governmental Accounting Standards Board statement-34 (GASB 34) in the United States, local and state governing entities need to inspect sewer systems and collect general information about their properties. Application of the collected information in decision-making processes, however, is often problematic due to the lack of consistency and completeness of infrastructure data.
In addition, most techniques involved in decision-making processes are relatively complicated and difficult to implement without a certain level of engineering experience and training. Consequently, the sharing and transferring of pertinent information among stakeholders is not smooth and is frequently limited. This study presents a decision support system (DSS) for the management of sewer infrastructure using data warehousing technology. The proposed decision support system automatically assigns appropriate inspection and renewal methods for each pipeline and estimates associated costs, resulting in effective and practical sewer infrastructure management from various perspectives, with corresponding levels of detail. --- paper_title: Customer relationship management in financial services: towards information-enabled relationship marketing paper_content: Relationship marketing is concerned with how organizations manage and improve their relationships with customers for long-term profitability. Customer relationship management (CRM), which is becoming a topic of increasing importance in marketing, is concerned with using information technology (IT) in implementing relationship marketing strategies. This paper reports on a study of the adoption and use of CRM in the financial services sector. In particular, the key elements of CRM are examined in these organizations and executives' perceptions of the main IT components that enable responsive CRM are explored. CRM is classified into five stages of sophistication and a framework for CRM adoption is developed. --- paper_title: Data Warehouse Requirements Analysis Framework: Business-Object Based Approach paper_content: Detailed requirements analysis plays a key role in the design of a successful Data Warehouse (DW) system. The requirements analysis specifications are used as the prime input for the construction of the conceptual-level multidimensional data model. This paper proposes a Business-Object-based requirements analysis framework for DW systems which is supported with an abstraction mechanism and reuse capability. It also facilitates the stepwise mapping of requirements descriptions into high-level design components of a graph-semantic-based, conceptual-level, object-oriented multidimensional data model. The proposed framework starts with the identification of the analytical requirements using a business-process-driven approach and finally refines the requirements in further detail to map them into the conceptual-level DW design model using either a Demand-driven or a Mixed-driven approach for DW requirements analysis. --- paper_title: Applications of Data Mining in Higher Education paper_content: Data analysis plays an important role in decision support irrespective of the type of industry, whether a manufacturing unit or an education system. There are many domains in which data mining techniques play an important role. This paper proposes the use of data mining techniques to improve the efficiency of higher education institutions. If data mining techniques such as clustering, decision trees and association are applied to higher education processes, they would help to improve students' performance, their life cycle management, the selection of courses, the measurement of their retention rate and the grant fund management of an institution. This is an approach to examine the effect of using data mining techniques in higher education.
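The OLAP overview cited above centres on multidimensional data models in which numeric facts are aggregated along dimensions such as time, region and product. As a small, hypothetical illustration of such a roll-up (not taken from any of the cited systems; the fact table and column choices are invented), the following Python snippet aggregates a miniature fact table along one or two dimensions:

# Hypothetical mini fact table: (year, region, product, sales_amount).
facts = [
    (2022, "north", "widget", 100),
    (2022, "south", "widget", 150),
    (2023, "north", "gadget", 200),
    (2023, "north", "widget", 120),
]

def roll_up(facts, dims):
    """Aggregate the sales measure over the requested dimension columns.

    dims is a tuple of column indexes into each fact tuple, e.g. (0,) for year
    or (0, 1) for year x region; this mimics an OLAP roll-up over a data cube.
    """
    totals = {}
    for row in facts:
        key = tuple(row[d] for d in dims)
        totals[key] = totals.get(key, 0) + row[3]   # measure is the last column
    return totals

print(roll_up(facts, (0,)))      # totals by year
print(roll_up(facts, (0, 1)))    # totals by year and region

An OLAP server performs the same kind of grouping, but over pre-aggregated cube structures or star-schema joins so that interactive slice, dice and drill-down queries stay fast.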
--- paper_title: Application of data warehouse and Decision Support System in construction management paper_content: How to provide construction managers with information about and insight into the existing data, so as to make decisions more efficiently without interrupting the daily work of an On-Line Transaction Processing (OLTP) system, is a problem during the construction management process. To solve this problem, the integration of a data warehouse and a Decision Support System (DSS) seems to be efficient. ‘Data warehouse’ technology is a new database discipline, which has not yet been applied to construction management. Hence, it is worthwhile to experiment in this particular field in order to gauge the full scope of its capability. First reviewed in this paper are the concepts of the data warehouse, On-Line Analysis Processing (OLAP) and DSS. The method of creating a data warehouse is then shown, changing the data in the data warehouse into a multidimensional data cube and integrating the data warehouse with a DSS. Finally, an application example is given to illustrate the use of the Construction Management Decision Support System (CMDSS) developed in this study. Integration of a data warehouse and a DSS enables the right data to be tracked down and provides the required information in a direct, rapid and meaningful way. Construction managers can view data from various perspectives with significantly reduced query time, thus making faster and more comprehensive decisions. The applications of data warehousing integrated with a DSS in construction management practice are seen to have considerable potential. --- paper_title: E Data Turning Data Into Information With Data Warehousing --- paper_title: Computing in Civil Engineering paper_content: These proceedings consist of papers presented at the Third Congress on Computing in Civil Engineering held in Anaheim, California, June 17-19, 1996. The proceedings cover advanced computing theory and technologies, computing applications in civil engineering practice and education, and computing issues, experiences and lessons learned. Within these broad topics, the book contains papers on subjects such as: applications of the Internet, World Wide Web and multimedia in civil engineering practice and education; risk and reliability assessment and management; visualization, modeling and simulation; artificial intelligence and advanced computing; geographic information systems; and interoperability. --- paper_title: Comprehensive Centralized-Data Warehouse for Managing Malaria Cases paper_content: Tanah Bumbu is one of the most endemic areas in Indonesia for patients diagnosed with malaria. Currently, available malaria case data are stored in disparate sources. Hence, it is difficult for the public health department to quickly and easily gather the useful information for determining strategic actions in tackling these cases. The purpose of this research is to build a data warehouse that integrates all malaria cases from disparate sources.
This malaria data warehouse is a centralized architecture of galaxy or constellation scheme that consists of three fact tables and 13 dimension tables. SQL Server Integration Services (SSIS) is utilized to build ETL packages that load data from various sources to stages, dimensions, and fact tables in malaria data warehouse. Finally, a timely report can be generated by extracting the salient information located in malaria data warehouse. --- paper_title: Critical factors influencing the adoption of data warehouse technology: a study of the banking industry in Taiwan paper_content: Previous literature suggests that various factors play crucial roles in the adoption of an information system; however, there is little empirical research about the factors affecting adoption of data warehouse technology, particularly in a single information technology intensive industry. In this study, we used a survey to investigate the factors influencing adoption of data warehouse technology in the banking industry in Taiwan. A total of 50 questionnaires were mailed to CIOs in domestic banks. The response rate was 60%. Discriminant analysis was employed to test hypotheses. The results revealed that factors such as support from the top management, size of the bank, effect of champion, internal needs, and competitive pressure would affect the adoption of data warehouse technology. The results and conclusions from this study may be a good reference for global banks in these aforementioned countries to establish and develop operational strategies, which in turn will facilitate the implementation in overseas branches. ---
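The malaria warehouse described above uses a fact-constellation (galaxy) schema in which several fact tables share conformed dimension tables. The SQLite sketch below shows the general shape of such a schema with two hypothetical fact tables sharing a date and a district dimension; the table and column names are invented for illustration and do not come from the cited system or its SSIS packages:

# Sketch of a fact-constellation (galaxy) schema: two fact tables share dimensions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_district (district_key INTEGER PRIMARY KEY, district_name TEXT, province TEXT);

-- Fact table 1: confirmed cases, grained by date and district.
CREATE TABLE fact_cases (
    date_key INTEGER REFERENCES dim_date(date_key),
    district_key INTEGER REFERENCES dim_district(district_key),
    confirmed_cases INTEGER
);

-- Fact table 2: treatments, sharing the same conformed dimensions.
CREATE TABLE fact_treatments (
    date_key INTEGER REFERENCES dim_date(date_key),
    district_key INTEGER REFERENCES dim_district(district_key),
    patients_treated INTEGER
);
""")

conn.execute("INSERT INTO dim_date VALUES (1, '2023-01-15', '2023-01', 2023)")
conn.execute("INSERT INTO dim_district VALUES (1, 'Example District', 'Example Province')")
conn.execute("INSERT INTO fact_cases VALUES (1, 1, 42)")
conn.execute("INSERT INTO fact_treatments VALUES (1, 1, 35)")

# A report that joins both fact tables through the shared dimensions.
row = conn.execute("""
    SELECT d.district_name, t.year, SUM(c.confirmed_cases), SUM(ft.patients_treated)
    FROM fact_cases c
    JOIN fact_treatments ft ON ft.date_key = c.date_key AND ft.district_key = c.district_key
    JOIN dim_date t        ON t.date_key = c.date_key
    JOIN dim_district d    ON d.district_key = c.district_key
    GROUP BY d.district_name, t.year
""").fetchone()
print(row)   # ('Example District', 2023, 42, 35)

Sharing conformed dimensions is what lets reports combine measures from different business processes, which is the main argument for a constellation schema over several isolated star schemas.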
Title: Application of Data Warehouse in Real Life: State-of-the-art Survey from User Preferences' Perspective
Section 1: INTRODUCTION
Description 1: This section introduces the concept of data warehousing, its evolution, and its importance in decision support for organizations.
Section 2: DATA WAREHOUSE TECHNOLOGY
Description 2: This section discusses the foundational technology and concepts behind data warehousing, including its architecture, attributes, and processes like ETL (Extraction, Transformation, Loading).
Section 3: APPLICATIONS OF DATA WAREHOUSE IN REAL LIFE
Description 3: This section explores various real-life applications of data warehousing across different domains such as business, government, finance, healthcare, and education.
Section 4: CASE STUDIES
Description 4: This section provides detailed case studies from both the business and government perspectives, highlighting the implementation and benefits of data warehousing in specific contexts.
Section 5: COMPARISON OF DIFFERENT CROSS DOMAIN AREAS AFFECTING DATA WAREHOUSE
Description 5: This section presents a comparative analysis of how various cross-domain areas impact data warehousing, illustrated with graphical representations.
Section 6: CONCLUSION
Description 6: This section summarizes the findings of the survey, reflecting on the importance of data warehousing technology, its adoption, and the challenges faced by organizations.
A survey of transactional issues for Web Service composition and recovery
13
--- paper_title: Service-Oriented Computing: Semantics, Processes, Agents paper_content: About the Authors. Preface. Note to the Reader. Acknowledgments. Figures. Tables. Listings. I Basics. 1. Computing with Services. 2. Basic Standards for Web Services. 3. Programming Web Services. 4. Enterprise Architectures. 5. Principles of Service-Oriented Computing. II Description. 6. Modeling and Representation. 7. Resource Description Framework. 8. Web Ontology Language. 9. Ontology Management. III Engagement. 10. Execution Models. 11. Transaction Concepts. 12. Coordination Frameworks for Web Services. 13. Process Specifications. 14. Formal Specification and Enactment. IV Collaboration. 15. Agents. 16. Multiagent Systems. 17. Organizations. 18. Communication. V Solutions. 19. Semantic Service Solutions. 20. Social Service Selection. 21. Economic Service Selection. VI Engineering. 22. Building SOC Applications. 23. Service Management. 24. Security. VII Directions. 25. Challenge and Extensions. VIII Appendices. Appendix A: XML and XML Schema. Appendix B: URI, URN, URL and UUID. Appendix C: XML Namespace Abbreviations. Glossary. About the Authors. Bibliography. Index. --- paper_title: Fundamentals of Database Systems paper_content: From the Publisher: Fundamentals of Database Systems combines clear explanations of theory and design, broad coverage of models and real systems, and excellent examples with up-to-date introductions to modern database technologies. This edition is completely revised and updated, and reflects the latest trends in technological and application development. Professors Elmasri and Navathe focus on the relational model and include coverage of recent object-oriented developments. They also address advanced modeling and system enhancements in the areas of active databases, temporal and spatial databases, and multimedia information systems. This edition also surveys the latest application areas of data warehousing, data mining, web databases, digital libraries, GIS, and genome databases. New to the Third Edition: Reorganized material on data modeling to clearly separate entity relationship modeling, extended entity relationship modeling, and object-oriented modeling; Expanded coverage of the object-oriented and object/relational approach to data management, including ODMG and SQL3; Uses examples from real database systems including Oracle and Microsoft Access; Includes discussion of decision support applications of data warehousing and data mining, as well as emerging technologies of web databases, multimedia, and mobile databases; Covers advanced modeling in the areas of active, temporal, and spatial databases; Provides coverage of issues of physical database tuning; Discusses current database application areas of GIS, genome, and digital libraries --- paper_title: Workflow and Process Automation: Concepts and Technology paper_content: Preface. 1. Introduction. 2. Process Technology. 3. Workflow Technology. 4. Transactional Aspects of Workflows. 5. Ongoing Research in Workflow and Process Automation. 6. State of the Industry. References. Index. --- paper_title: Principles and realization strategies of multilevel transaction management paper_content: One of the demands of database system transaction management is to achieve a high degree of concurrency by taking into consideration the semantics of high-level operations. On the other hand, the implementation of such operations must pay attention to conflicts on the storage representation levels below.
To meet these requirements in a layered architecture, we propose a multilevel transaction management utilizing layer-specific semantics. Based on the theoretical notion of multilevel serializability, a family of concurrency control strategies is developed. Suitable recovery protocols are investigated for aborting single transactions and for restarting the system after a crash. The choice of levels involved in a multilevel transaction strategy reveals an inherent trade-off between increased concurrency and growing recovery costs. A series of measurements has been performed in order to compare several strategies. Preliminary results indicate considerable performance gains of the multilevel transaction approach. --- paper_title: Nested Transactions An Approach To Reliable Distributed Computing --- paper_title: Transaction Management Support for Cooperative Applications paper_content: Foreword. 1. Introduction W. Klas, et al. 2. The TransCoop Paradigm J. Veijalainen, et al. 3. Transaction Models in Cooperative Work - An Overview J. Veijalainen, et al. 4. Application Requirements T. Tesch, et al. 5. The TransCoop Architecture A. Lehtola, et al. 6. The TransCoop Specification Environment F.J. Faase, et al. 7. The TransCoop Transaction Model J. Klingemann, et al. 8. The TransCoop Demonstrator System J. Klingemann, S. Even. 9. Conclusions S. Even, et al. References. Index. --- paper_title: Database transaction models for advanced applications paper_content: 1 Transaction Management in Database Systems 2 Introduction to Advanced Transaction Models 3 A Cooperative Transaction Model for Design Databases 4 A Flexible Framework for Transaction Management in Engineering Environments 5 A Transaction Model for Active Distributed Object Systems 6 A Transaction Model for an Open Publication Environment 7 The ConTract Model 8 Dynamic Restructuring of Transactions 9 Multidatabase Transaction and Query Processing in Logic 10 ACTA: The Saga Continues 11 A Transaction Manager Development Facility for Non Standard Database Systems 12 The S-Transaction Model 13 Concepts and Applications of Multilevel Transactions and Open Nested Transactions 14 Using Polytransactions to Manage Interdependent Data --- paper_title: Log-Based Recovery for Nested Transactions paper_content: Techniques similar to shadow pages have been suggested for use in rollback and crash recovery for nested transactions. However, undo/redo log methods have not been presented, though undo/redo logs are widely used for transaction recovery, and perhaps preferable to shadow methods. We develop a scheme of log-based recovery for nested transactions. The resulting design is promising because it requires a relatively small number of extensions to a similar scheme of recovery for single-level transactions. --- paper_title: A Multidatabase Transaction Model for InterBase paper_content: The management of multidatabase transactions presents new and interesting challenges, due mainly to the requirement of the autonomy of local database systems.
In this paper, we present an extended transaction model which provides the following features useful in a multidatabase environment: (1) It allows the composition of flexible transactions which can tolerate failures of individual subtransactions by taking advantage of the fact that a given function can frequently be accomplished by more than one database system; (2) It supports the concept of mixed transactions allowing compensatable and non-compensatable subtransactions to coexist within a single global transaction; and (3) It incorporates the concept of time in both the subtransaction and global transaction processing, thus allowing more flexibility in transaction scheduling. We formally define the extended transaction model and discuss its transaction scheduling mechanism. --- paper_title: The Workflow Activity Model WAMO paper_content: Workflow technology has not yet lived up to its expectations not only because of social problems but also because of technical problems, like inflexible and rigid process specification and execution mechanisms and insufficient possibilities to handle exceptions. The aim of this paper is to present a workflow model which significantly facilitates the design and reliable management of complex business processes supported by an automatic mechanism to handle exceptions. The strength of the model is its simplicity and the application independent transaction facility (advanced control mechanism for workflow units) which guarantees reliable execution of workflow activities. --- paper_title: The Mentor project: steps towards enterprise-wide workflow management paper_content: Enterprise-wide workflow management where workflows may span multiple organizational units requires particular consideration of scalability, heterogeneity, and availability issues. The Mentor project, introduced in this paper, aims to reconcile a rigorous workflow specification method with a distributed middleware architecture as a step towards enterprise-wide solutions. The project uses the formalism of state and activity charts and a commercial tool, Statemate, for workflow specification. A first prototype of Mentor has been built which allows executing specifications in a distributed manner. A major contribution of this paper is the method for transforming a centralized state chart specification into a form that is amenable to a distributed execution and to incorporate the necessary synchronization between different processing entities. Fault tolerance issues are addressed by coupling Mentor with the Tuxedo TP monitor. --- paper_title: Failure handling and coordinated execution of concurrent workflows paper_content: Workflow management systems (WFMSs) coordinate the execution of applications distributed over networks. In WFMSs, data inconsistencies can arise due to: the interaction between steps of concurrent threads within a workflow (intra-workflow coordination); the interaction between steps of concurrent workflows (inter-workflow coordination); and the presence of failures. Since these problems have not received adequate attention, this paper focuses on developing the necessary concepts and infrastructure to handle them. First, to deal with inter- and intra-workflow coordination requirements we have identified a set of high level building blocks. Secondly, to handle failures we propose a novel and pragmatic approach called opportunistic compensation and re-execution that allows a workflow designer to customize workflow recovery from correctness as well as performance perspectives.
Thirdly based on these concepts we have designed a workflow specification language that expresses new requirements for workflow executions and implemented a run-time system for managing workflow executions while satisfying the new requirements. These ideas are geared towards improving the modeling and correctness properties offered by WFMSs and making them more robust and flexible. --- paper_title: 1 TRANSACTIONS IN TRANSACTIONAL WORKFLOWS paper_content: Workflow management systems (WFMSs) are finding wide applicability in small and large organizational settings. Advanced transaction models (ATMs) focus on maintaining data consistency and have provided solutions to many problems such as correctness, consistency, and reliability in transaction processing and database management environments. While such concepts have yet to be solved in the domain of workflow systems, database researchers have proposed to use, or attempted to use ATMs to model workflows. In this paper we survey the work done in the area of transactional workflow systems. We then argue that workflow requirements in large-scale enterprise-wide applications involving heterogeneous and distributed environments either differ or exceed the modeling and functionality support provided by ATMs. We propose that an ATM is unlikely to provide the primary basis for modeling of workflow applications, and subsequently workflow management. We discuss a framework for error handling and recovery in the METEOR2 WFMS that borrows from relevant work in ATMs, distributed systems, software engineering, and organizational sciences. We have also presented various connotations of transactions in real-world organizational processes today. Finally, we point out the need for looking beyond ATMs and using a multi-disciplinary approach for modeling large-scale workflow applications of the future. --- paper_title: A distributed object oriented framework to offer transactional support for long running business processes paper_content: Many business processes are both long running and transactional in nature. They are also mostly multi-user processes. Implementations such as the CORBA OTS (Object Transaction Services) modeled on the lock-based systems used for classic transactions do not fully support the requirements of such processes, and as a result, application developers must develop custom-built infrastructure — on an application-by-application basis — to support users' transactional expectations. This paper presents a novel approach to implementing long-lived transactions within distributed object environments. We propose the use of the unit-of-work (UOW) transaction model and framework, an advanced nested transaction model that enables concurrent access to shared data without locking resources. The UOW approach describes a well-structured distributed object architecture that can easily be integrated with distributed object systems. The framework offers uniform (i.e., application independent) structural transaction support for long running business processes and provides them with the semantics of traditional, short, transactions. Use of the framework enables object developers to focus on business logic, with the framework infrastructure providing functions required to support the desired semantics. We discuss the framework programming model, how it provides transactional behavior to long running business processes and some of the research challenges still ahead of us. 
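Several of the abstracts above (opportunistic compensation and re-execution, and the unit-of-work model for long-running business processes) rely on compensating previously completed steps instead of holding locks or using two-phase commit for the lifetime of the process. A minimal, generic sketch of that pattern is shown below; the step and compensation actions are invented placeholders, and the code is not taken from any of the cited frameworks:

# Minimal compensation-based (saga-style) execution sketch for a long-running process.
# Each completed step registers a compensating action; on failure the completed
# steps are compensated in reverse order rather than being rolled back under locks.

class StepFailed(Exception):
    pass

def run_process(steps):
    """steps is a list of (do, compensate) callables executed in order."""
    done = []                           # stack of compensations for completed steps
    try:
        for do, compensate in steps:
            do()
            done.append(compensate)
        return "committed"
    except StepFailed:
        while done:                     # undo semantically, newest first
            done.pop()()
        return "compensated"

def fail():
    raise StepFailed("payment declined")   # hypothetical failing step

log = []
steps = [
    (lambda: log.append("book flight"), lambda: log.append("cancel flight")),
    (lambda: log.append("book hotel"),  lambda: log.append("cancel hotel")),
    (fail,                              lambda: None),
]
print(run_process(steps))   # -> "compensated"
print(log)                  # ['book flight', 'book hotel', 'cancel hotel', 'cancel flight']

Because each step commits locally before the next one starts, intermediate results are visible to other processes, which is exactly why the cited work pairs compensation with additional correctness rules rather than relying on strict isolation.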
--- paper_title: Service-oriented computing paper_content: Service-oriented computing (SOC) is the computing paradigm that uses software services (or simply se… --- paper_title: Rule-Based Coordination of Distributed Web Service Transactions paper_content: Current approaches to transactional support of distributed processes in service-oriented environments are limited to scenarios where the participant initiating the process maintains a controlling position throughout the lifetime of the process. This constraint impedes support of complex processes where participants may only possess limited local views on the overall process. Furthermore, there is little support of dynamic aspects: failure or exit of participants usually leads to cancelation of the whole process. In this paper, we address these limitations by introducing a framework that strengthens the role of the coordinator and allows for largely autonomous coordination of dynamic processes. We first discuss motivating examples and analyze existing approaches to transactional coordination. Subsequently, we present our framework TracG, which is based on WS-BusinessActivity. It contains at its core a set of rules for deciding on the ongoing confirmation or cancelation status of participants' work and protocol extensions for monitoring the progress of a process. Various types of participant vitality for a process are distinguished, facilitating the controlled exit of nonvital participants as well as continuation of a process in case of tolerable failures. The implementation of the framework is presented and discussed regarding interoperability issues. --- paper_title: Revisiting the Behavior of Fault and Compensation Handlers in WS-BPEL paper_content: When automating work, it is often desirable to compensate completed work by undoing the work done by one or more activities. In the context of workflow, where compensation actions are defined on nested 'scopes' that group activities, this requires a model of nested compensation---based transactions. The model must enable the automatic determination of compensation order by considering not only the nesting of scopes but also the control dependencies between them. The current standard for Web services workflows, Business Process Execution Language for Web Services (WS-BPEL), has such compensation capabilities. In this paper, we show that the current mechanism in WS-BPEL shows compensation processing anomalies, such as neglecting control link dependencies between nested non-isolated scopes. We then propose an alternate approach that through elimination of default handlers as well as the complete elimination of termination handlers not only removes those anomalies but also relaxes current WS-BPEL restrictions on control links. The result is a new and deterministic model for handling default compensation for scopes in structures where: (1)both fault handling and compensation handling are present and (2)the relationships between scopes include both structured nesting and graph---based links. --- paper_title: Web Service Composition Transaction Management paper_content: The development of new web services by composition of existing services is becoming an extensive approach. This has resulted in transactions that span in multiple web services. These business transactions may be unpredictable and long in duration. Thus they may not be acceptable to lock resources exclusively for such long period. Two-phase commit is also not suitable for transactions with some long sub-transactions. 
Compensation is a way to ensure transaction reliability. However, rolling back a previously completed transaction is potentially expensive. Thus, tentative holding is another option. This paper presents a transaction management model for web service composition. We apply the approach of tentative hold and compensation for the composite transaction. We also present a multi-dimension negotiation model for the service composition. --- paper_title: A reservation-based coordination protocol for Web services paper_content: Traditional transaction semantics are not appropriate for business activities that involve long-running transactions in a loosely-coupled distributed environment, in particular, for Web services that operate between different enterprises over the Internet. In this paper we describe a novel reservation-based extended transaction protocol that can be used to coordinate such business activities. The protocol avoids the use of compensating transactions, which can result in undesirable effects. In our protocol, each task within a business activity is executed as two steps. The first step involves an explicit reservation of resources. The second step involves the confirmation or cancellation of the reservation. Each step is executed as a separate traditional short-running transaction. We show how our protocol can be implemented as a reservation protocol on top of the Web services transaction specification or, alternatively, as a coordination protocol on top of the Web services coordination specification. --- paper_title: Delivering Promises for Web Services Applications paper_content: Among the problems facing designers of complex multi-participant Web services-based applications is dealing with the consequences of the lack of suitable isolation mechanisms. This deficiency means that concurrent applications can interfere with each other, resulting in race conditions and lost updates. This paper considers a proposed solution to this problem based on 'promises' and shows that this model can be implemented in practice. We consider implementation issues that need to be handled in promise-based systems and discuss a proof of concept prototype that supports promise-based isolation without requiring changes to existing applications and resources. --- paper_title: Decentralized data dependency analysis for concurrent process execution paper_content: This paper presents our results with the investigation of decentralized data dependency analysis among concurrently executing processes in a service-oriented environment. Distributed Process Execution Agents (PEXAs) are responsible for controlling the execution of processes that are composed of web services. PEXAs are also associated with specific distributed sites for the purpose of capturing data changes that occur at those sites in the context of service executions using Delta-Enabled Grid Services. PEXAs then exchange this information with other PEXAs to dynamically discover data dependencies that can be used to enhance recovery activities for concurrent processes that execute with relaxed isolation properties. This paper outlines the functionality of PEXAs, describing the data structures and communication mechanisms that are used to support decentralized construction of distributed process dependency graphs, demonstrating a more dynamic and intelligent approach to identifying how the failure of one process can potentially affect other concurrently executing processes. 
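The reservation-based protocol summarized above replaces long-held locks and compensating transactions with an explicit reserve step followed by a confirm or cancel step. The sketch below is a minimal local illustration of that two-step pattern; class and function names are hypothetical, and a real deployment would run each step as a short transaction over Web services coordination messages rather than local calls.

```python
# Minimal sketch of the reserve-then-confirm/cancel pattern (tentative hold).
# All names are hypothetical assumptions for illustration.

class ReservableResource:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.reserved = {}

    def reserve(self, txn_id, amount):
        if self.capacity - sum(self.reserved.values()) >= amount:
            self.reserved[txn_id] = amount
            return True
        return False

    def confirm(self, txn_id):
        self.capacity -= self.reserved.pop(txn_id, 0)

    def cancel(self, txn_id):
        self.reserved.pop(txn_id, None)

def coordinate(txn_id, requests):
    """requests: list of (resource, amount). Reserve all, then confirm or cancel."""
    granted = []
    for resource, amount in requests:
        if resource.reserve(txn_id, amount):
            granted.append(resource)
        else:
            for r in granted:            # a reservation failed: release the rest
                r.cancel(txn_id)
            return False
    for r in granted:
        r.confirm(txn_id)
    return True

seats = ReservableResource("seats", 2)
rooms = ReservableResource("rooms", 1)
print(coordinate("t1", [(seats, 1), (rooms, 1)]))   # True
print(coordinate("t2", [(seats, 1), (rooms, 1)]))   # False: rooms exhausted, seat released
```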
--- paper_title: Using Rules and Data Dependencies for the Recovery of Concurrent Processes in a Service-Oriented Environment paper_content: This paper presents a recovery algorithm for service execution failure in the context of concurrent process execution. The recovery algorithm was specifically designed to support a rule-based approach to user-defined correctness in execution environments that support a relaxed form of isolation for service execution. Data dependencies are analyzed from data changes that are extracted from database transaction log files and generated as a stream of deltas from Delta-Enabled Grid Services. The deltas are merged by time stamp to create a global schedule of data changes that, together with the process execution context, are used to identify processes that are read and write dependent on failed processes. Process interference rules are used to express semantic conditions that determine if a process that is dependent on a failed process should recover or continue execution. The recovery algorithm integrates a service composition model that supports nested processes, compensation, contingency, and rollback procedures with the data dependency analysis process and rule execution procedure to provide a new approach for addressing consistency among concurrent processes that access shared data. We present the recovery algorithm and also discuss our results with simulation and evaluation of the concurrent process recovery algorithm. --- paper_title: Using deltas to analyze data dependencies and semantic correctness in the recovery of concurrent process execution paper_content: This research has developed an approach for analyzing data dependencies in a distributed environment, providing a rule-based mechanism to support semantic correctness in the recovery of concurrently executing processes over Grid Services. Delta-Enabled Grid Services are used to capture incremental data changes, known as deltas, from processes that execute over distributed services. Deltas are forwarded to a Process History Capture System (PHCS) that constructs a global process execution history to support the analysis of data dependencies when process failure occurs. An abstract execution model has been developed that is composed of three sub-models: (1) a service composition and recovery model defining the hierarchical composition structure with recovery features for backward recovery and forward execution; (2) a process dependency model defining and analyzing read and write dependencies among concurrently executing processes; and, (3) a rule-based model that uses process interference rules to specify how failure recovery of one process can potentially affect other process execution based on application semantics. A Process Recovery System (PRS) implements the recovery algorithms associated with the abstract execution model. A simulation framework has been developed to demonstrate the functionality of the PHCS and PRS for concurrent process recovery and to conduct performance evaluation on the PHCS and PRS. The results of this research support relaxed isolation and application-dependent semantic correctness for concurrent process execution, with a unique approach to resolving the impact of process failure and recovery on other concurrently executing processes, using data dependencies derived from distributed, autonomous services. --- paper_title: Concurrency control issues in Grid databases paper_content: Grid architecture is a fast evolving distributed computing architecture. 
The working of databases in the Grid architecture is not well understood. In view of changing distributed architecture we strongly feel that concurrency control issues should be revisited and reassessed for this new and evolving architecture. Implementing global lock table and global log records may not be practically possible in the Grid architecture due to the scalability issues. In this paper, we propose a correctness criterion and the Grid concurrency control protocol, which has the capability to deal with heterogeneity, autonomy, distribution and high volume of data in Grids. We then prove the correctness of the protocol followed by performance evaluation of the protocol. --- paper_title: High-Performance Parallel Database Processing and Grid Databases paper_content: This book targets the theoretical/conceptual details needed to form a base of understanding and then delivers information on development, implementations, and analytical modeling of parallel databases. It includes key information on new developments with grid databases. Also uses a theoretical and practical balance to support in-depth study of parallel query processing offered by modern DBMS as well as hands on experience of parallel query algorithms development, implementation, and analysis. --- paper_title: Monitoring Data Dependencies in Concurrent Process Execution through Delta-Enabled Grid Services paper_content: This paper presents our results with monitoring data dependencies among concurrently executing, distributed processes that execute over grid services. The research has been conducted in the context of the DeltaGrid project, focusing on the development of a semantically robust execution environment for the composition of grid services. Delta-Enabled Grid Services (DEGS) are a foundational aspect of the DeltaGrid environment, extending grid services with the capability of recording incremental data changes, known as deltas. Deltas generated by DEGS are forwarded to a Process History Capture System (PHCS) that organises deltas from distributed sources into a global, time-sequenced schedule of data changes. The design and construction of DEGS is presented, along with the storage and indexing techniques for merging deltas from multiple DEGS to create a global schedule of data changes that can be analysed to determine how the failure and recovery of one process can potentially affect other data-dependent processes. The paper also summarises the performance results for the global history construction and retrieval process. --- paper_title: Process Dependencies and Process Interference Rules for Analyzing the Impact of Failure in a Service Composition Environment paper_content: This paper presents a process dependency model for dynamically analyzing data dependencies among concurrently executing processes in an autonomous, distributed service composition environment. Data dependencies are derived from incremental data changes captured at each service execution site. Deltas are then used within a rule-based recovery model to specify how failure recovery of one process can potentially affect another process execution based on application semantics. This research supports relaxed isolation and application-dependent semantic correctness for concurrent process execution, with a unique approach to resolving the impact of process failure recovery on other processes, using data dependencies that are dynamically derived from distributed, autonomous services. 
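Several of the abstracts above derive read and write dependencies from a merged, time-ordered schedule of deltas. The following sketch shows one simple way such dependencies could be extracted; the delta format and the dependency rule are illustrative assumptions rather than the papers' exact definitions, and in the published approach a process interference rule would then decide whether the dependent processes recover or continue.

```python
# Hedged sketch: derive processes that depend on a failed process from a
# global, time-ordered schedule of deltas. Format is an assumption.

# Each delta: (timestamp, process_id, operation, data_item)
schedule = [
    (1, "P1", "write", "order42"),
    (2, "P2", "read",  "order42"),
    (3, "P2", "write", "invoice7"),
    (4, "P3", "write", "order42"),
]

def dependencies_on(failed_process, schedule):
    """Processes that later read or overwrote items written by failed_process."""
    dependent = set()
    written = set()                      # items written by the failed process so far
    for _, pid, op, item in sorted(schedule):
        if pid == failed_process and op == "write":
            written.add(item)
        elif pid != failed_process and item in written:
            dependent.add(pid)           # read- or write-dependent on failed_process
    return dependent

print(dependencies_on("P1", schedule))   # {'P2', 'P3'}
```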
--- paper_title: The dynamics of process modeling: new directions for the use of events and rules in service-oriented computing paper_content: The introduction of service-oriented computing has created a more dynamic environment for the composition of software applications, where processes are affected by events and data changes and also pose data consistency issues that must be considered in application design and development. This chapter addresses the need to develop a more effective means to model the dynamic aspects of processes in contemporary, distributed applications, especially in the context of concurrently executing processes that access shared data and cannot enforce traditional transaction properties. After an assessment of current tools for process modeling, we outline four approaches for the use of events and rules to support dynamic behavior associated with constraint checking, exception handling, and recovery. The specific techniques include the use of integration rules, assurance points, application exception rules, and invariants. The chapter concludes with a discussion of future research directions for the integrated modeling of events, rules, and processes. --- paper_title: Using Rules and Data Dependencies for the Recovery of Concurrent Processes in a Service-Oriented Environment paper_content: This paper presents a recovery algorithm for service execution failure in the context of concurrent process execution. The recovery algorithm was specifically designed to support a rule-based approach to user-defined correctness in execution environments that support a relaxed form of isolation for service execution. Data dependencies are analyzed from data changes that are extracted from database transaction log files and generated as a stream of deltas from Delta-Enabled Grid Services. The deltas are merged by time stamp to create a global schedule of data changes that, together with the process execution context, are used to identify processes that are read and write dependent on failed processes. Process interference rules are used to express semantic conditions that determine if a process that is dependent on a failed process should recover or continue execution. The recovery algorithm integrates a service composition model that supports nested processes, compensation, contingency, and rollback procedures with the data dependency analysis process and rule execution procedure to provide a new approach for addressing consistency among concurrent processes that access shared data. We present the recovery algorithm and also discuss our results with simulation and evaluation of the concurrent process recovery algorithm. --- paper_title: Using deltas to analyze data dependencies and semantic correctness in the recovery of concurrent process execution paper_content: This research has developed an approach for analyzing data dependencies in a distributed environment, providing a rule-based mechanism to support semantic correctness in the recovery of concurrently executing processes over Grid Services. Delta-Enabled Grid Services are used to capture incremental data changes, known as deltas, from processes that execute over distributed services. Deltas are forwarded to a Process History Capture System (PHCS) that constructs a global process execution history to support the analysis of data dependencies when process failure occurs. 
An abstract execution model has been developed that is composed of three sub-models: (1) a service composition and recovery model defining the hierarchical composition structure with recovery features for backward recovery and forward execution; (2) a process dependency model defining and analyzing read and write dependencies among concurrently executing processes; and, (3) a rule-based model that uses process interference rules to specify how failure recovery of one process can potentially affect other process execution based on application semantics. A Process Recovery System (PRS) implements the recovery algorithms associated with the abstract execution model. A simulation framework has been developed to demonstrate the functionality of the PHCS and PRS for concurrent process recovery and to conduct performance evaluation on the PHCS and PRS. The results of this research support relaxed isolation and application-dependent semantic correctness for concurrent process execution, with a unique approach to resolving the impact of process failure and recovery on other concurrently executing processes, using data dependencies derived from distributed, autonomous services. --- paper_title: Supporting data consistency in concurrent process execution with assurance points and invariants paper_content: This research has developed the concept of invariant rules for monitoring data in a service-oriented environment that allows concurrent data accessibility with relaxed isolation. The invariant rule approach is an extension of the assurance point concept, where an assurance point is a logical and physical checkpoint that is used to store critical data values and to check pre and post conditions related to service execution. Invariant rules provide a stronger way of monitoring constraints and guaranteeing that a condition holds for a specific duration of execution as defined by starting and ending assurance points, using the change notification capabilities of Delta-Enabled Grid Services. This paper outlines the specification of invariant rules as well as the invariant monitoring system for activating invariants, evaluating invariant rule conditions, and deactivating invariants. The system is supported by an invariant evaluation web service that uses materialized views for more efficient re-evaluation of invariant rule conditions. The research includes a performance analysis of the invariant evaluation Web Service. The strength of the invariant rule technique is that it provides a way to monitor data consistency in an environment where the coordinated locking of data items across multiple service executions is not possible. --- paper_title: Monitoring Data Dependencies in Concurrent Process Execution through Delta-Enabled Grid Services paper_content: This paper presents our results with monitoring data dependencies among concurrently executing, distributed processes that execute over grid services. The research has been conducted in the context of the DeltaGrid project, focusing on the development of a semantically robust execution environment for the composition of grid services. Delta-Enabled Grid Services (DEGS) are a foundational aspect of the DeltaGrid environment, extending grid services with the capability of recording incremental data changes, known as deltas. Deltas generated by DEGS are forwarded to a Process History Capture System (PHCS) that organises deltas from distributed sources into a global, time-sequenced schedule of data changes. 
The design and construction of DEGS is presented, along with the storage and indexing techniques for merging deltas from multiple DEGS to create a global schedule of data changes that can be analysed to determine how the failure and recovery of one process can potentially affect other data-dependent processes. The paper also summarises the performance results for the global history construction and retrieval process. --- paper_title: Transparent Fault Tolerance for Web Services Based Architectures paper_content: Service-based architectures enable the development of new classes of Grid and distributed applications. One of the main capabilities provided by such systems is the dynamic and flexible integration of services, according to which services are allowed to be a part of more than one distributed system and simultaneously serve different applications. This increased flexibility in system composition makes it difficult to address classical distributed system issues such as fault-tolerance. While it is relatively easy to make an individual service fault-tolerant, improving fault-tolerance of services collaborating in multiple application scenarios is a challenging task. In this paper, we look at the issue of developing fault-tolerant service-based distributed systems, and propose an infrastructure to implement fault tolerance capabilities transparent to services. --- paper_title: The DeltaGrid Service Composition and Recovery Model paper_content: This research has defined an abstract execution model for establishing user-defined correctness and recovery in a service composition environment. The service composition model defines a flexible, hierarchical service composition structure, where a service is composed of atomic and/or composite groups. The model provides multi-level protection against service execution failure by using compensation and contingency at different composition granularity levels, thus maximizing the potential for forward recovery of a process when failure occurs. The recovery procedures also include rollback as a recovery option, where incremental data changes known as deltas are extracted from service executions and externalized by streaming data changes to a Process History Capture System. Deltas can then be used to backward recover an operation through a process known as Delta-Enabled Rollback. This article defines the semantics of the service composition model and the manner in which compensation, contingency, and Delta-Enabled-rollback are used together to recover process execution. The authors also present a case study and describe a simulation and evaluation framework for demonstrating the functionality of the recovery algorithm and for evaluating the performance of the recovery command generation process. --- paper_title: Checkpointing for workflow recovery paper_content: Workflow technology targets supporting reliable and scaleable execution, for workflow management systems (WfMS) to support large-scale multi-system applications, involving both humans and legacy systems, in distributed and often heterogeneous environments. In case of failures, workflow processes usually need to resume their executions from one of their saved states, called a checkpoint, achieved by saving the states from time to time persistently. The activity of restoring a checkpoint and resuming the execution from the checkpoint is called rollback. Those techniques have long been used in database systems. 
A checkpoint is an action consistent checkpoint if it represents a state between complete update operations. A consistent state in the database domain is a state when no update transactions were active. This checkpoint representing a consistent state is a transaction consistent checkpoint. A checkpoint does not need to satisfy any consistency constraints. But recovery after failure must always guarantee that the resultant state is transaction consistent even though any checkpoint used may not be. A checkpoint can be either local or global. A local checkpoint is a checkpoint taken locally, with or without cooperation with any other local checkpointing activities at different sites. A local checkpoint can be a fuzzy or consistent checkpoint. During global reconstruction, a set of local checkpoints, usually taken at different site, will be used to find global consistent state. To facilitate the global reconstruction, a global checkpoint, derived from a set of local checkpoints taken at different site, provides a rollback boundary, thus reducing the recovery time --- paper_title: The Assurance Point Model for Consistency and Recovery in Service Composition paper_content: This research has defined an abstract execution model for establishing user-defined correctness and recovery in a service composition environment. The service composition model defines a hierarchical service composition structure, where a service is composed of atomic and/or composite groups. The model provides multi-level protection against service execution failure by using compensation and contingency at different composition granularity levels. The model is enhanced with the concept of assurance points (APS) and integration rules, where APs serve as logical and physical checkpoints for user-defined consistency checking, invoking integration rules that check pre and post conditions at different points in the execution process. The unique aspect of APs is that they provide intermediate rollback points when failures occur, thus allowing a process to be compensated to a specific AP for the purpose of rechecking pre-conditions before retry attempts. APs also support a dynamic backward recovery process, known as cascaded contingency, for hierarchically nested processes in an attempt to recover to a previous AP that can be used to invoke contingent procedures or alternate execution paths for failure of a nested process. As a result, the assurance point approach provides flexibility with respect to the combined use of backward and forward recovery options. Petri Nets have been used to define the semantics of the assurance point approach to service composition and recovery. A comparison to the BPEL fault handler is also provided. --- paper_title: Achieving recovery in service composition with assurance points and integration rules paper_content: This paper defines the concept of Assurance Points (APs) together with the use of integration rules to provide a flexible way of checking constraints and responding to execution errors in service composition. An AP is a combined logical and physical checkpoint, providing an execution milestone that stores critical data and interacts with integration rules to alter program flow and to invoke different forms of recovery depending on the execution status. During normal execution, APs invoke rules that check pre-conditions, postconditions, and other application rules. When execution errors occur, APs are also used as rollback points. 
Integration rules can invoke backward recovery to specific APs using compensation as well as forward recovery through rechecking of preconditions before retry attempts or through execution of contingencies and alternative execution paths. APs together with integration rules provide an increased level of consistency checking as well as backward and forward recovery actions. --- paper_title: Process Dependencies and Process Interference Rules for Analyzing the Impact of Failure in a Service Composition Environment paper_content: This paper presents a process dependency model for dynamically analyzing data dependencies among concurrently executing processes in an autonomous, distributed service composition environment. Data dependencies are derived from incremental data changes captured at each service execution site. Deltas are then used within a rule-based recovery model to specify how failure recovery of one process can potentially affect another process execution based on application semantics. This research supports relaxed isolation and application-dependent semantic correctness for concurrent process execution, with a unique approach to resolving the impact of process failure recovery on other processes, using data dependencies that are dynamically derived from distributed, autonomous services. --- paper_title: An Overview of AspectJ paper_content: Aspect] is a simple and practical aspect-oriented extension to Java With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns. In AspectJ's dynamic join point model, join points are well-defined points in the execution of the program; pointcuts are collections of join points; advice are special method-like constructs that can be attached to pointcuts; and aspects are modular units of crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse the crosscutting structure of aspects in the same kind of way as one browses the inheritance structure of classes. Several examples show that AspectJ is powerful, and that programs written using it are easy to understand. --- paper_title: Aspect-Oriented Workflow Languages paper_content: Most available aspect-oriented languages today are extensions to programming languages However, aspect-orientation, which is a paradigm for decomposition and modularization, is not only applicable in that context In this paper, we introduce aspect-oriented software development concepts to workflow languages in order to improve the modularity of workflow process specifications with respect to crosscutting concerns and crosscutting changes In fact, crosscutting concerns such as data validation and security cannot be captured in a modular way when using the constructs provided by current workflow languages We will propose a concern-based decomposition of workflow process specifications and present the main concepts of aspect-oriented workflow languages using AO4BPEL, which is an aspect-oriented workflow language for Web Service composition. --- paper_title: AO4BPEL: An Aspect-oriented Extension to BPEL paper_content: Process-oriented composition languages such as BPEL allow Web Services to be composed into more sophisticated services using a workflow process. However, such languages exhibit some limitations with respect to modularity and flexibility. 
They do not provide means for a well-modularized specification of crosscutting concerns such as logging, persistence, auditing, and security. They also do not support the dynamic adaptation of composition at runtime. In this paper, we advocate an aspect-oriented approach to Web Service composition and present the design and implementation of AO4BPEL, an aspect-oriented extension to BPEL. We illustrate through examples how AO4BPEL makes the composition specification more modular and the composition itself more flexible and adaptable. --- paper_title: Web services orchestration and choreography paper_content: Combining Web services to create higher level, cross-organizational business processes requires standards to model the interactions. Several standards are working their way through industry channels and into vendor products. --- paper_title: The Assurance Point Model for Consistency and Recovery in Service Composition paper_content: This research has defined an abstract execution model for establishing user-defined correctness and recovery in a service composition environment. The service composition model defines a hierarchical service composition structure, where a service is composed of atomic and/or composite groups. The model provides multi-level protection against service execution failure by using compensation and contingency at different composition granularity levels. The model is enhanced with the concept of assurance points (APS) and integration rules, where APs serve as logical and physical checkpoints for user-defined consistency checking, invoking integration rules that check pre and post conditions at different points in the execution process. The unique aspect of APs is that they provide intermediate rollback points when failures occur, thus allowing a process to be compensated to a specific AP for the purpose of rechecking pre-conditions before retry attempts. APs also support a dynamic backward recovery process, known as cascaded contingency, for hierarchically nested processes in an attempt to recover to a previous AP that can be used to invoke contingent procedures or alternate execution paths for failure of a nested process. As a result, the assurance point approach provides flexibility with respect to the combined use of backward and forward recovery options. Petri Nets have been used to define the semantics of the assurance point approach to service composition and recovery. A comparison to the BPEL fault handler is also provided. --- paper_title: Transparent Fault Tolerance for Web Services Based Architectures paper_content: Service-based architectures enable the development of new classes of Grid and distributed applications. One of the main capabilities provided by such systems is the dynamic and flexible integration of services, according to which services are allowed to be a part of more than one distributed system and simultaneously serve different applications. This increased flexibility in system composition makes it difficult to address classical distributed system issues such as fault-tolerance. While it is relatively easy to make an individual service fault-tolerant, improving fault-tolerance of services collaborating in multiple application scenarios is a challenging task. In this paper, we look at the issue of developing fault-tolerant service-based distributed systems, and propose an infrastructure to implement fault tolerance capabilities transparent to services. 
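The AspectJ and AO4BPEL abstracts above advocate factoring crosscutting concerns such as logging and auditing out of the core composition logic. The fragment below is only a loose Python analogy of that idea, using a decorator as the "advice" wrapped around a service invocation; it is not AO4BPEL syntax, and all names are hypothetical.

```python
# Loose analogy of aspect-oriented modularization: the logging concern lives
# in one place (the decorator) instead of being scattered through the
# composition logic. Names are hypothetical.

import functools
import time

def logged(invoke):                       # the crosscutting "advice"
    @functools.wraps(invoke)
    def wrapper(service, payload):
        start = time.time()
        print(f"calling {service} with {payload!r}")
        try:
            return invoke(service, payload)
        finally:
            print(f"{service} finished in {time.time() - start:.3f}s")
    return wrapper

@logged
def invoke_service(service, payload):     # the core composition step
    return {"service": service, "status": "ok", "echo": payload}

invoke_service("QuoteService", {"item": "book"})
```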
--- paper_title: The Assurance Point Model for Consistency and Recovery in Service Composition paper_content: This research has defined an abstract execution model for establishing user-defined correctness and recovery in a service composition environment. The service composition model defines a hierarchical service composition structure, where a service is composed of atomic and/or composite groups. The model provides multi-level protection against service execution failure by using compensation and contingency at different composition granularity levels. The model is enhanced with the concept of assurance points (APS) and integration rules, where APs serve as logical and physical checkpoints for user-defined consistency checking, invoking integration rules that check pre and post conditions at different points in the execution process. The unique aspect of APs is that they provide intermediate rollback points when failures occur, thus allowing a process to be compensated to a specific AP for the purpose of rechecking pre-conditions before retry attempts. APs also support a dynamic backward recovery process, known as cascaded contingency, for hierarchically nested processes in an attempt to recover to a previous AP that can be used to invoke contingent procedures or alternate execution paths for failure of a nested process. As a result, the assurance point approach provides flexibility with respect to the combined use of backward and forward recovery options. Petri Nets have been used to define the semantics of the assurance point approach to service composition and recovery. A comparison to the BPEL fault handler is also provided. --- paper_title: Handling faults in decentralized orchestration of composite web services paper_content: Composite web services can be orchestrated in a decentralized manner by breaking down the original service specification into a set of partitions and executing them on a distributed infrastructure. The infrastructure consists of multiple service engines communicating with each other over asynchronous messaging. Decentralized orchestration yields performance benefits by exploiting concurrency and reducing the data on the network. Further, decentralized orchestration may be necessary to orchestrate certain composite web services due to privacy and data flow constraints. However, decentralized orchestration also results in additional complexity due to absence of a centralized global state, and overlapping or different life cycles of the various partitions. This makes handling of faults arising from composite service partitions or from the failure of component web services, a challenging task. In this paper we propose a mechanism for handling faults in decentralized orchestration of composite web services. The mechanism includes a strategy for placement of fault handlers and compensation handlers, and schemes for fault propagation and fault recovery. The mechanism is designed to maintain the semantics of the original specification while ensuring minimal overheads. --- paper_title: Automated discovery, interaction and composition of Semantic Web Services paper_content: In this paper we introduce a vision for Semantic Web Services, which combine the growing Web services architecture and the Semantic Web and we will propose DAML-S as a prototypical example of an ontology for describing Semantic Web services. 
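To make the assurance point idea in the abstract above more concrete, the sketch below models a checkpoint that stores critical data and evaluates condition-action rules that can request a recovery action such as compensation back to an earlier point. The names and the rule format are hypothetical simplifications; the published model ties these rules to compensation, retry, and cascaded contingency inside a hierarchically composed process.

```python
# Simplified, hypothetical sketch of a logical checkpoint with pre/post
# condition rules that can trigger a recovery action.

class AssurancePoint:
    def __init__(self, name, snapshot):
        self.name = name
        self.snapshot = dict(snapshot)    # critical data stored at the checkpoint
        self.rules = []                   # (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def check(self):
        for condition, action in self.rules:
            if not condition(self.snapshot):
                return action(self.snapshot)   # e.g. a retry or compensation request
        return "continue"

ap = AssurancePoint("after-payment", {"balance": -20, "order_state": "paid"})
ap.add_rule(lambda s: s["balance"] >= 0,
            lambda s: "compensate-to:before-payment")
print(ap.check())      # compensate-to:before-payment
```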
--- paper_title: The Assurance Point Model for Consistency and Recovery in Service Composition paper_content: This research has defined an abstract execution model for establishing user-defined correctness and recovery in a service composition environment. The service composition model defines a hierarchical service composition structure, where a service is composed of atomic and/or composite groups. The model provides multi-level protection against service execution failure by using compensation and contingency at different composition granularity levels. The model is enhanced with the concept of assurance points (APS) and integration rules, where APs serve as logical and physical checkpoints for user-defined consistency checking, invoking integration rules that check pre and post conditions at different points in the execution process. The unique aspect of APs is that they provide intermediate rollback points when failures occur, thus allowing a process to be compensated to a specific AP for the purpose of rechecking pre-conditions before retry attempts. APs also support a dynamic backward recovery process, known as cascaded contingency, for hierarchically nested processes in an attempt to recover to a previous AP that can be used to invoke contingent procedures or alternate execution paths for failure of a nested process. As a result, the assurance point approach provides flexibility with respect to the combined use of backward and forward recovery options. Petri Nets have been used to define the semantics of the assurance point approach to service composition and recovery. A comparison to the BPEL fault handler is also provided. --- paper_title: LTSA-WS: a tool for model-based verification of web service compositions and choreography paper_content: In this paper we describe a tool for a model-based approach to verifying compositions of web service implementations. The tool supports verification of properties created from design specifications and implementation models to confirm expected results from the viewpoints of both the designer and implementer. Scenarios are modeled in UML, in the form of Message Sequence Charts (MSCs), and then compiled into the Finite State Process (FSP) process algebra to concisely model the required behavior. BPEL4WS implementations are mechanically translated to FSP to allow an equivalence trace verification process to be performed. By providing early design verification and validation, the implementation, testing and deployment of web service compositions can be eased through the understanding of the behavior exhibited by the composition. The approach is implemented as a plug-in for the Eclipse development environment providing cooperating tools for specification, formal modeling, verification and validation of the composition process. --- paper_title: Enhancing BPEL scenarios with Dynamic Relevance-Based Exception Handling paper_content: Web services have become the key technology in business processes management. Business processes can be self-contained or be composed from sub-processes; the latter category is typically specified using the Web services business process execution language (WS-BPEL) and executed by a Web services orchestrator (WSO). During the execution however of such a composite service, a number of faults stemming from the distributed nature of the SOA architecture, e.g. network or server failures may occur. 
WS-BPEL includes provisions for exception handling, which can be exploited for detecting such failures; once detected, a failure can be resolved by invoking alternate Web service implementations that perform the same business task as the failed one. However, the inclusion of such provisions is a tedious assignment for the business process designer, while additional effort would be required to maintain the BPEL scenarios in cases where some alternate WS implementations cease to exist or new ones are introduced. In our research we are developing a framework for automating the handling of such exceptions. The proposed solution employs a pre-processor that enhances BPEL scenarios with code that detects failures, discovers alternate WS implementations and invokes them, thus fully resolving the exception. Alternate WS implementation discovery is based on service relevance, which takes into account both functional and qualitative properties of Web services. --- paper_title: Periodic Checkpointing for Strong Mobility of Orchestrated Web Services paper_content: Web service composition allows a fast and modular creation of applications by orchestrating several Web services. Such applications are frequently faced with performance and availability problems which may affect the partner Web services or the orchestration process itself. This requires mechanisms for adapting the architecture and the behaviour to this variable context. In this paper, we deal with strong mobility of orchestration processes as a mechanism for adaptation. We provide a solution that relies on checkpoint/rollback mechanisms. It is also based on source code transformation of the orchestration process. We apply our approach to WS-BPEL based orchestration processes. Hence, we establish a set of rules which transform WS-BPEL processes to equivalent mobile ones. When an adaptation is to be performed, the execution of some or all instances of a mobile process will be interrupted, and then they will be migrated to another node. After migration, the interrupted instances will resume starting from the last checkpoint. Experimental results show the efficiency of our approach and the low overhead it introduces. --- paper_title: Revisiting the Behavior of Fault and Compensation Handlers in WS-BPEL paper_content: When automating work, it is often desirable to compensate completed work by undoing the work done by one or more activities. In the context of workflow, where compensation actions are defined on nested 'scopes' that group activities, this requires a model of nested compensation-based transactions. The model must enable the automatic determination of compensation order by considering not only the nesting of scopes but also the control dependencies between them. The current standard for Web services workflows, Business Process Execution Language for Web Services (WS-BPEL), has such compensation capabilities. In this paper, we show that the current mechanism in WS-BPEL exhibits compensation processing anomalies, such as neglecting control link dependencies between nested non-isolated scopes. We then propose an alternate approach that, through elimination of default handlers as well as the complete elimination of termination handlers, not only removes those anomalies but also relaxes current WS-BPEL restrictions on control links.
The result is a new and deterministic model for handling default compensation for scopes in structures where: (1) both fault handling and compensation handling are present and (2) the relationships between scopes include both structured nesting and graph-based links. --- paper_title: The Consistency of Web Conversations paper_content: We describe BPELCheck, a tool for statically analyzing interactions of composite Web services implemented in BPEL. Our algorithm is compositional, and checks each process interacting with an abstraction of its peers, without constructing the product state space. Interactions between pairs of peer processes are modeled using conversation automata which encode the set of valid message exchange sequences between the two processes. A process is consistent if each possible conversation leaves its peer automata in a state labeled as consistent and the overall execution satisfies a user-specified predicate on the automata states. We have implemented BPELCheck in the Enterprise Service Pack of the NetBeans development environment. Our tool handles the major syntactic constructs of BPEL, including sequential and parallel composition, exception handling, flows, and Boolean state variables. We have used BPELCheck to check conversational consistency for a set of BPEL processes, including an industrial example. --- paper_title: Self-healing BPEL processes with Dynamo and the JBoss rule engine paper_content: Many emerging domains such as ambient intelligence, context-aware applications, and pervasive computing are embracing the assumption that their software applications will be deployed in an open world. By adopting the Service Oriented Architecture paradigm, and in particular its Web service based implementation, they are capable of leveraging components that are remote and not under their jurisdiction, i.e., services. However, the distributed nature of these systems, the presence of many stakeholders, and the fact that no one has a complete knowledge of the system preclude classic static verification techniques. The capability to "self-heal" has become paramount. In this paper we present our solution to self-healing BPEL compositions called Dynamo. It is an assertion-based solution that provides special-purpose languages (WSCoL and WSReL) for defining monitoring and recovery activities. These are executed using Dynamo, which consists of an AOP-extended version of the ActiveBPEL orchestration engine, and which leverages the JBoss Rule Engine to ensure self-healing capabilities. The approach is exemplified on a complex case study. --- paper_title: The Pi-Calculus: A Theory of Mobile Processes paper_content: Mobile systems, whose components communicate and change their structure, now pervade the informational world and the wider world of which it is a part. The science of mobile systems is as yet immature, however. This book presents the pi-calculus, a theory of mobile systems. The pi-calculus provides a conceptual framework for understanding mobility, and mathematical tools for expressing systems and reasoning about their behaviors. The book serves both as a reference for the theory and as an extended demonstration of how to use pi-calculus to describe systems and analyze their properties. It covers the basic theory of pi-calculus, typed pi-calculi, higher-order processes, the relationship between pi-calculus and lambda-calculus, and applications of pi-calculus to object-oriented design and programming.
The book is written at the graduate level, assuming no prior acquaintance with the subject, and is intended for computer scientists interested in mobile systems. --- paper_title: A bidirectional heuristic search technique for web service composition paper_content: Automatic web services composition has recently received considerable attention from researchers in different fields. In this paper we propose a model based on a web service dependency graph and a bidirectional heuristic search algorithm to find composite web services. The proposed algorithm is based on a new domain-independent heuristic. Experiments on different types of dependency graphs of varying sizes and number of web services show promising results for the service composition model when compared to state-of-the-art search algorithms. The proposed dependency graph based composition model is, however, not limited to traditional web services but it can be extended to more general frameworks of collective systems where a global intelligent behavior emerges from a plurality of agents that interact by composing different actions, services, or resources. --- paper_title: WSAT: A Tool for Formal Analysis of Web Services paper_content: This paper presents Web Service Analysis Tool (WSAT), a tool for analyzing and verifying composite web service designs, with state-of-the-art model checking techniques. Web services are loosely coupled distributed systems communicating via XML messages. Communication among web services is asynchronous, and it is supported by messaging platforms such as JMS which provide FIFO queues to store incoming messages. Data transmission among web services is standardized via XML, and the specification of the web service itself (invocation interface and behavior signature) relies on a stack of XML based standards (e.g., WSDL, BPEL4WS, WSCI, etc.). The characteristics of web services, however, raise several challenges in the application of model checking: (1) Numerous competing web service standards, most of which lack formal semantics, complicate the formal specification of web service composition. (2) Asynchronous messaging makes most interesting verification problems undecidable, even when XML message contents are abstracted away [3]. (3) XML data and expressive XPath based manipulation are not supported by current model checkers. --- paper_title: A Petri Net-based Model for Web Service Composition paper_content: The Internet is going through several major changes. It has become a vehicle of Web services rather than just a repository of information. Many organizations are putting their core business competencies on the Internet as a collection of Web services. An important challenge is to integrate them to create new value-added Web services in ways that could never be foreseen, forming what is known as Business-to-Business (B2B) services. Therefore, there is a need for modeling techniques and tools for reliable Web service composition. In this paper, we propose a Petri net-based algebra, used to model control flows, as a necessary constituent of a reliable Web service composition process. This algebra is expressive enough to capture the semantics of complex Web service combinations. --- paper_title: Transparent Fault Tolerance for Web Services Based Architectures paper_content: Service-based architectures enable the development of new classes of Grid and distributed applications.
One of the main capabilities provided by such systems is the dynamic and flexible integration of services, according to which services are allowed to be a part of more than one distributed system and simultaneously serve different applications. This increased flexibility in system composition makes it difficult to address classical distributed system issues such as fault-tolerance. While it is relatively easy to make an individual service fault-tolerant, improving fault-tolerance of services collaborating in multiple application scenarios is a challenging task. In this paper, we look at the issue of developing fault-tolerant service-based distributed systems, and propose an infrastructure to implement fault tolerance capabilities transparent to services. --- paper_title: Checkpointing for workflow recovery paper_content: Workflow technology targets supporting reliable and scaleable execution, for workflow management systems (WfMS) to support large-scale multi-system applications, involving both humans and legacy systems, in distributed and often heterogeneous environments. In case of failures, workflow processes usually need to resume their executions from one of their saved states, called a checkpoint, achieved by saving the states from time to time persistently. The activity of restoring a checkpoint and resuming the execution from the checkpoint is called rollback. Those techniques have long been used in database systems. A checkpoint is an action consistent checkpoint if it represents a state between complete update operations. A consistent state in the database domain is a state when no update transactions were active. This checkpoint representing a consistent state is a transaction consistent checkpoint. A checkpoint does not need to satisfy any consistency constraints. But recovery after failure must always guarantee that the resultant state is transaction consistent even though any checkpoint used may not be. A checkpoint can be either local or global. A local checkpoint is a checkpoint taken locally, with or without cooperation with any other local checkpointing activities at different sites. A local checkpoint can be a fuzzy or consistent checkpoint. During global reconstruction, a set of local checkpoints, usually taken at different site, will be used to find global consistent state. To facilitate the global reconstruction, a global checkpoint, derived from a set of local checkpoints taken at different site, provides a rollback boundary, thus reducing the recovery time --- paper_title: Formal modeling of BPEL workflows including fault and compensation handling paper_content: Electronically executed business processes are frequently implemented using the Business Process Execution Language (BPEL). These workflows may be in control of crucial business processes of an organization, in the same time existing model checking approaches are still immature i.e. they either seem to loose to much information during the generation of the analysis model, or the state space explosion prevents from model checking. We present a formal modeling technique for BPEL workflows including fault and compensation handling providing exact semantics with a state space size that allows for model checking. Additionally, error propagation among variables is supported so the effect of a faulty activity on the entire process can be examined. 
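The checkpointing abstract above distinguishes local and global checkpoints and describes rollback to a saved state. The following minimal sketch, with hypothetical names, saves local snapshots at selected steps and resumes from the most recent one after a simulated failure; persistence, transaction consistency, and cross-site coordination are deliberately omitted.

```python
# Minimal local-checkpoint sketch: save state at chosen steps, roll back to
# the latest checkpoint on failure. Hypothetical, in-memory illustration.

import copy

checkpoints = []

def save_checkpoint(step, state):
    checkpoints.append((step, copy.deepcopy(state)))

def restore_latest():
    return copy.deepcopy(checkpoints[-1]) if checkpoints else (0, {})

state = {"processed": []}
for step in range(1, 7):
    if step in (1, 4):                       # checkpoint at selected steps
        save_checkpoint(step, state)
    if step == 5:                            # simulated crash mid-way
        step, state = restore_latest()       # roll back to the last checkpoint
        print("resumed from step", step, "with", state)
        break
    state["processed"].append(step)
```

Running this prints that execution resumes from step 4 with the state recorded there, so the work of step 4 onward would be redone after the rollback.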
--- paper_title: A Fault Taxonomy for Web Service Composition paper_content: Web services are becoming progressively popular in the building of both inter- and intra-enterprise business processes. These processes are composed from existing Web services based on defined requirements. In collecting together the services for such a composition, developers can employ languages and standards for the Web that facilitate the automation of Web service discovery, execution, composition and interoperation. However, there is no guarantee that a composition of even very good services will always work. Mechanisms are being developed to monitor a composition and to detect and recover from faults automatically. A key factor in such self-healing is to know what faults to look for. If the nature of a fault is known, the system can suggest a suitable recovery mechanism sooner. This paper proposes a novel taxonomy that captures the possible failures that can arise in Web service composition, and classifies the faults that might cause them. The taxonomy covers physical, development and interaction faults that can cause a variety of observable failures in a system's normal operation. An important use of the taxonomy is identifying the faults that can be excluded when a failure occurs. Examples of using the taxonomy are presented. --- paper_title: Semantic web services discovery based on structural ontology matching paper_content: In this paper, we present an approach to semantic-based web service discovery and a prototypical tool based on syntactic and structural schema matching. It is based on matching an input ontology, describing a service request, to web services descriptions at the 'syntactic level' through Web Services Description Language (WSDL) or, at the semantic level, through service ontologies described with languages such as Ontology Web Language for Services (OWL-S), Web Services Modelling Ontology (WSMO), Semantic Web Services Framework (SWSF) and Web Services Description Language Semantics (WSDL-S). The different input schemas, WSDL descriptions, Ontology Web Language (OWL) ontologies, OWL-S, WSMO, SWSF and WSDL-S components are represented in a uniform way by means of directed rooted graphs, where nodes represent schema elements, connected by directed links of different types, e.g., for containment and referential relationships. On this uniform internal representation, a number of matching algorithms operate, including structural-based algorithms (Children Matcher, Leaves Matcher, Graph and SubGraph Isomorphism) and syntactical ones (Edit Distance (Levenshtein Distance or LD) and Synonym Matcher (through the WordNet synonyms thesaurus)). --- paper_title: A framework to coordinate web services in composition scenarios paper_content: This paper looks into the coordination of web services following their acceptance to participate in a composition scenario. We identify two types of behaviours associated with component web services: operational and control behaviours. These behaviours are used to specify composite web services that are built upon component web services. In term of orchestration a composite web service could be either centralised or peer-to-peer. To support component/composite web services coordination per type of orchestration schema, various types of messages are exchanged between these web services. Experiments showing the use of these messages are reported in this paper as well. ---
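The discovery abstract above combines structural matchers with syntactic ones such as Levenshtein edit distance. The snippet below is a plain textbook Levenshtein implementation applied to hypothetical operation names; the paper's matcher additionally uses WordNet synonyms and graph and subgraph isomorphism, which are not shown here.

```python
# Textbook Levenshtein edit distance used to rank candidate operation names
# against a request. The names are hypothetical examples.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

request = "getWeatherForecast"
candidates = ["GetWeatherForcast", "getStockQuote", "weatherForecastByCity"]
ranked = sorted(candidates, key=lambda c: levenshtein(request.lower(), c.lower()))
print(ranked[0])    # GetWeatherForcast
```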
Title: A Survey of Transactional Issues for Web Service Composition and Recovery
Section 1: Introduction
Description 1: Introduce the background of web services, service-oriented architecture, and the need for handling transactional issues and recovery in web service composition.
Section 2: A Historical Perspective of Advanced Transaction and Workflow Models
Description 2: Present relevant background on advanced transaction models and transactional workflows that influence current techniques for web service composition and recovery.
Section 3: Advanced Transaction Models
Description 3: Discuss various advanced transaction models such as Sagas, Nested Transaction Model, Multi-level Transaction Model, and Flexible Transaction Model.
Section 4: Transactional Workflow
Description 4: Explore the concept of transactional workflows, their properties, and techniques for achieving failure recovery in workflow environments.
Section 5: Standards for Web Service Transactions and Composition
Description 5: Summarize the standards associated with transactional issues for web services and the Business Process Execution Language for service composition and recovery.
Section 6: Web Service Specifications
Description 6: Outline relevant web service specifications, including WS-Coordination, WS-Transaction, and WS-Business Activity, and their roles in transactional support.
Section 7: Web Services Business Process Execution Language
Description 7: Describe the main concepts of WS-BPEL 2.0, including its activities, handlers, and recovery mechanisms.
Section 8: Relaxed Semantics and Locking Techniques
Description 8: Discuss relaxed locking techniques such as Tentative Hold, reservation-based approach, and Promises approach for addressing data consistency.
Section 9: Data Dependency Analysis
Description 9: Present techniques for data dependency analysis among concurrent processes and their impact on recovery procedures.
Section 10: The Assurance Point System
Description 10: Describe the Assurance Point System, including its hierarchical structure, usage of logical and physical checkpoints, and user-defined correctness conditions.
Section 11: Functional Modularization With Aspect-Oriented Techniques
Description 11: Explore the application of aspect-oriented techniques in workflow languages to provide modularization and address cross-cutting concerns.
Section 12: Failure Recovery Strategies for Web Services
Description 12: Summarize research on different failure recovery strategies for web services, such as re-do, un-do, and alternative strategies, and discuss self-healing and checkpointing mechanisms.
Section 13: Conclusion and Future Work
Description 13: Conclude the paper by summarizing the research discussed and outlining future research directions for transactional handling and recovery in web service composition.
Digital straightness: a review
7
--- paper_title: Spirograph Theory: A Framework for Calculations on Digitized Straight Lines paper_content: Using diagrams called ``spirographs'' a general theory is developed with which one can easily perform calculations on various aspects of digitized straight lines. The mathematics of the theory establishes a link between digitized straight lines and the theory of numbers (Farey series, continued fractions). To show that spirograph theory is a useful unification, we derive two previously known advanced results within the framework of the theory, and new results concerning the accuracy in position of a digitized straight line as a function of its slope and length. --- paper_title: Digital Straight Line Segments paper_content: It is shown that a digital arc S is the digitization of a straight line segment if and only if it has the "chord property:" the line segment joining any two points of S lies everywhere within distance 1 of S. This result is used to derive several regularity properties of digitizations of straight line segments. --- paper_title: Linguistic Methods for the Description of a Straight Line on a Grid paper_content: This paper describes the construction of strings representing straight lines in an arbitrary direction on a grid and compares some grammatical systems that generate these strings. An algorithm is given that constructs a string representing a straight line on a grid in Freeman's coding scheme. Some number-theoretical aspects of this algorithm are treated (Euclid's algorithm, Farey series, continued fractions). This algorithm is based on. structural properties of the string. Strings generated in this way can also be produced with a programmed grammar. Lindenmayer grammars are also very powerful for this kind of problem, because of the simultaneous applications of production rules. Another method for constructing strings representing straight lines, where the generation is essentially sequential, for instance when noise is added, is treated. In this case Lindenmayer grammars are quite useless, but programmed grammars are still very convenient. Rule-labeled programs are less convenient for both kinds of problems. --- paper_title: Discrete Representation of Straight Lines paper_content: If a continuous straight line segment is digitized on a regular grid, obviously a loss of information occurs. As a result, the discrete representation obtained (e.g., a chaincode string) can be coded more conveniently than the continuous line segment, but measurements of properties (such as line length) performed on the representation have an intrinsic inaccuracy due to the digitization process. In this paper, two fundamental properties of the quantization of straight line segments are treated. 1) It is proved that every ``straight'' chaincode string can be represented by a set of four unique integer parameters. Definitions of these parameters are given. 2) A mathematical expression is derived for the set of all continuous line segments which could have generated a given chaincode string. The relation with the chord property is briefly discussed. --- paper_title: The Discrete Equation of the Straight Line paper_content: A new method of description of a straight line defined on a square grid is presented. The discrete equation of the straight line (desl) is introduced as an extension of the classical Cartesian equation, and applies to straight lines quantized on a grid by the grid intersect method. 
The desl includes a set of intercepts which can be scanned in proper order to generate the chain for the given straight line. --- paper_title: Digital Computer Transformations for Irregular Line Drawings paper_content: Abstract : The report describes a parametric quantization scheme for irregular line drawings. With this scheme, different quantized versions of the same drawing can be obtained by changing the values of the parameters. Three figures of noise are proposed for evaluating the quality of quantized drawings and design formulae are developed for the parameters of the quantization scheme as functions of bounds on the figures of noise. The degradation of the quality of a quantized drawing resulting from a coordinate transformation and requantization is studied in terms of transformed figures of noise. Also, it is shown theoretically and by means of a number of examples how to choose the parameters of the quantization scheme in order to meet the requirements on the transformed figures of noise. This enables one to quantize a preprocessed satellite picture so that after computation of a Mercator projection, the resulting geographic map will have the required quality. The theory presented in this report is applicable to any irregular line drawing and to any transformation defined by a pair of functions that are continuous together with their partial derivatives. --- paper_title: Digital Straight Line Segments paper_content: It is shown that a digital arc S is the digitization of a straight line segment if and only if it has the "chord property:" the line segment joining any two points of S lies everywhere within distance 1 of S. This result is used to derive several regularity properties of digitizations of straight line segments. --- paper_title: Linguistic Methods for the Description of a Straight Line on a Grid paper_content: This paper describes the construction of strings representing straight lines in an arbitrary direction on a grid and compares some grammatical systems that generate these strings. An algorithm is given that constructs a string representing a straight line on a grid in Freeman's coding scheme. Some number-theoretical aspects of this algorithm are treated (Euclid's algorithm, Farey series, continued fractions). This algorithm is based on. structural properties of the string. Strings generated in this way can also be produced with a programmed grammar. Lindenmayer grammars are also very powerful for this kind of problem, because of the simultaneous applications of production rules. Another method for constructing strings representing straight lines, where the generation is essentially sequential, for instance when noise is added, is treated. In this case Lindenmayer grammars are quite useless, but programmed grammars are still very convenient. Rule-labeled programs are less convenient for both kinds of problems. --- paper_title: Languages of encoded line patterns paper_content: By treating patterns as statements in a two-dimensional language, it is possible to apply linguistic theory to pattern analysis and recognition. In this paper, line patterns are encoded into string form using the chain code developed by Freeman. A class of patterns, or pattern language, encodes to a set of strings that is examined using theory that exists for string languages and automata. Pattern languages formed on the basis of equations in two variables and various pattern properties are related to the hierarchy of string language classes. 
The known relationships between classes of string languages and classes of automata can then be applied to determine bounds on the time and memory required to recognize the various patterns. Results can be extended to other forms of pattern encoding provided that a suitable translator can be constructed. --- paper_title: Regular Arcs in Digital Contours paper_content: The concept of regularity plays an important role in the study of the arcs of a digital contour. In this paper, the structure of regular digital arcs is investigated and, once given a general definition of straightness for digital contour arcs, a necessary and sufficient condition for the straightness of regular arcs is given. This condition only depends on parameters characterizing the macrodescription of the regular digital arc; its equivalence with the chord property is also shown. The knowledge of the structure of regular digital arcs can be a good starting point for the study of nonregular digital arcs, so that it could be possible to formulate a general algorithm to detect both concavity and convexity. --- paper_title: A simple proof of Rosenfeld's characterization of digital straight line segments paper_content: A digital straight line segment is defined as the grid-intersect quantization of a straight line segment in the plane. Let S be a set of pixels on a square grid. Rosenfeld [8] showed that S is a digital straight line segment if and only if it is a digital arc having the chord property. Then Kim and Rosenfeld [3,6] showed that S has the chord property if and only if for every p, q ∈ S there is a digital straight line segment C ⊆ S such that p and q are the extremities of C. We give a simple proof of these two results based on the Transversal Theorem of Santalo. We show how the underlying methodology can be generalized to the case of (infinite) digital straight lines and to the quantization of hyperplanes in an n-dimensional space for n >= 3. --- paper_title: A compact chord property for digital arcs paper_content: Rosenfeld (IEEE Trans. Comput. C-23 (12), 1264–1269 (1974)) defined the chord property and proved that a digital arc is a digital straight segment if and only if it satisfies the chord property. A new property is defined, which we call the compact chord property, and the two properties are proved to be equivalent. The compact chord property offers a useful alternative for testing a digital arc for straightness by exploiting the notion of visibility in computational geometry. --- paper_title: Digital Straight Line Segments paper_content: It is shown that a digital arc S is the digitization of a straight line segment if and only if it has the "chord property:" the line segment joining any two points of S lies everywhere within distance 1 of S. This result is used to derive several regularity properties of digitizations of straight line segments. --- paper_title: On the parallel generation of straight digital lines paper_content: Several procedures have been devised to generate straight digital lines; among others an algorithm has been proposed in the literature which accomplishes the generation in a parallel fashion. The main purpose of this paper is to show that the constructed digital line possesses the chord property, and therefore is really straight. To reach this goal the geometrical properties of the generated digital line have been investigated and advantage has been taken of a theory developed in the past to study regular arcs of digital contours.
A comparison between the performances of the parallel algorithm and of a sequential one, showing the convenience of the parallel approach, is also included. --- paper_title: Spirograph Theory: A Framework for Calculations on Digitized Straight Lines paper_content: Using diagrams called ``spirographs'' a general theory is developed with which one can easily perform calculations on various aspects of digitized straight lines. The mathematics of the theory establishes a link between digitized straight lines and the theory of numbers (Farey series, continued fractions). To show that spirograph theory is a useful unification, we derive two previously known advanced results within the framework of the theory, and new results concerning the accuracy in position of a digitized straight line as a function of its slope and length. --- paper_title: Discrete images, objects, and functions in Z[n] paper_content: Content.- 1 Neighborhood Structures.- 1.1 Finite Graphs.- 1.1.1 Historical Remarks.- 1.1.2 Elementary Theory of Sets and Relations.- 1.1.3 Elementary Graph Theory.- 1.2 Neighborhood Graphs.- 1.2.1 Graph Theory and Image Processing.- 1.2.2 Points, Edges, Paths, and Regions.- 1.2.3 Matrices of Adjacency.- 1.2.4 Graph Distances.- 1.3 Components in Neighborhood Structures.- 1.3.1 Search in Graphs and Labyrinths.- 1.3.2 Neighborhood Search.- 1.3.3 Graph Search in Images.- 1.3.4 Neighbored Sets and Separated Sets.- 1.3.5 Component Labeling.- 1.4 Dilatation and Erosion.- 1.4.1 Metric Spaces.- 1.4.2 Boundaries and Cores in Neighborhood Structures.- 1.4.3 Set Operations and Set Operators.- 1.4.4 Dilatation and Erosion.- 1.4.5 Opening and Closing.- 2 Incidence Structures.- 2.1 Homogeneous Incidence Structures.- 2.1.1 Topological Problems.- 2.1.2 Cellular Complexes.- 2.1.3 Incidence Structures.- 2.1.4 Homogeneous Incidence Structures.- 2.1.5 Zn as Incidence Structure.- 2.2 Oriented Neighborhood Structures.- 2.2.1 Orientation of a Neighborhood Structure.- 2.2.2 Euler Characteristic of a Neighborhood Structure.- 2.2.3 Border Meshes and Separation Theorem.- 2.2.4 Search in Oriented Neighborhood Structures.- 2.2.5 Coloring in Oriented Neighborhood Structures.- 2.3 Homogeneous Oriented Neighborhood Structures.- 2.3.1 Homogeneity in Neighborhood Structures.- 2.3.2 Toroidal Nets.- 2.3.3 Curvature of Border Meshes in Toroidal Nets.- 2.3.4 Planar Semi-Homogeneous Graphs.- 2.4 Objects in N-Dimensional Incidence Structures.- 2.4.1 Three-Dimensional Homogeneous Incidence Structures.- 2.4.2 Objects in Zn.- 2.4.3 Similarity of Objects.- 2.4.4 General Surface Formulas.- 2.4.5 Interpretation of Object Characteristics.- 3 Topological Laws and Properties.- 3.1 Objects and Surfaces.- 3.1.1 Surfaces in Discrete Spaces.- 3.1.2 Contur Following as Two-Dimensional Boundary Detection.- 3.1.3 Three-Dimensional Surface Detection.- 3.1.4 Curvature of Conturs and Surfaces.- 3.2 Motions and Intersections.- 3.2.1 Motions of Objects in Zn.- 3.2.2 Count Measures and Intersections of Objects.- 3.2.3 Applications of Intersection Formula.- 3.2.4 Count Formulas.- 3.2.5 Stochastic Images.- 3.3 Topology Preserving Operations.- 3.3.1 Topological Equivalence.- 3.3.2 Simple Points.- 3.3.3 Thinning.- 4 Geometrical Laws and Properties.- 4.1 Discrete Geometry.- 4.1.1 Geometry and Number Theory.- 4.1.2 Minkowski Geometry.- 4.1.3 Translative Neighborhood Structures.- 4.1.4 Digitalization Effects.- 4.2 Straight Lines.- 4.2.1 Rational Geometry.- 4.2.2 Digital Straight Lines in Z2.- 4.2.3 Continued Fractions.- 4.2.4 Straight Lines in Zn.- 4.3 
Convexity.- 4.3.1 Convexity in Discrete Geometry.- 4.3.2 Maximal Convex Objects.- 4.3.3 Determination of Convex Hull.- 4.3.4 Convexity in Zn.- 4.4 Approximative Motions.- 4.4.1 Pythagorean Rotations.- 4.4.2 Shear Transformations.- 4.3.3 General Affine Transformations.- 5 Discrete Functions.- 5.1 One-Dimensional Periodical Discrete Functions.- 5.1.1 Functions.- 5.1.2 Space of Periodical Discrete Function.- 5.1.3 LSI-Operators and Convolutions.- 5.1.4 Products of Linear Operators.- 5.2 Algebraic Theory of Discrete Functions.- 5.2.1 Domain of Definition and Range of Values.- 5.2.2 Algebraical Structures.- 5.2.3 Convolution of Functions.- 5.2.4 Convolution Orthogonality.- 5.3 Orthogonal Convolution Bases.- 5.3.1 General Properties in OCB's.- 5.3.2 Fourier Transform.- 5.3.3 Number Theoretical Transforms.- 5.3.4 Two-Dimensional NTT.- 5.4 Inversion of Convolutions.- 5.4.1 Conditions for Inverse Elements.- 5.4.2 Deconvolutions and Texture Synthesis.- 5.4.3 Approximative Computation of Inverse Elements.- 5.4.4 Theory of Approximative Inversion.- 5.4.5 Examples of Inverse Filters.- 5.5 Differences and Sums of Functions.- 5.5.1 Differences of One-Dimensional Discrete Functions.- 5.5.2 Difference Equations and Z-Transform.- 5.5.3 Sums of Functions.- 5.5.4 Bernoulli's Polynomials.- 5.5.5 Determination of Moments.- 5.5.6 Final Comments.- 6 Summary and Symbols.- 7 References.- 8 Index. --- paper_title: Geometry of Continued Fractions paper_content: MICHAEL CHARLES IRWIN wrote his Ph.D. thesis (Cambridge University, U.K., 1962) on "Embeddings of Polyhedral Manifolds," and his early papers were concerned with piecewise linear topology, but from about 1970 onwards his research was mainly on dynamical systems. His most recent publications included work on invariant paths for Anosov diffeomorphisms. He was the author of the book Smooth Dynamical Systems published by Academic Press in 1980. He taught at the University of Liverpool, U.K., from 1961 until his death on March 11, 1988. --- paper_title: On the number of digital straight line segments paper_content: A closed-form expression has been reported in the literature for L/sub N/, the number of digital line segments of length N that correspond to lines of the form y=ax+ beta , O > --- paper_title: Linguistic Methods for the Description of a Straight Line on a Grid paper_content: This paper describes the construction of strings representing straight lines in an arbitrary direction on a grid and compares some grammatical systems that generate these strings. An algorithm is given that constructs a string representing a straight line on a grid in Freeman's coding scheme. Some number-theoretical aspects of this algorithm are treated (Euclid's algorithm, Farey series, continued fractions). This algorithm is based on. structural properties of the string. Strings generated in this way can also be produced with a programmed grammar. Lindenmayer grammars are also very powerful for this kind of problem, because of the simultaneous applications of production rules. Another method for constructing strings representing straight lines, where the generation is essentially sequential, for instance when noise is added, is treated. In this case Lindenmayer grammars are quite useless, but programmed grammars are still very convenient. Rule-labeled programs are less convenient for both kinds of problems. --- paper_title: The Discrete Equation of the Straight Line paper_content: A new method of description of a straight line defined on a square grid is presented. 
The discrete equation of the straight line (desl) is introduced as an extension of the classical Cartesian equation, and applies to straight lines quantized on a grid by the grid intersect method. The desl includes a set of intercepts which can be scanned in proper order to generate the chain for the given straight line. --- paper_title: On the number of factors of Sturmian words paper_content: We prove that for m ≥ 1, card(A_m) = 1 + ∑_{i=1}^{m} (m − i + 1) ϕ(i), where A_m is the set of factors of length m of all the Sturmian words and ϕ is the Euler function. This result was conjectured by Dulucq and Gouyou-Beauchamps (1987) who proved that this result implies that the language (∪_{m≥0} A_m)^c is inherently ambiguous. We also give a combinatorial version of the Riemann hypothesis. --- paper_title: Spirograph Theory: A Framework for Calculations on Digitized Straight Lines paper_content: Using diagrams called ``spirographs'' a general theory is developed with which one can easily perform calculations on various aspects of digitized straight lines. The mathematics of the theory establishes a link between digitized straight lines and the theory of numbers (Farey series, continued fractions). To show that spirograph theory is a useful unification, we derive two previously known advanced results within the framework of the theory, and new results concerning the accuracy in position of a digitized straight line as a function of its slope and length. --- paper_title: Compression of chain codes using digital straight line sequences paper_content: Compression of chain codes is achieved by breaking the sequence into strings of digital straight lines whose representation is stored in a table. It is shown that the number of table entries is (1/(4π²))N⁴ + O(N³ log N). A computational method is given for an exact determination which shows that the asymptotic approximation is accurate. --- paper_title: The number of digital straight lines on an N×N grid paper_content: The number of digital straight lines on an N×N grid is shown. A digital straight line is equivalent to a linear dichotomy of points on a square grid. The result is obtained by determining a way of counting the number of linearly separable dichotomies of points on the plane that are not necessarily in general position. The analysis is easily modified to provide a simple solution to a similar problem considered by C. Berenstein and D. Lavine (1988) on the number of digital straight lines from a fixed starting point. --- paper_title: A Method for Obtaining Skeletons Using a Quasi-Euclidean Distance paper_content: The problem of obtaining the skeleton of a digitized figure is reduced to an optimal policy problem. A hierarchy of methods of defining the skeleton is proposed; in the more complicated ones, the skeleton is relatively invariant under rotation. Two algorithms for computing the skeleton are defined, and the corresponding computer programs are compared. A criterion is proposed for determining the most significant skeleton points.
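Several entries above work with Freeman chain codes of digitized straight segments (for example, the compression and spirograph results). The sketch below is a minimal illustration only: it digitizes y = ax + b by rounding at integer abscissae, which approximates but is not identical to the grid-intersect quantization used in most of these papers, and emits the codes 0 (east) and 1 (north-east) for slopes between 0 and 1.

```python
import math

def chain_code(a: float, b: float, n: int):
    """Freeman chain code (0 = east, 1 = north-east) of the digitization of
    y = a*x + b obtained by rounding at integer abscissae, for 0 <= a <= 1.
    This rounding digitization is a simplifying assumption for illustration."""
    assert 0.0 <= a <= 1.0
    ys = [math.floor(a * x + b + 0.5) for x in range(n + 1)]
    return [ys[x + 1] - ys[x] for x in range(n)]

# For the rational slope 2/5 the code repeats with period 5, the kind of
# periodic run structure the number-theoretic treatments above analyse.
print(chain_code(2 / 5, 0.0, 15))
```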
--- paper_title: On the number of digital straight line segments paper_content: A closed-form expression has been reported in the literature for L/sub N/, the number of digital line segments of length N that correspond to lines of the form y=ax+ beta , O > --- paper_title: Linguistic Methods for the Description of a Straight Line on a Grid paper_content: This paper describes the construction of strings representing straight lines in an arbitrary direction on a grid and compares some grammatical systems that generate these strings. An algorithm is given that constructs a string representing a straight line on a grid in Freeman's coding scheme. Some number-theoretical aspects of this algorithm are treated (Euclid's algorithm, Farey series, continued fractions). This algorithm is based on. structural properties of the string. Strings generated in this way can also be produced with a programmed grammar. Lindenmayer grammars are also very powerful for this kind of problem, because of the simultaneous applications of production rules. Another method for constructing strings representing straight lines, where the generation is essentially sequential, for instance when noise is added, is treated. In this case Lindenmayer grammars are quite useless, but programmed grammars are still very convenient. Rule-labeled programs are less convenient for both kinds of problems. --- paper_title: Digital Straight Lines and Convexity of Digital Regions paper_content: It is shown that a digital region is convex if and only if every pair of points in the region is connected by a digital straight line segment contained in the region. The midpoint property is shown to be a necessary but not a sufficient condition for the convexity of digital regions. However, it is shown that a digital region is convex if and only if it has the median-point property. --- paper_title: Fast polygonal approximation of digitized curves paper_content: Abstract We describe a new technique for fast “scan-along” computation of piecewise linear approximations of digital curves in 2-space. Our method is derived from earlier work on the theory of minimum-perimeter polygonal approximations of digitized closed curves. We demonstrate the specialization of this technique to the case where the error is measured as the largest Hausdorff-Euclidean distance between the approximation and the given digitized curve. We illustrate the application of this procedure to the boundaries of the images of a lung and a rib in chest radiographs. --- paper_title: On recursive, O(N) partitioning of a digitized curve into digital straight segments paper_content: A simple online algorithm for partitioning of a digital curve into digital straight-line segments of maximal length is given. The algorithm requires O(N) time and O(1) space and is therefore optimal. Efficient representations of the digital segments are obtained as byproducts. The algorithm also solves a number-theoretical problem concerning nonhomogeneous spectra of numbers. > --- paper_title: A note on minimal length polygonal approximation to a digitized contour paper_content: A method for extracting a smooth polygonal contour from a digitized image is illustrated. The ordered sequence of contour points and the connection graph of the image are first obtained by a modified Ledley algorithm in one image scan. A minimal perimeter polygon subjected to specified constraints is then chosen as the approximating contour. 
The determination of the minimal polygon can be reduced to a nonlinear programming problem, solved by an algorithm which takes into account the weak bonds between variables. Some examples are presented, and the corresponding computing times are listed. --- paper_title: A new method of analysis for discrete straight lines paper_content: Abstract The problem of finding the set of all continuous straight lines which lead to the same digitization is quite well known and has been studied in [1, 2, 3]. In this paper, we propose a new method of analysis which is simple and straightforward but is shown to be as powerful as the other techniques. Moreover, our scheme, being algebraic in nature, can be generalized to more complex shapes and figures and thus has a wider applicability. --- paper_title: A comparative evaluation of length estimators paper_content: The paper compares previously published length estimators having digitized curves as input. The evaluation uses multigrid convergence (theoretical results and measured speed of convergence) and further measures as criteria. The paper also suggests a new gradient-based method for length estimation. --- paper_title: Discrete Representation of Straight Lines paper_content: If a continuous straight line segment is digitized on a regular grid, obviously a loss of information occurs. As a result, the discrete representation obtained (e.g., a chaincode string) can be coded more conveniently than the continuous line segment, but measurements of properties (such as line length) performed on the representation have an intrinsic inaccuracy due to the digitization process. In this paper, two fundamental properties of the quantization of straight line segments are treated. 1) It is proved that every ``straight'' chaincode string can be represented by a set of four unique integer parameters. Definitions of these parameters are given. 2) A mathematical expression is derived for the set of all continuous line segments which could have generated a given chaincode string. The relation with the chord property is briefly discussed. --- paper_title: Length estimation of digital curves paper_content: The paper details two linear-time algorithms, one for the partition of the boundary line of a digital region into digital straight segments, and one for calculating the minimum length polygon within an open boundary of a digital region. Both techniques allow the estimation of the length of digital curves or the perimeter of digital regions due to known multigrid convergence theorems. The algorithms are compared with respect to convergence speed and number of generated segments.© (1999) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only. --- paper_title: Digital Convexity, Straightness, and Convex Polygons paper_content: New schemes for digitizing regions and arcs are introduced. It is then shown that under these schemes, Sklansky's definition of digital convexity is equivalent to other definitions. Digital convex polygons of n vertices are defined and characterized in terms of geometric properties of digital line segments. Also, a linear time algorithm is presented that, given a digital convex region, determines the smallest integer n such that the region is a digital convex n-gon. 
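Rosenfeld's chord property, quoted in the entries above, can be checked directly, if inefficiently, by brute force. The sketch below samples each chord and measures distance in the chessboard (L∞) sense, which is how the property is usually stated; it is an approximate illustration only, not one of the recognition algorithms surveyed here, and the example arcs are assumptions.

```python
from itertools import combinations

def has_chord_property(pixels, samples=200):
    """Brute-force (approximate) test of the chord property: for every pair
    p, q in S, every sampled point of the real chord pq must lie within
    distance < 1 of some pixel of S.  Distance is taken in the chessboard
    (L-infinity) sense; swap in the Euclidean norm if preferred."""
    S = list(pixels)
    for (px, py), (qx, qy) in combinations(S, 2):
        for k in range(samples + 1):
            t = k / samples
            x, y = px + t * (qx - px), py + t * (qy - py)
            if not any(max(abs(x - i), abs(y - j)) < 1 for i, j in S):
                return False
    return True

# An 8-connected arc with chain code 0,1,0,1 (straight) versus a staircase
# with a long flat run followed by a sharp rise (not straight).
straight = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
bent     = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 1), (5, 2)]
print(has_chord_property(straight), has_chord_property(bent))  # True False
```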
--- paper_title: Analysis and modeling of digitized straight-line segments paper_content: A general formula for the slope and mathematical models of digitized straight-line segments are derived and discussed in detail. The models are parametric and the parameters are calculated uniquely and in closed form from the slope without the need for recursion or interaction. The modeling is information-preserving and explicitly uses the slope. The models are in a form convenient for plotting, detecting, and recognizing digitized straight-line segments. --- paper_title: An optimal O(n) algorithm for identifying line segments from a sequence of chain codes paper_content: An optimal algorithm for identifying straight lines in chain codes is described. The algorithm turns the complicated problem of determining the straightness of digital arcs into a simple task by constructing a passing area around the pixels. It also solves the problem of detecting all straight segments from a sequence of chain codes in O(n) time, where n is the length of the sequence. It has been found that this algorithm is not only simple and intuitive, but also highly efficient. --- paper_title: Recognizing arithmetic straight lines and planes paper_content: The problem of recognizing a straight line in the discrete plane ℤ² (resp. a plane in ℤ³) is to find an algorithm deciding whether a given set of points in ℤ² (resp. ℤ³) belongs to a line (resp. a plane). In this paper the lines and planes are arithmetic, as defined by Reveilles [Rev91], and the problem is translated, for any width that is a linear function of the coefficients of the normal to the searched line or plane, into the problem of solving a set of linear inequalities. This new problem is solved by using Fourier's elimination algorithm. If there is a solution, the family of solutions is given by the algorithm as a conjunction of linear inequalities. This method of recognition is well suited to computer imagery, because any traversal algorithm of the given set is possible, and also because any incomplete segment of line or plane can be recognized. --- paper_title: Characterizing Digital Convexity and Straightness in Terms of "Length" and "Total Absolute Curvature" paper_content: By using the anisotropic version of "length" and "total absolute curvature" proposed by K. Kishimoto and M. Iri (Jpn. J. Appl. Math. 6, 1989, 179–207), this paper characterizes digital convexity and digital straightness as follows: 1. a bounded digital figure F is digitally convex if and only if the "total absolute curvature" of its boundary is 2π; 2. a digital arc s(F) is digitally straight if and only if either of the following conditions is satisfied: (a) the "total absolute curvature" of s(F) is 0; (b) the "length" of s(F) is "sufficiently" small. These characterizations indicate that the presented definitions may serve as a bridge between the continuous and digital worlds. --- paper_title: Discrete Convexity, Straightness, and the 16-Neighborhood paper_content: In this paper, we extend some results in discrete geometry based on the 8-neighborhood to that of the 16-neighborhood, which now includes the chessboard and the knight moves. We first present some analogies between an 8-digital arc and a 16-digital arc as represented by shortest paths on the grid. We present a transformation which uniquely maps a 16-digital arc onto an 8-digital arc (and vice versa). The grid-intersect-quantization (GIQ) of real arcs is defined with the 16-neighborhood.
This enables us to define a 16-digital straight segment. We then present two new distance functions which satisfy the metric properties and describe the extended neighborhood space. Based on these functions, we present some new results regarding discrete convexity and 16-digital straightness. In particular, we demonstrate the convexity of a 16-digital straight segment. Moreover, we define a new property for characterizing a digital straight segment in the 16-neighborhood space. In comparison to the 8-neighborhood space, the proposed 16-neighborhood coding scheme offers a more compact representation without any loss of information. --- paper_title: Some aspects of the accuracy of the approximated position of a straight line on a square grid paper_content: In a digitized scan of a picture on a square grid the grid spacing (sampling distance) and the number of gray levels can be varied. The influence of these variables on the maximum error of approximation of the position of a black straight line with rectangular profile on a white background is determined. From the results the grid spacing required to get an error below a given value is determined as a function of the number of equidistant gray levels used. With some assumptions about the storage of the data the corresponding amount of memory space is determined. The latter has a minimum for a certain number of gray levels which depends on the specified maximum error. ---
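Several of the entries above (the spirograph and linguistic treatments in particular) tie digital straight lines to continued fractions and Farey series. The sketch below only computes the continued-fraction expansion of a rational slope with Euclid's algorithm; the remark about run structure summarises the cited results rather than adding a new claim.

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Continued-fraction expansion [a0; a1, a2, ...] of a rational number,
    computed with Euclid's algorithm, which several of the works above use to
    describe the hierarchical run structure of digital straight lines."""
    terms = []
    p, q = x.numerator, x.denominator
    while q:
        a, r = divmod(p, q)
        terms.append(a)
        p, q = q, r
    return terms

# Slope 5/13 expands to [0; 2, 1, 1, 2]; the partial quotients govern how
# runs of chain-code elements group into runs of runs, level by level.
print(continued_fraction(Fraction(5, 13)))
```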
Title: Digital Straightness: A Review Section 1: Introduction Description 1: Introduce the topic of digital straightness, its historical context, and its significance in pattern recognition and related disciplines. Section 2: Tangential lines and connectivity Description 2: Discuss alternative definitions of digital rays, focusing on the concepts of tangential lines and connectivity within the grid system. Section 3: Self-similarity studies in pattern recognition Description 3: Review self-similarity properties of digital rays and their applications in pattern recognition, detailing the geometric characterizations and efficient algorithms. Section 4: Periodicity studies in the theory of words Description 4: Explore the periodicity and structure of digital rays in the context of the theory of words, including basic definitions, properties, and related theorems. Section 5: Number-theoretical studies Description 5: Examine the number-theoretical aspects of digital straightness, including algorithms and the role of continuous fractions and Farey series in modeling digital rays. Section 6: Algorithms for DSS recognition Description 6: Provide an overview of various algorithms developed for the recognition of digital straight segments (DSSs), detailing their methodologies, efficiencies, and applications. Section 7: Conclusions Description 7: Summarize the findings of the review, highlight remaining challenges, and propose areas for future research in the study of digital straightness.
Spatial and Spatio-Temporal Multidimensional Data Modelling: A Survey
11
--- paper_title: Indexing of network constrained moving objects paper_content: With the proliferation of mobile computing, the ability to index efficiently the movements of mobile objects becomes important. Objects are typically seen as moving in two-dimensional (x,y) space, which means that their movements across time may be embedded in the three-dimensional (x,y,t) space. Further, the movements are typically represented as trajectories, sequences of connected line segments. In certain cases, movement is restricted, and specifically in this paper, we aim at exploiting that movements occur in transportation networks to reduce the dimensionality of the data. Briefly, the idea is to reduce movements to occur in one spatial dimension. As a consequence, the movement data becomes two-dimensional (x,t). The advantages of considering such lower-dimensional trajectories are the reduced overall size of the data and the lower-dimensional indexing challenge. Since off-the-shelf systems typically do not offer higher-dimensional indexing, this reduction in dimensionality allows us to use such DBMSes to store and index trajectories. Moreover, we argue that, given the right circumstances, indexing these dimensionality-reduced trajectories can be more efficient than using a three-dimensional index. This hypothesis is verified by an experimental study that incorporates trajectories stemming from real and synthetic road networks. --- paper_title: Building the Data Warehouse paper_content: From the Publisher: The data warehouse solves the problem of getting information out of legacy systems quickly and efficiently. If designed and built right, data warehouses can provide significant freedom of access to data, thereby delivering enormous benefits to any organization. In this unique handbook, W. H. Inmon, "the father of the data warehouse," provides detailed discussion and analysis of all major issues related to the design and construction of the data warehouse, including granularity of data, partitioning data, metadata, lack of creditability of decision support systems (DSS) data, the system of record, migration and more. This Second Edition of Building the Data Warehouse is revised and expanded to include new techniques and applications of data warehouse technology and update existing topics to reflect the latest thinking. It includes a useful review checklist to help evaluate the effectiveness of the design.
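The trajectory-indexing entry above reduces network-constrained movement from (x, y, t) to a one-dimensional network coordinate plus time. The sketch below illustrates just that projection step for a single position against a known road polyline; the road geometry and the function name are assumptions made for illustration, not the paper's implementation.

```python
import math

def project_to_polyline(point, polyline):
    """Map a 2-D position onto its 1-D coordinate along a road polyline:
    the distance from the start of the polyline to the closest point on it,
    so that (x, y, t) movement data can be stored as (distance, t) pairs."""
    px, py = point
    best_d2, best_offset, travelled = float("inf"), 0.0, 0.0
    for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
        vx, vy = bx - ax, by - ay
        seg_len = math.hypot(vx, vy)
        # Parameter of the closest point on this segment, clamped to [0, 1].
        t = 0.0 if seg_len == 0 else max(
            0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / seg_len ** 2))
        cx, cy = ax + t * vx, ay + t * vy
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        if d2 < best_d2:
            best_d2, best_offset = d2, travelled + t * seg_len
        travelled += seg_len
    return best_offset

road = [(0, 0), (10, 0), (10, 5)]             # hypothetical road geometry
print(project_to_polyline((6.2, 0.4), road))  # about 6.2 units along the road
```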
--- paper_title: Requirements specification and conceptual modeling for spatial data warehouses paper_content: Development of a spatial data warehouse (SDW) is a complex task, which, if assisted by a methodological framework, could be facilitated considerably. In particular, the requirements specification phase, being one of the earliest steps of system development, should attract attention since it may entail significant problems if faulty or incomplete. However, a lack of methodology for SDW design and the presence of two actors in specifying data requirements, i.e., users and source systems, further complicates the development process. In this paper, we propose three different approaches for requirements specifications that lead to the creation of conceptual schemas for SDW applications. --- paper_title: An MDA Approach for the Development of Spatial Data Warehouses paper_content: In the past few years, several conceptual approaches have been proposed for the specification of the main multidimensional (MD) properties of spatial data warehouses (SDW). However, these approaches often fail to provide mechanisms to univocally and automatically derive a logical representation. Moreover, spatial data often generate complex hierarchies (i.e., many-to-many) that have to be mapped to large and non-intuitive logical structures (i.e., bridge tables). To overcome these limitations, we implement a Model Driven Architecture (MDA) approach for spatial data warehouse development. In this paper, we present a spatial extension for the MD model to embed spatiality in it. Then, we formally define a set of Query/View/Transformation (QVT) transformation rules which allow us to obtain a logical representation in an automatic way. Finally, we show how to implement the MDA approach in our Eclipse-based tool. --- paper_title: CASME: A CASE Tool for Spatial Data Marts Design and Generation paper_content: Geographic Information Systems (GIS) have shown their limitations when faced with the complex queries of decision-makers. Decisional data processing, which results from combining databases with decision-support systems, has developed as a new field since the early 1990s. Decisional databases thus emerged to answer the specific needs of OnLine Analytical Processing (OLAP) and data mining. Extensions were made to adapt the analysis and the algorithms to the specificities of spatial data. This paper describes the modeling and implementation of Spatial Data Marts (SDM). We define a formal framework for the progressive construction of spatial data warehouses by assembling these SDM. Our approach includes a meta model for SDM construction. The construction is done in accordance with the UML meta model. After the validation step, construction is followed by an automatic generation of the spatial data mart in Spatial Oracle. A CASE tool, called CASME (Computer Aided Spatial Mart Engineering), constitutes the interface through which the user carries out the process. --- paper_title: Multidimensional data modeling for complex data paper_content: Online Analytical Processing (OLAP) systems considerably ease the process of analyzing business data and have become widely used in industry. Such systems primarily employ multidimensional data models to structure their data. However, current multidimensional data models fall short in their abilities to model the complex data found in some real world application domains.
The paper presents nine requirements to multidimensional data models, each of which is exemplified by a real world, clinical case study. A survey of the existing models reveals that the requirements not currently met include support for many-to-many relationships between facts and dimensions, built-in support for handling change and time, and support for uncertainty as well as different levels of granularity in the data. The paper defines an extended multidimensional data model, and an associated algebra, which address all nine requirements. ---
Title: Spatial and Spatio-Temporal Multidimensional Data Modelling: A Survey Section 1: INTRODUCTION Description 1: Describe the significance of spatial data in decision-making, the emergence of spatial data warehouses, and the purpose of this survey. Section 2: MUTLIDIMENSIONNA MODELS Description 2: Discuss the various proposed spatial and spatio-temporal multidimensional models, focusing on their structure and functionality. Section 3: The model of Stefanovic and al Description 3: Present the spatial data warehouse model proposed by Stefanovic and al, including its features, methods, and evaluation. Section 4: The model of Malinowski and Zimany Description 4: Describe the MultiDimER model by Malinowski and Zimany, including its extensions for spatial measures, dimensions, and hierarchies. Section 5: The Model of Miquel and al Description 5: Discuss the approach by Miquel and al for integrating spatiotemporal data into geospatial data warehouses, highlighting their models and example application. Section 6: The model of Bedard and al Description 6: Present the method and tool proposed by Bedard and al for modeling generalization and multiple representations in spatial data warehouses. Section 7: The Model of Bauzer-Medeiros and al Description 7: Describe the multidimensional model proposed by Bauzer-Medeiros and al for real-time spatiotemporal data from road traffic systems. Section 8: The model of Glorio and al Description 8: Explain the extensions proposed by Glorio and al for traditional multidimensional models to introduce spatiality, including their UML-based implementation. Section 9: The model of Bâazaoui Zghal and al Description 9: Discuss the metamodel for building spatial data marts proposed by Bâazaoui Zghal and al, detailing its characteristics and components. Section 10: COMPARATIVE STUDY OF SPATIAL AND SPATIO-TEMPORAL MULTIDIMENSIONAL MODELS Description 10: Present a comparative analysis of the different spatial and spatio-temporal multidimensional models, evaluating them against specific requirements. Section 11: CONCLUSION Description 11: Summarize the findings of the survey, highlighting the gaps in current models and future research directions for multidimensional modeling.
A Survey of Artificial Intelligence Techniques Employed for Adaptive Educational Systems within E-Learning Platforms
9
--- paper_title: Rethinking Pedagogy for a Digital Age: Designing for 21st Century Learning paper_content: Table of Contents An introduction to rethinking pedagogy by Helen Beetham and Rhona Sharpe Part One: Principles and practices of designing for learning Chapter 1 Technology enhanced learning: the role of theory by Terry Mayes and Sara de Freitas Chapter 2 Designing for active learning in technology-rich contexts by Helen Beetham Chapter 3 The analysis of complex learning environments by Peter Goodyear and Lucila Carvalho Chapter 4 The challenge of teachers' design practice by Liz Masterman Chapter 5 Tools and resources to guide practice by Grainne Conole Chapter 6 Describing ICT-based learning designs that promote quality learning outcomes by Ron Oliver, Barry Harper, Sandra Wills, Shirley Agostinho and John Hedberg Chapter 7 Learning designs as stimulus and support for teachers' design practices by Shirley Agostinho, Sue Bennett, Lori Lockyer, Jennifer Jones and Barry Harper Chapter 8 Representing practitioner experiences through learning designs and patterns by Patrick McAndrew and Peter Goodyear Chapter 9 The influence of open resources on design practice by Chris Pegler Part Two: Contexts for design Chapter 10 Designing for learning in course teams by Rhona Sharpe and Martin Oliver Chapter 11 The art of design by Derek Harding and Bruce Ingraham Chapter 12 Activity designs for professional learning by Rachel Ellaway Chapter 13 Designing for practice: A view from social science by Chris Jones Chapter 14 Student as producer is hacking the university by Joss Winn and Dean Lockwood Chapter 15 The LAMS community: Building communities of designers by James Dalziel Chapter 16 Design principles for mobile learning by Agnes Kukulska-Hulme and John Traxler Chapter 17 Designing for learning in an uncertain future by Helen Beetham --- paper_title: A Shortest Learning Path Selection Algorithm in E-learning paper_content: Generally speaking, in e-learning systems a course is modeled as a graph, where each node represents a knowledge unit (KU) and two nodes are connected to form a semantic network. The desired knowledge is provided by the student as a direct request or from search results, mapping the owned knowledge onto the target knowledge. How to select a learning path which costs the least time and effort is critical. In this paper, we describe the relationships between different nodes in the graph structure of knowledge units and propose an algorithm to select the shortest learning paths to learn the target knowledge. --- paper_title: e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning paper_content: In this thoroughly revised edition of the bestselling e-Learning and the Science of Instruction, authors Ruth Colvin Clark and Richard E. Mayer, internationally recognized experts in the field of e-learning, offer essential information and guidelines for selecting, designing, and developing asynchronous and synchronous e-learning courses that build knowledge and skills for workers learning in corporate, government, and academic settings. In addition to updating research in all chapters, two new chapters and a CD with multimedia examples are included. --- paper_title: Staying the Course: Online Education in the United States, 2008.
paper_content: Using responses from over 2,500 colleges and universities, the study examined the numbers of students involved in online education and the impact of this education for different disciplines and contexts.This report is the sixth in a series of annual reports on a study conducted by the Babson Survey Research Group for the Sloan Consortium. Using responses from over 2,500 colleges and universities, the study sought answers to several questions on online education: * How many students are learning online? * What is the impact of the economy on online enrollments? * Is online learning strategic? * What disciplines are best represented online? --- paper_title: User modeling for adaptive e-learning systems paper_content: Adaptive systems have been a hot topic in various areas like hypermedia systems, e-commerce systems, e-learning environments and information retrieval. In order to provide adaptivity, these systems need to keep track of different types of information about their users. Therefore, user modeling is at the heart of the adaptation process. In this paper, different user modeling techniques will be reviewed with the focus on what needs to be modeled and how it will be modeled, i.e., the demographic information of the users are collected in most of these systems, however, how it will be used in the adaptation process depends on the methodology being followed. The evaluation of different user modeling approaches and examination of some recent adaptive e-learning systems' architectures will also be provided. --- paper_title: Educational Data Mining and Learning Analytics: differences, similarities, and time evolution paper_content: Technological progress in recent decades has enabled people to learn in different ways. Universities now have more educational models to choose from, i.e., b-learning and e-learning. Despite the increasing opportunities for students and instructors, online learning also brings challenges due to the absence of direct human contact. Online environments allow the generation of large amounts of data related to learning/teaching processes, which offers the possibility of extracting valuable information that may be employed to improve students’ performance. In this paper, we aim to review the similarities and differences between Educational Data Mining and Learning Analytics, two relatively new and increasingly popular fields of research concerned with the collection, analysis, and interpretation of educational data. Their origins, goals, differences, similarities, time evolution, and challenges are addressed, as are their relationship with Big Data and MOOCs. --- paper_title: Learning analytics and educational data mining: towards communication and collaboration paper_content: Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields. 
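The learning-path entry earlier in this reference list models a course as a graph of knowledge units and asks for a cheapest path to the target knowledge. The sketch below uses plain Dijkstra search over a hypothetical prerequisite graph with invented effort weights; it illustrates the idea rather than reproducing the algorithm proposed in that paper.

```python
import heapq

def shortest_learning_path(graph, start, goal):
    """Dijkstra's algorithm over a knowledge-unit graph whose edge weights
    stand for estimated study effort.  Returns (total_cost, path)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical prerequisite graph of knowledge units with effort estimates.
course = {
    "variables": {"loops": 2, "functions": 4},
    "loops":     {"functions": 1, "recursion": 5},
    "functions": {"recursion": 2},
    "recursion": {},
}
print(shortest_learning_path(course, "variables", "recursion"))
```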
--- paper_title: Educational Data Mining: A Review of the State of the Art paper_content: Educational data mining (EDM) is an emerging interdisciplinary research area that deals with the development of methods to explore data originating in an educational context. EDM uses computational approaches to analyze educational data in order to study educational questions. This paper surveys the most relevant studies carried out in this field to date. First, it introduces EDM and describes the different groups of user, types of educational environments, and the data they provide. It then goes on to list the most typical/common tasks in the educational environment that have been resolved through data-mining techniques, and finally, some of the most promising future lines of research are discussed. --- paper_title: Review: Educational data mining: A survey and a data mining-based analysis of recent works paper_content: This review pursues a twofold goal, the first is to preserve and enhance the chronicles of recent educational data mining (EDM) advances development; the second is to organize, analyze, and discuss the content of the review based on the outcomes produced by a data mining (DM) approach. Thus, as result of the selection and analysis of 240 EDM works, an EDM work profile was compiled to describe 222 EDM approaches and 18 tools. A profile of the EDM works was organized as a raw data base, which was transformed into an ad-hoc data base suitable to be mined. As result of the execution of statistical and clustering processes, a set of educational functionalities was found, a realistic pattern of EDM approaches was discovered, and two patterns of value-instances to depict EDM approaches based on descriptive and predictive models were identified. One key finding is: most of the EDM approaches are ground on a basic set composed by three kinds of educational systems, disciplines, tasks, methods, and algorithms each. The review concludes with a snapshot of the surveyed EDM works, and provides an analysis of the EDM strengths, weakness, opportunities, and threats, whose factors represent, in a sense, future work to be fulfilled. --- paper_title: Customer Relationship Management applied to higher education: developing an e-monitoring system to improve relationships in electronic learning environments paper_content: Customer Relationship Management (CRM) has usually been associated with business contexts. However, it has recently been pointed out that its principles and applications are also very appropriate for non-profit making organisations. In this article, we defend the broadening of the field of application of CRM from the business domain to a wider context of relationships in which the inclusion of non-profit making organisations seems natural. In particular, we focus on analysing the suitability of adopting CRM processes by universities and higher educational institutions dedicated to electronic learning (e-learning). This is an issue that has much potential but has received little attention in research so far. Our work reflects upon this matter and provides a new step towards a CRM solution for managing relationships of specific customers, such as students. Indeed, the main contribution of this article is specifically characterised by the proposal and empirical application of an e-monitoring system that aims to enhance the performance of relationships in e-learning environments. 
--- paper_title: Data mining in education paper_content: Applying data mining DM in education is an emerging interdisciplinary research field also known as educational data mining EDM. It is concerned with developing methods for exploring the unique types of data that come from educational environments. Its goal is to better understand how students learn and identify the settings in which they learn to improve educational outcomes and to gain insights into and explain educational phenomena. Educational information systems can store a huge amount of potential data from multiple sources coming in different formats and at different granularity levels. Each particular educational problem has a specific objective with special characteristics that require a different treatment of the mining problem. The issues mean that traditional DM techniques cannot be applied directly to these types of data and problems. As a consequence, the knowledge discovery process has to be adapted and some specific DM techniques are needed. This paper introduces and reviews key milestones and the current state of affairs in the field of EDM, together with specific applications, tools, and future insights. © 2012 Wiley Periodicals, Inc. --- paper_title: A data analysis model based on control charts to monitor online learning processes paper_content: This paper discusses the convenience of using data analysis techniques to monitor e-learning processes. It also proposes a model to monitor online students' academic activity and performance. Using data from log files and databases, this model determines which information has to be provided to online instructors and students. These reports allow instructors to: classify students according to their activity and learning outcomes, track their evolution, and identify those who might need immediate assistance. They also provide students with a periodical feedback which makes them aware of how they are performing as compared with the rest of the class. --- paper_title: Using Collaboration Strategies to Support the Monitoring of Online Collaborative Learning Activity paper_content: This paper first discusses the importance of online education and highlights its main benefits and challenges. In this context, on the one hand, we argue the significance of monitoring students’ and groups’ activity in an online learning environment. On the other hand, we analyze the informational needs that should be covered by any monitoring information system. Finally, the paper goes a step further by proposing the use of collaboration strategies as a manner to improve monitoring and learning processes in computer-supported collaborative learning. --- paper_title: Handbook of educational data mining paper_content: Preface, Joseph E. Beck Introduction, Cristobal Romero, Sebastian Ventura, Mykola Pechenizkiy, and Ryan Baker Basic Techniques, Surveys, and Tutorials Visualization in Educational Environments, Riccardo Mazza Basics of Statistical Analysis of Interactions Data from Web-Based Learning Environments, Judy Sheard A Data Repository for the EDM Community: The PSLC DataShop, Kenneth R. 
Koedinger, Ryan Baker, Kyle Cunningham, Alida Skogsholm, Brett Leber, and John Stamper Classifiers for EDM, Wilhelmiina Hamalainen and Mikko Vinni Clustering Educational Data, Alfredo Vellido, Felix Castro, and Angela Nebot Association Rule Mining in Learning Management Systems, Enrique Garcia, Cristobal Romero, Sebastian Ventura, Carlos de Castro, and Toon Calders Sequential Pattern Analysis of Learning Logs: Methodology and Applications, Mingming Zhou, Yabo Xu, John C. Nesbit, and Philip H. Winne Process Mining from Educational Data, Nikola Trcka, Mykola Pechenizkiy, and Wil van der Aalst Modeling Hierarchy and Dependence among Task Responses in EDM, Brian W. Junker Case Studies Novel Derivation and Application of Skill Matrices: The q-Matrix Method, Tiffany Barnes EDM to Support Group Work in Software Development Projects, Judy Kay, Irena Koprinska, and Kalina Yacef Multi-Instance Learning versus Single-Instance Learning for Predicting the Student's Performance, Amelia Zafra, Cristobal Romero, and Sebastian Ventura A Response-Time Model for Bottom-Out Hints as Worked Examples, Benjamin Shih, Kenneth R. Koedinger, and Richard Scheines Automatic Recognition of Learner Types in Exploratory Learning Environments, Saleema Amershi and Cristina Conati Modeling Affect by Mining Students' Interactions within Learning Environments, Manolis Mavrikis, Sidney D'Mello, Kaska Porayska-Pomsta, Mihaela Cocea, and Art Graesser Measuring Correlation of Strong Symmetric Association Rules in Educational Data, Agathe Merceron and Kalina Yacef Data Mining for Contextual Educational Recommendation and Evaluation Strategies, Tiffany Y. Tang and Gordon G. McCalla Link Recommendation in E-Learning Systems Based on Content-Based Student Profiles, Daniela Godoy and Analia Amandi Log-Based Assessment of Motivation in Online Learning, Arnon Hershkovitz and Rafi Nachmias Mining Student Discussions for Profiling Participation and Scaffolding Learning, Jihie Kim, Erin Shaw, and Sujith Ravi Analysis of Log Data from a Web-Based Learning Environment: A Case Study, Judy Sheard Bayesian Networks and Linear Regression Models of Students' Goals, Moods, and Emotions, Ivon Arroyo, David G. Cooper, Winslow Burleson, and Beverly P. Woolf Capturing and Analyzing Student Behavior in a Virtual Learning Environment: A Case Study on Usage of Library Resources, David Masip, Julia Minguillon, and Enric Mor Anticipating Student's Failure as soon as Possible, Claudia Antunes Using Decision Trees for Improving AEH Courses, Javier Bravo, Cesar Vialardi, and Alvaro Ortigosa Validation Issues in EDM: The Case of HTML-Tutor and iHelp, Mihaela Cocea and Stephan Weibelzahl Lessons from Project LISTEN's Session Browser, Jack Mostow, Joseph E. Beck, Andrew Cuneo, Evandro Gouvea, Cecily Heiner, and Octavio Juarez Using Fine-Grained Skill Models to Fit Student Performance with Bayesian Networks, Zachary A. Pardos, Neil T. Heffernan, Brigham S. Anderson, and Cristina L. Heffernan Mining for Patterns of Incorrect Response in Diagnostic Assessment Data, Tara M. Madhyastha and Earl Hunt Machine-Learning Assessment of Students' Behavior within Interactive Learning Environments, Manolis Mavrikis Learning Procedural Knowledge from User Solutions to Ill-Defined Tasks in a Simulated Robotic Manipulator, Philippe Fournier-Viger, Roger Nkambou, and Engelbert Mephu Nguifo Using Markov Decision Processes for Automatic Hint Generation, Tiffany Barnes, John Stamper, and Marvin Croy Data Mining Learning Objects, Manuel E. Prieto, Alfredo Zapata, and Victor H. 
Menendez An Adaptive Bayesian Student Model for Discovering the Student's Learning Style and Preferences, Cristina Carmona, Gladys Castillo, and Eva Millan Index --- paper_title: Drawbacks and solutions of applying association rule mining in learning management systems paper_content: In this paper, we survey the application of association rule mining in e-learning systems, and especially, learning management systems. We describe the specific knowledge discovery process, its mains drawbacks and some possible solutions to resolve them. --- paper_title: Mining association rules between sets of items in large databases paper_content: We are given a large database of customer transactions. Each transaction consists of items purchased by a customer in a visit. We present an efficient algorithm that generates all significant association rules between items in the database. The algorithm incorporates buffer management and novel estimation and pruning techniques. We also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm. --- paper_title: Fuzzy set approach to the assessment of student-centered learning paper_content: Assessment of student learning is an important task in a teaching and learning process. It has a strong influence on students' approaches to learning and their outcomes. Development in tertiary education has shifted its emphasis from teacher-centered learning to student-centered learning. In a student-centered learning environment, criterion-referenced assessment techniques are often used in current education research and practice. However, it sometimes happens that the assessment criteria and their corresponding weights are solely determined by the lecturers in charge. This may reduce the interest of students' participation and lower the quality of their learning. This paper presents an integrated fuzzy set approach to assess the outcomes of student-centered learning. It uses fuzzy set principles to represent the imprecise concepts for subjective judgment and applies a fuzzy set method to determine the assessment criteria and their corresponding weights. Based on the commonly agreed assessment criteria, students' learning outcomes are evaluated on a fuzzy grade scale. The proposed fuzzy set approach incorporates students' opinions into assessment and allows them to have a better understanding on the assessment criteria. It aims at encouraging students to participate in the whole learning process and providing an open and fair environment for assessment. --- paper_title: Propose of fuzzy logic-based students' learning assessment paper_content: This paper proposes the students' learning assessment by using fuzzy logic. The framework of practical learning system for computer discipline is also presented to explain a conceptual design of an intelligent tutorial system. The proposed framework composes of six components including interface module, domain knowledge, inference engine, student module, mentor module and pedagogical module. The inference engine performed students' group classification form on-line pre-test examination before starting the practical worksheet application. Two input parameters consisting of percentage of score and time were established as inputs for membership functions of fuzzy logic system. The twenty-five fuzzy rules were created for the proposed method by experts. 
The defuzzification of output membership functions, including good, fair and improve were performed by using the centroid method. In this paper, 26 students were tested in order to compare the students' learning performance, assessed by fuzzy logic and t-score method. The results revealed that the proposed method was a flexible process to classify students' learning group based on the objectives of subject and the real time performance comparing with t-score method. --- paper_title: 1 User Models for Adaptive Hypermedia and Adaptive Educational Systems paper_content: One distinctive feature of any adaptive system is the user model that represents essential information about each user. This chapter complements other chapters of this book in reviewing user models and user modeling approaches applied in adaptive Web systems. The presentation is structured along three dimensions: what is being modeled, how it is modeled, and how the models are maintained. After a broad overview of the nature of the information presented in these various user models, the chapter focuses on two groups of approaches to user model representation and maintenance: the overlay approach to user model representation and the uncertainty-based approach to user modeling. --- paper_title: Adaptive User Interfaces for Intelligent E-Learning: Issues and Trends paper_content: Adaptive User Interfaces have a long history rooted in the emergence of such eminent technologies as Artificial Intelligence, Soft Computing, Graphical User Interface, JAVA, Internet, and Mobile Services. More specifically, the advent and advancement of the Web and Mobile Learning Services has brought forward adaptivity as an immensely important issue for both efficacy and acceptability of such services. The success of such a learning process depends on the intelligent context-oriented presentation of the domain knowledge and its adaptivity in terms of complexity and granularity consistent to the learner’s cognitive level/progress. Researchers have always deemed adaptive user interfaces as a promising solution in this regard. However, the richness in the human behavior, technological opportunities, and contextual nature of information offers daunting challenges. These require creativity, cross-domain synergy, cross-cultural and cross-demographic understanding, and an adequate representation of mission and conception of the task. This paper provides a review of state-of-the-art in adaptive user interface research in Intelligent Multimedia Educational Systems and related areas with an emphasis on core issues and future directions. --- paper_title: Authoring of probabilistic sequencing in adaptive hypermedia with bayesian networks paper_content: One of the difficulties that self-directed learners face on their learning pro- cess is choosing the right learning resources. One of the goals of adaptive educational systems is helping students in finding the best set of learning resources for them. Adap- tive systems try to infer the students' characteristics and store them in a user model whose information is used to drive the adaptation. However, the information that can be acquired is always limited and partial. In this paper, the use of Bayesian networks is proposed as a possible solution to adapt the sequence of activities to students. 
There are two research questions that are answered in this paper: whether Bayesian networks can be used to adaptively sequence learning material, and whether such an approach permits the reuse of learning units created for other systems. A positive answer to both question is complemented with a case study that illustrates the details of the process. --- paper_title: Intelligent student profiling with fuzzy models paper_content: Traditional Web-based educational systems still have several shortcomings when comparing with real-life classroom teaching, such as lack of contextual and adaptive support, lack of flexible support of the presentation and feedback, lack of the collaborative support between students and systems. Based on educational theory, personalization increases learning motivation, which can increase the learning effectiveness. A fuzzy epistemic logic has been built to present the student's knowledge state, while the course content is modeled by the concept of context. By applying such fuzzy epistemic logic, the content model, the student model, and the learning plan have been defined formally. A multi-agent based student profiling system has been presented. Our profiling system stores the learning activities and interaction history of each individual student into the student profile database. Such profiling data is abstracted into a student model. Based on the student model and the content model, dynamic learning plans for individual students are made. Students get personalized learning materials, personalized quizzes, and personalized advice. In order to understand the students' perception of our prototype system and to evaluate the students' learning effectiveness, a field survey has been conducted. The results from the survey indicate that our prototype system makes great improvement on personalization of learning and achieves learning effectiveness. --- paper_title: Andes : A coached problem solving environment for physics paper_content: Andes is an Intelligent Tutoring System for introductory college physics. The fundamental principles underlying the design of Andes are: (1) encourage the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, (2) facilitate transfer from the system by making the interface as much like a piece of paper as possible, (3) give immediate feedback after each action to maximize the opportunities for learning and minimize the amount of time spent going down wrong paths, and (4) give the student flexibility in the order in which actions are performed, and allow them to skip steps when appropriate. This paper gives an overview of Andes, focusing on the overall architecture and the student's experience using the system. --- paper_title: Adaptive Course Sequencing for Personalization of Learning Path Using Neural Network paper_content: Advancements in technology have led to a paradigm shift from traditional to personalized learning methods with varied implementation strategies. Presenting an optimal personalized learning path in an educational hypermedia system is one of the strategies that is important in order to increase the effectiveness of a learning session for each student. However, this task requires much effort and cost particularly in defining rules for the adaptation of learning materials. This research focuses on the adaptive course sequencing method that uses soft computing techniques as an alternative to a rule-based adaptation for an adaptive learning system. 
The ability of soft computing technique in handling uncertainty and incompleteness of a problem is exploited in the study. In this paper we present recent work concerning concept-based classification of learning object using artificial neural network (ANN). Self Organizing Map (SOM) and Back Propagation (BP) algorithm were employed to discover the connection between the domain concepts contained in the learning object and the learner’s learning need. The experiment result shows that this approach is assuring in determining a suitable learning object for a particular student in an adaptive and dynamic learning environment. --- paper_title: Designing Intelligent Tutoring Systems: A Bayesian Approach paper_content: This paper proposes a model and an architecture for designing intelligent tutoring system using Bayesian Networks. The design model of an intelligent tutoring system is directed towards the separation between the domain knowledge and the tutor shell. The architecture is composed by a user model, a knowledge base, an adaptation module, a pedagogical module and a presentation module. Bayesian Networks are used to assess user’s state of knowledge and preferences, in order to suggest pedagogical options and recommend future steps in the tutor. The proposed architecture is implemented in the Internet, enabling its use as an e-learning tool. An example of an intelligent tutoring system is shown for illustration purposes. --- paper_title: Fuzzy set approach to the assessment of student-centered learning paper_content: Assessment of student learning is an important task in a teaching and learning process. It has a strong influence on students' approaches to learning and their outcomes. Development in tertiary education has shifted its emphasis from teacher-centered learning to student-centered learning. In a student-centered learning environment, criterion-referenced assessment techniques are often used in current education research and practice. However, it sometimes happens that the assessment criteria and their corresponding weights are solely determined by the lecturers in charge. This may reduce the interest of students' participation and lower the quality of their learning. This paper presents an integrated fuzzy set approach to assess the outcomes of student-centered learning. It uses fuzzy set principles to represent the imprecise concepts for subjective judgment and applies a fuzzy set method to determine the assessment criteria and their corresponding weights. Based on the commonly agreed assessment criteria, students' learning outcomes are evaluated on a fuzzy grade scale. The proposed fuzzy set approach incorporates students' opinions into assessment and allows them to have a better understanding on the assessment criteria. It aims at encouraging students to participate in the whole learning process and providing an open and fair environment for assessment. --- paper_title: Propose of fuzzy logic-based students' learning assessment paper_content: This paper proposes the students' learning assessment by using fuzzy logic. The framework of practical learning system for computer discipline is also presented to explain a conceptual design of an intelligent tutorial system. The proposed framework composes of six components including interface module, domain knowledge, inference engine, student module, mentor module and pedagogical module. 
The inference engine performed students' group classification form on-line pre-test examination before starting the practical worksheet application. Two input parameters consisting of percentage of score and time were established as inputs for membership functions of fuzzy logic system. The twenty-five fuzzy rules were created for the proposed method by experts. The defuzzification of output membership functions, including good, fair and improve were performed by using the centroid method. In this paper, 26 students were tested in order to compare the students' learning performance, assessed by fuzzy logic and t-score method. The results revealed that the proposed method was a flexible process to classify students' learning group based on the objectives of subject and the real time performance comparing with t-score method. --- paper_title: Introduction to Type-2 Fuzzy Logic Control paper_content: We describe in this book, new methods for building intelligent systems using type-2 fuzzy logic and soft computing techniques. Soft Computing (SC) consists of several computing paradigms, including type-1 fuzzy logic, neural networks, and genetic algorithms, which can be used to create powerful hybrid intelligent systems. In this book, we are extending the use of fuzzy logic to a higher order, which is called type- 2 fuzzy logic [13]. Combining type-2 fuzzy logic with traditional SC techniques, we can build powerful hybrid intelligent systems that can use the advantages that each technique offers in solving complex control problems. --- paper_title: Adaptive User Interfaces for Intelligent E-Learning: Issues and Trends paper_content: Adaptive User Interfaces have a long history rooted in the emergence of such eminent technologies as Artificial Intelligence, Soft Computing, Graphical User Interface, JAVA, Internet, and Mobile Services. More specifically, the advent and advancement of the Web and Mobile Learning Services has brought forward adaptivity as an immensely important issue for both efficacy and acceptability of such services. The success of such a learning process depends on the intelligent context-oriented presentation of the domain knowledge and its adaptivity in terms of complexity and granularity consistent to the learner’s cognitive level/progress. Researchers have always deemed adaptive user interfaces as a promising solution in this regard. However, the richness in the human behavior, technological opportunities, and contextual nature of information offers daunting challenges. These require creativity, cross-domain synergy, cross-cultural and cross-demographic understanding, and an adequate representation of mission and conception of the task. This paper provides a review of state-of-the-art in adaptive user interface research in Intelligent Multimedia Educational Systems and related areas with an emphasis on core issues and future directions. --- paper_title: Student modeling based on fuzzy inference mechanisms paper_content: The paper presents a competence-based instructional design system and a way to provide a personalization of navigation in the course content. The navigation aid tool builds on the competence graph and the student model, which includes the elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is more prepared to study. We use fuzzy set theory for dealing with uncertainty. 
The marks of the assessment tests are transformed into linguistic terms and used for assigning values to linguistic variables. For each competence, the level of difficulty and the level of knowing its prerequisites are calculated based on the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence regarding its level of recommendation. --- paper_title: Intelligent student profiling with fuzzy models paper_content: Traditional Web-based educational systems still have several shortcomings when comparing with real-life classroom teaching, such as lack of contextual and adaptive support, lack of flexible support of the presentation and feedback, lack of the collaborative support between students and systems. Based on educational theory, personalization increases learning motivation, which can increase the learning effectiveness. A fuzzy epistemic logic has been built to present the student's knowledge state, while the course content is modeled by the concept of context. By applying such fuzzy epistemic logic, the content model, the student model, and the learning plan have been defined formally. A multi-agent based student profiling system has been presented. Our profiling system stores the learning activities and interaction history of each individual student into the student profile database. Such profiling data is abstracted into a student model. Based on the student model and the content model, dynamic learning plans for individual students are made. Students get personalized learning materials, personalized quizzes, and personalized advice. In order to understand the students' perception of our prototype system and to evaluate the students' learning effectiveness, a field survey has been conducted. The results from the survey indicate that our prototype system makes great improvement on personalization of learning and achieves learning effectiveness. --- paper_title: Fuzzy User Modeling for Adaptation in Educational Hypermedia paper_content: Education is a dominating application area for adap- tive hypermedia. Web-based adaptive educational systems incor- porate complex intelligent tutoring techniques, which enable the system to recognize an individual user and their needs, and con- sequently adapt the instructional sequence. The personalization is done through the user model, which collects information about the user. Since the description of user knowledge and features also in- volves imprecision and vagueness, a user model has to be designed that is able to deal with this uncertainty. This paper presents a way of describing the uncertainty of user knowledge, which is used for user knowledge modeling in an adaptive educational system. The system builds on the concept domain model. A fuzzy user model is proposed to deal with vagueness in the user's knowledge descrip- tion. The model uses fuzzy sets for knowledge representation and linguistic rules for model updating. The data from the fuzzy user model form the basis for the system adaptation, which implements various navigation support techniques. The evaluation of the pre- sented educational system has shown that the system and its adap- tation techniques provide a valuable, easy-to-use tool, which pos- itively affects user knowledge acquisition and, therefore, leads to better learning results. 
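Illustrative aside: several of the fuzzy student-modeling entries above rely on type-1 fuzzy inference over linguistic variables (fuzzy IF-THEN rules combined with min/max operators and centroid defuzzification). The following minimal sketch is not the rule base of any cited system; the membership functions, rules and breakpoints are invented to show the mechanics of recommending a competence from two 0-100 marks.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over array x (a < b < c)."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def low(v, hi=60.0):   # degree to which a 0-100 mark counts as "low"
    return float(np.clip((hi - v) / hi, 0.0, 1.0))

def high(v, lo=40.0):  # degree to which a 0-100 mark counts as "high"
    return float(np.clip((v - lo) / (100.0 - lo), 0.0, 1.0))

# Output universe: how strongly to recommend studying the competence next (0-100)
y = np.linspace(0.0, 100.0, 1001)
REC_LOW, REC_MED, REC_HIGH = tri(y, 0, 15, 40), tri(y, 30, 50, 70), tri(y, 60, 85, 100)

def recommend(topic_mark, prereq_mark):
    """Mamdani-style inference: min for AND, clipping for implication,
    max aggregation, centroid defuzzification."""
    r_high = min(low(topic_mark), high(prereq_mark))  # weak on topic, prerequisites mastered
    r_med = min(low(topic_mark), low(prereq_mark))    # weak on both: study prerequisites first
    r_low = high(topic_mark)                          # topic already mastered
    agg = np.maximum.reduce([np.minimum(r_high, REC_HIGH),
                             np.minimum(r_med, REC_MED),
                             np.minimum(r_low, REC_LOW)])
    return float(np.sum(y * agg) / (np.sum(agg) + 1e-12))

print(round(recommend(topic_mark=30, prereq_mark=80), 1))  # weak learner with solid prerequisites
```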
--- paper_title: Inducing Fuzzy Models for Student Classification paper_content: We report an approach for implementing predictive fuzzy systems that manage capturing both the imprecision of the empirically induced classifications and the imprecision of the intuitive linguistic expressions via the extensive use of fuzzy sets. From end-users' point of view, the approach enables encapsulating the technical details of the underlying information system in terms of an intuitive linguistic interface. We describe a novel technical syntax of fuzzy descriptions and expressions, and outline the related systems of fuzzy linguistic queries and rules. To illustrate the method, we describe it in terms of a concrete educational user modelling application. We report experiments with two data sets, describing the records of the students attending to a university mathematics course in 2003 and 2004. In brief, we aim identifying the failing students of the year 2004, and develop a procedure for empirically inducing and assigning each student a fuzzy property "poor", which helps capturing the students needing extra assistance. In the educational context, the approach enables the construction of applications exploiting simple and intuitive student models, that to certain extent are self-evident. --- paper_title: An Interval Type-2 Fuzzy Logic Based System for Customised Knowledge Delivery within Pervasive E-Learning Platforms paper_content: E-learning involves the computer and network-enabled transfer of skills and knowledge. The internet has become a central core to the educative environment experienced by learners, hence facilitating learning at any location and at any time thus creating pervasive learning environments. There is a growing interest in developing e-Learning platforms which enable the creation of personalized learning environments to suit the students' individual requirements and needs. However, the vast majority of the existing adaptive educational systems do not learn from the users' behaviors to create white box models which could handle the linguistic uncertainties and could be easily read and analyzed by the lay user. This paper presents a type-2 fuzzy logic based system that can learn the users' preferred knowledge delivery based on the students characteristics to generate a personalized learning environment. The type-2 fuzzy model is first created from data acquired from a number of students with different capabilities and needs. The learnt type-2 fuzzy-based model is then used to improve the knowledge delivery to the various students based on their individual characteristics. We will show how the presented system enables customizing the learning environments to improve individualized knowledge delivery to students which can result in enhancing the students' performance. The proposed system is able to continuously respond and adapt to students' needs on a highly individualized basis. Thus, online courses can be structured to deliver customized education to the student based upon various criteria of individual needs and characteristics. The efficiency of the proposed system has been tested through various experiments with the participation of 17 students. These experiments indicate the ability of the proposed type-2 fuzzy logic based system to handle the linguistic uncertainties to produce better performance than the type-1 based fuzzy systems. 
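Illustrative aside: the interval type-2 entries above attach a footprint of uncertainty to each membership grade, so an input maps to a membership interval rather than a single value. The toy sketch below uses invented membership parameters and deliberately stops at the rule firing interval instead of performing full Karnik-Mendel type reduction.

```python
import numpy as np

def tri(x, a, b, c):
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def it2_membership(x, a, b, c, spread=10.0, scale=0.8):
    """Interval type-2 set: lower and upper membership grades bounding a footprint of uncertainty."""
    upper = tri(x, a - spread, b, c + spread)
    lower = scale * tri(x, a + spread, b, c - spread)
    return lower, upper

# Grade of a mark of 55 in a "medium knowledge" interval type-2 set centred on 50
lo1, up1 = it2_membership(np.array([55.0]), 20, 50, 80)
print("membership interval:", (round(float(lo1[0]), 2), round(float(up1[0]), 2)))

# Firing interval of a rule "IF knowledge is medium AND effort is medium THEN ..."
lo2, up2 = it2_membership(np.array([40.0]), 20, 50, 80)
firing = (min(float(lo1[0]), float(lo2[0])), min(float(up1[0]), float(up2[0])))
print("rule firing interval:", firing)
# A full controller would type-reduce such intervals (e.g. Karnik-Mendel) before defuzzifying.
```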
--- paper_title: Learning achievement evaluation strategy using fuzzy membership function paper_content: In this paper, the authors suggest a new learning achievement evaluation strategy in student's learning procedure. They call this fuzzy evaluation. They may assign fuzzy lingual variables to each question pertaining to its importance, complexity and difficulty by using fuzzy membership functions. Then one can evaluate a score depending on the membership degree of uncertainty factors in each question. In addition, they consider the time consuming element for solving a question. They adapt an inverse sigmoid function to consider time consuming elements, fuzzy concentration and dilation function for importance, a sigmoid function for complexity, and fuzzy square method for difficulty. --- paper_title: Rethinking Pedagogy for a Digital Age : Designing for 21st Century Learning paper_content: Table of Contents An introduction to rethinking pedagogy by Helen Beetham and Rhona Sharpe Part One: Principles and practices of designing for learning Chapter 1 Technology enhanced learning: the role of theory by Terry Mayes and Sara de Freitas Chapter 2 Designing for active learning in technology-rich contexts by Helen Beetham Chapter 3 The analysis of complex learning environments by Peter Goodyear and Lucila Carvalho Chapter 4 The challenge of teachers' design practice by Liz Masterman Chapter 5 Tools and resources to guide practice by By Grainne Conole Chapter 6 Describing ICT-based learning designs that promote quality learning outcomes by Ron Oliver, Barry Harper, Sandra Wills, Shirley Agostinho and John Hedberg Chapter 7 Learning designs as stimulus and support for teachers' design practices by Shirley Agostinho, Sue Bennett, Lori Lockyer, Jennifer Jones and Barry Harper Chapter 8 Representing practitioner experiences through learning designs and patterns by Patrick McAndrew and Peter Goodyear Chapter 9 The influence of open resources on design practice by Chris Pegler Part Two: Contexts for design Chapter 10 Designing for learning in course teams by Rhona Sharpe and Martin Oliver Chapter 11 The art of design by Derek Harding and Bruce Ingraham Chapter 12 Activity designs for professional learning by Rachel Ellaway Chapter 13 Designing for practice: A view from social science by Chris Jones Chapter 14 Student as producer is hacking the university by Joss Winn and Dean Lockwood Chapter 15 The LAMS community: Building communities of designers by James Dalziel Chapter 16 Design principles for mobile learning by Agnes Kukulska-Hulme and John Traxler Chapter 17 Designing for learning in an uncertain future by Helen Beetham --- paper_title: Adaptive Course Sequencing for Personalization of Learning Path Using Neural Network paper_content: Advancements in technology have led to a paradigm shift from traditional to personalized learning methods with varied implementation strategies. Presenting an optimal personalized learning path in an educational hypermedia system is one of the strategies that is important in order to increase the effectiveness of a learning session for each student. However, this task requires much effort and cost particularly in defining rules for the adaptation of learning materials. This research focuses on the adaptive course sequencing method that uses soft computing techniques as an alternative to a rule-based adaptation for an adaptive learning system. The ability of soft computing technique in handling uncertainty and incompleteness of a problem is exploited in the study. 
In this paper we present recent work concerning concept-based classification of learning object using artificial neural network (ANN). Self Organizing Map (SOM) and Back Propagation (BP) algorithm were employed to discover the connection between the domain concepts contained in the learning object and the learner’s learning need. The experiment result shows that this approach is assuring in determining a suitable learning object for a particular student in an adaptive and dynamic learning environment. --- paper_title: Adaptive User Interfaces for Intelligent E-Learning: Issues and Trends paper_content: Adaptive User Interfaces have a long history rooted in the emergence of such eminent technologies as Artificial Intelligence, Soft Computing, Graphical User Interface, JAVA, Internet, and Mobile Services. More specifically, the advent and advancement of the Web and Mobile Learning Services has brought forward adaptivity as an immensely important issue for both efficacy and acceptability of such services. The success of such a learning process depends on the intelligent context-oriented presentation of the domain knowledge and its adaptivity in terms of complexity and granularity consistent to the learner’s cognitive level/progress. Researchers have always deemed adaptive user interfaces as a promising solution in this regard. However, the richness in the human behavior, technological opportunities, and contextual nature of information offers daunting challenges. These require creativity, cross-domain synergy, cross-cultural and cross-demographic understanding, and an adequate representation of mission and conception of the task. This paper provides a review of state-of-the-art in adaptive user interface research in Intelligent Multimedia Educational Systems and related areas with an emphasis on core issues and future directions. --- paper_title: Andes : A coached problem solving environment for physics paper_content: Andes is an Intelligent Tutoring System for introductory college physics. The fundamental principles underlying the design of Andes are: (1) encourage the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, (2) facilitate transfer from the system by making the interface as much like a piece of paper as possible, (3) give immediate feedback after each action to maximize the opportunities for learning and minimize the amount of time spent going down wrong paths, and (4) give the student flexibility in the order in which actions are performed, and allow them to skip steps when appropriate. This paper gives an overview of Andes, focusing on the overall architecture and the student's experience using the system. --- paper_title: Designing Intelligent Tutoring Systems: A Bayesian Approach paper_content: This paper proposes a model and an architecture for designing intelligent tutoring system using Bayesian Networks. The design model of an intelligent tutoring system is directed towards the separation between the domain knowledge and the tutor shell. The architecture is composed by a user model, a knowledge base, an adaptation module, a pedagogical module and a presentation module. Bayesian Networks are used to assess user’s state of knowledge and preferences, in order to suggest pedagogical options and recommend future steps in the tutor. The proposed architecture is implemented in the Internet, enabling its use as an e-learning tool. An example of an intelligent tutoring system is shown for illustration purposes. 
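Illustrative aside: the Bayesian ITS entries above describe student models that revise a belief about concept mastery from observed answers. The sketch below shows a single-node version of that update with invented slip and guess parameters; a real system would embed many such nodes in a larger Bayesian network.

```python
# Illustrative parameters only; a cited system would learn these from data or expert elicitation.
P_MASTERY_PRIOR = 0.3   # prior belief that the student has mastered the concept
P_SLIP = 0.1            # probability of answering wrongly despite mastery
P_GUESS = 0.2           # probability of answering correctly without mastery

def update(p_mastery, correct):
    """One Bayesian update of the mastery belief given an observed answer."""
    if correct:
        num = (1 - P_SLIP) * p_mastery
        den = num + P_GUESS * (1 - p_mastery)
    else:
        num = P_SLIP * p_mastery
        den = num + (1 - P_GUESS) * (1 - p_mastery)
    return num / den

belief = P_MASTERY_PRIOR
for answer in [True, True, False, True]:   # a short, invented response sequence
    belief = update(belief, answer)
    print(f"observed {'correct' if answer else 'wrong'} -> P(mastery) = {belief:.3f}")
```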
--- paper_title: An introduction to hidden Markov models paper_content: The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition. --- paper_title: Mining learner–system interaction data: implications for modeling learner behaviors and improving overlay models paper_content: A growing body of empirical evidence suggests that the adaptive capabilities of computer-based learning environments can be improved through the use of educational data mining techniques. Log-file trace data provides a wealth of information about learner behaviors that can be captured, monitored, and mined for the purposes of discovering new knowledge and detecting patterns of interest. This study aims to leverage these analytical techniques to mine learner behaviors in relation to both diagnostic reasoning processes and outcomes in BioWorld, a computer-based learning environment that supports learners in practicing medical problem solving and receiving formative feedback. In doing so, hidden Markov models are used to model behavioral indicators of proficiency during problem solving, while an ensemble of text classification algorithms is applied to the written case summaries that learners write as an outcome of solving a case in BioWorld. The application of these algorithms characterizes learner behaviors at different phases of problem solving, which provides corroborating evidence in support of where revisions can be made to provide design guidelines for the system. We conclude by discussing the instructional design and pedagogical implications for the --- paper_title: Adaptive E-learning using Genetic Algorithms paper_content: In this paper, we describe an adaptive system conceived in order to generate pedagogical paths which are adapted to the learner profile and to the current formation pedagogical objective. We have studied the problem as an "Optimization Problem". Using Genetic Algorithms, the system seeks an optimal path starting from the learner profile to the pedagogic objective passing by intermediate courses. To prepare the courses for adaptation, the application creates a descriptive sheet for resources, in XML format, during its integration into the database. Some experiments are added to this paper. ---
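Illustrative aside: the genetic-algorithm entry above casts the construction of a pedagogical path as an optimization problem. The sketch below is a generic binary-chromosome GA with an invented resource catalogue, learner profile and fitness function, not the encoding used by the cited system.

```python
import random
random.seed(1)

# Invented catalogue: (estimated difficulty, estimated study hours) per optional resource
RESOURCES = [(2, 1.0), (3, 1.5), (5, 2.0), (4, 1.0), (6, 2.5),
             (7, 3.0), (3, 0.5), (8, 3.5), (5, 1.5), (6, 2.0)]
LEARNER_LEVEL, TIME_BUDGET = 5, 8.0   # hypothetical learner profile and pedagogical constraint

def fitness(bits):
    """Reward paths whose mean difficulty matches the learner and whose workload fits the budget."""
    chosen = [RESOURCES[i] for i, b in enumerate(bits) if b]
    if not chosen:
        return -1e9
    mean_diff = sum(d for d, _ in chosen) / len(chosen)
    hours = sum(h for _, h in chosen)
    return -abs(mean_diff - LEARNER_LEVEL) - max(0.0, hours - TIME_BUDGET) + 0.3 * len(chosen)

def evolve(pop_size=40, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in RESOURCES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]  # uniform crossover
            child = [1 - g if random.random() < p_mut else g for g in child]   # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("selected resources:", [i for i, b in enumerate(best) if b], "fitness:", round(fitness(best), 2))
```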
Title: A Survey of Artificial Intelligence Techniques Employed for Adaptive Educational Systems within E-Learning Platforms
Section 1: Introduction
Description 1: Introduce the motivation and objectives for enhancing student learning through adaptive educational systems in e-learning, including background information and current trends.
Section 2: Overview of Recent Topics Related to AI techniques for Adaptive Educational Systems
Description 2: Discuss recent developments and topics related to adaptive educational systems, including the importance of modeling individual differences and how AI techniques facilitate adaptive learning.
Section 3: Massive Open Online Courses
Description 3: Explore the emergence and significance of MOOCs, their challenges such as high dropout rates, and how AI techniques can improve the adaptability and personalization of these courses.
Section 4: Educational Data Mining Techniques
Description 4: Detail the machine learning and statistical techniques used in educational data mining (EDM), separating them into predictive and descriptive methods, and their application in adaptive educational systems.
Section 5: Predictive Methods
Description 5: Elaborate on the various predictive methods like classification, regression, and prediction of density, detailing how they are used to forecast educational outcomes and personalize learning experiences.
Section 6: Association Rule Mining
Description 6: Explain the role of association rule mining in uncovering hidden patterns and relationships within educational data to enhance the adaptability of learning systems.
Section 7: Clustering Methods
Description 7: Describe clustering techniques used to group similar data points, aiding in the initial classification and subsequent detailed analysis of educational data.
Section 8: An Overview on Artificial Intelligence Methodologies Employed for Adaptive Educational Systems
Description 8: Provide a comprehensive overview of different AI methodologies like fuzzy logic, decision trees, neural networks, Bayesian networks, hidden Markov models, and genetic algorithms, highlighting their use in adaptive educational systems.
Section 9: Conclusion
Description 9: Summarize the key points discussed in the paper, emphasizing the significance of AI techniques in enhancing adaptive educational systems and their benefits for personalized learning.
Sociophysics: A review of Galam models
7
--- paper_title: Spontaneous Coalition Forming. Why Some Are Stable? paper_content: A model to describe the spontaneous formation of military and economic coalitions among a group of countries is proposed using spin glass theory. Between each couple of countries, there exists a bond exchange coupling which is either zero, cooperative or conflicting. It depends on their common history, specific nature, and cannot be varied. Then, given a frozen random bond distribution, coalitions are found to spontaneously form. However they are also unstable, making the system very disordered. Countries shift coalitions all the time. Only the setting of macro extra-national coalitions is shown to stabilize alliances among countries. The model sheds new light on the recent instabilities produced in Eastern Europe by the Warsaw Pact dissolution, at odds with the previous communist stability. Current European stability is also discussed with respect to the European Union construction. --- paper_title: Rational group decision making: a random field Ising model at paper_content: A modified version of a finite random field Ising ferromagnetic model in an external magnetic field at zero temperature is presented to describe group decision making. Fields may have a non-zero average. A postulate of minimum inter-individual conflicts is assumed. Interactions then produce a group polarization along one particular choice, which is however randomly selected. A small external social pressure is shown to have a drastic effect on the polarization. Individual biases related to personal backgrounds, cultural values and past experiences are introduced via quenched local competing fields. They are shown to be instrumental in generating a larger spectrum of collective new choices beyond initial ones. In particular, compromise is found to result from the existence of individual competing biases. Conflict is shown to weaken group polarization. The model yields new psychosociological insights about consensus and compromise in groups. --- paper_title: Political paradoxes of majority rule voting and hierarchical systems paper_content: The use of majority rule voting is believed to be instrumental in establishing the democratic operation of political organizations. However, in this work it is shown that, when applied to hierarchical systems, it leads to political paradoxes. To substantiate these findings, a model to construct self-directed pyramidal structures from the bottom up to the top is presented. Using majority rules it is shown that a minority and even a majority can be systematically self-eliminated from top leadership, provided the hierarchy has a minimal number of levels. In some cases, 70% of the population is found to have zero representation after 6 hierarchical levels. Results are discussed with respect to the internal operation of political organizations. --- paper_title: Local dynamics vs. social mechanisms: A unifying frame paper_content: We present a general sequential probabilistic frame, which extends a series of earlier opinion dynamics models. In addition, it orders and classifies all of the existing two-state spin systems. The scheme operates via local updates where a majority rule is applied differently in each possible configuration of a local group. It is weighted by a local probability which is a function of the local value of the order parameter, i.e., the majority-to-minority ratio. The system is thus driven from one equilibrium state into another equilibrium state till no collective change occurs.
A phase diagram can thus be constructed. It has two phases, one where the collective opinion ends up broken along one opinion, and another with an even coexistence of both opinions. Two different regimes, monotonic and dampened oscillatory, are found for the coexistence phase. At the phase transition, local probabilities conserve the density of opinions and reproduce the collective dynamics of the Voter model. The essential behavior of all existing discrete two-state models (Galam, Sznajd, Ochrombel, Stauffer, Krapivsky-Redner, Mobilia-Redner, Behera-Schweitzer, Slanina-Lavicka, Sanchez ...) is recovered and found to depart from each other only in the value of their local probabilities. Corresponding simulations are discussed. It is concluded that one should not judge from the above model results the validity of their respective psycho-social assumptions. --- paper_title: Towards a theory of collective phenomena. III: Conflicts and forms of power paper_content: This paper further develops a new theory of power advanced by the authors in two previous papers (Galam and Moscovici, 1991, 1994). According to this theory, power results from the build-up of conflicts within a group, these conflicts requiring a degree of organizational complexity which is itself a decreasing function of group size. Within this approach, power appears to be a composite of three qualitatively different powers: institutional, generative and ecological. Levels and relationships among these forms of power are considered as a function of the diversity of the group. There also exist three states of organization associated with power evolution. At the group's initial stage is the paradigmatic state. Creation and inclusion of conflicts are accomplished in the transitional state through the building of complexity. At a critical value of diversity, the group moves into the agonal state, in which institutional power vanishes simultaneously with the fusion of generative and ecological powers. --- paper_title: From 2000 Bush-Gore to 2006 Italian elections: Voting at fifty-fifty and the Contrarian Effect paper_content: A sociophysical model for opinion dynamics is shown to embody a series of recent western hung national votes all set at the unexpected and very improbable edge of a fifty-fifty score. It started with the Bush–Gore 2000 American presidential election, followed by the 2002 Stoiber–Schroder, then the 2005 Schroder–Merkel German elections, and finally the 2006 Prodi-Berlusconi Italian elections. In each case, the country was facing drastic choices, the running competing parties were advocating very different programs and millions of voters were involved. Moreover, polls were giving a substantial margin to the predicted winner. While all these events were perceived as accidental and isolated, our model suggests that they are indeed deterministic and obey one single universal phenomenon associated with the effect of contrarian behavior on the dynamics of opinion forming. The non-hung Bush–Kerry 2004 presidential election is shown to belong to the same universal frame. To conclude, the existence of contrarians hints at the repetition of hung elections in the near future. --- paper_title: Modeling rumors: The no plane pentagon french hoax case paper_content: The recent astonishing wide adhesion of French people to the rumor claiming ‘No plane did crash on the Pentagon on September 11’ is given a generic explanation in terms of a model of minority opinion spreading.
Using a majority rule reaction–diffusion dynamics, a rumor is shown to invade a social group with certainty provided it fulfills simultaneously two criteria. First, it must initiate with a support beyond some critical threshold, which however turns out to be always very low. Then it has to be consistent with some larger collective social paradigm of the group. Otherwise it just dies out. Both conditions were satisfied in the French case, with the associated book sold at more than 200 000 copies in just a few days. The rumor was stopped by the firm stand of most newspaper editors stating it is nonsense. Such an incredible social dynamics is shown to result naturally from an open and free public debate among friends and colleagues, each one searching for the truth sincerely on a free-will basis and without individual biases. The polarization process appears also to be very quick, in agreement with reality. It is a very strong anti-democratic reversal of opinion although made quite democratically. The model may apply to a large range of rumors. --- paper_title: On reducing terrorism power: a hint from physics paper_content: The September 11 attack on the US has revealed a terrorism with an unprecedented worldwide range of destruction. Recently, it has been related to the percolation of worldwide spread passive supporters. This scheme puts the suppression of the percolation effect as the major strategic issue in the fight against terrorism. Accordingly, the world density of passive supporters should be reduced below the percolation threshold. In terms of solid policy, it means neutralizing millions of random passive supporters, which is contrary to ethics and out of any sound practical scheme. Given this impossibility, we suggest instead a new strategic scheme to act directly on the value of the terrorism percolation threshold itself without harming the passive supporters. Accordingly, we identify the space hosting the percolation phenomenon to be a multi-dimensional virtual social space which extends the ground earth surface to include the various independent terrorist-fighting goals. The associated percolating cluster is then found to create long-range ground connections to terrorism activity. We are thus able to modify the percolation threshold p_c in the virtual space to reach p < p_c by decreasing the social space dimension, leaving the density p unchanged. At once, that would break down the associated world terrorism network into a family of unconnected finite-size clusters. The current world terrorism threat would thus shrink immediately and spontaneously to a local geographic problem. There, military action would become limited and efficient. --- paper_title: Opinion dynamics in a three-choice system paper_content: We generalize Galam’s model of opinion spreading by introducing three competing choices. At each update, the population is randomly divided into groups of three agents, whose members adopt the opinion of the local majority. In the case of a tie, the local group adopts opinion A, B or C with probabilities α, β and (1-α-β) respectively. We derive the associated phase diagrams and dynamics by both analytical means and simulations. Polarization is always reached within very short time scales. We point out situations in which an initially very small minority opinion can invade the whole system. --- paper_title: Application of statistical physics to politics paper_content: The concepts and techniques of the real-space renormalization group are applied to study majority rule voting in hierarchical structures.
It is found that democratic voting can lead to totalitarianism by keeping in power a small minority. Conditions of this paradox are analyzed and singled out. Indeed, majority rule produces critical thresholds to absolute power. Values of these thresholds can vary from 50% up to at least 77%. The associated underlying mechanism could provide an explanation for both the former apparent eternity of communist leaderships and their sudden collapse. --- paper_title: Universality of Group Decision Making paper_content: Group decision making is assumed to obey some universal features which are independent of both the social nature of the group making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for a possible pressure from outside. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications. Counter-intuitive results are obtained. At this stage no new physical technicality is involved. Instead, the full psycho-sociological implications of the model are drawn. A few cases are then detailed to enlighten them. --- paper_title: Global Physics: From Percolation to Terrorism, Guerilla Warfare and Clandestine Activities paper_content: The September 11 attack on the US has revealed an unprecedented terrorism with worldwide range of destruction. It is argued to result from the first worldwide percolation of passive supporters. They are people sympathetic to the terrorism cause but without being involved with it. They just do not oppose it in cases where they could. This scheme puts suppression of the percolation as the major strategic issue in the fight against terrorism. Acting on the population is shown to be useless. Instead, a new strategic scheme is suggested to increase the terrorism percolation threshold and in turn suppress the percolation. The relevant associated space is identified as a multi-dimensional social space including both the ground earth surface and all the various independent flags displayed by the terrorist group. Some hints are given on how to shrink the geographical spreading of the terrorism threat. The model applies to a large spectrum of clandestine activities including guerilla warfare as well as tax evasion, corruption, illegal gambling, illegal prostitution and black markets. --- paper_title: From Individual Choice to Group Decision Making paper_content: Some universal features are independent of both the social nature of the individuals making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for a possible pressure from outside. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications. Counter-intuitive results are obtained. At this stage no new physical technicality is involved. Instead the full psycho-sociological implications of the model are drawn. A few cases are then detailed to enlighten them.
In addition, several numerical experiments based on our model are shown to give both an insight into the dynamics of the model and suggestions for further research directions. --- paper_title: An evolution theory in finite size systems paper_content: A new model of evolution is presented for finite size systems. Conditions under which a minority species can emerge, spread and stabilize to a macroscopic size are studied. It is found that space organization is instrumental, in addition to a qualitative advantage. Some peculiar topologies ensure the overcoming of the initial majority species. However, the probability of such local clusters is very small and depends strongly on the system size. A probabilistic phase diagram is obtained for small sizes. It reduces to a trivial situation in the thermodynamic limit, thus indicating the importance of dealing with finite systems in evolution problems. Results are discussed with respect to both Darwin and punctuated equilibria theories. --- paper_title: Killer Geometries in Competing Species Dynamics paper_content: We discuss a cellular automata model to study the competition between an emergent, better-fitted species and an existing majority species. The model implements local fights among small groups of individuals and a synchronous random walk on a 2D lattice. The fate of the system, i.e., the spreading or disappearance of the species, is determined by their initial density and fight frequency. The initial density of the emergent species has to be higher than a critical threshold for total spreading, but this value depends in a non-trivial way on the fight frequency. Below the threshold any better adapted species disappears, showing that a qualitative advantage is not enough for a minority to win. No strategy is involved but spatial organization turns out to be crucial. For instance, at minority densities of zero measure some very rare local geometries which occur by chance are found to be killer geometries. Once set, they lead with high probability to the total destruction of the preexisting majority species. The occurrence rate of these killer geometries is a function of the system size. This model may apply to a large spectrum of competing groups like smoker–non-smoker, opinion forming, diffusion of innovation, setting of industrial standards, species evolution, epidemic spreading and cancer growth. --- paper_title: Fragmentation versus stability in bimodal coalitions paper_content: Competing bimodal coalitions among a group of actors are discussed. First, a model from political sciences is revisited. Most of the model statements are found not to be contained in the model. Second, a new coalition model is built. It accounts for local versus global alignment with respect to the joining of a coalition. The existence of two competing world coalitions is found to yield one unique stable distribution of actors. On the contrary, a unique world leadership allows the emergence of unstable relationships. In parallel to regular actors which have a clear coalition choice, "neutral", "frustrated" and "risky" actors are produced. The cold war organisation after World War II is shown to be rather stable. The emergence of a fragmentation process from the eastern group disappearance is explained, as well as the continuing western group stability. Some hints are obtained about possible policies to stabilize world nation relationships. European construction is analyzed with respect to European stability. Chinese stability is also discussed.
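Illustrative aside: the coalition entries above ("Spontaneous Coalition Forming...", "Fragmentation versus stability in bimodal coalitions") describe bimodal coalition formation driven by spin-glass-like bond couplings and a minimum-conflict principle. The toy sketch below brute-forces the two-coalition split of lowest conflict energy for an invented propensity matrix among five hypothetical actors.

```python
from itertools import product

# Invented symmetric propensity matrix J[i][j]: positive = cooperative bond, negative = conflicting bond
ACTORS = ["A", "B", "C", "D", "E"]
J = [[ 0,  2, -3,  1, -2],
     [ 2,  0,  1, -2, -1],
     [-3,  1,  0,  2, -2],
     [ 1, -2,  2,  0,  3],
     [-2, -1, -2,  3,  0]]

def energy(spins):
    """Spin-glass style cost: a bond is satisfied when cooperating pairs share a coalition
    (same spin) and conflicting pairs are separated (opposite spins)."""
    e = 0.0
    for i in range(len(spins)):
        for j in range(i + 1, len(spins)):
            e -= J[i][j] * spins[i] * spins[j]
    return e

best = min(product([-1, 1], repeat=len(ACTORS)), key=energy)
coalition_1 = [a for a, s in zip(ACTORS, best) if s == 1]
coalition_2 = [a for a, s in zip(ACTORS, best) if s == -1]
print("minimum-conflict split:", coalition_1, "vs", coalition_2, "energy =", energy(best))
```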
--- paper_title: Comment on “A landscape theory of aggregation” paper_content: The problem of aggregation processes in alignments is the subject of a paper published recently in a statistical physics journal. Two models are presented and discussed in that paper. First, the energy landscape model proposed by Robert Axelrod and D. Scott Bennett (this Journal, 23 (1993), 211–33) is analysed. The model is shown not to include most of its claimed results. Then a second model is presented to reformulate the problem correctly within statistical physics and to extend it beyond the initial Axelrod-Bennett analogy. ::: Serge Galam, ‘Fragmentation Versus Stability in Bimodal Coalitions’, Physica A , 230 (1966), 174–88. --- paper_title: The role of inflexible minorities in the breaking of democratic opinion dynamics paper_content: We study the effect of inflexible agents on two state opinion dynamics. The model operates via repeated local updates of random grouping of agents. While floater agents do eventually flip their opinion to follow the local majority, inflexible agents keep their opinion always unchanged. It is a quenched individual opinion. In the bare model (no inflexibles), a separator at 50% drives the dynamics towards either one of two pure attractors, each associated with a full polarization along one of the opinions. The initial majority wins. The existence of inflexibles for only one of the two opinions is found to shift the separator at a lower value than 50% in favor of that side. Moreover it creates an incompressible minority around the inflexibles, one of the pure attractors becoming a mixed phase attractor. In addition above a threshold of 17% inflexibles make their side sure of winning whatever the initial conditions are. The inflexible minority wins. An equal presence of inflexibles on both sides restores the balanced dynamics with again a separator at 50% and now two mixed phase attractors on each side. Nevertheless, beyond 25% the dynamics is reversed with a unique attractor at a 50–50 stable equilibrium. But a very small advantage in inflexibles results in a decisive lowering of the separator at the advantage of the corresponding opinion. A few percent advantage does guarantee to become majority with one single attractor. The model is solved exhaustedly for groups of size 3. --- paper_title: Majority rule, hierarchical structures, and democratic totalitarianism: a statistical approach paper_content: Abstract An alternative model to the formation of pyramidal structures is presented in the framework of social systems. The population is assumed to be distributed between two social orientations G and H with respective probabilities p 0 and (1 − p 0 ). Instead of starting the hierarchy with a given H -orientation at the top and then going downwards in a deterministic way, the hierarchy is initiated randomly at the bottom from the surrounding population. Every level is then selected from the one underneath using the principle of majority rule. The hierarchy is thus self-oriented at the top. It is shown how such self-oriented hierarchies are always H -oriented provided they have a minimal number of levels which is determined by the value of p 0 . Their stability is studied against increases in p 0 . An ideal transition to G -self-oriented hierarchies is obtained. --- paper_title: Stability of leadership in bottom-up hierarchical organizations paper_content: The stability of a leadership against a growing internal opposition is studied in bottom-up hierarchical organizations. 
Using a very simple model with bottom-up majority rule voting, the dynamics of power distribution at the various hierarchical levels is calculated within a probabilistic framework. Given a leadership at the top, the opposition weight from the hierarchy bottom is shown to fall off quickly while climbing up the hierarchy. It reaches zero after only a few hierarchical levels. Indeed the voting process is found to obey a threshold dynamics with a deterministic top outcome. Accordingly the leadership may stay stable against very large amplitude increases in the opposition at the bottom level. An opposition can thus grow steadily from few percent up to seventy seven percent with not one a single change at the elected top level. However and in contrast, from one election to another, in the vicinity of the threshold, less than a one percent additional shift at the bottom level can drive a drastic and brutal change at the top. The opposition topples the current leadership at once. In addition to analytical formulas, results from a large scale simulation are presented. The results may shed a new light on management architectures as well as on alert systems. They could also provide some --- paper_title: Social paradoxes of majority rule voting and renormalization group paper_content: Real-space renormalization group ideas are used to study a voting problem in political science. A model to construct self-directed pyramidal structures from bottom up to the top is presented. Using majority rules, it is shown that a minority and even a majority can be systematically self-eliminated from top leadership, provided the hierarchy has a minimal number of levels. In some cases, 70% of the population is found to have zero representation after six hierarchical levels. Results are discussed with respect to the internal operation of political organizations. --- paper_title: Optimizing Conflicts in the Formation of Strategic Alliances paper_content: Abstract:Coalition setting among a set of actors (countries, firms, individuals) is studied using concepts from the theory of spin glasses. Given the distribution of respective bilateral propensities to either cooperation or conflict, the phenomenon of local aggregation is modeled. In particular the number of coalitions is determined according to a minimum conflict principle. It is found not to be always two. Along these lines, previous studies are revisited and are found not to be consistent with their own principles. The model is then used to describe the fragmentation of former Yugoslavia. Results are compared to the actual situation. --- paper_title: FASHION, NOVELTY AND OPTIMALITY: An application from Physics paper_content: We apply a physical-based model to describe the clothes fashion market. Every time a new outlet appears on the market, it can invade the market under certain specific conditions. Hence, the “old” outlet can be completely dominated and disappears. Each creator competes for a finite population of agents. Fashion phenomena are shown to result from a collective phenomenon produced by local individual imitation effects. We assume that, in each step of the imitation process, agents only interact with a subset rather than with the whole set of agents. People are actually more likely to influence (and be influenced by) their close “neighbors”. Accordingly, we discuss which strategy is best fitted for new producers when people are either simply organized into anonymous reference groups or when they are organized in social groups hierarchically ordered. 
While counterfeits are shown to reinforce the first strategy, creating social leaders can permit to avoid them. --- paper_title: Towards a theory of collective phenomena. II: Conformity and power paper_content: A new theory of power is presented using the concept of symmetry breakdown in small and large groups. Power appears to result from the building up of conflicts within the group. Introduction and support of these conflicts requires an internal organization of the group. The organization-associated complexity is a decreasing function of group size. Thus small groups have more difficulties in generating internal conflicts than large ones. This group dynamic is characterized by two states which are different in their nature. The group is first built within the paradigmatic state aimed to determine and reproduce group conformity The group challenge is then to reach the transitional state which enriches the group possibilities through the inclusion and stabilization of internal conflicts. --- paper_title: Towards a theory of collective phenomena: Consensus and attitude changes in groups paper_content: This study presents the outline of a model for collective phenomena. A symmetry-breaking model combines a number of well-established social psychology hypotheses with recent concepts of statistical physics. Specifically we start out from the regularities obtained in studies on the polarization of attitudes and decisions. From a strictly logical point of view, it is immediately clear that aggregation effects must be analysed separately from group effects as such. The conceptual analysis of the assumed mechanisms reveals that when we deal with phenomena that have until now been designated as polarization phenomena, we are faced not with a single phenomenon, as was believed hitherto, but with a whole class of phenomena. For this reason it would be appropriate to deal with them differentially both from an empirical and from a theoretical point of view. It is possible to show, moreover, that in principle polarization is a direct function of interaction and, beyond a critical threshold an inverse function of the differentiation between group members. A certain number of verifiable conjectures are presented on the basis of physio-mathematical-psychological considerations. It is to be hoped that these theoretical outlines will make it possible to give a new lease on life to a field of research that has established solid facts, but that became trapped in a dead-end road, for lack of a sufficiently broad analysis. --- paper_title: Dictatorship from majority rule voting paper_content: Abstract:Majority rule voting in a multi-level system is studied using tools from the physics of disorder. We are not dealing with nation-wide general elections but rather with hierarchical organisations made of small committees. While in theory, for a two candidate election, the critical threshold to absolute power is , the usual existence of some local and reasonable bias makes it asymmetric, transforming a democratic system in effect to a dictatorship. The underlying dynamics of this democratic self-elimination is studied using a simulation which visualizes the full process. In addition the effect of non-voting persons (abstention, sickness, apathy) is also studied. It is found to have an additional drastic effect on the asymmetry of the threshold value to power. Some possible applications are mentioned. 
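Several entries above analyze the same elementary mechanism: a fraction p of bottom-level supporters is propagated upward through repeated majority votes in small committees, and the outcome is governed by the unstable fixed point of the resulting voting flow. The sketch below reproduces that flow for two simple cases consistent with the quoted abstracts, committees of size 3 (threshold at 50%) and committees of size 4 whose 2-2 ties favor the ruling side (threshold near 77%); the function names are illustrative, not taken from the cited papers.

# Bottom-up majority-rule voting flow iterated over hierarchical levels.
# p is the challenger's support at one level; the next level is elected by
# committees drawn at random from the current one.

def level_up_size3(p):
    # Committee of 3: a challenger delegate needs 2 or 3 supporters.
    return p**3 + 3 * p**2 * (1 - p)

def level_up_size4_tie_bias(p):
    # Committee of 4 with 2-2 ties resolved in favor of the ruling side:
    # the challenger needs 3 or 4 supporters to win the seat.
    return p**4 + 4 * p**3 * (1 - p)

def climb(p0, rule, levels=8):
    trajectory = [p0]
    for _ in range(levels):
        trajectory.append(rule(trajectory[-1]))
    return trajectory

if __name__ == "__main__":
    # Unbiased size-3 committees: the separator sits at 50%.
    print([round(x, 3) for x in climb(0.45, level_up_size3)])            # shrinks to 0
    print([round(x, 3) for x in climb(0.55, level_up_size3)])            # grows to 1
    # Tie-biased size-4 committees: the separator moves to about 0.767, so even
    # a 70% bottom-level majority is self-eliminated after a few levels.
    print([round(x, 3) for x in climb(0.70, level_up_size4_tie_bias)])   # shrinks to 0
    print([round(x, 3) for x in climb(0.80, level_up_size4_tie_bias)])   # grows to 1

In this sketch the 77% figure is simply the non-trivial fixed point of the size-4 tie-biased flow, the root of p = 4p^3 - 3p^4 other than 0 and 1, namely (1 + sqrt(13))/6, roughly 0.7676, which matches the range of thresholds quoted in the abstracts above.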
--- paper_title: Threshold Phenomena versus Killer Clusters in Bimodal Competion for Standards paper_content: Given an individually used standard on a territory we study the conditions for total spreading of a new emergent better fitted competing standard. The associated dynamics is monitored by local competing updating which occurs at random among a few individuals. The analysis is done using a cellular automata model within a two-dimensional lattice with synchronous random walk. Starting from an initial density of the new standard the associated density evolution is studied using groups of four individuals each. For each local update the outcome goes along the local majority within the group. However in case of a tie, the better fitted standard wins. Updates may happen at each diffusive step according to some fixed probability. For every value of that probability a critical threshold, in the initial new emergent standard density, is found to determine its total either disappearance or spreading making the process a threshold phenomenon. Nevertheless it turns out that even at a zero density measure of the new emergent standard there exits some peculiar killer clusters of it which have a non zero probability to grow and invade the whole system. At the same time the occurrence of such killer clusters is a very rare event and is a function of the system size. Application of the model to a large spectrum of competing dynamics is discussed. It includes the smoker-non smoker fight, opinion forming, diffusion of innovation, species evolution, epidemic spreading and cancer growth. --- paper_title: Sociophysics: A mean behavior model for the process of strike paper_content: Plant for the treatment and the oxidation of antimony minerals, wherein an antimony mineral with any granulometry and humidity, eventually crushed, is carried by a hot gaseous stream, subjected to milling, drying, subsequent separations and then introduced in a rotary kiln in a state of extreme subdivision and oxidized dispersed in a gaseous compressed oxidizing stream. --- paper_title: Real space renormalization group and totalitarian paradox of majority rule voting paper_content: The effect of majority rule voting in hierarchical structures is studied using the basic concepts from real space renormalization group. It shows in particular that a huge majority can be self-eliminated while climbing up the hierarchy levels. This majority democratic self-elimination articulates around the existence of fixed points in the voting flow. An unstable fixed point determines the critical threshold to full and total power. It can be varied from 50% up to 77% of initial support. Our model could shed new light on the last century eastern European communist collapse. --- paper_title: Political paradoxes of majority rule voting and hierarchical systems paper_content: The use of majority rule voting is believed to be instrumental to establish democratic operating of political organizations. However in this work it is shown that, while applied to hierarchical systems, it leads to political paradoxes. To substantiate these findings a model to construct self-directed pyramidal structures from bottom up to the top is presented. Using majority rules it is shown that a minority and even a majority can be systematically self-eliminated from top leadership, provided the hierarchy has a minimal number of levels. In some cases, 70% of the population is found to have zero representation after 6 hierarchical levels. 
Results are discussed with respect to internal operating of political organizations.
--- paper_title: Rational group decision making: a random field Ising model at T = 0 paper_content: A modified version of a finite random field Ising ferromagnetic model in an external magnetic field at zero temperature is presented to describe group decision making. Fields may have a non-zero average. A postulate of minimum inter-individual conflicts is assumed. Interactions then produce a group polarization along one very choice which is however randomly selected. A small external social pressure is shown to have a drastic effect on the polarization. Individual bias related to personal backgrounds, cultural values and past experiences are introduced via quenched local competing fields. They are shown to be instrumental in generating a larger spectrum of collective new choices beyond initial ones. In particular, compromise is found to results from the existence of individual competing bias. Conflict is shown to weaken group polarization. The model yields new psychosociological insights about consensus and compromise in groups.
--- paper_title: Towards a theory of collective phenomena. III: Conflicts and forms of power paper_content: This paper further develops a new theory of power advanced by the authors in two previous papers (Galam and Moscovici, 1991, 1994). According to this theory power results from the build up of conflicts within a group, these conflicts requiring a degree of organizational complexity which is itself a decreasing function of group size. Within this approach, power appears to be a composite of three qualitatively different powers, institutional, generative and ecological. Levels and relationships among these forms of power are considered as a function of the diversity of the group. There exist also three states of organization associated with power evolution. At the group initial stage is the paradigmatic state. Creation and inclusion of conflicts are accomplished in the transitional state through the building of complexity. At a critical value of diversity, the group moves into the agonal state in which institutional power vanishes simultaneously with the fusion of generative and ecological powers
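The random field Ising picture of group decision making summarized above can be made concrete with a tiny zero-temperature search: pair couplings favor agreement, quenched fields encode individual biases, and a small uniform field plays the role of external social pressure. The sketch below is a toy illustration under arbitrary assumptions (eight individuals, a uniform coupling, perfectly balanced biases), not the construction used in the cited papers.

# Toy zero-temperature group decision: find the choice configuration that
# minimizes conflict, then watch how a small external "social pressure" H
# tilts the collective polarization.
from itertools import product

def energy(s, J, h, H):
    n = len(s)
    e = -sum(J * s[i] * s[j] for i in range(n) for j in range(i + 1, n))  # pair exchanges
    e -= sum((h[i] + H) * s[i] for i in range(n))                         # biases + pressure
    return e

def ground_state(J, h, H):
    return min(product((-1, 1), repeat=len(h)), key=lambda s: energy(s, J, h, H))

if __name__ == "__main__":
    J = 0.5                      # uniform cooperative coupling among 8 individuals
    h = [+1] * 4 + [-1] * 4      # perfectly balanced individual biases
    for H in (0.0, 0.2, -0.2):   # no pressure, weak pro, weak contra
        s = ground_state(J, h, H)
        print(f"pressure H={H:+.1f}  polarization={sum(s) / len(s):+.2f}")

Running this prints full polarization in all three cases: at H = 0 the two fully polarized choices are exactly degenerate and the tie is broken arbitrarily, while the sign of the weak pressure H decides which choice the otherwise balanced group collectively adopts.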
A certain number of verifiable conjectures are presented on the basis of physio-mathematical-psychological considerations. It is to be hoped that these theoretical outlines will make it possible to give a new lease on life to a field of research that has established solid facts, but that became trapped in a dead-end road, for lack of a sufficiently broad analysis. --- paper_title: Spontaneous Coalition Forming. Why Some Are Stable? paper_content: A model to describe the spontaneous formation of military and economic coalitions among a group of countries is proposed using spin glass theory. Between each couple of countries, there exists a bond exchange coupling which is either zero, cooperative or conflicting. It depends on their common history, specific nature, and cannot be varied. Then, given a frozen random bond distribution, coalitions are found to spontaneously form. However they are also unstable making the system very disordered. Countries shift coalitions all the time. Only the setting of macro extra national coalition are shown to stabilize alliances among countries. The model gives new light on the recent instabilities produced in Eastern Europe by the Warsow pact dissolution at odd to the previous communist stability. Current European stability is also discussed with respect to the European Union construction. --- paper_title: Fragmentation versus stability in bimodal coalitions paper_content: Competing bimodal coalitions among a group of actors are discussed. First, a model from political sciences is revisited. Most of the model statements are found not to be contained in the model. Second, a new coalition model is built. It accounts for local versus global alignment with respect to the joining of a coalition. The existence of two competing world coaltions is found to yield one unique stable distribution of actors. On the opposite a unique world leadership allows the emergence of unstable relationships. In parallel to regular actors which have a clear coalition choice, ``neutral"``frustrated"and ``risky"actors are produced. The cold war organisation after world war II is shown to be rather stable. The emergence of a fragmentation process from eastern group disappearance is explained as well as continuing western group stability. Some hints are obtained about possible policies to stabilize world nation relationships. European construction is analyzed with respect to european stability. Chinese stability is also discussed. --- paper_title: Comment on “A landscape theory of aggregation” paper_content: The problem of aggregation processes in alignments is the subject of a paper published recently in a statistical physics journal. Two models are presented and discussed in that paper. First, the energy landscape model proposed by Robert Axelrod and D. Scott Bennett (this Journal, 23 (1993), 211–33) is analysed. The model is shown not to include most of its claimed results. Then a second model is presented to reformulate the problem correctly within statistical physics and to extend it beyond the initial Axelrod-Bennett analogy. ::: Serge Galam, ‘Fragmentation Versus Stability in Bimodal Coalitions’, Physica A , 230 (1966), 174–88. --- paper_title: Optimizing Conflicts in the Formation of Strategic Alliances paper_content: Abstract:Coalition setting among a set of actors (countries, firms, individuals) is studied using concepts from the theory of spin glasses. 
Given the distribution of respective bilateral propensities to either cooperation or conflict, the phenomenon of local aggregation is modeled. In particular the number of coalitions is determined according to a minimum conflict principle. It is found not to be always two. Along these lines, previous studies are revisited and are found not to be consistent with their own principles. The model is then used to describe the fragmentation of former Yugoslavia. Results are compared to the actual situation. --- paper_title: Spontaneous Coalition Forming. Why Some Are Stable? paper_content: A model to describe the spontaneous formation of military and economic coalitions among a group of countries is proposed using spin glass theory. Between each couple of countries, there exists a bond exchange coupling which is either zero, cooperative or conflicting. It depends on their common history, specific nature, and cannot be varied. Then, given a frozen random bond distribution, coalitions are found to spontaneously form. However they are also unstable making the system very disordered. Countries shift coalitions all the time. Only the setting of macro extra national coalition are shown to stabilize alliances among countries. The model gives new light on the recent instabilities produced in Eastern Europe by the Warsow pact dissolution at odd to the previous communist stability. Current European stability is also discussed with respect to the European Union construction. --- paper_title: Fragmentation versus stability in bimodal coalitions paper_content: Competing bimodal coalitions among a group of actors are discussed. First, a model from political sciences is revisited. Most of the model statements are found not to be contained in the model. Second, a new coalition model is built. It accounts for local versus global alignment with respect to the joining of a coalition. The existence of two competing world coaltions is found to yield one unique stable distribution of actors. On the opposite a unique world leadership allows the emergence of unstable relationships. In parallel to regular actors which have a clear coalition choice, ``neutral"``frustrated"and ``risky"actors are produced. The cold war organisation after world war II is shown to be rather stable. The emergence of a fragmentation process from eastern group disappearance is explained as well as continuing western group stability. Some hints are obtained about possible policies to stabilize world nation relationships. European construction is analyzed with respect to european stability. Chinese stability is also discussed. --- paper_title: Comment on “A landscape theory of aggregation” paper_content: The problem of aggregation processes in alignments is the subject of a paper published recently in a statistical physics journal. Two models are presented and discussed in that paper. First, the energy landscape model proposed by Robert Axelrod and D. Scott Bennett (this Journal, 23 (1993), 211–33) is analysed. The model is shown not to include most of its claimed results. Then a second model is presented to reformulate the problem correctly within statistical physics and to extend it beyond the initial Axelrod-Bennett analogy. ::: Serge Galam, ‘Fragmentation Versus Stability in Bimodal Coalitions’, Physica A , 230 (1966), 174–88. --- paper_title: Spontaneous Coalition Forming. Why Some Are Stable? 
paper_content: A model to describe the spontaneous formation of military and economic coalitions among a group of countries is proposed using spin glass theory. Between each couple of countries, there exists a bond exchange coupling which is either zero, cooperative or conflicting. It depends on their common history, specific nature, and cannot be varied. Then, given a frozen random bond distribution, coalitions are found to spontaneously form. However they are also unstable making the system very disordered. Countries shift coalitions all the time. Only the setting of macro extra national coalition are shown to stabilize alliances among countries. The model gives new light on the recent instabilities produced in Eastern Europe by the Warsow pact dissolution at odd to the previous communist stability. Current European stability is also discussed with respect to the European Union construction. --- paper_title: Fragmentation versus stability in bimodal coalitions paper_content: Competing bimodal coalitions among a group of actors are discussed. First, a model from political sciences is revisited. Most of the model statements are found not to be contained in the model. Second, a new coalition model is built. It accounts for local versus global alignment with respect to the joining of a coalition. The existence of two competing world coaltions is found to yield one unique stable distribution of actors. On the opposite a unique world leadership allows the emergence of unstable relationships. In parallel to regular actors which have a clear coalition choice, ``neutral"``frustrated"and ``risky"actors are produced. The cold war organisation after world war II is shown to be rather stable. The emergence of a fragmentation process from eastern group disappearance is explained as well as continuing western group stability. Some hints are obtained about possible policies to stabilize world nation relationships. European construction is analyzed with respect to european stability. Chinese stability is also discussed. --- paper_title: Comment on “A landscape theory of aggregation” paper_content: The problem of aggregation processes in alignments is the subject of a paper published recently in a statistical physics journal. Two models are presented and discussed in that paper. First, the energy landscape model proposed by Robert Axelrod and D. Scott Bennett (this Journal, 23 (1993), 211–33) is analysed. The model is shown not to include most of its claimed results. Then a second model is presented to reformulate the problem correctly within statistical physics and to extend it beyond the initial Axelrod-Bennett analogy. ::: Serge Galam, ‘Fragmentation Versus Stability in Bimodal Coalitions’, Physica A , 230 (1966), 174–88. --- paper_title: Majority rule, hierarchical structures, and democratic totalitarianism: a statistical approach paper_content: Abstract An alternative model to the formation of pyramidal structures is presented in the framework of social systems. The population is assumed to be distributed between two social orientations G and H with respective probabilities p 0 and (1 − p 0 ). Instead of starting the hierarchy with a given H -orientation at the top and then going downwards in a deterministic way, the hierarchy is initiated randomly at the bottom from the surrounding population. Every level is then selected from the one underneath using the principle of majority rule. The hierarchy is thus self-oriented at the top. 
It is shown how such self-oriented hierarchies are always H-oriented provided they have a minimal number of levels, which is determined by the value of p0. Their stability is studied against increases in p0. An ideal transition to G-self-oriented hierarchies is obtained. --- paper_title: Optimizing Conflicts in the Formation of Strategic Alliances paper_content: Coalition setting among a set of actors (countries, firms, individuals) is studied using concepts from the theory of spin glasses. Given the distribution of respective bilateral propensities to either cooperation or conflict, the phenomenon of local aggregation is modeled. In particular, the number of coalitions is determined according to a minimum conflict principle. It is found not to be always two. Along these lines, previous studies are revisited and are found not to be consistent with their own principles. The model is then used to describe the fragmentation of former Yugoslavia. Results are compared to the actual situation. --- paper_title: Spontaneous Coalition Forming. Why Some Are Stable? paper_content: A model to describe the spontaneous formation of military and economic coalitions among a group of countries is proposed using spin glass theory. Between each pair of countries, there exists a bond exchange coupling which is either zero, cooperative or conflicting. It depends on their common history, specific nature, and cannot be varied. Then, given a frozen random bond distribution, coalitions are found to spontaneously form. However, they are also unstable, making the system very disordered. Countries shift coalitions all the time. Only the setting of macro extra-national coalitions is shown to stabilize alliances among countries. The model sheds new light on the recent instabilities produced in Eastern Europe by the Warsaw Pact dissolution, at odds with the previous communist stability. Current European stability is also discussed with respect to the European Union construction. --- paper_title: Fragmentation versus stability in bimodal coalitions paper_content: Competing bimodal coalitions among a group of actors are discussed. First, a model from political sciences is revisited. Most of the model statements are found not to be contained in the model. Second, a new coalition model is built. It accounts for local versus global alignment with respect to the joining of a coalition. The existence of two competing world coalitions is found to yield one unique stable distribution of actors. In contrast, a unique world leadership allows the emergence of unstable relationships. In parallel to regular actors, which have a clear coalition choice, “neutral”, “frustrated” and “risky” actors are produced. The Cold War organisation after World War II is shown to be rather stable. The emergence of a fragmentation process from the eastern group disappearance is explained, as well as continuing western group stability. Some hints are obtained about possible policies to stabilize world nation relationships. European construction is analyzed with respect to European stability. Chinese stability is also discussed. --- paper_title: Comment on “A landscape theory of aggregation” paper_content: The problem of aggregation processes in alignments is the subject of a paper published recently in a statistical physics journal. Two models are presented and discussed in that paper. First, the energy landscape model proposed by Robert Axelrod and D. Scott Bennett (this Journal, 23 (1993), 211–33) is analysed.
The model is shown not to include most of its claimed results. Then a second model is presented to reformulate the problem correctly within statistical physics and to extend it beyond the initial Axelrod-Bennett analogy. Serge Galam, ‘Fragmentation Versus Stability in Bimodal Coalitions’, Physica A, 230 (1996), 174–88. --- paper_title: Optimizing Conflicts in the Formation of Strategic Alliances paper_content: Coalition setting among a set of actors (countries, firms, individuals) is studied using concepts from the theory of spin glasses. Given the distribution of respective bilateral propensities to either cooperation or conflict, the phenomenon of local aggregation is modeled. In particular, the number of coalitions is determined according to a minimum conflict principle. It is found not to be always two. Along these lines, previous studies are revisited and are found not to be consistent with their own principles. The model is then used to describe the fragmentation of former Yugoslavia. Results are compared to the actual situation. --- paper_title: On reducing terrorism power: a hint from physics paper_content: The September 11 attack on the US has revealed an unprecedented terrorism with a worldwide range of destruction. Recently, it has been related to the percolation of worldwide spread passive supporters. This scheme puts the suppression of the percolation effect as the major strategic issue in the fight against terrorism. Accordingly, the world density of passive supporters should be reduced below the percolation threshold. In terms of solid policy, it means neutralizing millions of random passive supporters, which is contrary to ethics and outside any sound practical scheme. Given this impossibility, we suggest instead a new strategic scheme to act directly on the value of the terrorism percolation threshold itself without harming the passive supporters. Accordingly, we identify the space hosting the percolation phenomenon to be a multi-dimensional virtual social space which extends the ground earth surface to include the various independent terrorist-fighting goals. The associated percolating cluster is then found to create long-range ground connections to terrorism activity. We are thus able to modify the percolation threshold pc in the virtual space to reach p < pc by decreasing the social space dimension, leaving the density p unchanged. At once, that would break down the associated world terrorism network into a family of unconnected finite-size clusters. The current world terrorism threat would thus shrink immediately and spontaneously to a local geographic problem. There, military action would become limited and efficient. --- paper_title: Global Physics: From Percolation to Terrorism, Guerilla Warfare and Clandestine Activities paper_content: The September 11 attack on the US has revealed an unprecedented terrorism with a worldwide range of destruction. It is argued to result from the first worldwide percolation of passive supporters. They are people sympathetic to the terrorism cause but without being involved with it. They just do not oppose it in case they could. This scheme puts suppression of the percolation as the major strategic issue in the fight against terrorism. Acting on the population is shown to be useless. Instead, a new strategic scheme is suggested to increase the terrorism percolation threshold and in turn suppress the percolation.
The relevant associated space is identified as a multi-dimensional social space including both the ground earth surface and all the various independent flags displayed by the terrorist group. Some hints are given on how to shrink the geographical spreading of the terrorism threat. The model applies to a large spectrum of clandestine activities including guerilla warfare as well as tax evasion, corruption, illegal gambling, illegal prostitution and black markets. --- paper_title: Local dynamics vs. social mechanisms: A unifying frame paper_content: We present a general sequential probabilistic frame, which extends a series of earlier opinion dynamics models. In addition, it orders and classifies all of the existing two-state spin systems. The scheme operates via local updates where a majority rule is applied differently in each possible configuration of a local group. It is weighted by a local probability which is a function of the local value of the order parameter, i.e., the majority-to-minority ratio. The system is thus driven from one equilibrium state into another equilibrium state until no collective change occurs. A phase diagram can thus be constructed. It has two phases, one where the collective opinion ends up broken along one opinion, and another with an even coexistence of both opinions. Two different regimes, monotonic and dampened oscillatory, are found for the coexistence phase. At the phase transition, local probabilities conserve the density of opinions and reproduce the collective dynamics of the Voter model. The essential behavior of all existing discrete two-state models (Galam, Sznajd, Ochrombel, Stauffer, Krapivsky-Redner, Mobilia-Redner, Behera-Schweitzer, Slanina-Lavicka, Sanchez ...) is recovered and found to depart from each other only in the value of their local probabilities. Corresponding simulations are discussed. It is concluded that one should not judge from the above model results the validity of their respective psycho-social assumptions. --- paper_title: From 2000 Bush-Gore to 2006 Italian elections: Voting at fifty-fifty and the Contrarian Effect paper_content: A sociophysical model for opinion dynamics is shown to embody a series of recent western hung national votes, all set at the unexpected and very improbable edge of a fifty-fifty score. It started with the Bush–Gore 2000 American presidential election, followed by the 2002 Stoiber–Schroder, then the 2005 Schroder–Merkel German elections, and finally the 2006 Prodi-Berlusconi Italian elections. In each case, the country was facing drastic choices, the running competing parties were advocating very different programs and millions of voters were involved. Moreover, polls were giving a substantial margin to the predicted winner. While all these events were perceived as accidental and isolated, our model suggests that they are in fact deterministic and obey one single universal phenomenon associated with the effect of contrarian behavior on the dynamics of opinion forming. The non-hung Bush–Kerry 2004 presidential election is shown to belong to the same universal frame. To conclude, the existence of contrarians hints at the repetition of hung elections in the near future. --- paper_title: Modeling rumors: The no plane Pentagon French hoax case paper_content: The recent astonishing wide adhesion of French people to the rumor claiming ‘No plane did crash on the Pentagon on September 11’ is given a generic explanation in terms of a model of minority opinion spreading.
Using a majority rule reaction–diffusion dynamics, a rumor is shown to invade a social group with certainty provided it fulfills two criteria simultaneously. First, it must start with a support beyond some critical threshold which, however, turns out to be always very low. Then it has to be consistent with some larger collective social paradigm of the group. Otherwise it just dies out. Both conditions were satisfied in the French case, with the associated book selling more than 200 000 copies in just a few days. The rumor was stopped by the firm stand of most newspaper editors stating it is nonsense. Such an incredible social dynamics is shown to result naturally from an open and free public debate among friends and colleagues, each one sincerely searching for the truth on a free-will basis and without individual biases. The polarization process also appears to be very quick, in agreement with reality. It is a very strong anti-democratic reversal of opinion, although made quite democratically. The model may apply to a large range of rumors. --- paper_title: Opinion dynamics in a three-choice system paper_content: We generalize Galam’s model of opinion spreading by introducing three competing choices. At each update, the population is randomly divided into groups of three agents, whose members adopt the opinion of the local majority. In the case of a tie, the local group adopts opinion A, B or C with probabilities α, β and (1-α-β) respectively. We derive the associated phase diagrams and dynamics by both analytical means and simulations. Polarization is always reached within very short time scales. We point out situations in which an initially very small minority opinion can invade the whole system. --- paper_title: An evolution theory in finite size systems paper_content: A new model of evolution is presented for finite size systems. Conditions under which a minority species can emerge, spread and stabilize to a macroscopic size are studied. It is found that space organization is instrumental, in addition to a qualitative advantage. Some peculiar topologies ensure the overcoming of the initial majority species. However, the probability of such local clusters is very small and depends strongly on the system size. A probabilistic phase diagram is obtained for small sizes. It reduces to a trivial situation in the thermodynamic limit, thus indicating the importance of dealing with finite systems in evolution problems. Results are discussed with respect to both Darwin and punctuated equilibria theories. --- paper_title: Killer Geometries in Competing Species Dynamics paper_content: We discuss a cellular automata model to study the competition of an emergent better-fitted species against an existing majority species. The model implements local fights among small groups of individuals and a synchronous random walk on a 2D lattice.
Once set, they lead with high probability to the total destruction of the preexisting majority species. The occurrence rate of these killer geometries is a function of the system size. This model may apply to a large spectrum of competing groups such as smoker versus non-smoker, opinion forming, diffusion of innovation, setting of industrial standards, species evolution, epidemic spreading and cancer growth. --- paper_title: The role of inflexible minorities in the breaking of democratic opinion dynamics paper_content: We study the effect of inflexible agents on two-state opinion dynamics. The model operates via repeated local updates of random groupings of agents. While floater agents do eventually flip their opinion to follow the local majority, inflexible agents keep their opinion always unchanged. It is a quenched individual opinion. In the bare model (no inflexibles), a separator at 50% drives the dynamics towards either one of two pure attractors, each associated with a full polarization along one of the opinions. The initial majority wins. The existence of inflexibles for only one of the two opinions is found to shift the separator to a value lower than 50% in favor of that side. Moreover, it creates an incompressible minority around the inflexibles, one of the pure attractors becoming a mixed-phase attractor. In addition, above a threshold of 17%, inflexibles make their side sure of winning whatever the initial conditions are. The inflexible minority wins. An equal presence of inflexibles on both sides restores the balanced dynamics, with again a separator at 50% and now two mixed-phase attractors, one on each side. Nevertheless, beyond 25% the dynamics is reversed, with a unique attractor at a 50–50 stable equilibrium. But a very small advantage in inflexibles results in a decisive lowering of the separator to the advantage of the corresponding opinion. A few percent advantage does guarantee becoming the majority, with one single attractor. The model is solved exhaustively for groups of size 3. --- paper_title: FASHION, NOVELTY AND OPTIMALITY: An application from Physics paper_content: We apply a physics-based model to describe the clothes fashion market. Every time a new outlet appears on the market, it can invade the market under certain specific conditions. Hence, the “old” outlet can be completely dominated and disappear. Each creator competes for a finite population of agents. Fashion phenomena are shown to result from a collective phenomenon produced by local individual imitation effects. We assume that, in each step of the imitation process, agents only interact with a subset rather than with the whole set of agents. People are actually more likely to influence (and be influenced by) their close “neighbors”. Accordingly, we discuss which strategy is best fitted for new producers when people are either simply organized into anonymous reference groups or organized into hierarchically ordered social groups. While counterfeits are shown to reinforce the first strategy, creating social leaders can help to avoid them. --- paper_title: Threshold Phenomena versus Killer Clusters in Bimodal Competition for Standards paper_content: Given an individually used standard on a territory, we study the conditions for the total spreading of a new emergent better-fitted competing standard. The associated dynamics is monitored by local competing updates which occur at random among a few individuals.
The analysis is done using a cellular automata model on a two-dimensional lattice with a synchronous random walk. Starting from an initial density of the new standard, the associated density evolution is studied using groups of four individuals each. For each local update, the outcome follows the local majority within the group. However, in case of a tie, the better-fitted standard wins. Updates may happen at each diffusive step according to some fixed probability. For every value of that probability, a critical threshold in the initial density of the new emergent standard is found to determine either its total disappearance or its total spreading, making the process a threshold phenomenon. Nevertheless, it turns out that even at a zero-measure density of the new emergent standard, there exist some peculiar killer clusters of it which have a non-zero probability to grow and invade the whole system. At the same time, the occurrence of such killer clusters is a very rare event and is a function of the system size. Application of the model to a large spectrum of competing dynamics is discussed. It includes the smoker versus non-smoker fight, opinion forming, diffusion of innovation, species evolution, epidemic spreading and cancer growth. --- paper_title: Cancerous tumor: the high frequency of a rare event. paper_content: A simple model for cancer growth is presented using cellular automata. Cells diffuse randomly on a two-dimensional square lattice. Individual cells can turn cancerous at a very low rate. During each diffusive step, local fights may occur between healthy and cancerous cells. Associated outcomes depend on some biased local rules, which are independent of the overall cancerous cell density. The model's unique ingredients are the frequency of local fights and the bias amplitude. While each isolated cancerous cell is eventually destroyed, an initial two-cell tumor cluster is found to have a nonzero probability to spread over the whole system. The associated phase diagram for survival or death is obtained as a function of both the rate of fights and the bias distribution. Within the model, although the occurrence of a killing cluster is a very rare event, it turns out to happen almost systematically over long periods of time, e.g., on the order of an adult's life span. Thus, after some age, survival from tumorous cancer becomes random. --- paper_title: An evolution theory in finite size systems paper_content: A new model of evolution is presented for finite size systems. Conditions under which a minority species can emerge, spread and stabilize to a macroscopic size are studied. It is found that space organization is instrumental, in addition to a qualitative advantage. Some peculiar topologies ensure the overcoming of the initial majority species. However, the probability of such local clusters is very small and depends strongly on the system size. A probabilistic phase diagram is obtained for small sizes. It reduces to a trivial situation in the thermodynamic limit, thus indicating the importance of dealing with finite systems in evolution problems. Results are discussed with respect to both Darwin and punctuated equilibria theories. --- paper_title: Killer Geometries in Competing Species Dynamics paper_content: We discuss a cellular automata model to study the competition of an emergent better-fitted species against an existing majority species. The model implements local fights among small groups of individuals and a synchronous random walk on a 2D lattice.
The fate of the system, i.e., the spreading or disappearance of the species, is determined by their initial density and the fight frequency. The initial density of the emergent species has to be higher than a critical threshold for total spreading, but this value depends in a non-trivial way on the fight frequency. Below the threshold, any better-adapted species disappears, showing that a qualitative advantage is not enough for a minority to win. No strategy is involved, but spatial organization turns out to be crucial. For instance, at minority densities of zero measure, some very rare local geometries which occur by chance are found to be killer geometries. Once set, they lead with high probability to the total destruction of the preexisting majority species. The occurrence rate of these killer geometries is a function of the system size. This model may apply to a large spectrum of competing groups such as smoker versus non-smoker, opinion forming, diffusion of innovation, setting of industrial standards, species evolution, epidemic spreading and cancer growth. --- paper_title: From 2000 Bush-Gore to 2006 Italian elections: Voting at fifty-fifty and the Contrarian Effect paper_content: A sociophysical model for opinion dynamics is shown to embody a series of recent western hung national votes, all set at the unexpected and very improbable edge of a fifty-fifty score. It started with the Bush–Gore 2000 American presidential election, followed by the 2002 Stoiber–Schroder, then the 2005 Schroder–Merkel German elections, and finally the 2006 Prodi-Berlusconi Italian elections. In each case, the country was facing drastic choices, the running competing parties were advocating very different programs and millions of voters were involved. Moreover, polls were giving a substantial margin to the predicted winner. While all these events were perceived as accidental and isolated, our model suggests that they are in fact deterministic and obey one single universal phenomenon associated with the effect of contrarian behavior on the dynamics of opinion forming. The non-hung Bush–Kerry 2004 presidential election is shown to belong to the same universal frame. To conclude, the existence of contrarians hints at the repetition of hung elections in the near future. --- paper_title: Opinion dynamics in a three-choice system paper_content: We generalize Galam’s model of opinion spreading by introducing three competing choices. At each update, the population is randomly divided into groups of three agents, whose members adopt the opinion of the local majority. In the case of a tie, the local group adopts opinion A, B or C with probabilities α, β and (1-α-β) respectively. We derive the associated phase diagrams and dynamics by both analytical means and simulations. Polarization is always reached within very short time scales. We point out situations in which an initially very small minority opinion can invade the whole system. --- paper_title: COEXISTENCE OF OPPOSITE GLOBAL SOCIAL FEELINGS: THE CASE OF PERCOLATION DRIVEN INSECURITY paper_content: A model of the dynamics of appearance of a new collective feeling, in addition and opposite to an existing one, is presented. Using percolation theory, the collective feeling of insecurity is shown to be able to coexist with the opposite collective feeling of safety. Indeed, this coexistence of contradictory social feelings results from the simultaneous percolation of two infinite clusters of people who are respectively experiencing a safe and an unsafe local environment.
Therefore, opposing claims in national debates over insecurity are shown to be possibly both valid. --- paper_title: The role of inflexible minorities in the breaking of democratic opinion dynamics paper_content: We study the effect of inflexible agents on two-state opinion dynamics. The model operates via repeated local updates of random groupings of agents. While floater agents do eventually flip their opinion to follow the local majority, inflexible agents keep their opinion always unchanged. It is a quenched individual opinion. In the bare model (no inflexibles), a separator at 50% drives the dynamics towards either one of two pure attractors, each associated with a full polarization along one of the opinions. The initial majority wins. The existence of inflexibles for only one of the two opinions is found to shift the separator to a value lower than 50% in favor of that side. Moreover, it creates an incompressible minority around the inflexibles, one of the pure attractors becoming a mixed-phase attractor. In addition, above a threshold of 17%, inflexibles make their side sure of winning whatever the initial conditions are. The inflexible minority wins. An equal presence of inflexibles on both sides restores the balanced dynamics, with again a separator at 50% and now two mixed-phase attractors, one on each side. Nevertheless, beyond 25% the dynamics is reversed, with a unique attractor at a 50–50 stable equilibrium. But a very small advantage in inflexibles results in a decisive lowering of the separator to the advantage of the corresponding opinion. A few percent advantage does guarantee becoming the majority, with one single attractor. The model is solved exhaustively for groups of size 3. --- paper_title: Local dynamics vs. social mechanisms: A unifying frame paper_content: We present a general sequential probabilistic frame, which extends a series of earlier opinion dynamics models. In addition, it orders and classifies all of the existing two-state spin systems. The scheme operates via local updates where a majority rule is applied differently in each possible configuration of a local group. It is weighted by a local probability which is a function of the local value of the order parameter, i.e., the majority-to-minority ratio. The system is thus driven from one equilibrium state into another equilibrium state until no collective change occurs. A phase diagram can thus be constructed. It has two phases, one where the collective opinion ends up broken along one opinion, and another with an even coexistence of both opinions. Two different regimes, monotonic and dampened oscillatory, are found for the coexistence phase. At the phase transition, local probabilities conserve the density of opinions and reproduce the collective dynamics of the Voter model. The essential behavior of all existing discrete two-state models (Galam, Sznajd, Ochrombel, Stauffer, Krapivsky-Redner, Mobilia-Redner, Behera-Schweitzer, Slanina-Lavicka, Sanchez ...) is recovered and found to depart from each other only in the value of their local probabilities. Corresponding simulations are discussed. It is concluded that one should not judge from the above model results the validity of their respective psycho-social assumptions. --- paper_title: Modeling rumors: The no plane Pentagon French hoax case paper_content: The recent astonishing wide adhesion of French people to the rumor claiming ‘No plane did crash on the Pentagon on September 11’ is given a generic explanation in terms of a model of minority opinion spreading.
Using a majority rule reaction–diffusion dynamics, a rumor is shown to invade a social group with certainty provided it fulfills two criteria simultaneously. First, it must start with a support beyond some critical threshold which, however, turns out to be always very low. Then it has to be consistent with some larger collective social paradigm of the group. Otherwise it just dies out. Both conditions were satisfied in the French case, with the associated book selling more than 200 000 copies in just a few days. The rumor was stopped by the firm stand of most newspaper editors stating it is nonsense. Such an incredible social dynamics is shown to result naturally from an open and free public debate among friends and colleagues, each one sincerely searching for the truth on a free-will basis and without individual biases. The polarization process also appears to be very quick, in agreement with reality. It is a very strong anti-democratic reversal of opinion, although made quite democratically. The model may apply to a large range of rumors. --- paper_title: FASHION, NOVELTY AND OPTIMALITY: An application from Physics paper_content: We apply a physics-based model to describe the clothes fashion market. Every time a new outlet appears on the market, it can invade the market under certain specific conditions. Hence, the “old” outlet can be completely dominated and disappear. Each creator competes for a finite population of agents. Fashion phenomena are shown to result from a collective phenomenon produced by local individual imitation effects. We assume that, in each step of the imitation process, agents only interact with a subset rather than with the whole set of agents. People are actually more likely to influence (and be influenced by) their close “neighbors”. Accordingly, we discuss which strategy is best fitted for new producers when people are either simply organized into anonymous reference groups or organized into hierarchically ordered social groups. While counterfeits are shown to reinforce the first strategy, creating social leaders can help to avoid them. --- paper_title: Sociophysics: a personal testimony paper_content: The origins of Sociophysics are discussed from a personal testimony. I trace back its history to the late 1970s. My 20 years of activities and research to establish and promote the field are reviewed. In particular, the conflicting nature of Sociophysics with the physics community is revealed from my own experience. Recent presentations of a supposed natural growth from Social Sciences are criticized. --- paper_title: Local dynamics vs. social mechanisms: A unifying frame paper_content: We present a general sequential probabilistic frame, which extends a series of earlier opinion dynamics models. In addition, it orders and classifies all of the existing two-state spin systems. The scheme operates via local updates where a majority rule is applied differently in each possible configuration of a local group. It is weighted by a local probability which is a function of the local value of the order parameter, i.e., the majority-to-minority ratio. The system is thus driven from one equilibrium state into another equilibrium state until no collective change occurs. A phase diagram can thus be constructed. It has two phases, one where the collective opinion ends up broken along one opinion, and another with an even coexistence of both opinions. Two different regimes, monotonic and dampened oscillatory, are found for the coexistence phase.
At the phase transition local probabilities conserve the density of opinions and reproduce the collective dynamics of the Voter model. The essential behavior of all existing discrete two-state models (Galam, Sznajd, Ochrombel, Stauffer, Krapivsky-Redner, Mobilia-Redner, Behera-Schweitzer, Slanina-Lavicka, Sanchez ...) is recovered and found to depart from each other only in the value of their local probabilities. Corresponding simulations are discussed. It is concluded that one should not judge from the above model results the validity of their respective psycho-social assumptions. ---
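The local update that recurs throughout the abstracts above (agents reshuffled at random into small groups, each group adopting its local majority, ties broken by a bias toward one side) is easy to simulate. The sketch below is only illustrative and is not taken from any of the cited papers; the group size of 4 and the tie_bias parameter are assumptions chosen to expose the threshold behaviour these abstracts describe.

```python
import random

def galam_step(opinions, group_size=4, tie_bias=1.0):
    """One update cycle: reshuffle agents into groups and apply local majority rule.

    opinions: list of 0/1 values (1 = opinion A, 0 = opinion B).
    tie_bias: probability that a tied group adopts opinion A; it stands in for the
              bias toward the "better fitted" or status-quo choice discussed above.
    """
    random.shuffle(opinions)
    updated = []
    n_grouped = len(opinions) - len(opinions) % group_size
    for i in range(0, n_grouped, group_size):
        group = opinions[i:i + group_size]
        ones = sum(group)
        if 2 * ones > group_size:        # clear majority for A
            updated.extend([1] * group_size)
        elif 2 * ones < group_size:      # clear majority for B
            updated.extend([0] * group_size)
        else:                            # tie: break it with the bias
            winner = 1 if random.random() < tie_bias else 0
            updated.extend([winner] * group_size)
    updated.extend(opinions[n_grouped:])  # leftover agents keep their opinion
    return updated

def run(p0=0.30, n_agents=10000, steps=15):
    """Print the density of opinion A after each update cycle."""
    opinions = [1 if random.random() < p0 else 0 for _ in range(n_agents)]
    for step in range(steps):
        opinions = galam_step(opinions)
        print(step, sum(opinions) / n_agents)

run()
```

Iterating the update from initial densities on either side of the critical value reproduces the full polarization reported in the abstracts: the density of opinion A flows monotonically to 0 or to 1 depending on which side of the threshold it starts from.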
Title: Sociophysics: A review of Galam models
Section 1: INTRODUCTION
Description 1: Introduce the field of sociophysics, tracing its emergence, development, and recognition within the physics community. Provide an overview of the key topics addressed in sociophysics, with an emphasis on opinion dynamics. Explain the scope and focus of this review, specifically on Galam models over the past twenty-five years.
Section 2: BOTTOM-UP VOTING IN HIERARCHICAL SYSTEMS
Description 2: Discuss the main question of measuring the democratic balance in hierarchical organizations using bottom-up voting and local majority rule models. Highlight different scenarios and case studies involving local majority rules, power inertia, and larger voting groups, and their implications.
Section 3: GROUP DECISION MAKING
Description 3: Explore the application of the Ising ferromagnetic model to group decision-making processes in social systems, including the dynamics of strikes and group polarization. Include detailed models and their implications for understanding group behaviors such as consensus and extremism.
Section 4: COALITIONS AND FRAGMENTATION IN A GROUP OF COUNTRIES
Description 4: Analyze the use of spin glass models to describe coalition formation and fragmentation among countries. Delve into concepts like random bond and random site spin glasses, and extend the models to include multiparty coalitions with Potts variables. Discuss similarities with physical systems and provide real-world examples.
Section 5: GLOBAL VERSUS LOCAL TERRORISM
Description 5: Illustrate the application of percolation theory to understand the phenomena of global versus local terrorism, emphasizing the role of passive supporters. Discuss the transition between local and global terrorism and strategic implications for reducing global terrorism effectively.
Section 6: OPINIONS DYNAMICS
Description 6: Present models for understanding opinion dynamics, incorporating factors like local majority rule, reshuffling effects, multiple competing opinions, and heterogeneous beliefs. Discuss the impact of contrarian and inflexible agents on opinion evolution and societal outcomes.
Section 7: CONCLUSION
Description 7: Summarize the key findings from the various models reviewed. Discuss the current challenges and future directions of sociophysics, particularly the potential for sociophysics to become a predictive science with established rules.
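Section 2 of this outline (and the ‘Majority rule, hierarchical structures, and democratic totalitarianism’ reference earlier in the list) rests on a one-line recursion: every hierarchical level is elected by local majority rule from the level underneath. A minimal sketch of that recursion, assuming voting groups of size 3 with no ties; the function names are illustrative and not taken from the paper.

```python
def next_level(p):
    """Probability that a 3-member group drawn from a level where a fraction p
    holds orientation G elects a G representative (simple majority, no ties)."""
    return p**3 + 3 * p**2 * (1 - p)

def bottom_up(p0, levels):
    """Fraction of G representatives expected at each successive level."""
    trajectory = [p0]
    for _ in range(levels):
        trajectory.append(next_level(trajectory[-1]))
    return trajectory

# Example: 45% support at the bottom shrinks level after level,
# illustrating how a sizeable minority vanishes from the top of the hierarchy.
print([round(p, 3) for p in bottom_up(0.45, 6)])
```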
Differentially Private Data Publishing and Analysis: A Survey
20
--- paper_title: Differential Privacy paper_content: In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy --- paper_title: Privacy Preserving Data Mining Models And Algorithms paper_content: --- paper_title: Privacy-preserving data publishing: A survey of recent developments paper_content: The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions. --- paper_title: Signal Processing and Machine Learning with Differential Privacy: Algorithms and Challenges for Continuous Data paper_content: Private companies, government entities, and institutions such as hospitals routinely gather vast amounts of digitized personal information about the individuals who are their customers, clients, or patients. Much of this information is private or sensitive, and a key technological challenge for the future is how to design systems and processing techniques for drawing inferences from this large-scale data while maintaining the privacy and security of the data and individual identities.
Individuals are often willing to share data, especially for purposes such as public health, but they expect that their identity or the fact of their participation will not be disclosed. In recent years, there have been a number of privacy models and privacy-preserving data analysis algorithms to answer these challenges. In this article, we will describe the progress made on differentially private machine learning and signal processing. --- paper_title: The Algorithmic Foundations of Differential Privacy paper_content: The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition. After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed. We then turn from fundamentals to applications other than query release, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams, is discussed. Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it. --- paper_title: Differential Privacy paper_content: In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius' goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one's privacy incurred by participating in a database.
The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy --- paper_title: Differential privacy: A survey of results paper_content: Over the past five years a new approach to privacy-preserving data analysis has borne fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning. --- paper_title: Differential privacy in new settings paper_content: Differential privacy is a recent notion of privacy tailored to the problem of statistical disclosure control: how to release statistical information about a set of people without compromising the privacy of any individual [7]. We describe new work [10, 9] that extends differentially private data analysis beyond the traditional setting of a trusted curator operating, in perfect isolation, on a static dataset. We ask: • How can we guarantee differential privacy, even against an adversary that has access to the algorithm's internal state, e.g., by subpoena? An algorithm that achieves this is said to be pan-private. • How can we guarantee differential privacy when the algorithm must continually produce outputs? We call this differential privacy under continual observation. We also consider these requirements in conjunction. --- paper_title: A firm foundation for private data analysis paper_content: In the information realm, loss of privacy is usually associated with failure to control access to information, to control the flow of information, or to control the purposes for which information is employed. Differential privacy arose in a context in which ensuring privacy is a challenge even if all these control problems are solved: privacy-preserving statistical analysis of data. The problem of statistical disclosure control – revealing accurate statistics about a set of respondents while preserving the privacy of individuals – has a venerable history, with an extensive literature spanning statistics, theoretical computer science, security, databases, and cryptography (see, for example, the excellent survey [1], the discussion of related work in [2] and the Journal of Official Statistics 9 (2), dedicated to confidentiality and disclosure control). This long history is a testament to the importance of the problem. Statistical databases can be of enormous social value; they are used for apportioning resources, evaluating medical therapies, understanding the spread of disease, improving economic utility, and informing us about ourselves as a species. The data may be obtained in diverse ways.
Some data, such as census, tax, and other sorts of official data, are compelled; others are collected opportunistically, for example, from traffic on the internet, transactions on Amazon, and search engine query logs; other data are provided altruistically, by respondents who hope that sharing their information will help others to avoid a specific misfortune, or more generally, to increase the public good. Altruistic data donors are typically promised their individual data will be kept confidential – in short, they are promised “privacy.” Similarly, medical data and legally compelled data, such as census data, tax return data, have legal privacy mandates. In our view, ethics demand that opportunistically obtained data should be treated no differently, especially when there is no reasonable alternative to engaging in the actions that generate the data in question. The problems remain: even if data encryption, key management, access control, and the motives of the data curator --- paper_title: Our Data, Ourselves: Privacy via Distributed Noise Generation paper_content: In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14, 4, 13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ_i f(d_i), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution. --- paper_title: Mechanism Design via Differential Privacy paper_content: We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero. --- paper_title: Calibrating Noise to Sensitivity in Private Data Analysis paper_content: We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the i-th row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive.
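The ‘Calibrating Noise to Sensitivity in Private Data Analysis’ entry above describes the recipe behind most of the mechanisms surveyed here: perturb the true answer with noise whose scale is the query's sensitivity divided by the privacy budget. A minimal sketch of that idea using Laplace noise; the dataset and the counting query are made-up examples, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Release true_answer perturbed with Laplace(0, sensitivity / epsilon) noise.

    sensitivity: the most any single record can change the query answer
                 (1 for a counting query).
    """
    scale = sensitivity / epsilon
    return true_answer + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: a counting query ("how many values exceed 50?")
# has sensitivity 1, since adding or removing one record changes the count by at most 1.
data = np.array([23, 57, 61, 48, 90, 12, 77])
true_count = int((data > 50).sum())
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(true_count, noisy_count)
```

A smaller epsilon (stronger privacy) widens the noise scale, which is exactly the privacy/accuracy trade-off the later entries in this list try to improve for large families of queries.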
--- paper_title: A learning theory approach to non-interactive database privacy paper_content: In this paper we demonstrate that, ignoring computational constraints, it is possible to privately release synthetic databases that are useful for large classes of queries -- much larger in size than the database itself. Specifically, we give a mechanism that privately releases synthetic data for a class of queries over a discrete domain with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries. We show that this in particular implies a mechanism for counting queries that gives error guarantees that grow only with the VC-dimension of the class of queries, which itself grows only logarithmically with the size of the query class. We also show that it is not possible to privately release even simple classes of queries (such as intervals and their generalizations) over continuous domains. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, given a slight relaxation of the utility guarantee. This algorithm does not release synthetic data, but instead another data structure capable of representing an answer for each query. We also give an efficient algorithm for releasing synthetic data for the class of interval queries and axis-aligned rectangles of constant dimension. Finally, inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy. --- paper_title: Revealing information while preserving privacy paper_content: We examine the tradeoff between privacy and usability of statistical databases. We model a statistical database by an n-bit string d_1, ..., d_n, with a query being a subset q ⊆ [n] to be answered by Σ_{i∈q} d_i. Our main result is a polynomial reconstruction algorithm of data from noisy (perturbed) subset sums. Applying this reconstruction algorithm to statistical databases, we show that in order to achieve privacy one has to add perturbation of magnitude Ω(√n). That is, smaller perturbation always results in a strong violation of privacy. We show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude O(√n). For time-T bounded adversaries we demonstrate a privacy-preserving access algorithm whose perturbation magnitude is ≈ √T. --- paper_title: Iterative Constructions and Private Data Release paper_content: In this paper we study the problem of approximately releasing the cut function of a graph while preserving differential privacy, and give new algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings. Our algorithms in the interactive setting are achieved by revisiting the problem of releasing differentially private, approximate answers to a large number of queries on a database. We show that several algorithms for this problem fall into the same basic framework, and are based on the existence of objects which we call iterative database construction algorithms. We give a new generic framework in which new (efficient) IDC algorithms give rise to new (efficient) interactive private query release mechanisms. Our modular analysis simplifies and tightens the analysis of previous algorithms, leading to improved bounds.
We then give a new IDC algorithm (and therefore a new private, interactive query release mechanism) based on the Frieze/Kannan low-rank matrix decomposition. This new release mechanism gives an improvement on prior work in a range of parameters where the size of the database is comparable to the size of the data universe (such as releasing all cut queries on dense graphs). We also give a non-interactive algorithm for efficiently releasing private synthetic data for graph cuts with error O(|V|^{1.5}). Our algorithm is based on randomized response and a non-private implementation of the SDP-based, constant-factor approximation algorithm for cut-norm due to Alon and Naor. Finally, we give a reduction based on the IDC framework showing that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts. We leave finding such an algorithm as our main open problem. --- paper_title: Exploiting Metric Structure for Efficient Private Query Release paper_content: We consider the problem of privately answering queries defined on databases which are collections of points belonging to some metric space. We give simple, computationally efficient algorithms for answering distance queries defined over an arbitrary metric. Distance queries are specified by points in the metric space, and ask for the average distance from the query point to the points contained in the database, according to the specified metric. Our algorithms run efficiently in the database size and the dimension of the space, and operate in both the online query release setting, and the offline setting in which they must in polynomial time generate a fixed data structure which can answer all queries of interest. This represents one of the first subclasses of linear queries for which efficient algorithms are known for the private query release problem, circumventing known hardness results for generic linear queries. --- paper_title: Interactive privacy via the median mechanism paper_content: We define a new interactive differentially private mechanism, the median mechanism, for answering arbitrary predicate queries that arrive online. Given fixed accuracy and privacy constraints, this mechanism can answer exponentially more queries than the previously best known interactive privacy mechanism (the Laplace mechanism, which independently perturbs each query result). With respect to the number of queries, our guarantee is close to the best possible, even for non-interactive privacy mechanisms. Conceptually, the median mechanism is the first privacy mechanism capable of identifying and exploiting correlations among queries in an interactive setting. We also give an efficient implementation of the median mechanism, with running time polynomial in the number of queries, the database size, and the domain size. This efficient implementation guarantees privacy for all input databases, and accurate query results for almost all input distributions. The dependence of the privacy on the number of queries in this mechanism improves over that of the best previously known efficient mechanism by a super-polynomial factor, even in the non-interactive setting. --- paper_title: A Multiplicative Weights Mechanism for Privacy-Preserving Data Analysis paper_content: We consider statistical data analysis in the interactive setting.
In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. This is the first mechanism with worst-case accuracy guarantees that can answer large numbers of interactive queries and is {\em efficient} (in terms of the runtime's dependence on the data universe size). The error is asymptotically \emph{optimal} in its dependence on the number of participants, and depends only logarithmically on the number of queries being answered. The running time is nearly {\em linear} in the size of the data universe. As a further contribution, when we relax the utility requirement and require accuracy only for databases drawn from a rich class of databases, we obtain exponential improvements in running time. Even in this relaxed setting we continue to guarantee privacy for {\em any} input database. Only the utility requirement is relaxed. Specifically, we show that when the input database is drawn from a {\em smooth} distribution — a distribution that does not place too much weight on any single data item — accuracy remains as above, and the running time becomes {\em poly-logarithmic} in the data universe size. The main technical contributions are the application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for the interactive setting, and a technique for reducing data dimensionality for databases drawn from smooth distributions. --- paper_title: Differential privacy: A survey of results paper_content: Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. ::: ::: In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning. --- paper_title: Differentially private histogram publication paper_content: Differential privacy (DP) is a promising scheme for releasing the results of statistical queries on sensitive data, with strong privacy guarantees against adversaries with arbitrary background knowledge. Existing studies on differential privacy mostly focus on simple aggregations such as counts. This paper investigates the publication of DP-compliant histograms, which is an important analytical tool for showing the distribution of a random variable, e.g., hospital bill size for certain patients. Compared to simple aggregations whose results are purely numerical, a histogram query is inherently more complex, since it must also determine its structure, i.e., the ranges of the bins. 
As we demonstrate in the paper, a DP-compliant histogram with finer bins may actually lead to significantly lower accuracy than a coarser one, since the former requires stronger perturbations in order to satisfy DP. Moreover, the histogram structure itself may reveal sensitive information, which further complicates the problem. Motivated by this, we propose two novel mechanisms, namely NoiseFirst and StructureFirst, for computing DP-compliant histograms. Their main difference lies in the relative order of the noise injection and the histogram structure computation steps. NoiseFirst has the additional benefit that it can improve the accuracy of an already published DP-compliant histogram computed using a naive method. For each of proposed mechanisms, we design algorithms for computing the optimal histogram structure with two different objectives: minimizing the mean square error and the mean absolute error, respectively. Going one step further, we extend both mechanisms to answer arbitrary range queries. Extensive experiments, using several real datasets, confirm that our two proposals output highly accurate query answers and consistently outperform existing competitors. --- paper_title: Understanding Hierarchical Methods for Differentially Private Histograms paper_content: In recent years, many approaches to differentially privately publish histograms have been proposed. Several approaches rely on constructing tree structures in order to decrease the error when answer large range queries. In this paper, we examine the factors affecting the accuracy of hierarchical approaches by studying the mean squared error (MSE) when answering range queries. We start with one-dimensional histograms, and analyze how the MSE changes with different branching factors, after employing constrained inference, and with different methods to allocate the privacy budget among hierarchy levels. Our analysis and experimental results show that combining the choice of a good branching factor with constrained inference outperform the current state of the art. Finally, we extend our analysis to multi-dimensional histograms. We show that the benefits from employing hierarchical methods beyond a single dimension are significantly diminished, and when there are 3 or more dimensions, it is almost always better to use the Flat method instead of a hierarchy. --- paper_title: Information preservation in statistical privacy and bayesian estimation of unattributed histograms paper_content: In statistical privacy, utility refers to two concepts: information preservation -- how much statistical information is retained by a sanitizing algorithm, and usability -- how (and with how much difficulty) does one extract this information to build statistical models, answer queries, etc. Some scenarios incentivize a separation between information preservation and usability, so that the data owner first chooses a sanitizing algorithm to maximize a measure of information preservation and, afterward, the data consumers process the sanitized output according to their needs [22, 46]. We analyze a variety of utility measures and show that the average (over possible outputs of the sanitizer) error of Bayesian decision makers forms the unique class of utility measures that satisfy three axioms related to information preservation. The axioms are agnostic to Bayesian concepts such as subjective probabilities and hence strengthen support for Bayesian views in privacy research. 
In particular, this result connects information preservation to aspects of usability -- if the information preservation of a sanitizing algorithm should be measured as the average error of a Bayesian decision maker, shouldn't Bayesian decision theory be a good choice when it comes to using the sanitized outputs for various purposes? We put this idea to the test in the unattributed histogram problem where our decision- theoretic post-processing algorithm empirically outperforms previously proposed approaches. --- paper_title: Maximum Likelihood Postprocessing for Differential Privacy under Consistency Constraints paper_content: When analyzing data that has been perturbed for privacy reasons, one is often concerned about its usefulness. Recent research on differential privacy has shown that the accuracy of many data queries can be improved by post-processing the perturbed data to ensure consistency constraints that are known to hold for the original data. Most prior work converted this post-processing step into a least squares minimization problem with customized efficient solutions. While improving accuracy, this approach ignored the noise distribution in the perturbed data. In this paper, to further improve accuracy, we formulate this post-processing step as a constrained maximum likelihood estimation problem, which is equivalent to constrained L1 minimization. Instead of relying on slow linear program solvers, we present a faster generic recipe (based on ADMM) that is suitable for a wide variety of applications including differentially private contingency tables, histograms, and the matrix mechanism (linear queries). An added benefit of our formulation is that it can often take direct advantage of algorithmic tricks used by the prior work on least-squares post-processing. An extensive set of experiments on various datasets demonstrates that this approach significantly improve accuracy over prior work. --- paper_title: Boosting the accuracy of differentially-private histograms through consistency paper_content: Recent differentially private query mechanisms offer strong privacy guarantees by adding noise to the query answer. For a single counting query, the technique is simple, accurate, and provides optimal utility. However, analysts typically wish to ask multiple queries. In this case, the optimal strategy is not apparent, and alternative query strategies can involve difficult trade-offs in accuracy, and may produce inconsistent answers. In this work we show that it is possible to significantly improve accuracy for a general class of histogram queries. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is both private and consistent, but in addition, it is often much more accurate. We apply our techniques to real datasets and show they can be used for estimating the degree sequence of a graph with extreme precision, and for computing a histogram that can support arbitrary range queries accurately. --- paper_title: Private and continual release of statistics paper_content: We ask the question - how can websites and data aggregators continually release updated statistics, and meanwhile preserve each individual user's privacy? 
Given a stream of 0's and 1's, we propose a differentially private continual counter that outputs at every time step the approximate number of 1's seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow websites to continually give top-k and hot items suggestions while preserving users' privacy. --- paper_title: PrivTree: A Differentially Private Algorithm for Hierarchical Decompositions paper_content: Given a set D of tuples defined on a domain Omega, we study differentially private algorithms for constructing a histogram over Omega to approximate the tuple distribution in D. Existing solutions for the problem mostly adopt a hierarchical decomposition approach, which recursively splits Omega into sub-domains and computes a noisy tuple count for each sub-domain, until all noisy counts are below a certain threshold. This approach, however, requires that we (i) impose a limit h on the recursion depth in the splitting of Omega and (ii) set the noise in each count to be proportional to h. The choice of h is a serious dilemma: a small h makes the resulting histogram too coarse-grained, while a large h leads to excessive noise in the tuple counts used in deciding whether sub-domains should be split. Furthermore, h cannot be directly tuned based on D; otherwise, the choice of h itself reveals private information and violates differential privacy. To remedy the deficiency of existing solutions, we present PrivTree, a histogram construction algorithm that adopts hierarchical decomposition but completely eliminates the dependency on a pre-defined h. The core of PrivTree is a novel mechanism that (i) exploits a new analysis on the Laplace distribution and (ii) enables us to use only a constant amount of noise in deciding whether a sub-domain should be split, without worrying about the recursion depth of splitting. We demonstrate the application of PrivTree in modelling spatial data, and show that it can be extended to handle sequence data (where the decision in sub-domain splitting is not based on tuple counts but a more sophisticated measure). Our experiments on a variety of real datasets show that PrivTree considerably outperforms the states of the art in terms of data utility. --- paper_title: Differential privacy under continual observation paper_content: Differential privacy is a recent notion of privacy tailored to privacy-preserving data analysis [11]. Up to this point, research on differentially private data analysis has focused on the setting of a trusted curator holding a large, static, data set; thus every computation is a "one-shot" object: there is no point in computing something twice, since the result will be unchanged, up to any randomness introduced for privacy. However, many applications of data analysis involve repeated computations, either because the entire goal is one of monitoring, e.g., of traffic conditions, search trends, or incidence of influenza, or because the goal is some kind of adaptive optimization, e.g., placement of data to minimize access costs. In these cases, the algorithm must permit continual observation of the system's state. We therefore initiate a study of differential privacy under continual observation. We identify the problem of maintaining a counter in a privacy preserving manner and show its wide applicability to many different problems. 
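To make the continual-release counters described in the two preceding abstracts concrete, the following is a minimal sketch of the standard binary-tree counter, assuming a known horizon of at most `horizon` steps, a 0/1 input stream, and a uniform split of the budget across tree levels. The class and parameter names are illustrative and not taken from the papers.

```python
import numpy as np

class BinaryTreeCounter:
    """Continually release an approximate running count of a 0/1 stream.

    Each stream element contributes to at most `levels` dyadic partial sums,
    so giving every partial sum Laplace noise of scale levels/epsilon keeps
    the entire sequence of releases epsilon-differentially private.
    Assumes at most `horizon` calls to step().
    """

    def __init__(self, horizon, epsilon, rng=None):
        self.levels = max(1, int(np.ceil(np.log2(horizon))) + 1)
        self.noise_scale = self.levels / epsilon
        self.exact = [0.0] * self.levels   # exact dyadic partial sums
        self.noisy = [0.0] * self.levels   # their noisy versions
        self.t = 0
        self.rng = rng if rng is not None else np.random.default_rng()

    def step(self, x_t):
        """Consume one bit x_t in {0, 1} and return a noisy running count."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1    # level of the block closing now
        self.exact[i] = x_t + sum(self.exact[:i])  # merge finished lower blocks
        for j in range(i):
            self.exact[j] = 0.0
            self.noisy[j] = 0.0
        self.noisy[i] = self.exact[i] + self.rng.laplace(scale=self.noise_scale)
        # the running count at time t is the sum of the blocks for t's set bits
        return sum(self.noisy[j] for j in range(self.levels) if (self.t >> j) & 1)
```

With this allocation, each released prefix count is the sum of at most ⌈log2 horizon⌉ + 1 noisy partial sums, which is what yields the poly-logarithmic error mentioned in the abstracts.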
--- paper_title: Pan-private streaming algorithms paper_content: Collectors of confidential data, such as governmental agencies, hospitals, or search engine providers, can be pressured to permit data to be used for purposes other than that for which they were collected. To support the data curators, we initiate a study of pan-private algorithms; roughly speaking, these algorithms retain their privacy properties even if their internal state becomes visible to an adversary. Our principal focus is on streaming algorithms, where each datum may be discarded immediately after processing. --- paper_title: Differentially Private Event Sequences over Infinite Streams paper_content: Numerous applications require continuous publication of statistics or monitoring purposes, such as real-time traffic analysis, timely disease outbreak discovery, and social trends observation. These statistics may be derived from sensitive user data and, hence, necessitate privacy preservation. A notable paradigm for offering strong privacy guarantees in statistics publishing is e-differential privacy. However, there is limited literature that adapts this concept to settings where the statistics are computed over an infinite stream of "events" (i.e., data items generated by the users), and published periodically. These works aim at hiding a single event over the entire stream. We argue that, in most practical scenarios, sensitive information is revealed from multiple events occurring at contiguous time instances. Towards this end, we put forth the novel notion of w-event privacy over infinite streams, which protects any event sequence occurring in w successive time instants. We first formulate our privacy concept, motivate its importance, and introduce a methodology for achieving it. We next design two instantiations, whose utility is independent of the stream length. Finally, we confirm the practicality of our solutions experimenting with real data. --- paper_title: Private Release of Graph Statistics using Ladder Functions paper_content: Protecting the privacy of individuals in graph structured data while making accurate versions of the data available is one of the most challenging problems in data privacy. Most efforts to date to perform this data release end up mired in complexity, overwhelm the signal with noise, and are not effective for use in practice. In this paper, we introduce a new method which guarantees differential privacy. It specifies a probability distribution over possible outputs that is carefully defined to maximize the utility for the given input, while still providing the required privacy level. The distribution is designed to form a 'ladder', so that each output achieves the highest 'rung' (maximum probability) compared to less preferable outputs. We show how our ladder framework can be applied to problems of counting the number of occurrences of subgraphs, a vital objective in graph analysis, and give algorithms whose cost is comparable to that of computing the count exactly. Our experimental study confirms that our method outperforms existing methods for counting triangles and stars in terms of accuracy, and provides solutions for some problems for which no effective method was previously known. The results of our algorithms can be used to estimate the parameters of suitable graph models, allowing synthetic graphs to be sampled. 
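One of the abstracts above introduces w-event privacy over infinite streams, which bounds the total budget spent in any window of w consecutive timestamps. A deliberately naive way to satisfy that constraint is to spend ε/w uniformly at every step; the paper's own mechanisms allocate the budget adaptively, so the sketch below is only a baseline, and all names are illustrative.

```python
import numpy as np

def uniform_w_event_counts(stream, w, epsilon, rng=None):
    """Release one noisy count per timestamp under w-event epsilon-DP.

    Assuming each timestamp's count changes by at most 1 when a single event
    is added or removed, adding Laplace noise of scale w/epsilon at every step
    spends epsilon/w per release, so any w consecutive releases compose to at
    most epsilon. This is the uniform-allocation baseline, not the adaptive
    mechanisms proposed in the paper.
    """
    rng = rng if rng is not None else np.random.default_rng()
    per_step_eps = epsilon / w
    return [c + rng.laplace(scale=1.0 / per_step_eps) for c in stream]
```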
--- paper_title: Analyzing Graphs with Node Differential Privacy paper_content: We develop algorithms for the private analysis of network data that provide accurate analysis of realistic networks while satisfying stronger privacy guarantees than those of previous work. We present several techniques for designing node differentially private algorithms, that is, algorithms whose output distribution does not change significantly when a node and all its adjacent edges are added to a graph. We also develop methodology for analyzing the accuracy of such algorithms on realistic networks. ::: ::: The main idea behind our techniques is to 'project' (in one of several senses) the input graph onto the set of graphs with maximum degree below a certain threshold. We design projection operators, tailored to specific statistics that have low sensitivity and preserve information about the original statistic. These operators can be viewed as giving a fractional (low-degree) graph that is a solution to an optimization problem described as a maximum flow instance, linear program, or convex program. In addition, we derive a generic, efficient reduction that allows us to apply any differentially private algorithm for bounded-degree graphs to an arbitrary graph. This reduction is based on analyzing the smooth sensitivity of the 'naive' truncation that simply discards nodes of high degree. --- paper_title: Private Analysis of Graph Structure paper_content: We present efficient algorithms for releasing useful statistics about graph data while providing rigorous privacy guarantees. Our algorithms work on datasets that consist of relationships between individuals, such as social ties or email communication. The algorithms satisfy edge differential privacy, which essentially requires that the presence or absence of any particular relationship be hidden. Our algorithms output approximate answers to subgraph counting queries. Given a query graph H, for example, a triangle, k-star, or k-triangle, the goal is to return the number of edge-induced isomorphic copies of H in the input graph. The special case of triangles was considered by Nissim et al. [2007] and a more general investigation of arbitrary query graphs was initiated by Rastogi et al. [2009]. We extend the approach of Nissim et al. to a new class of statistics, namely k-star queries. We also give algorithms for k-triangle queries using a different approach based on the higher-order local sensitivity. For the specific graph statistics we consider (i.e., k-stars and k-triangles), we significantly improve on the work of Rastogi et al.: our algorithms satisfy a stronger notion of privacy that does not rely on the adversary having a particular prior distribution on the data, and add less noise to the answers before releasing them. We evaluate the accuracy of our algorithms both theoretically and empirically, using a variety of real and synthetic datasets. We give explicit, simple conditions under which these algorithms add a small amount of noise. We also provide the average-case analysis in the Erdős-Renyi-Gilbert G(n,p) random graph model. Finally, we give hardness results indicating that the approach Nissim et al. used for triangles cannot easily be extended to k-triangles (hence justifying our development of a new algorithmic approach). 
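As a point of reference for the subgraph-counting abstracts above, the sketch below answers a triangle-count query under edge-level differential privacy using only the worst-case (global) sensitivity, which for triangles is n - 2; the cited papers obtain far less noise on realistic graphs by calibrating to local or smooth sensitivity instead. Function and variable names are illustrative.

```python
import numpy as np
from itertools import combinations

def edge_dp_triangle_count(adj, epsilon, rng=None):
    """Edge-DP triangle count via the Laplace mechanism with global sensitivity.

    Adding or removing one edge changes the number of triangles by at most
    n - 2, so Laplace noise of scale (n - 2)/epsilon gives epsilon-DP. This
    worst-case calibration is exactly what the smooth/local-sensitivity
    approaches in the surrounding papers improve upon.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = len(adj)
    # brute-force enumeration, kept simple for clarity rather than speed
    triangles = sum(
        1 for i, j, k in combinations(range(n), 3)
        if adj[i][j] and adj[j][k] and adj[i][k]
    )
    sensitivity = max(n - 2, 1)
    return triangles + rng.laplace(scale=sensitivity / epsilon)
```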
--- paper_title: Smooth sensitivity and sampling in private data analysis paper_content: We introduce a new, generic framework for private data analysis. The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains. Our framework allows one to release functions f of the data with instance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also by the database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smooth sensitivity of f on the database x --- a measure of variability of f in the neighborhood of the instance x. The new framework greatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a small amount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-based noise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely, to apply the framework one must compute or approximate the smooth sensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on many databases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known or when f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians. --- paper_title: Recursive mechanism: towards node differential privacy and unrestricted joins paper_content: Existing differential privacy (DP) studies mainly consider aggregation on data sets where each entry corresponds to a particular participant to be protected. In many situations, a user may pose a relational algebra query on a database with sensitive data, and desire differentially private aggregation on the result of the query. However, no existing work is able to release such aggregation when the query contains unrestricted join operations. This severely limits the applications of existing DP techniques because many data analysis tasks require unrestricted joins. One example is subgraph counting on a graph. Furthermore, existing methods for differentially private subgraph counting support only edge DP and are subject to very simple subgraphs. Until recently, whether any nontrivial graph statistics can be released with reasonable accuracy for arbitrary kinds of input graphs under node DP was still an open problem. In this paper, we propose a novel differentially private mechanism that supports unrestricted joins, to release an approximation of a linear statistic of the result of some positive relational algebra calculation over a sensitive database. The error bound of the approximate answer is roughly proportional to the empirical sensitivity of the query --- a new notion that measures the maximum possible change to the query answer when a participant withdraws its data from the sensitive database. For subgraph counting, our mechanism provides a solution to achieve node DP, for any kind of subgraph.
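The smooth-sensitivity abstract above names the median as a function for which smooth sensitivity can be computed efficiently. The sketch below evaluates the standard formula for values clamped to a known interval; the choice of β and of an "admissible" noise distribution to add on top are taken from that paper and not reproduced here, so treat the function as an illustrative sketch rather than a complete mechanism.

```python
import numpy as np

def median_smooth_sensitivity(values, beta, lo=0.0, hi=1.0):
    """beta-smooth sensitivity of the median of values clamped to [lo, hi].

    Uses S*(x) = max_k exp(-beta*k) * max_{0 <= t <= k+1} (x_{m+t} - x_{m+t-k-1}),
    where out-of-range order statistics are clamped to lo or hi. The k = 0 term
    is the local sensitivity; larger k accounts for databases at Hamming
    distance k from x. Calibrating noise to S* requires an admissible noise
    distribution as described in the paper.
    """
    xs = sorted(float(np.clip(v, lo, hi)) for v in values)
    n = len(xs)
    m = (n - 1) // 2  # 0-based index of the (lower) median

    def order_stat(i):
        if i < 0:
            return lo
        if i >= n:
            return hi
        return xs[i]

    s_star = 0.0
    for k in range(n + 1):
        a_k = max(order_stat(m + t) - order_stat(m + t - k - 1)
                  for t in range(k + 2))
        s_star = max(s_star, float(np.exp(-beta * k)) * a_k)
    return s_star
```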
--- paper_title: Differentially private data analysis of social networks via restricted sensitivity paper_content: We introduce the notion of restricted sensitivity as an alternative to global and smooth sensitivity to improve accuracy in differentially private data analysis. The definition of restricted sensitivity is similar to that of global sensitivity except that instead of quantifying over all possible datasets, we take advantage of any beliefs about the dataset that a querier may have, to quantify over a restricted class of datasets. Specifically, given a query f and a hypothesis H about the structure of a dataset D, we show generically how to transform f into a new query f_H whose global sensitivity (over all datasets including those that do not satisfy H) matches the restricted sensitivity of the query f. Moreover, if the belief of the querier is correct (i.e., D is in H) then f_H(D) = f(D). If the belief is incorrect, then f_H(D) may be inaccurate. We demonstrate the usefulness of this notion by considering the task of answering queries regarding social-networks, which we model as a combination of a graph and a labeling of its vertices. In particular, while our generic procedure is computationally inefficient, for the specific definition of H as graphs of bounded degree, we exhibit efficient ways of constructing f_H using different projection-based techniques. We then analyze two important query classes: subgraph counting queries (e.g., number of triangles) and local profile queries (e.g., number of people who know a spy and a computer-scientist who know each other). We demonstrate that the restricted sensitivity of such queries can be significantly lower than their smooth sensitivity. Thus, using restricted sensitivity we can maintain privacy whether or not D is in H, while providing more accurate results in the event that H holds true. --- paper_title: Practical Differential Privacy via Grouping and Smoothing paper_content: We address one-time publishing of non-overlapping counts with e-differential privacy. These statistics are useful in a wide and important range of applications, including transactional, traffic and medical data analysis. Prior work on the topic publishes such statistics with prohibitively low utility in several practical scenarios. Towards this end, we present GS, a method that pre-processes the counts by elaborately grouping and smoothing them via averaging. This step acts as a form of preliminary perturbation that diminishes sensitivity, and enables GS to achieve e-differential privacy through low Laplace noise injection. The grouping strategy is dictated by a sampling mechanism, which minimizes the smoothing perturbation. We demonstrate the superiority of GS over its competitors, and confirm its practicality, via extensive experiments on real datasets. --- paper_title: A Data- and Workload-Aware Query Answering Algorithm for Range Queries Under Differential Privacy paper_content: We describe a new algorithm for answering a given set of range queries under e-differential privacy which often achieves substantially lower error than competing methods. Our algorithm satisfies differential privacy by adding noise that is adapted to the input data and to the given query set. We first privately learn a partitioning of the domain into buckets that suit the input data well. Then we privately estimate counts for each bucket, doing so in a manner well-suited for the given query set. 
Since the performance of the algorithm depends on the input database, we evaluate it on a wide range of real datasets, showing that we can achieve the benefits of data-dependence on both "easy" and "hard" databases. --- paper_title: iReduct: differential privacy with reduced relative errors paper_content: Prior work in differential privacy has produced techniques for answering aggregate queries over sensitive data in a privacy-preserving way. These techniques achieve privacy by adding noise to the query answers. Their objective is typically to minimize absolute errors while satisfying differential privacy. Thus, query answers are injected with noise whose scale is independent of whether the answers are large or small. The noisy results for queries whose true answers are small therefore tend to be dominated by noise, which leads to inferior data utility. This paper introduces iReduct, a differentially private algorithm for computing answers with reduced relative error. The basic idea of iReduct is to inject different amounts of noise to different query results, so that smaller (larger) values are more likely to be injected with less (more) noise. The algorithm is based on a novel resampling technique that employs correlated noise to improve data utility. Performance is evaluated on an instantiation of iReduct that generates marginals, i.e., projections of multi-dimensional histograms onto subsets of their attributes. Experiments on real data demonstrate the effectiveness of our solution. --- paper_title: The matrix mechanism: optimizing linear counting queries under differential privacy paper_content: Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. We describe the matrix mechanism, an algorithm for answering a workload of linear counting queries that adapts the noise distribution to properties of the provided queries. Given a workload, the mechanism uses a different set of queries, called a query strategy, which are answered using a standard Laplace or Gaussian mechanism. Noisy answers to the workload queries are then derived from the noisy answers to the strategy queries. This two-stage process can result in a more complex, correlated noise distribution that preserves differential privacy but increases accuracy. We provide a formal analysis of the error of query answers produced by the mechanism and investigate the problem of computing the optimal query strategy in support of a given workload. We show that this problem can be formulated as a rank-constrained semidefinite program. We analyze two seemingly distinct techniques proposed in the literature, whose similar behavior is explained by viewing them as instances of the matrix mechanism. We also describe an extension of the mechanism in which nonnegativity constraints are included in the derivation process and provide experimental evidence of its efficacy. --- paper_title: Convex Optimization for Linear Query Processing under Approximate Differential Privacy paper_content: Differential privacy enables organizations to collect accurate aggregates over sensitive data with strong, rigorous guarantees on individuals' privacy. Previous work has found that under differential privacy, computing multiple correlated aggregates as a batch, using an appropriate strategy, may yield higher accuracy than computing each of them independently. 
However, finding the best strategy that maximizes result accuracy is non-trivial, as it involves solving a complex constrained optimization program that appears to be non-convex. Hence, in the past much effort has been devoted in solving this non-convex optimization program. Existing approaches include various sophisticated heuristics and expensive numerical solutions. None of them, however, guarantees to find the optimal solution of this optimization problem. This paper points out that under (ε, δ)-differential privacy, the optimal solution of the above constrained optimization problem in search of a suitable strategy can be found, rather surprisingly, by solving a simple and elegant convex optimization program. Then, we propose an efficient algorithm based on Newton's method, which we prove to always converge to the optimal solution with linear global convergence rate and quadratic local convergence rate. Empirical evaluations demonstrate the accuracy and efficiency of the proposed solution. --- paper_title: Optimizing Batch Linear Queries under Exact and Approximate Differential Privacy paper_content: Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results, while satisfying the privacy guarantees. Previous work, notably [LHR+10], has suggested that with an appropriate strategy, processing a batch of correlated queries as a whole achieves considerably higher accuracy than answering them individually. However, to our knowledge there is currently no practical solution to find such a strategy for an arbitrary query batch; existing methods either return strategies of poor quality (often worse than naive methods) or require prohibitively expensive computations for even moderately large domains. Motivated by this, we propose low-rank mechanism (LRM), the first practical differentially private technique for answering batch linear queries with high accuracy. LRM works for both exact (i.e., ε-) and approximate (i.e., (ε, δ)-) differential privacy definitions. We derive the utility guarantees of LRM, and provide guidance on how to set the privacy parameters given the user's utility expectation. Extensive experiments using real data demonstrate that our proposed method consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins. --- paper_title: Orthogonal mechanism for answering batch queries with differential privacy paper_content: Differential privacy has recently become very promising in achieving data privacy guarantee. Typically, one can achieve ε-differential privacy by adding noise based on Laplace distribution to a query result. To reduce the noise magnitude for higher accuracy, various techniques have been proposed. They generally require high computational complexity, making them inapplicable to large-scale datasets. In this paper, we propose a novel orthogonal mechanism (OM) to represent a query set Q with a linear combination of a new query set Q, where Q consists of orthogonal query sets and is derived by exploiting the correlations between queries in Q.
As a result of orthogonality of the derived queries, the proposed technique not only greatly reduces computational complexity, but also achieves better accuracy than the existing mechanisms. Extensive experimental results demonstrate the effectiveness and efficiency of the proposed technique. --- paper_title: Differential Privacy via Wavelet Transforms paper_content: Privacy-preserving data publishing has attracted considerable research interest in recent years. Among the existing solutions, ∈-differential privacy provides the strongest privacy guarantee. Existing data publishing methods that achieve ∈-differential privacy, however, offer little data utility. In particular, if the output data set is used to answer count queries, the noise in the query answers can be proportional to the number of tuples in the data, which renders the results useless. In this paper, we develop a data publishing technique that ensures ∈-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range. The core of our solution is a framework that applies wavelet transforms on the data before adding noise to it. We present instantiations of the proposed framework for both ordinal and nominal data, and we provide a theoretical analysis on their privacy and utility guarantees. In an extensive experimental study on both real and synthetic data, we show the effectiveness and efficiency of our solution. --- paper_title: On the complexity of differentially private data release: efficient algorithms and hardness results paper_content: We consider private data analysis in the setting in which a trusted and trustworthy curator, having obtained a large data set containing private information, releases to the public a "sanitization" of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst. The sanitization may be in the form of an arbitrary data structure, accompanied by a computational procedure for determining approximate answers to queries on the original data set, or it may be a "synthetic data set" consisting of data items drawn from the same universe as items in the original data set; queries are carried out as if the synthetic data set were the actual input. In either case the process is non-interactive; once the sanitization has been released the original data and the curator play no further role. For the task of sanitizing with a synthetic dataset output, we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures. For the (potentially easier) task of sanitizing with unrestricted output format, we show a tight qualitative and quantitative connection between hardness of sanitizing and the existence of traitor tracing schemes. --- paper_title: A learning theory approach to non-interactive database privacy paper_content: In this paper we demonstrate that, ignoring computational constraints, it is possible to privately release synthetic databases that are useful for large classes of queries -- much larger in size than the database itself. Specifically, we give a mechanism that privately releases synthetic data for a class of queries over a discrete domain with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries. 
We show that this in particular implies a mechanism for counting queries that gives error guarantees that grow only with the VC-dimension of the class of queries, which itself grows only logarithmically with the size of the query class. ::: We also show that it is not possible to privately release even simple classes of queries (such as intervals and their generalizations) over continuous domains. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, given a slight relaxation of the utility guarantee. This algorithm does not release synthetic data, but instead another data structure capable of representing an answer for each query. We also give an efficient algorithm for releasing synthetic data for the class of interval queries and axis-aligned rectangles of constant dimension. ::: Finally, inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy. --- paper_title: Differentially Private High-Dimensional Data Publication via Sampling-Based Inference paper_content: Releasing high-dimensional data enables a wide spectrum of data mining tasks. Yet, individual privacy has been a major obstacle to data sharing. In this paper, we consider the problem of releasing high-dimensional data with differential privacy guarantees. We propose a novel solution to preserve the joint distribution of a high-dimensional dataset. We first develop a robust sampling-based framework to systematically explore the dependencies among all attributes and subsequently build a dependency graph. This framework is coupled with a generic threshold mechanism to significantly improve accuracy. We then identify a set of marginal tables from the dependency graph to approximate the joint distribution based on the solid inference foundation of the junction tree algorithm while minimizing the resultant error. We prove that selecting the optimal marginals with the goal of minimizing error is NP-hard and, thus, design an approximation algorithm using an integer programming relaxation and the constrained concave-convex procedure. Extensive experiments on real datasets demonstrate that our solution substantially outperforms the state-of-the-art competitors. --- paper_title: Answering $n^2+o(1)$ Counting Queries with Differential Privacy is Hard paper_content: A central problem in differentially private data analysis is how to design efficient algorithms capable of answering large numbers of counting queries on a sensitive database. Counting queries are of the form “What fraction of individual records in the database satisfy the property $q$?” We prove that if one-way functions exist, then there is no algorithm that takes as input a database $D \in (\{0,1\}^d)^n$, and $k = \tilde{\Theta}(n^2)$ arbitrary efficiently computable counting queries, runs in time $\mathrm{poly}(d, n)$, and returns an approximate answer to each query, while satisfying differential privacy. We also consider the complexity of answering “simple” counting queries, and make some progress in this direction by showing that the above result holds even when we require that the queries are computable by constant-depth $(AC^0)$ circuits. Our result is almost tight because it is known that $\tilde{\Omega}(n^2)$ counting queries can be answered efficiently while satisfying differential privacy. Mor... 
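Several of the surrounding abstracts (the net/learning-theory mechanism, the hardness result above, and the multiplicative-weights approaches in later entries) are motivated by the weakness of the obvious baseline: answering each counting query independently with the Laplace mechanism under basic composition, whose noise grows linearly in the number of queries. The sketch below shows that baseline under the replace-one-record notion of neighboring databases; all names are illustrative.

```python
import numpy as np

def laplace_counting_queries(records, predicates, epsilon, rng=None):
    """Answer k counting queries ("what fraction of records satisfy q?")
    independently with the Laplace mechanism.

    Each fractional count has sensitivity 1/n under the replace-one-record
    neighboring relation; splitting the budget as epsilon/k per query gives
    noise of scale k/(epsilon * n) per answer. That linear growth in the
    number of queries is the gap the mechanisms in the surrounding papers
    are designed to close.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n, k = len(records), len(predicates)
    scale = k / (epsilon * n)
    return [sum(q(r) for r in records) / n + rng.laplace(scale=scale)
            for q in predicates]
```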
--- paper_title: A simple and practical algorithm for differentially private data release paper_content: We present a new algorithm for differentially private data release, based on a simple combination of the Multiplicative Weights update rule with the Exponential Mechanism. Our MWEM algorithm achieves what are the best known and nearly optimal theoretical guarantees, while at the same time being simple to implement and experimentally more accurate on actual data sets than existing techniques. --- paper_title: Boosting and Differential Privacy paper_content: Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved {\em privacy-preserving synopses} of an input database. These are data structures that yield, for a given set $\Q$ of queries over an input database, reasonably accurate estimates of the responses to every query in~$\Q$, even when the number of queries is much larger than the number of rows in the database. Given a {\em base synopsis generator} that takes a distribution on $\Q$ and produces a ``weak'' synopsis that yields ``good'' answers for a majority of the weight in $\Q$, our {\em Boosting for Queries} algorithm obtains a synopsis that is good for all of~$\Q$. We ensure privacy for the rows of the database, but the boosting is performed on the {\em queries}. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries, {\it i.e.}, queries whose answers do not vary much under the addition or deletion of a single row. In the execution of our algorithm certain tasks, each incurring some privacy loss, are performed many times. To analyze the cumulative privacy loss, we obtain an $O(\eps^2)$ bound on the {\em expected} privacy loss from a single $\eps$-\dfp{} mechanism. Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides $\eps$-differential privacy or one of its relaxations, and each of which operates on (potentially) different, adaptively chosen, databases. --- paper_title: PrivBayes: private data release via bayesian networks paper_content: Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless. To address the deficiency of the existing methods, this paper presents P riv B ayes , a differentially private method for releasing high-dimensional data. Given a dataset D, P riv B ayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, P riv B ayes injects noise into each marginal in P to ensure differential privacy and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. 
Finally, P riv B ayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. Intuitively, P riv B ayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate P riv B ayes on real data and demonstrate that it significantly outperforms existing solutions in terms of accuracy. --- paper_title: What Can We Learn Privately? paper_content: Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning. --- paper_title: Differentially private data release for data mining paper_content: Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among the existing privacy models, ∈-differential privacy provides one of the strongest privacy guarantees and has no assumptions about an adversary's background knowledge. 
Most of the existing solutions that ensure ∈-differential privacy are based on an interactive model, where the data miner is only allowed to pose aggregate queries to the database. In this paper, we propose the first anonymization algorithm for the non-interactive setting based on the generalization technique. The proposed solution first probabilistically generalizes the raw data and then adds noise to guarantee ∈-differential privacy. As a sample application, we show that the anonymized data can be used effectively to build a decision tree induction classifier. Experimental results demonstrate that the proposed non-interactive anonymization algorithm is scalable and performs better than the existing solutions for classification analysis. --- paper_title: Differentially Private High-Dimensional Data Publication via Sampling-Based Inference paper_content: Releasing high-dimensional data enables a wide spectrum of data mining tasks. Yet, individual privacy has been a major obstacle to data sharing. In this paper, we consider the problem of releasing high-dimensional data with differential privacy guarantees. We propose a novel solution to preserve the joint distribution of a high-dimensional dataset. We first develop a robust sampling-based framework to systematically explore the dependencies among all attributes and subsequently build a dependency graph. This framework is coupled with a generic threshold mechanism to significantly improve accuracy. We then identify a set of marginal tables from the dependency graph to approximate the joint distribution based on the solid inference foundation of the junction tree algorithm while minimizing the resultant error. We prove that selecting the optimal marginals with the goal of minimizing error is NP-hard and, thus, design an approximation algorithm using an integer programming relaxation and the constrained concave-convex procedure. Extensive experiments on real datasets demonstrate that our solution substantially outperforms the state-of-the-art competitors. --- paper_title: PrivBayes: private data release via bayesian networks paper_content: Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless. To address the deficiency of the existing methods, this paper presents P riv B ayes , a differentially private method for releasing high-dimensional data. Given a dataset D, P riv B ayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, P riv B ayes injects noise into each marginal in P to ensure differential privacy and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D. Finally, P riv B ayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. 
Intuitively, P riv B ayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate P riv B ayes on real data and demonstrate that it significantly outperforms existing solutions in terms of accuracy. --- paper_title: Differentially Private Random Forest with High Utility paper_content: Privacy-preserving data mining has become an active focus of the research community in the domains where data are sensitive and personal in nature. For example, highly sensitive digital repositories of medical or financial records offer enormous values for risk prediction and decision making. However, prediction models derived from such repositories should maintain strict privacy of individuals. We propose a novel random forest algorithm under the framework of differential privacy. Unlike previous works that strictly follow differential privacy and keep the complete data distribution approximately invariant to change in one data instance, we only keep the necessary statistics (e.g. variance of the estimate) invariant. This relaxation results in significantly higher utility. To realize our approach, we propose a novel differentially private decision tree induction algorithm and use them to create an ensemble of decision trees. We also propose feasible adversary models to infer about the attribute and class label of unknown data in presence of the knowledge of all other data. Under these adversary models, we derive bounds on the maximum number of trees that are allowed in the ensemble while maintaining privacy. We focus on binary classification problem and demonstrate our approach on four real-world datasets. Compared to the existing privacy preserving approaches we achieve significantly higher utility. --- paper_title: Data Mining Concepts and Techniques paper_content: Understand the need for analyses of large, complex, information-rich data sets. Identify the goals and primary tasks of the data-mining process. Describe the roots of data-mining technology. Recognize the iterative character of a data-mining process and specify its basic steps. Explain the influence of data quality on a data-mining process. Establish the relation between data warehousing and data mining. Data mining is an iterative process within which progress is defined by discovery, through either automatic or manual methods. Data mining is most useful in an exploratory analysis scenario in which there are no predetermined notions about what will constitute an "interesting" outcome. Data mining is the search for new, valuable, and nontrivial information in large volumes of data. It is a cooperative effort of humans and computers. Best results are achieved by balancing the knowledge of human experts in describing problems and goals with the search capabilities of computers. In practice, the two primary goals of data mining tend to be prediction and description. Prediction involves using some variables or fields in the data set to predict unknown or future values of other variables of interest. Description, on the other hand, focuses on finding patterns describing the data that can be interpreted by humans. 
Therefore, it is possible to put data-mining activities into one of two categories: Predictive data mining, which produces the model of the system described by the given data set, or Descriptive data mining, which produces new, nontrivial information based on the available data set. --- paper_title: A Practical Differentially Private Random Decision Tree Classifier paper_content: In this paper, we study the problem of constructing private classifiers using decision trees, within the framework of differential privacy. We first present experimental evidence that creating a differentially private ID3 tree using differentially private low-level queries does not simultaneously provide good privacy and good accuracy, particularly for small datasets. ::: ::: In search of better privacy and accuracy, we then present a differentially private decision tree ensemble algorithm based on random decision trees. We demonstrate experimentally that this approach yields good prediction while maintaining good privacy, even for small datasets. We also present differentially private extensions of our algorithm to two settings: (1) new data is periodically appended to an existing database and (2) the database is horizontally or vertically partitioned between multiple users. --- paper_title: Data mining with differential privacy paper_content: We consider the problem of data mining with formal privacy guarantees, given a data access interface based on the differential privacy framework. Differential privacy requires that computations be insensitive to changes in any particular individual's record, thereby restricting data leaks through the results. The privacy preserving interface ensures unconditionally safe access to the data and does not require from the data miner any expertise in privacy. However, as we show in the paper, a naive utilization of the interface to construct privacy preserving data mining algorithms could lead to inferior data mining results. We address this problem by considering the privacy and the algorithmic requirements simultaneously, focusing on decision tree induction as a sample application. The privacy mechanism has a profound effect on the performance of the methods chosen by the data miner. We demonstrate that this choice could make the difference between an accurate classifier and a completely useless one. Moreover, an improved algorithm can achieve the same level of accuracy and privacy as the naive implementation but with an order of magnitude fewer learning samples. --- paper_title: Practical privacy: the SuLQ framework paper_content: We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is Σ ieS f(d i ), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large.We extend this work in two ways. 
First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k-means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the statistical query learning model [11]. --- paper_title: Privacy integrated queries: an extensible platform for privacy-preserving data analysis paper_content: Privacy Integrated Queries (PINQ) is an extensible data analysis platform designed to provide unconditional privacy guarantees for the records of the underlying data sets. PINQ provides analysts with access to records through an SQL-like declarative language (LINQ) amidst otherwise arbitrary C# code. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's guarantees require no trust placed in the expertise or diligence of the analysts, broadening the scope for design and deployment of privacy-preserving data analyses, especially by privacy nonexperts. --- paper_title: PrivGene: differentially private model fitting using genetic algorithms paper_content: ε-differential privacy is rapidly emerging as the state-of-the-art scheme for protecting individuals' privacy in published analysis results over sensitive data. The main idea is to perform random perturbations on the analysis results, such that any individual's presence in the data has negligible impact on the randomized results. This paper focuses on analysis tasks that involve model fitting, i.e., finding the parameters of a statistical model that best fit the dataset. For such tasks, the quality of the differentially private results depends upon both the effectiveness of the model fitting algorithm, and the amount of perturbation required to satisfy the privacy guarantees. Most previous studies start from a state-of-the-art, non-private model fitting algorithm, and develop a differentially private version. Unfortunately, many model fitting algorithms require intensive perturbations to satisfy ε-differential privacy, leading to poor overall result quality. Motivated by this, we propose PrivGene, a general-purpose differentially private model fitting solution based on genetic algorithms (GA). PrivGene needs significantly fewer perturbations than previous methods, and it achieves higher overall result quality, even for model fitting tasks where GA is not the first choice without privacy considerations. Further, PrivGene performs the random perturbations using a novel technique called the enhanced exponential mechanism, which improves over the exponential mechanism by exploiting the special properties of model fitting tasks. As case studies, we apply PrivGene to three common analysis tasks involving model fitting: logistic regression, SVM classification, and k-means clustering. Extensive experiments using real data confirm the high result quality of PrivGene, and its superiority over existing methods.
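PrivGene's enhanced exponential mechanism builds on the standard exponential mechanism. For reference, the sketch below is a minimal generic version of the standard mechanism (not PrivGene's enhanced variant, and the candidate set, utility function, and names are illustrative): given a finite candidate set, a utility function with known sensitivity, and a budget ε, it samples a candidate with probability proportional to exp(ε·u/(2Δu)).

```python
import numpy as np

def exponential_mechanism(candidates, utility, sensitivity, epsilon,
                          rng=np.random.default_rng()):
    """Sample one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity)); this is the standard
    epsilon-differentially private exponential mechanism."""
    scores = np.array([utility(c) for c in candidates], dtype=float)
    # Shift by the max before exponentiating for numerical stability;
    # this does not change the sampling distribution.
    weights = np.exp(epsilon * (scores - scores.max()) / (2.0 * sensitivity))
    probs = weights / weights.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy model-fitting example: pick the threshold that classifies the most
# points correctly; moving one record changes any score by at most 1.
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
thresholds = [0.1, 0.3, 0.5, 0.7]
accuracy = lambda t: sum((x >= t) == bool(y) for x, y in data)
print(exponential_mechanism(thresholds, accuracy, sensitivity=1, epsilon=1.0))
```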
--- paper_title: Differentially private subspace clustering paper_content: Subspace clustering is an unsupervised learning problem that aims at grouping data points into multiple "clusters" so that data points in a single cluster lie approximately on a low-dimensional linear subspace. It is originally motivated by 3D motion segmentation in computer vision, but has recently been generically applied to a wide range of statistical machine learning problems, which often involve sensitive datasets about human subjects. This raises a dire concern for data privacy. In this work, we build on the framework of differential privacy and present two provably private subspace clustering algorithms. We demonstrate via both theory and experiments that one of the presented methods enjoys formal privacy and utility guarantees; the other one asymptotically preserves differential privacy while having good performance in practice. Along the course of the proof, we also obtain two new provable guarantees for the agnostic subspace clustering and the graph connectivity problem which might be of independent interest. --- paper_title: Smooth sensitivity and sampling in private data analysis paper_content: We introduce a new, generic framework for private data analysis. The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains. Our framework allows one to release functions f of the data with instance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also by the database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smooth sensitivity of f on the database x, a measure of variability of f in the neighborhood of the instance x. The new framework greatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a small amount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-based noise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely, to apply the framework one must compute or approximate the smooth sensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on many databases x. This procedure is applicable even when no efficient algorithm for approximating the smooth sensitivity of f is known or when f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians. --- paper_title: Practical privacy: the SuLQ framework paper_content: We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is Σ_{i∈S} f(d_i), and a noisy version is released as the response to the query.
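The SuLQ primitive just described, and PINQ's noisy aggregates built on it, come down to adding Laplace noise to a sensitivity-1 count. The sketch below is a minimal illustration in that spirit; it is not the SuLQ or PINQ API, and the function and parameter names are ours.

```python
import numpy as np

def noisy_count(rows, predicate, epsilon, rng=np.random.default_rng()):
    """Answer a counting query under epsilon-differential privacy.

    Each row contributes 0 or 1 to the true count, so the global
    sensitivity is 1 and Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in rows if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: how many records have age >= 40, answered with epsilon = 0.5.
records = [{"age": 34}, {"age": 52}, {"age": 47}, {"age": 29}]
print(noisy_count(records, lambda r: r["age"] >= 40, epsilon=0.5))
```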
Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large. We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k-means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the statistical query learning model [11]. --- paper_title: GUPT: privacy preserving data analysis made easy paper_content: It is often highly valuable for organizations to have their data analyzed by external agents. However, any program that computes on potentially sensitive data risks leaking information through its output. Differential privacy provides a theoretical framework for processing data while protecting the privacy of individual records in a dataset. Unfortunately, it has seen limited adoption because of the loss in output accuracy, the difficulty in making programs differentially private, the lack of mechanisms to describe the privacy budget in a programmer's utilitarian terms, and the challenging requirement that data owners and data analysts manually distribute the limited privacy budget between queries. This paper presents the design and evaluation of a new system, GUPT, that overcomes these challenges. Unlike existing differentially private systems such as PINQ and Airavat, it guarantees differential privacy to programs not developed with privacy in mind, makes no trust assumptions about the analysis program, and is secure against all known classes of side-channel attacks. GUPT uses a new model of data sensitivity that degrades the privacy of data over time. This enables efficient allocation of different levels of privacy for different user applications while guaranteeing an overall constant level of privacy and maximizing the utility of each application. GUPT also introduces techniques that improve the accuracy of output while achieving the same level of privacy. These approaches enable GUPT to easily execute a wide variety of data analysis programs while providing both utility and privacy. --- paper_title: Discovering frequent patterns in sensitive data paper_content: Discovering frequent patterns from data is a popular exploratory technique in data mining. However, if the data are sensitive (e.g., patient health records, user behavior records), releasing information about significant patterns or trends carries significant risk to privacy. This paper shows how one can accurately discover and release the most significant patterns along with their frequencies in a data set containing sensitive information, while providing rigorous guarantees of privacy for the individuals whose information is stored there. We present two efficient algorithms for discovering the k most frequent patterns in a data set of sensitive records.
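A simple baseline for this kind of top-k task (not the specific algorithms of the paper above) splits the budget over k rounds of "report noisy max": each round adds fresh Laplace noise to every remaining pattern's support and releases the noisiest-largest one. Assuming each record contributes at most 1 to each support, each round is (ε/k)-differentially private for the released identity, so the k identities together cost ε; releasing the supports themselves would need additional budget. The noise scale below is a conservative choice.

```python
import numpy as np

def private_top_k(supports, k, epsilon, rng=np.random.default_rng()):
    """Return k pattern identities via repeated report-noisy-max,
    spending epsilon/k per round (supports assumed to have sensitivity 1)."""
    remaining = dict(supports)
    eps_round = epsilon / k
    winners = []
    for _ in range(k):
        noisy = {p: c + rng.laplace(scale=2.0 / eps_round)
                 for p, c in remaining.items()}
        best = max(noisy, key=noisy.get)   # release only the identity
        winners.append(best)
        del remaining[best]
    return winners

supports = {"{a}": 500, "{b}": 480, "{a,b}": 450, "{c}": 120, "{b,c}": 90}
print(private_top_k(supports, k=3, epsilon=1.0))
```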
Our algorithms satisfy differential privacy, a recently introduced definition that provides meaningful privacy guarantees in the presence of arbitrary external information. Differentially private algorithms require a degree of uncertainty in their output to preserve privacy. Our algorithms handle this by returning 'noisy' lists of patterns that are close to the actual list of k most frequent patterns in the data. We define a new notion of utility that quantifies the output accuracy of private top-k pattern mining algorithms. In typical data sets, our utility criterion implies low false positive and false negative rates in the reported lists. We prove that our methods meet the new utility criterion; we also demonstrate the performance of our algorithms through extensive experiments on the transaction data sets from the FIMI repository. While the paper focuses on frequent pattern mining, the techniques developed here are relevant whenever the data mining output is a list of elements ordered according to an appropriately 'robust' measure of interest. --- paper_title: Mining frequent graph patterns with differential privacy paper_content: Discovering frequent graph patterns in a graph database offers valuable information in a variety of applications. However, if the graph dataset contains sensitive data of individuals such as mobile phone-call graphs and web-click graphs, releasing discovered frequent patterns may present a threat to the privacy of individuals. Differential privacy has recently emerged as the de facto standard for private data analysis due to its provable privacy guarantee. In this paper we propose the first differentially private algorithm for mining frequent graph patterns. We first show that previous techniques on differentially private discovery of frequent itemsets cannot apply in mining frequent graph patterns due to the inherent complexity of handling structural information in graphs. We then address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling based algorithm. Unlike previous work on frequent itemset mining, our techniques do not rely on the output of a non-private mining algorithm. Instead, we observe that both frequent graph pattern mining and the guarantee of differential privacy can be unified into an MCMC sampling framework. In addition, we establish the privacy and utility guarantee of our algorithm and propose an efficient neighboring pattern counting technique as well. Experimental results show that the proposed algorithm is able to output frequent patterns with good precision. --- paper_title: Differentially private frequent subgraph mining paper_content: Mining frequent subgraphs from a collection of input graphs is an important topic in data mining research. However, if the input graphs contain sensitive information, releasing frequent subgraphs may pose considerable threats to individual's privacy. In this paper, we study the problem of frequent subgraph mining (FGM) under the rigorous differential privacy model. We introduce a novel differentially private FGM algorithm, which is referred to as DFG. In this algorithm, we first privately identify frequent subgraphs from input graphs, and then compute the noisy support of each identified frequent subgraph. In particular, to privately identify frequent subgraphs, we present a frequent subgraph identification approach which can improve the utility of frequent subgraph identifications through candidates pruning. 
Moreover, to compute the noisy support of each identified frequent subgraph, we devise a lattice-based noisy support derivation approach, where a series of methods has been proposed to improve the accuracy of the noisy supports. Through formal privacy analysis, we prove that our DFG algorithm satisfies ε-differential privacy. Extensive experimental results on real datasets show that the DFG algorithm can privately find frequent subgraphs with high data utility. --- paper_title: Top-k frequent itemsets via differentially private FP-trees paper_content: Frequent itemset mining is a core data mining task and has been studied extensively. Although by their nature, frequent itemsets are aggregates over many individuals and would not seem to pose a privacy threat, an attacker with strong background information can learn private individual information from frequent itemsets. This has led to differentially private frequent itemset mining, which protects privacy by giving inexact answers. We give an approach that first identifies top-k frequent itemsets, then uses them to construct a compact, differentially private FP-tree. Once the noisy FP-tree is built, the (privatized) support of all frequent itemsets can be derived from it without access to the original data. Experimental results show that the proposed algorithm gives substantially better results than prior approaches, especially for high levels of privacy. --- paper_title: On Differentially Private Frequent Itemset Mining paper_content: We consider differentially private frequent itemset mining. We begin by exploring the theoretical difficulty of simultaneously providing good utility and good privacy in this task. While our analysis proves that in general this is very difficult, it leaves a glimmer of hope in that our proof of difficulty relies on the existence of long transactions (that is, transactions containing many items). Accordingly, we investigate an approach that begins by truncating long transactions, trading off errors introduced by the truncation with those introduced by the noise added to guarantee privacy. Experimental results over standard benchmark databases show that truncating is indeed effective. Our algorithm solves the "classical" frequent itemset mining problem, in which the goal is to find all itemsets whose support exceeds a threshold. Related work has proposed differentially private algorithms for the top-k itemset mining problem ("find the k most frequent itemsets"). An experimental comparison with those algorithms shows that our algorithm achieves better F-score unless k is small. --- paper_title: PrivBasis: Frequent Itemset Mining with Differential Privacy paper_content: The discovery of frequent itemsets can serve valuable economic and research purposes. Releasing discovered frequent itemsets, however, presents privacy challenges. In this paper, we study the problem of how to perform frequent itemset mining on transaction databases while satisfying differential privacy. We propose an approach, called PrivBasis, which leverages a novel notion called basis sets. A θ-basis set has the property that any itemset with frequency higher than θ is a subset of some basis. We introduce algorithms for privately constructing a basis set and then using it to find the most frequent itemsets. Experiments show that our approach greatly outperforms the current state of the art. --- paper_title: What Can We Learn Privately?
paper_content: Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning. --- paper_title: Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle paper_content: While machine learning has proven to be a powerful data-driven solution to many real-life problems, its use in sensitive domains has been limited due to privacy concerns. A popular approach known as differential privacy offers provable privacy guarantees, but it is often observed in practice that it could substantially hamper learning accuracy. In this paper we study the learnability (whether a problem can be learned by any algorithm) under Vapnik's general learning setting with differential privacy constraint, and reveal some intricate relationships between privacy, stability and learnability. In particular, we show that a problem is privately learnable if an only if there is a private algorithm that asymptotically minimizes the empirical risk (AERM). In contrast, for non-private learning AERM alone is not sufficient for learnability. This result suggests that when searching for private learning algorithms, we can restrict the search to algorithms that are AERM. 
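The local (randomized response) model characterized in the abstract above has a classic one-bit mechanism that is easy to state in code. This is a generic textbook sketch rather than code from any of the cited papers; the aggregator's estimator debiases the noisy reports to recover the population mean.

```python
import numpy as np

def randomized_response(bit, epsilon, rng=np.random.default_rng()):
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    Either value is reported with probability at least 1/(1 + e^eps), so the
    report is eps-differentially private in the local model."""
    p_truth = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    """Unbiased estimate of the fraction of ones from the noisy reports."""
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
true_bits = rng.integers(0, 2, size=10_000)
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
print(true_bits.mean(), estimate_mean(reports, epsilon=1.0))
```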
In light of this, we propose a conceptual procedure that always finds a universally consistent algorithm whenever the problem is learnable under privacy constraint. We also propose a generic and practical algorithm and show that under very general conditions it privately learns a wide class of learning problems. Lastly, we extend some of the results to the more practical (e, δ)-differential privacy and establish the existence of a phase-transition on the class of problems that are approximately privately learnable with respect to how small δ needs to be. --- paper_title: Discovering frequent patterns in sensitive data paper_content: Discovering frequent patterns from data is a popular exploratory technique in datamining. However, if the data are sensitive (e.g., patient health records, user behavior records) releasing information about significant patterns or trends carries significant risk to privacy. This paper shows how one can accurately discover and release the most significant patterns along with their frequencies in a data set containing sensitive information, while providing rigorous guarantees of privacy for the individuals whose information is stored there. We present two efficient algorithms for discovering the k most frequent patterns in a data set of sensitive records. Our algorithms satisfy differential privacy, a recently introduced definition that provides meaningful privacy guarantees in the presence of arbitrary external information. Differentially private algorithms require a degree of uncertainty in their output to preserve privacy. Our algorithms handle this by returning 'noisy' lists of patterns that are close to the actual list of k most frequent patterns in the data. We define a new notion of utility that quantifies the output accuracy of private top-k pattern mining algorithms. In typical data sets, our utility criterion implies low false positive and false negative rates in the reported lists. We prove that our methods meet the new utility criterion; we also demonstrate the performance of our algorithms through extensive experiments on the transaction data sets from the FIMI repository. While the paper focuses on frequent pattern mining, the techniques developed here are relevant whenever the data mining output is a list of elements ordered according to an appropriately 'robust' measure of interest. --- paper_title: Mining frequent graph patterns with differential privacy paper_content: Discovering frequent graph patterns in a graph database offers valuable information in a variety of applications. However, if the graph dataset contains sensitive data of individuals such as mobile phone-call graphs and web-click graphs, releasing discovered frequent patterns may present a threat to the privacy of individuals. Differential privacy has recently emerged as the de facto standard for private data analysis due to its provable privacy guarantee. In this paper we propose the first differentially private algorithm for mining frequent graph patterns. We first show that previous techniques on differentially private discovery of frequent itemsets cannot apply in mining frequent graph patterns due to the inherent complexity of handling structural information in graphs. We then address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling based algorithm. Unlike previous work on frequent itemset mining, our techniques do not rely on the output of a non-private mining algorithm. 
Instead, we observe that both frequent graph pattern mining and the guarantee of differential privacy can be unified into an MCMC sampling framework. In addition, we establish the privacy and utility guarantee of our algorithm and propose an efficient neighboring pattern counting technique as well. Experimental results show that the proposed algorithm is able to output frequent patterns with good precision. --- paper_title: Differentially Private Empirical Risk Minimization paper_content: Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First, we apply the output perturbation ideas of Dwork et al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance. --- paper_title: Privacy-Preserving Deep Learning paper_content: Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners (for example, medical institutions that may want to apply deep learning methods to clinical records) are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets.
We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets. --- paper_title: Efficient private empirical risk minimization for high-dimensional learning paper_content: Dimensionality reduction is a popular approach for dealing with high dimensional data that leads to substantial computational savings. Random projections are a simple and effective method for universal dimensionality reduction with rigorous theoretical guarantees. In this paper, we theoretically study the problem of differentially private empirical risk minimization in the projected subspace (compressed domain). We ask: is it possible to design differentially private algorithms with small excess risk given access to only projected data? In this paper, we answer this question in the affirmative, by showing that for the class of generalized linear functions, given only the projected data and the projection matrix, we can obtain excess risk bounds of O(w(C)^{2/3}/n^{1/3}) under ε-differential privacy, and O(√w(C)/n) under (ε,δ)-differential privacy, where n is the sample size and w(C) is the Gaussian width of the parameter space C that we optimize over. A simple consequence of these results is that, for a large class of ERM problems, in the traditional setting (i.e., with access to the original data), under ε-differential privacy, we improve the worst-case risk bounds of Bassily et al. (2014). --- paper_title: Differential privacy under continual observation paper_content: Differential privacy is a recent notion of privacy tailored to privacy-preserving data analysis [11]. Up to this point, research on differentially private data analysis has focused on the setting of a trusted curator holding a large, static data set; thus every computation is a "one-shot" object: there is no point in computing something twice, since the result will be unchanged, up to any randomness introduced for privacy. However, many applications of data analysis involve repeated computations, either because the entire goal is one of monitoring, e.g., of traffic conditions, search trends, or incidence of influenza, or because the goal is some kind of adaptive optimization, e.g., placement of data to minimize access costs. In these cases, the algorithm must permit continual observation of the system's state. We therefore initiate a study of differential privacy under continual observation. We identify the problem of maintaining a counter in a privacy-preserving manner and show its wide applicability to many different problems. --- paper_title: Differential privacy preservation for deep auto-encoders: an application of human behavior prediction paper_content: In recent years, deep learning has spread beyond both academia and industry with many exciting real-world applications. The development of deep learning has presented obvious privacy issues.
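Gradient perturbation, which the deep-learning approaches above rely on and which the DP-SGD line of work later made precise, can be sketched as a single training step: clip each per-example gradient to a norm bound and add Gaussian noise calibrated to that bound. This is a simplified, assumed interface; the privacy accounting (e.g., the moments accountant) is omitted, so treat the noise multiplier as a given parameter rather than something derived here from (ε, δ).

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=np.random.default_rng()):
    """One noisy SGD step: clip each per-example gradient to L2 norm
    clip_norm, sum, add Gaussian noise with std noise_multiplier*clip_norm,
    and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=params.shape)
    return params - lr * noisy_sum / len(clipped)

# Toy usage with random per-example gradients for a 5-dimensional model.
theta = np.zeros(5)
grads = [np.random.randn(5) for _ in range(32)]
theta = dp_sgd_step(theta, grads)
print(theta)
```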
However, there has been a lack of scientific study about privacy preservation in deep learning. In this paper, we concentrate on the auto-encoder, a fundamental component in deep learning, and propose the deep private auto-encoder (dPA). Our main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results. We apply the dPA to human behavior prediction in a health social network. Theoretical analysis and thorough experimental evaluations show that the dPA is highly effective and efficient, and it significantly outperforms existing solutions. --- paper_title: Differentially Private Learning with Kernels paper_content: In this paper, we consider the problem of differentially private learning where access to the training features is through a kernel function only. As mentioned in Chaudhuri et al. (2011), the problem seems to be intractable for general kernel functions in the standard learning model of releasing a differentially private predictor. We study this problem in three simpler but practical settings. We first study an interactive model where the user sends its test points to a trusted learner (like search engines) and expects accurate but differentially private predictions. In the second model, the learner has access to a subset of the unlabeled test set using which it releases a predictor, which preserves privacy of the training data. (NIH, 2003) is an example of such a publicly available test set. Our third model is similar to the traditional model, where the learner is oblivious to the test set but the kernels are restricted to functions over vector spaces. For each of the models, we derive differentially private learning algorithms with provable "utility" or error bounds. Moreover, we show that our methods can also be applied to the traditional model where they demonstrate better dimensionality dependence when compared to the methods of (Rubinstein et al., 2009; Chaudhuri et al., 2011). Finally, we provide experimental validation of our methods. --- paper_title: (Near) Dimension Independent Risk Bounds for Differentially Private Learning paper_content: In this paper, we study the problem of differentially private risk minimization where the goal is to provide differentially private algorithms that have small excess risk. In particular we address the following open problem: Is it possible to design computationally efficient differentially private risk minimizers with excess risk bounds that do not explicitly depend on dimensionality (p) and do not require structural assumptions like restricted strong convexity? In this paper, we answer the question in the affirmative for a variant of the well-known output and objective perturbation algorithms (Chaudhuri et al., 2011). In particular, we show that under certain assumptions, variants of both output and objective perturbation algorithms have no explicit dependence on p; the excess risk depends only on the L2-norm of the true risk minimizer and that of the training points. Next, we present a novel privacy-preserving algorithm for risk minimization over the simplex in the generalized linear model, where the loss function is a doubly differentiable convex function. Assuming that the training points have bounded L∞-norm, our algorithm provides a risk bound that has only logarithmic dependence on p. We also apply our technique to the online learning setting and obtain a regret bound with similar logarithmic dependence on p.
In contrast, the existing differentially private online learning methods incur O(√p) dependence. --- paper_title: Private Incremental Regression paper_content: Data is continuously generated by modern data sources, and a recent challenge in machine learning has been to develop techniques that perform well in an incremental (streaming) setting. A variety of offline machine learning tasks are known to be feasible under differential privacy, where generic constructions exist that, given a large enough input sample, perform tasks such as PAC learning, Empirical Risk Minimization (ERM), regression, etc. In this paper, we investigate the problem of private machine learning, where, as is common in practice, the data is not given at once, but rather arrives incrementally over time. We introduce the problems of private incremental ERM and private incremental regression, where the general goal is to always maintain a good empirical risk minimizer for the history observed under differential privacy. Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on a simple idea of invoking the private batch ERM procedure at some regular time intervals. We take this construction as a baseline for comparison. We then provide two mechanisms for the private incremental regression problem. Our first mechanism is based on privately constructing a noisy incremental gradient function, which is then used in a modified projected gradient procedure at every timestep. This mechanism has an excess empirical risk of ≈ √d, where d is the dimensionality of the data; geometric properties of the input and constraint set can be used to derive significantly better results for certain interesting regression problems. Our second mechanism, which achieves this, is based on the idea of projecting the data to a lower dimensional space using random projections, and then adding privacy noise in this low dimensional space. The mechanism overcomes the issues of adaptivity inherent with the use of random projections in online streams, and uses recent developments in high-dimensional estimation to achieve an excess empirical risk bound of ≈ T^{1/3} W^{2/3}, where T is the length of the stream and W is the sum of the Gaussian widths of the input domain and the constraint set that we optimize over. --- paper_title: Top-k frequent itemsets via differentially private FP-trees paper_content: Frequent itemset mining is a core data mining task and has been studied extensively. Although by their nature, frequent itemsets are aggregates over many individuals and would not seem to pose a privacy threat, an attacker with strong background information can learn private individual information from frequent itemsets. This has led to differentially private frequent itemset mining, which protects privacy by giving inexact answers. We give an approach that first identifies top-k frequent itemsets, then uses them to construct a compact, differentially private FP-tree. Once the noisy FP-tree is built, the (privatized) support of all frequent itemsets can be derived from it without access to the original data. Experimental results show that the proposed algorithm gives substantially better results than prior approaches, especially for high levels of privacy. --- paper_title: Differential Privacy for Functions and Functional Data paper_content: Differential privacy is a framework for privately releasing summaries of a database. Previous work has focused mainly on methods for which the output is a finite dimensional vector, or an element of some discrete set.
We develop methods for releasing functions while preserving differential privacy. Specifically, we show that adding an appropriate Gaussian process to the function of interest yields differential privacy. When the functions lie in the same RKHS as the Gaussian process, then the correct noise level is established by measuring the "sensitivity" of the function in the RKHS norm. As examples we consider kernel density estimation, kernel support vector machines, and functions in reproducing kernel Hilbert spaces. --- paper_title: Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds paper_content: In this paper, we initiate a systematic investigation of differentially private algorithms for convex empirical risk minimization. Various instantiations of this problem have been studied before. We provide new algorithms and matching lower bounds for private ERM assuming only that each data point's contribution to the loss function is Lipschitz bounded and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal non-private running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε,0)- and (ε,δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contribution of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median. --- paper_title: On Differentially Private Frequent Itemset Mining paper_content: We consider differentially private frequent itemset mining. We begin by exploring the theoretical difficulty of simultaneously providing good utility and good privacy in this task. While our analysis proves that in general this is very difficult, it leaves a glimmer of hope in that our proof of difficulty relies on the existence of long transactions (that is, transactions containing many items). Accordingly, we investigate an approach that begins by truncating long transactions, trading off errors introduced by the truncation with those introduced by the noise added to guarantee privacy. Experimental results over standard benchmark databases show that truncating is indeed effective. Our algorithm solves the "classical" frequent itemset mining problem, in which the goal is to find all itemsets whose support exceeds a threshold. Related work has proposed differentially private algorithms for the top-k itemset mining problem ("find the k most frequent itemsets"). An experimental comparison with those algorithms shows that our algorithm achieves better F-score unless k is small. --- paper_title: PrivBasis: Frequent Itemset Mining with Differential Privacy paper_content: The discovery of frequent itemsets can serve valuable economic and research purposes.
Releasing discovered frequent itemsets, however, presents privacy challenges. In this paper, we study the problem of how to perform frequent itemset mining on transaction databases while satisfying differential privacy. We propose an approach, called PrivBasis, which leverages a novel notion called basis sets. A θ-basis set has the property that any itemset with frequency higher than θ is a subset of some basis. We introduce algorithms for privately constructing a basis set and then using it to find the most frequent itemsets. Experiments show that our approach greatly outperforms the current state of the art. --- paper_title: Deep Learning with Differential Privacy paper_content: Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality. --- paper_title: Private Multiplicative Weights Beyond Linear Queries paper_content: A wide variety of fundamental data analyses in machine learning, such as linear and logistic regression, require minimizing a convex function defined by the data. Since the data may contain sensitive information about individuals, and these analyses can leak that sensitive information, it is important to be able to solve convex minimization in a privacy-preserving way. A series of recent results show how to accurately solve a single convex minimization problem in a differentially private manner. However, the same data is often analyzed repeatedly, and little is known about solving multiple convex minimization problems with differential privacy. For simpler data analyses, such as linear queries, there are remarkable differentially private algorithms such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS 2010) that accurately answer exponentially many distinct queries. In this work, we extend these results to the case of convex minimization and show how to give accurate and differentially private solutions to exponentially many convex minimization problems on a sensitive dataset. --- paper_title: Learning Privately with Labeled and Unlabeled Examples paper_content: A private learner is an algorithm that given a sample of labeled individual examples outputs a generalizing hypothesis while preserving the privacy of each individual. In 2008, Kasiviswanathan et al. (FOCS 2008) gave a generic construction of private learners, in which the sample complexity is (generally) higher than what is needed for non-private learners. This gap in the sample complexity was then further studied in several followup papers, showing that (at least in some cases) this gap is unavoidable. Moreover, those papers considered ways to overcome the gap, by relaxing either the privacy or the learning guarantees of the learner. 
We suggest an alternative approach, inspired by the (non-private) models of semi-supervised learning and active learning, where the focus is on the sample complexity of labeled examples whereas unlabeled examples are of significantly lower cost. We consider private semi-supervised learners that operate on a random sample, where only a (hopefully small) portion of this sample is labeled. The learners have no control over which of the sample elements are labeled. Our main result is that the labeled sample complexity of private learners is characterized by the VC dimension. We present two generic constructions of private semi-supervised learners. The first construction is of learners where the labeled sample complexity is proportional to the VC dimension of the concept class; however, the unlabeled sample complexity of the algorithm is as big as the representation length of domain elements. Our second construction presents a new technique for decreasing the labeled sample complexity of a given private learner, while roughly maintaining its unlabeled sample complexity. In addition, we show that in some settings the labeled sample complexity does not depend on the privacy parameters of the learner. --- paper_title: Characterizing the sample complexity of private learners paper_content: In 2008, Kasiviswanathan et al. defined private learning as a combination of PAC learning and differential privacy [16]. Informally, a private learner is applied to a collection of labeled individual information and outputs a hypothesis while preserving the privacy of each individual. Kasiviswanathan et al. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class. This sample complexity is higher than what is needed for non-private learners, hence leaving open the possibility that the sample complexity of private learning may be sometimes significantly higher than that of non-private learning. We give a combinatorial characterization of the sample size sufficient and necessary to privately learn a class of concepts. This characterization is analogous to the well known characterization of the sample complexity of non-private learning in terms of the VC dimension of the concept class. We introduce the notion of probabilistic representation of a concept class, and our new complexity measure RepDim corresponds to the size of the smallest probabilistic representation of the concept class. We show that any private learning algorithm for a concept class C with sample complexity m implies RepDim(C) = O(m), and that there exists a private learning algorithm with sample complexity m = O(RepDim(C)). We further demonstrate that a similar characterization holds for the database size needed for privately computing a large class of optimization problems and also for the well studied problem of private data release. --- paper_title: A learning theory approach to non-interactive database privacy paper_content: In this paper we demonstrate that, ignoring computational constraints, it is possible to privately release synthetic databases that are useful for large classes of queries -- much larger in size than the database itself. Specifically, we give a mechanism that privately releases synthetic data for a class of queries over a discrete domain with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries.
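A useful baseline for "release one structure that answers a whole class of queries" is a noisy histogram over a discrete domain; the net- and VC-dimension-based mechanisms above improve on its error, which grows with the domain size. The sketch below is that baseline, not the mechanism of the cited paper, and the names are illustrative.

```python
import numpy as np

def noisy_histogram(values, domain, epsilon, rng=np.random.default_rng()):
    """Release one noisy count per domain element. Adding or removing a
    record changes a single bin by 1, so Laplace(1/epsilon) noise per bin
    gives epsilon-differential privacy for the whole histogram."""
    counts = {v: 0 for v in domain}
    for v in values:
        counts[v] += 1
    return {v: c + rng.laplace(scale=1.0 / epsilon) for v, c in counts.items()}

def answer_counting_query(hist, predicate):
    """Any counting query can be answered from the released histogram
    without touching the raw data (post-processing preserves privacy)."""
    return sum(c for v, c in hist.items() if predicate(v))

ages = np.random.randint(0, 100, size=5000)
hist = noisy_histogram(ages, domain=range(100), epsilon=1.0)
print(answer_counting_query(hist, lambda age: 30 <= age < 40))
```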
We show that this in particular implies a mechanism for counting queries that gives error guarantees that grow only with the VC-dimension of the class of queries, which itself grows only logarithmically with the size of the query class. We also show that it is not possible to privately release even simple classes of queries (such as intervals and their generalizations) over continuous domains. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, given a slight relaxation of the utility guarantee. This algorithm does not release synthetic data, but instead another data structure capable of representing an answer for each query. We also give an efficient algorithm for releasing synthetic data for the class of interval queries and axis-aligned rectangles of constant dimension. Finally, inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy. --- paper_title: Bounds on the sample complexity for private learning and private data release paper_content: Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on the size of the instance space is essential for private data release. --- paper_title: An Introduction to Computational Learning Theory paper_content: The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis. --- paper_title: What Can We Learn Privately?
paper_content: Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning. --- paper_title: Sample Complexity Bounds for Differentially Private Learning paper_content: This work studies the problem of privacy-preserving classification, namely, learning a classifier from sensitive data while preserving the privacy of individuals in the training set. --- paper_title: Between Pure and Approximate Differential Privacy paper_content: We show a new lower bound on the sample complexity of (ε,δ)-differentially private algorithms that accurately answer statistical queries on high-dimensional databases. The novelty of our bound is that it depends optimally on the parameter δ, which loosely corresponds to the probability that the algorithm fails to be private, and is the first to smoothly interpolate between approximate differential privacy (δ > 0) and pure differential privacy (δ = 0).
Specifically, we consider a database D ∈ {±1}^{n×d} and its one-way marginals, which are the d queries of the form “What fraction of individual records have the i-th bit set to +1?” We show that in order to answer all of these queries to within error ±α (on average) while satisfying (ε,δ)-differential privacy for some function δ such that δ ≥ 2^{−o(n)} and δ ≤ 1/n^{1+Ω(1)}, it is necessary that n ≥ Ω(√(d log(1/δ)) / (αε)). This bound is optimal up to constant factors. This lower bound implies similar new bounds for problems like private empirical risk minimization and private PCA. To prove our lower bound, we build on the connection between fingerprinting codes and lower bounds in differential privacy (Bun, Ullman, and Vadhan, STOC’14). In addition to our lower bound, we give new purely and approximately differentially private algorithms for answering arbitrary statistical queries that improve on the sample complexity of the standard Laplace and Gaussian mechanisms for achieving worst-case accuracy guarantees by a logarithmic factor. --- paper_title: Differentially private transit data publication: a case study on the Montreal transportation system paper_content: With the wide deployment of smart card automated fare collection (SCAFC) systems, public transit agencies have been benefiting from huge volumes of transit data, a kind of sequential data, collected every day. Yet, improper publishing and use of transit data could jeopardize passengers' privacy. In this paper, we present our solution to transit data publication under the rigorous differential privacy model for the Societe de transport de Montreal (STM). We propose an efficient data-dependent yet differentially private transit data sanitization approach based on a hybrid-granularity prefix tree structure. Moreover, as a post-processing step, we make use of the inherent consistency constraints of a prefix tree to conduct constrained inferences, which lead to better utility. Our solution not only applies to general sequential data, but also can be seamlessly extended to trajectory data. To the best of our knowledge, this is the first paper to introduce a practical solution for publishing large volumes of sequential data under differential privacy. We examine data utility in terms of two popular data analysis tasks conducted at the STM, namely count queries and frequent sequential pattern mining. Extensive experiments on real-life STM datasets confirm that our approach maintains high utility and is scalable to large datasets. --- paper_title: Location Privacy via Geo-Indistinguishability paper_content: In this paper we report on the ongoing research of our team Comete on location privacy. In particular, we focus on the problem of protecting the privacy of the user when dealing with location-based services. The starting point of our approach is the principle of geo-indistinguishability, a formal notion of privacy that protects the user's exact location, while allowing approximate information, typically needed to obtain a certain desired service, to be released. Then, we discuss the problems that arise in the case of traces, when the user makes consecutive uses of the location-based system while moving along a path: since the points of a trace are correlated, a simple repetition of the mechanism would cause a rapid decrease of the level of privacy. We then show a method to limit such degradation, based on the idea of predicting a point from previously reported points, instead of generating a new noisy point.
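The usual construction behind geo-indistinguishability perturbs the true location with noise drawn from a planar (two-dimensional) Laplace distribution centered at that point. The sketch below follows the standard polar sampling recipe (uniform angle, radius via the inverse CDF using the Lambert W function); it is an illustrative reconstruction rather than code from the cited work, and it ignores map-projection issues by working in planar coordinates.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(x, y, epsilon, rng=np.random.default_rng()):
    """Report a noisy version of the planar location (x, y) so that the
    output is epsilon-geo-indistinguishable (epsilon per unit of distance)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)      # uniform direction
    p = rng.uniform(0.0, 1.0)                  # inverse-CDF sample for radius
    r = -(lambertw((p - 1.0) / np.e, k=-1).real + 1.0) / epsilon
    return x + r * np.cos(theta), y + r * np.sin(theta)

# Example: epsilon = 0.01 per metre, i.e. an expected displacement of
# roughly 2/epsilon = 200 metres around the true point.
print(planar_laplace_noise(1000.0, 2000.0, epsilon=0.01))
```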
Finally, we discuss a method to make our mechanism more flexible over space: we start from the observation that space is not uniform from the point of view of location hiding, and we propose an approach to adapt the level of privacy to each zone. --- paper_title: Mobile Systems Privacy: 'MobiPriv' A Robust System for Snapshot or Continuous Querying Location Based Mobile Systems paper_content: Many mobile phones have a GPS sensor that can report accurate location. Thus, if these location data are not protected adequately, they may cause privacy breaches. Moreover, several reports are available where persons have been stalked through GPS. The contributions of this paper are twofold. First, we examine privacy issues in snapshot queries, and present our work and results in this area. The proposed method can guarantee that all queries are protected, while previously proposed algorithms only achieve a low success rate in some situations. Next, we discuss continuous queries and illustrate that current snapshot solutions cannot be applied to continuous queries. Then, we present results for our robust models for continuous queries. We introduce a novel suite of algorithms called MobiPriv that addresses the shortcomings of previous work in location and query privacy in mobile systems. We evaluated the efficiency and effectiveness of the MobiPriv scheme against previously proposed anonymization approaches. For our experiments, we utilized real-world traffic volume data, a real-world road network, and mobile users generated realistically by a mobile object generator. --- paper_title: Differentially Private Location Recommendations in Geosocial Networks paper_content: Location-tagged social media have an increasingly important role in shaping behavior of individuals. With the help of location recommendations, users are able to learn about events, products or places of interest that are relevant to their preferences. User locations and movement patterns are available from geosocial networks such as Foursquare, mass transit logs or traffic monitoring systems. However, disclosing movement data raises serious privacy concerns, as the history of visited locations can reveal sensitive details about an individual's health status, alternative lifestyle, etc. In this paper, we investigate mechanisms to sanitize location data used in recommendations with the help of differential privacy. We also identify the main factors that must be taken into account to improve accuracy. Extensive experimental results on real-world datasets show that a careful choice of differential privacy technique leads to satisfactory location recommendation results. --- paper_title: A Framework for Protecting Worker Location Privacy in Spatial Crowdsourcing paper_content: Spatial Crowdsourcing (SC) is a transformative platform that engages individuals, groups and communities in the act of collecting, analyzing, and disseminating environmental, social and other spatio-temporal information. The objective of SC is to outsource a set of spatio-temporal tasks to a set of workers, i.e., individuals with mobile devices that perform the tasks by physically traveling to specified locations of interest. However, current solutions require the workers, who in many cases are simply volunteering for a cause, to disclose their locations to untrustworthy entities. In this paper, we introduce a framework for protecting location privacy of workers participating in SC tasks.
We argue that existing location privacy techniques are not sufficient for SC, and we propose a mechanism based on differential privacy and geocasting that achieves effective SC services while offering privacy guarantees to workers. We investigate analytical models and task assignment strategies that balance multiple crucial aspects of SC functionality, such as task completion rate, worker travel distance and system overhead. Extensive experimental results on real-world datasets show that the proposed technique protects workers' location privacy without incurring significant performance metrics penalties. --- paper_title: Differentially Private Spatial Decompositions paper_content: Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well-understood how to release data based on counts and simple functions under this guarantee, it remains to provide general purpose techniques to release data that is useful for a variety of queries. In this paper, we focus on spatial data such as locations and more generally any multi-dimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of "private spatial decompositions": these adapt standard spatial indexing methods such as quad trees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions, and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and post-processing the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and use them to answer a variety of queries privately with high accuracy. --- paper_title: DPT: Differentially Private Trajectory Synthesis Using Hierarchical Reference Systems paper_content: GPS-enabled devices are now ubiquitous, from airplanes and cars to smartphones and wearable technology. This has resulted in a wealth of data about the movements of individuals and populations, which can be analyzed for useful information to aid in city and traffic planning, disaster preparedness and so on. However, the places that people go can disclose extremely sensitive information about them, and thus their use needs to be filtered through privacy preserving mechanisms. This turns out to be a highly challenging task: raw trajectories are highly detailed, and typically no pair is alike. Previous attempts fail either to provide adequate privacy protection, or to remain sufficiently faithful to the original behavior. This paper presents DPT, a system to synthesize mobility data based on raw GPS trajectories of individuals while ensuring strong privacy protection in the form of ε-differential privacy.
DPT makes a number of novel modeling and algorithmic contributions including (i) discretization of raw trajectories using hierarchical reference systems (at multiple resolutions) to capture individual movements at differing speeds, (ii) adaptive mechanisms to select a small set of reference systems and construct prefix tree counts privately, and (iii) use of direction-weighted sampling for improved utility. While there have been prior attempts to solve the subproblems required to generate synthetic trajectories, to the best of our knowledge, ours is the first system that provides an end-to-end solution. We show the efficacy of our synthetic trajectory generation system using an extensive empirical evaluation. --- paper_title: Differentially private recommender systems: building privacy into the net paper_content: We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation/learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides. --- paper_title: A differential privacy framework for matrix factorization recommender systems paper_content: Recommender systems rely on personal information about user behavior for the recommendation generation purposes. Thus, they inherently have the potential to hamper user privacy and disclose sensitive information. Several works studied how neighborhood-based recommendation methods can incorporate user privacy protection. However, privacy preserving latent factor models, in particular, those represented by matrix factorization techniques, the state-of-the-art in recommender systems, have received little attention. In this paper, we address the problem of privacy preserving matrix factorization by utilizing differential privacy, a rigorous and provable approach to privacy in statistical databases. We propose a generic framework and evaluate several ways, in which differential privacy can be applied to matrix factorization. By doing so, we specifically address the privacy-accuracy trade-off offered by each of the algorithms. 
We show that, of all the algorithms considered, input perturbation results in the best recommendation accuracy, while guaranteeing a solid level of privacy protection against attacks that aim to gain knowledge about either specific user ratings or even the existence of these ratings. Our analysis additionally highlights the system aspects that should be addressed when applying differential privacy in practice, and when considering potential privacy preserving solutions. --- paper_title: When Differential Privacy Meets Randomized Perturbation: A Hybrid Approach for Privacy-Preserving Recommender System paper_content: Privacy risks of recommender systems have caused increasing attention. Users’ private data is often collected by probably untrusted recommender system in order to provide high-quality recommendation. Meanwhile, malicious attackers may utilize recommendation results to make inferences about other users’ private data. Existing approaches focus either on keeping users’ private data protected during recommendation computation or on preventing the inference of any single user’s data from the recommendation result. However, none is designed for both hiding users’ private data and preventing privacy inference. To achieve this goal, we propose in this paper a hybrid approach for privacy-preserving recommender systems by combining differential privacy (DP) with randomized perturbation (RP). We theoretically show the noise added by RP has limited effect on recommendation accuracy and the noise added by DP can be well controlled based on the sensitivity analysis of functions on the perturbed data. Extensive experiments on three large-scale real world datasets show that the hybrid approach generally provides more privacy protection with acceptable recommendation accuracy loss, and surprisingly sometimes achieves better privacy without sacrificing accuracy, thus validating its feasibility in practice. --- paper_title: An effective privacy preserving algorithm for neighborhood-based collaborative filtering paper_content: As a popular technique in recommender systems, Collaborative Filtering (CF) has been the focus of significant attention in recent years, however, its privacy-related issues, especially for the neighborhood-based CF methods, cannot be overlooked. The aim of this study is to address these privacy issues in the context of neighborhood-based CF methods by proposing a Private Neighbor Collaborative Filtering (PNCF) algorithm. This algorithm includes two privacy preserving operations: Private Neighbor Selection and Perturbation. Using the item-based method as an example, Private Neighbor Selection is constructed on the basis of the notion of differential privacy, meaning that neighbors are privately selected for the target item according to its similarities with others. Recommendation-Aware Sensitivity and a re-designed differential privacy mechanism are introduced in this operation to enhance the performance of recommendations. A Perturbation operation then hides the true ratings of selected neighbors by adding Laplace noise. The PNCF algorithm reduces the magnitude of the noise introduced from the traditional differential privacy mechanism. Moreover, a theoretical analysis is provided to show that the proposed algorithm can resist a KNN attack while retaining the accuracy of recommendations. The results from experiments on two real datasets show that the proposed PNCF algorithm can obtain a rigid privacy guarantee without high accuracy loss. 
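The rating-perturbation step that several of the recommender-system references above rely on can be illustrated with a short sketch. The Python code below is an assumed, simplified input-perturbation example only: it adds Laplace noise to each raw rating before the rating is handed to a (non-private) recommender, treating the rating range as the sensitivity. It is not the exact PNCF or matrix-factorization algorithm of the cited papers, whose sensitivity analyses and budget accounting are more involved.

import numpy as np

def perturb_ratings(ratings, epsilon, r_min=1.0, r_max=5.0):
    # Assumed sensitivity: one rating can change by at most the rating range.
    sensitivity = r_max - r_min
    # Independent Laplace noise calibrated to sensitivity / epsilon.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=ratings.shape)
    # Clipping back into the valid range is post-processing and does not
    # weaken the differential privacy guarantee.
    return np.clip(ratings + noise, r_min, r_max)

# Toy usage: a small user-by-item rating matrix.
R = np.array([[4.0, 5.0, 1.0],
              [3.0, 4.0, 2.0]])
print(perturb_ratings(R, epsilon=1.0))

Because the noisy ratings are released once and all later computation is post-processing, any recommender trained on them inherits the same privacy guarantee.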
--- paper_title: Differential Privacy with Bounded Priors: Reconciling Utility and Privacy in Genome-Wide Association Studies paper_content: Differential privacy (DP) has become widely accepted as a rigorous definition of data privacy, with stronger privacy guarantees than traditional statistical methods. However, recent studies have shown that for reasonable privacy budgets, differential privacy significantly affects the expected utility. Many alternative privacy notions which aim at relaxing DP have since been proposed, with the hope of providing a better tradeoff between privacy and utility. At CCS'13, Li et al. introduced the membership privacy framework, wherein they aim at protecting against set membership disclosure by adversaries whose prior knowledge is captured by a family of probability distributions. In the context of this framework, we investigate a relaxation of DP, by considering prior distributions that capture more reasonable amounts of background knowledge. We show that for different privacy budgets, DP can be used to achieve membership privacy for various adversarial settings, thus leading to an interesting tradeoff between privacy guarantees and utility. We re-evaluate methods for releasing differentially private chi2-statistics in genome-wide association studies and show that we can achieve a higher utility than in previous works, while still guaranteeing membership privacy in a relevant adversarial setting. --- paper_title: Privacy in the Genomic Era paper_content: Genome sequencing technology has advanced at a rapid pace and it is now possible to generate highly-detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy; notably because the genome has certain essential features, which include (but are not limited to) (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While the computer scientists have addressed data privacy for various data types, there has been less attention dedicated to genomic data. Thus, the goal of this paper is to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs of this field and we report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state-of-the-art regarding privacy attacks on genomic data and strategies for mitigating such attacks, as well as contextualizing these attacks from the perspective of medicine and public policy. This paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward. 
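As a rough, hedged illustration of the kind of release discussed in the GWAS-related references above, the sketch below perturbs a single per-SNP chi-square statistic with Laplace noise. The sensitivity_bound argument is a placeholder assumption: the correct bound depends on the study design (e.g., balanced case and control counts) and is derived in the cited papers, not here.

import numpy as np
from scipy.stats import chi2_contingency

def private_chi2(table, epsilon, sensitivity_bound):
    # chi2_contingency returns (statistic, p-value, dof, expected counts).
    chi2_stat, _, _, _ = chi2_contingency(table)
    # Laplace noise calibrated to the externally supplied sensitivity bound.
    return chi2_stat + np.random.laplace(loc=0.0, scale=sensitivity_bound / epsilon)

# Toy 2x3 genotype table for one SNP (rows: cases, controls).
table = np.array([[30, 50, 20],
                  [45, 40, 15]])
print(private_chi2(table, epsilon=1.0, sensitivity_bound=4.0))  # 4.0 is an arbitrary placeholder

Releasing many SNP statistics this way would require splitting the privacy budget across them, which is the budget-accounting issue that the exploratory GWAS analysis work cited here addresses.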
--- paper_title: Privacy-preserving data exploration in genome-wide association studies paper_content: Genome-wide association studies (GWAS) have become a popular method for analyzing sets of DNA sequences in order to discover the genetic basis of disease. Unfortunately, statistics published as the result of GWAS can be used to identify individuals participating in the study. To prevent privacy breaches, even previously published results have been removed from public databases, impeding researchers' access to the data and hindering collaborative research. Existing techniques for privacy-preserving GWAS focus on answering specific questions, such as correlations between a given pair of SNPs (DNA sequence variations). This does not fit the typical GWAS process, where the analyst may not know in advance which SNPs to consider and which statistical tests to use, how many SNPs are significant for a given dataset, etc. We present a set of practical, privacy-preserving data mining algorithms for GWAS datasets. Our framework supports exploratory data analysis, where the analyst does not know a priori how many and which SNPs to consider. We develop privacy-preserving algorithms for computing the number and location of SNPs that are significantly associated with the disease, the significance of any statistical test between a given SNP and the disease, any measure of correlation between SNPs, and the block structure of correlations. We evaluate our algorithms on real-world datasets and demonstrate that they produce significantly more accurate results than prior techniques while guaranteeing differential privacy. --- paper_title: Preserving Statistical Validity in Adaptive Data Analysis paper_content: A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of m adaptively chosen functions on an unknown distribution given n random samples. We show that, surprisingly, there is a way to estimate an exponential in n number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question. --- paper_title: More General Queries and Less Generalization Error in Adaptive Data Analysis paper_content: Adaptivity is an important feature of data analysis---typically the choice of questions asked about a dataset depends on previous interactions with the same dataset. 
However, generalization error is typically bounded in a non-adaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution $\mathcal{P}$ and a set of $n$ independent samples $x$ is drawn from $\mathcal{P}$. We seek an algorithm that, given $x$ as input, "accurately" answers a sequence of adaptively chosen "queries" about the unknown distribution $\mathcal{P}$. How many samples $n$ must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions towards resolving this question: (i) we give upper bounds on the number of samples $n$ that are needed to answer statistical queries that improve over the bounds of Dwork et al.; and (ii) we prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and the important class of convex risk minimization queries. As in Dwork et al., our algorithms are based on a connection between differential privacy and generalization error, but we feel that our analysis is simpler and more modular, which may be useful for studying these questions in the future. --- paper_title: On the Generalization Properties of Differential Privacy paper_content: A new line of work, started with Dwork et al., studies the task of answering statistical queries using a sample and relates the problem to the concept of differential privacy. By the Hoeffding bound, a sample of size $O(\log k/\alpha^2)$ suffices to answer $k$ non-adaptive queries within error $\alpha$, where the answers are computed by evaluating the statistical queries on the sample. This argument fails when the queries are chosen adaptively (and can hence depend on the sample). Dwork et al. showed that if the answers are computed with $(\epsilon,\delta)$-differential privacy then $O(\epsilon)$ accuracy is guaranteed with probability $1-O(\delta^\epsilon)$. Using the Private Multiplicative Weights mechanism, they concluded that the sample size can still grow polylogarithmically with the $k$. Very recently, Bassily et al. presented an improved bound and showed that (a variant of) the private multiplicative weights algorithm can answer $k$ adaptively chosen statistical queries using sample complexity that grows logarithmically in $k$. However, their results no longer hold for every differentially private algorithm, and require modifying the private multiplicative weights algorithm in order to obtain their high probability bounds. We greatly simplify the results of Dwork et al. and improve on the bound by showing that differential privacy guarantees $O(\epsilon)$ accuracy with probability $1-O(\delta\log(1/\epsilon)/\epsilon)$. It would be tempting to guess that an $(\epsilon,\delta)$-differentially private computation should guarantee $O(\epsilon)$ accuracy with probability $1-O(\delta)$. However, we show that this is not the case, and that our bound is tight (up to logarithmic factors). --- paper_title: Generalization in Adaptive Data Analysis and Holdout Reuse paper_content: Overfitting is the bane of data analysts, even when data are plentiful.
Formal approaches to understanding this problem focus on statistical inference and generalization of individual analysis procedures. Yet the practice of data analysis is an inherently interactive and adaptive process: new analyses and hypotheses are proposed after seeing the results of previous ones, parameters are tuned on the basis of obtained results, and datasets are shared and reused. An investigation of this gap has recently been initiated by the authors in [7], where we focused on the problem of estimating expectations of adaptively chosen functions. In this paper, we give a simple and practical method for reusing a holdout (or testing) set to validate the accuracy of hypotheses produced by a learning algorithm operating on a training set. Reusing a holdout set adaptively multiple times can easily lead to overfitting to the holdout set itself. We give an algorithm that enables the validation of a large number of adaptively chosen hypotheses, while provably avoiding overfitting. We illustrate the advantages of our algorithm over the standard use of the holdout set via a simple synthetic experiment. We also formalize and address the general problem of data reuse in adaptive data analysis. We show how the differential-privacy based approach given in [7] is applicable much more broadly to adaptive data analysis. We then show that a simple approach based on description length can also be used to give guarantees of statistical validity in adaptive settings. Finally, we demonstrate that these incomparable approaches can be unified via the notion of approximate max-information that we introduce. This, in particular, allows the preservation of statistical validity guarantees even when an analyst adaptively composes algorithms which have guarantees based on either of the two approaches. --- paper_title: Local Privacy and Statistical Minimax Rates paper_content: Working under local differential privacy, a model of privacy in which data remains private even from the statistician or learner, we study the tradeoff between privacy guarantees and the utility of the resulting statistical estimators. We prove bounds on information-theoretic quantities, including mutual information and Kullback-Leibler divergence, that influence estimation rates as a function of the amount of privacy preserved. When combined with minimax techniques such as Le Cam's and Fano's methods, these inequalities allow for a precise characterization of statistical rates under local privacy constraints. In this paper, we provide a treatment of two canonical problem families: mean estimation in location family models and convex risk minimization. For these families, we provide lower and upper bounds for estimation of population quantities that match up to constant factors, giving privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. --- paper_title: What Can We Learn Privately? paper_content: Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals.
Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning. --- paper_title: Correlated network data publication via differential privacy paper_content: With the increasing prevalence of information networks, research on privacy-preserving network data publishing has received substantial attention recently. There are two streams of relevant research, targeting different privacy requirements. A large body of existing works focus on preventing node re-identification against adversaries with structural background knowledge, while some other studies aim to thwart edge disclosure. In general, the line of research on preventing edge disclosure is less fruitful, largely due to lack of a formal privacy model. The recent emergence of differential privacy has shown great promise for rigorous prevention of edge disclosure. Yet recent research indicates that differential privacy is vulnerable to data correlation, which hinders its application to network data that may be inherently correlated. In this paper, we show that differential privacy could be tuned to provide provable privacy guarantees even in the correlated setting by introducing an extra parameter, which measures the extent of correlation. We subsequently provide a holistic solution for non-interactive network data publication. First, we generate a private vertex labeling for a given network dataset to make the corresponding adjacency matrix form dense clusters. Next, we adaptively identify dense regions of the adjacency matrix by a data-dependent partitioning process. Finally, we reconstruct a noisy adjacency matrix by a novel use of the exponential mechanism. To our best knowledge, this is the first work providing a practical solution for publishing real-life network data via differential privacy. 
Extensive experiments demonstrate that our approach performs well on different types of real-life network datasets. --- paper_title: Mutual Information Optimally Local Private Discrete Distribution Estimation paper_content: Consider statistical learning (e.g. discrete distribution estimation) with local $\epsilon$-differential privacy, which preserves each data provider's privacy locally, we aim to optimize statistical data utility under the privacy constraints. Specifically, we study maximizing mutual information between a provider's data and its private view, and give the exact mutual information bound along with an attainable mechanism: $k$-subset mechanism as results. The mutual information optimal mechanism randomly outputs a size $k$ subset of the original data domain with delicate probability assignment, where $k$ varies with the privacy level $\epsilon$ and the data domain size $d$. After analysing the limitations of existing local private mechanisms from mutual information perspective, we propose an efficient implementation of the $k$-subset mechanism for discrete distribution estimation, and show its optimality guarantees over existing approaches. --- paper_title: No free lunch in data privacy paper_content: Differential privacy is a powerful tool for providing privacy-preserving noisy query answers over statistical databases. It guarantees that the distribution of noisy query answers changes very little with the addition or deletion of any tuple. It is frequently accompanied by popularized claims that it provides privacy without any assumptions about the data and that it protects against attackers who know all but one record. In this paper we critically analyze the privacy protections offered by differential privacy. First, we use a no-free-lunch theorem, which defines non-privacy as a game, to argue that it is not possible to provide privacy and utility without making assumptions about how the data are generated. Then we explain where assumptions are needed. We argue that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process. This is different from limiting the inference about the presence of a tuple (for example, Bob's participation in a social network may cause edges to form between pairs of his friends, so that it affects more than just the tuple labeled as "Bob"). The definition of evidence of participation, in turn, depends on how the data are generated -- this is how assumptions enter the picture. We explain these ideas using examples from social network research as well as tabular data for which deterministic statistics have been previously released. In both cases the notion of participation varies, the use of differential privacy can lead to privacy breaches, and differential privacy does not always adequately limit inference about participation. --- paper_title: Private spatial data aggregation in the local setting paper_content: With the deep penetration of the Internet and mobile devices, privacy preservation in the local setting has become increasingly relevant. The local setting refers to the scenario where a user is willing to share his/her information only if it has been properly sanitized before leaving his/her own device. Moreover, a user may hold only a single data element to share, instead of a database. 
Despite its ubiquitousness, the above constraints make the local setting substantially more challenging than the traditional centralized or distributed settings. In this paper, we initiate the study of private spatial data aggregation in the local setting, which finds its way in many real-world applications, such as Waze and Google Maps. In response to users' varied privacy requirements that are natural in the local setting, we propose a new privacy model called personalized local differential privacy (PLDP) that allows to achieve desirable utility while still providing rigorous privacy guarantees. We design an efficient personalized count estimation protocol as a building block for achieving PLDP and give theoretical analysis of its utility, privacy and complexity. We then present a novel framework that allows an untrusted server to accurately learn the user distribution over a spatial domain while satisfying PLDP for each user. This is mainly achieved by designing a novel user group clustering algorithm tailored to our problem. We confirm the effectiveness and efficiency of our framework through extensive experiments on multiple real benchmark datasets. --- paper_title: Reconstruction privacy: Enabling statistical learning paper_content: Non-independent reasoning (NIR) allows the information about one record in the data to be learnt from the information of other records in the data. Most posterior/prior based privacy criteria consider NIR as a privacy violation and require to smooth the distribution of published data to avoid sensitive NIR. The drawback of this approach is that it limits the utility of learning statistical relationships. The differential privacy criterion considers NIR as a non-privacy violation, therefore, enables learning statistical relationships, but at the cost of potential disclosures through NIR. A question is whether it is possible to (1) allow learning statistical relationships, yet (2) prevent sensitive NIR about an individual. We present a data perturbation and sampling method to achieve both (1) and (2). The enabling mechanism is a new privacy criterion that distinguishes the two types of NIR in (1) and (2) with the help of the law of large numbers. In particular, the record sampling effectively prevents the sensitive disclosure in (2) while having less effect on the statistical learning in (1). --- paper_title: Bayesian Differential Privacy on Correlated Data paper_content: Differential privacy provides a rigorous standard for evaluating the privacy of perturbation algorithms. It has widely been regarded that differential privacy is a universal definition that deals with both independent and correlated data and a differentially private algorithm can protect privacy against arbitrary adversaries. However, recent research indicates that differential privacy may not guarantee privacy against arbitrary adversaries if the data are correlated. In this paper, we focus on the private perturbation algorithms on correlated data. We investigate the following three problems: (1) the influence of data correlations on privacy; (2) the influence of adversary prior knowledge on privacy; and (3) a general perturbation algorithm that is private for prior knowledge of any subset of tuples in the data when the data are correlated. We propose a Pufferfish definition of privacy, called Bayesian differential privacy, by which the privacy level of a probabilistic perturbation algorithm can be evaluated even when the data are correlated and when the prior knowledge is incomplete. 
We present a Gaussian correlation model to accurately describe the structure of data correlations and analyze the Bayesian differential privacy of the perturbation algorithm on the basis of this model. Our results show that privacy is poorest for an adversary who has the least prior knowledge. We further extend this model to a more general one that considers uncertain prior knowledge. --- paper_title: Correlated Differential Privacy: Hiding Information in Non-IID Data Set paper_content: Privacy preservation in data mining and data release has attracted increasing research interest over a number of decades. Differential privacy is one influential privacy notion that offers a rigorous and provable privacy guarantee for data mining and data release. Existing studies on differential privacy assume that in a data set, records are sampled independently. However, in real-world applications, records in a data set are rarely independent. The relationships among records are referred to as correlated information and such a data set is defined as a correlated data set. A differential privacy technique performed on a correlated data set will disclose more information than expected, and this is a serious privacy violation. Although recent research was concerned with this new privacy violation, it still calls for a solid solution for the correlated data set. Moreover, how to decrease the large amount of noise incurred via differential privacy in a correlated data set is yet to be explored. To fill the gap, this paper proposes an effective correlated differential privacy solution by defining the correlated sensitivity and designing a correlated data releasing mechanism. With consideration of the correlated levels between records, the proposed correlated sensitivity can significantly decrease the noise compared with traditional global sensitivity. The correlated data releasing mechanism, the correlated iteration mechanism, is designed based on an iterative method to answer a large number of queries. Compared with the traditional method, the proposed correlated differential privacy solution enhances the privacy guarantee for a correlated data set with less accuracy cost. Experimental results show that the proposed solution outperforms traditional differential privacy in terms of mean square error on a large group of queries. This also suggests that correlated differential privacy can successfully retain utility while preserving privacy. --- paper_title: Collecting and Analyzing Data from Smart Device Users with Local Differential Privacy paper_content: Organizations with a large user base, such as Samsung and Google, can potentially benefit from collecting and mining users' data. However, doing so raises privacy concerns, and risks accidental privacy breaches with serious consequences. Local differential privacy (LDP) techniques address this problem by only collecting randomized answers from each user, with guarantees of plausible deniability; meanwhile, the aggregator can still build accurate models and predictors by analyzing large amounts of such randomized data. So far, existing LDP solutions either have severely restricted functionality, or focus mainly on theoretical aspects such as asymptotical bounds rather than practical usability and performance. Motivated by this, we propose Harmony, a practical, accurate and efficient system for collecting and analyzing data from smart device users, while satisfying LDP.
Harmony applies to multi-dimensional data containing both numerical and categorical attributes, and supports both basic statistics (e.g., mean and frequency estimates), and complex machine learning tasks (e.g., linear regression, logistic regression and SVM classification). Experiments using real data confirm Harmony's effectiveness. --- paper_title: Using Randomized Response for Differential Privacy Preserving Data Collection paper_content: This paper studies how to enforce differential privacy by using the randomized response in the data collection scenario. Given a client’s value, the randomized algorithm executed by the client reports to the untrusted server a perturbed value. The use of randomized response in surveys enables easy estimations of accurate population statistics while preserving the privacy of the individual respondents. We compare the randomized response with the standard Laplace mechanism which is based on query-output independent adding of Laplace noise. Our research starts from the simple case with one single binary attribute and extends to the general case with multiple polychotomous attributes. We measure utility preservation in terms of the mean squared error of the estimate for various calculations including individual value estimate, proportion estimate, and various derived statistics. We theoretically derive the explicit formula of the mean squared error of various derived statistics based on the randomized response theory and prove the randomized response outperforms the Laplace mechanism. We evaluate our algorithms on YesiWell database including sensitive biomarker data and social network relationships of patients. Empirical evaluation results show effectiveness of our proposed techniques. Especially the use of the randomized response for collecting data incurs fewer utility loss than the output perturbation when the sensitivity of functions is high. --- paper_title: Pufferfish: A framework for mathematical privacy definitions paper_content: In this article, we introduce a new and general privacy framework called Pufferfish. The Pufferfish framework can be used to create new privacy definitions that are customized to the needs of a given application. The goal of Pufferfish is to allow experts in an application domain, who frequently do not have expertise in privacy, to develop rigorous privacy definitions for their data sharing needs. In addition to this, the Pufferfish framework can also be used to study existing privacy definitions. We illustrate the benefits with several applications of this privacy framework: we use it to analyze differential privacy and formalize a connection to attackers who believe that the data records are independent; we use it to create a privacy definition called hedging privacy, which can be used to rule out attackers whose prior beliefs are inconsistent with the data; we use the framework to define and study the notion of composition in a broader context than before; we show how to apply the framework to protect unbounded continuous attributes and aggregate information; and we show how to use the framework to rigorously account for prior data releases. --- paper_title: Heavy Hitter Estimation over Set-Valued Data with Local Differential Privacy paper_content: In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. 
Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority of the true heavy hitters in practical settings. --- paper_title: LoPub: High-Dimensional Crowdsourced Data Publication With Local Differential Privacy paper_content: High-dimensional crowdsourced data collected from a large number of users produces rich knowledge for our society. However, it also brings unprecedented privacy threats to participants. Local privacy, a variant of differential privacy, is proposed as a means to eliminate the privacy concern. Unfortunately, achieving local privacy on high-dimensional crowdsourced data raises great challenges on both efficiency and effectiveness. Here, based on EM and Lasso regression, we propose efficient multi-dimensional joint distribution estimation algorithms with local privacy. Then, we develop a Locally privacy-preserving high-dimensional data Publication algorithm, LoPub, by taking advantage of our distribution estimation techniques. In particular, both correlations and joint distribution among multiple attributes can be identified to reduce the dimension of crowdsourced data, thus achieving both efficiency and effectiveness in locally private high-dimensional data publication. Extensive experiments on real-world datasets demonstrate the efficiency of our multivariate distribution estimation scheme and confirm the effectiveness of our LoPub scheme in generating approximate datasets with local privacy. --- paper_title: Mechanism Design via Differential Privacy paper_content: We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie.
More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero. --- paper_title: Privacy and Truthful Equilibrium Selection for Aggregative Games paper_content: We study a very general class of games, multi-dimensional aggregative games, which in particular generalize both anonymous games and weighted congestion games. For any such game that is also large, we solve the equilibrium selection problem in a strong sense. In particular, we give an efficient weak mediator: a mechanism which has only the power to listen to reported types and provide non-binding suggested actions, such that (a) it is an asymptotic Nash equilibrium for every player to truthfully report their type to the mediator, and then follow its suggested action; and (b) that when players do so, they end up coordinating on a particular asymptotic pure strategy Nash equilibrium of the induced complete information game. In fact, truthful reporting is an ex-post Nash equilibrium of the mediated game, so our solution applies even in settings of incomplete information, and even when player types are arbitrary or worst-case (i.e., not drawn from a common prior). We achieve this by giving an efficient differentially private algorithm for computing a Nash equilibrium in such games. The rates of convergence to equilibrium in all of our results are inverse polynomial in the number of players n. We also apply our main results to a multi-dimensional market game. Our results can be viewed as giving, for a rich class of games, a more robust version of the Revelation Principle, in that we work with weaker informational assumptions (no common prior), yet provide a stronger solution concept (ex-post Nash versus Bayes Nash equilibrium). In comparison to previous work, our main conceptual contribution is showing that weak mediators are a game theoretic object that exists in a wide variety of games; previously, they were only known to exist in traffic routing games. We also give the first weak mediator that can implement an equilibrium optimizing a linear objective function, rather than implementing a possibly worst-case Nash equilibrium. --- paper_title: Approximately optimal mechanism design via differential privacy paper_content: We study the implementation challenge in an abstract interdependent values model and an arbitrary objective function. We design a generic mechanism that allows for approximate optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are private then the same mechanism is strategy proof. We cast our results onto two specific models: pricing and facility location. The mechanism we design is optimal up to an additive factor of the order of magnitude of one over the square root of the number of agents and involves no utility transfers.
Underlying our mechanism is a lottery between two auxiliary mechanisms: with high probability we actuate a mechanism that reduces players' influence on the choice of the social alternative, while choosing the optimal outcome with high probability. This is where differential privacy is employed. With the complementary probability we actuate a mechanism that may be typically far from optimal but is incentive compatible. The joint mechanism inherits the desired properties from both. --- paper_title: Higher-Order Approximate Relational Refinement Types for Mechanism Design and Differential Privacy paper_content: Mechanism design is the study of algorithm design where the inputs to the algorithm are controlled by strategic agents, who must be incentivized to faithfully report them. Unlike typical programmatic properties, it is not sufficient for algorithms to merely satisfy the property; incentive properties are only useful if the strategic agents also believe this fact. Verification is an attractive way to convince agents that the incentive properties actually hold, but mechanism design poses several unique challenges: interesting properties can be sophisticated relational properties of probabilistic computations involving expected values, and mechanisms may rely on other probabilistic properties, like differential privacy, to achieve their goals. We introduce a relational refinement type system, called HOARe2, for verifying mechanism design and differential privacy. We show that HOARe2 is sound w.r.t. a denotational semantics, and correctly models (ε,δ)-differential privacy; moreover, we show that it subsumes DFuzz, an existing linear dependent type system for differential privacy. Finally, we develop an SMT-based implementation of HOARe2 and use it to verify challenging examples of mechanism design, including auctions and aggregative games, and new proposed examples from differential privacy. --- paper_title: Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds paper_content: "Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy." --- paper_title: Concentrated Differential Privacy paper_content: We introduce Concentrated Differential Privacy, a relaxation of Differential Privacy enjoying better accuracy than both pure differential privacy and its popular "(ε,δ)" relaxation without compromising on cumulative privacy loss over multiple computations. ---
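Several of the local-differential-privacy references above (the randomized-response data collection work, Harmony, LDPMiner) build on the basic binary randomized-response primitive. The following minimal Python sketch shows the textbook version, as a generic illustration rather than the specific protocol of any one cited paper: each user reports their true bit with probability e^ε/(1+e^ε) and flips it otherwise, and the aggregator debiases the observed frequency.

import numpy as np

def randomized_response(bit, epsilon, rng):
    # Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    p_true = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return bit if rng.random() < p_true else 1 - bit

def estimate_proportion(reports, epsilon):
    # Unbiased estimate of the true population proportion from noisy reports.
    p_true = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    observed = np.mean(reports)
    return (observed - (1.0 - p_true)) / (2.0 * p_true - 1.0)

rng = np.random.default_rng(0)
true_bits = (rng.random(10000) < 0.3).astype(int)   # 30% of users hold the sensitive value
reports = np.array([randomized_response(b, 1.0, rng) for b in true_bits])
print(estimate_proportion(reports, epsilon=1.0))     # should be close to 0.3

Because the two possible inputs lead to report distributions whose probabilities differ by a factor of at most e^ε, each user's report satisfies ε-local differential privacy on its own, with no trusted curator involved.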
Title: Differentially Private Data Publishing and Analysis: A Survey
Section 1: Introduction
Description 1: Provide an overview of the importance of privacy preservation in data publishing and analysis. Define differentially private data publishing and analysis, and introduce differential privacy as a privacy model.
Section 2: Outline and Survey Overview
Description 2: Summarize the historical context and the previous surveys on differential privacy. Explain the focus of the current survey, distinguishing between data publishing and data analysis.
Section 3: Notation
Description 3: Define the notations and symbols used throughout the survey, including datasets, queries, sensitivity, and differential privacy mechanisms.
Section 4: Differential Privacy
Description 4: Explain the definition of differential privacy, the privacy budget composition, sensitivity, and the main differential privacy mechanisms like the Laplace and exponential mechanisms.
Section 5: Utility Measurement of Differential Privacy
Description 5: Discuss various utility measurements used in differential privacy, including noise size measurement and error measurement, and their implications for data publishing and analysis.
Section 6: Differentially Private Data Publishing
Description 6: Describe differentially private data publishing mechanisms, including interactive and non-interactive settings. Discuss publishing mechanisms like transformation, dataset partitioning, query separation, and iteration.
Section 7: Interactive Publishing
Description 7: Detail interactive settings for different types of data (transaction, histogram, stream, and graph data) and present the mechanisms used in these settings.
Section 8: Non-Interactive Publishing
Description 8: Discuss the challenges and solutions for non-interactive publishing, including batch query publishing and synthetic dataset publishing.
Section 9: Differentially Private Data Analysis
Description 9: Outline the task of extending non-private algorithms to differentially private algorithms, categorizing them into Laplace/exponential frameworks and private learning frameworks.
Section 10: Laplace/Exponential Framework
Description 10: Explain how Laplace and exponential mechanisms can be incorporated into learning algorithms for supervised learning, unsupervised learning, and frequent itemset mining.
Section 11: Private Learning Framework
Description 11: Discuss private learning frameworks, focusing on empirical risk minimization (ERM) and sample complexity in PAC learning.
Section 12: Differential Privacy in Location-Based Services
Description 12: Address privacy concerns in location-based services and describe approaches to preserve location and trajectory privacy using differential privacy.
Section 13: Differentially Private Recommender Systems
Description 13: Examine the application of differential privacy in recommender systems, discussing approaches to protect user data while maintaining recommendation accuracy.
Section 14: Differential Privacy in Genetic Data
Description 14: Explore the use of differential privacy in genetic data, reviewing methods to preserve privacy while enabling biomedical discoveries.
Section 15: Adaptive Data Analysis
Description 15: Introduce adaptive data analysis, discussing how differential privacy can ensure generalization guarantees in machine learning.
Section 16: Local Differential Privacy
Description 16: Present the concept of local differential privacy, explaining its application in distributed systems where the data curator is untrusted.
Section 17: Differential Privacy for Coupled Information Description 17: Discuss the issue of privacy in datasets with coupled information and the frameworks proposed to address this complication. Section 18: Differential Privacy and Mechanism Design Description 18: Examine the intersection of differential privacy and mechanism design, discussing game-theoretical approaches to ensure truthful inputs from agents. Section 19: Relaxation of Differential Privacy Description 19: Address the relaxation of differential privacy definitions to make them more practical, including concentrated differential privacy and zero-concentrated differential privacy. Section 20: Conclusions Description 20: Summarize the key points from the survey, discuss future challenges and opportunities in differential privacy research, and provide final thoughts on the field's potential.
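Description 4 in the outline above refers to query sensitivity and the Laplace mechanism. As a minimal, hypothetical sketch of that mechanism only (the data, query and parameter values below are invented, and nothing here is taken from the surveyed papers), a counting query with sensitivity 1 can be answered with ε-differential privacy by adding Laplace noise with scale sensitivity/ε:

import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    # Laplace mechanism: noise scale grows with sensitivity and shrinks with epsilon.
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = np.array([23, 45, 31, 62, 58])                 # hypothetical records
true_count = int(np.sum(ages > 40))                   # counting query, sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(true_count, round(noisy_count, 2))

Smaller ε gives a larger noise scale and stronger privacy, which is the privacy/utility trade-off the outline's utility-measurement section refers to.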
Recognizable Sets of Graphs, Hypergraphs and Relational Structures: A Survey
4
--- paper_title: Basic notions of universal algebra for language theory and graph grammars paper_content: This paper reviews the basic properties of the equational and recognizable subsets of general algebras; these sets can be seen as generalizations of the context-free and regular languages, respectively. This approach, based on Universal Algebra, facilitates the development of the theory of formal languages so as to include the description of sets of finite trees, finite graphs, finite hypergraphs, tuples of words, partially commutative words (also called traces) and other similar finite objects. --- paper_title: The recognizability of sets of graphs is a robust property paper_content: Once the set of finite graphs is equipped with an algebra structure (arising from the definition of operations that generalize the concatenation of words), one can define the notion of a recognizable set of graphs in terms of finite congruences. Applications to the construction of efficient algorithms and to the theory of context-free sets of graphs follow naturally. The class of recognizable sets depends on the signature of graph operations. We consider three signatures related respectively to Hyperedge Replacement (HR) context-free graph grammars, to Vertex Replacement (VR) context-free graph grammars, and to modular decompositions of graphs. We compare the corresponding classes of recognizable sets. We show that they are robust in the sense that many variants of each signature (where in particular operations are defined by quantifier-free formulas, a quite flexible framework) yield the same notions of recognizability. We prove that for graphs without large complete bipartite subgraphs, HR-recognizability and VR-recognizability coincide. The same combinatorial condition equates HR-context-free and VR-context-free sets of graphs. Inasmuch as possible, results are formulated in the more general framework of relational structures.
---
Title: Recognizable Sets of Graphs, Hypergraphs and Relational Structures: A Survey Section 1: Introduction Description 1: Introduce the scope of the theory of formal languages extending to graphs, hypergraphs, and relational structures, highlighting its relevance and significance. Section 2: Notions from Universal Algebra Description 2: Explain the fundamental concepts from Universal Algebra that are necessary for dealing with graphs and hypergraphs, including many-sorted algebras, equational sets, and recognizable sets. Section 3: Graph operations Description 3: Discuss the different operations on graphs that form the signature for defining equational sets, specifically focusing on HR (Hyperedge Replacement) and VR (Vertex Replacement) operations and their significance. Section 4: Monadic Second-Order logic and graph properties Description 4: Explore the use of Monadic Second-Order logic (MS logic) for specifying sets of graphs and graph properties and present fundamental theorems related to MS definable sets and their recognizability. Section 5: Monadic Second-Order Transductions Description 5: Describe how Monadic Second-Order transductions are used for transforming graphs and hypergraphs, along with relevant theorems and their implications for equational and recognizable sets.
A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients
7
--- paper_title: Algorithms for Reinforcement Learning paper_content: Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms' merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems, describe the core ideas, note a large number of state of the art algorithms, followed by the discussion of their theoretical properties and limitations. --- paper_title: A Multi-agent Reinforcement Learning using Actor-Critic methods paper_content: This paper investigates a new algorithm in Multi-agent Reinforcement Learning. We propose a multi-agent learning algorithm that extends single-agent actor-critic methods to the multi-agent setting. To realize the algorithm, we introduced the value of an agent's temporal best-response strategy instead of the value of an equilibrium. So, our algorithm uses linear programming to compute Q values. When there are multiple Nash equilibria in the game, a mixed equilibrium is reached. Our learning algorithm works within the very general framework of n-player, general-sum stochastic games, and learns both the game structure and its associated optimal policy. --- paper_title: Learning to predict by the methods of temporal differences paper_content: This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. --- paper_title: A convergent actor-critic-based FRL algorithm with application to power management of wireless transmitters paper_content: This paper provides the first convergence proof for fuzzy reinforcement learning (FRL) as well as experimental results supporting our analysis.
We extend the work of Konda and Tsitsiklis, who presented a convergent actor-critic (AC) algorithm for a general parameterized actor. In our work we prove that a fuzzy rulebase actor satisfies the necessary conditions that guarantee the convergence of its parameters to a local optimum. Our fuzzy rulebase uses Takagi-Sugeno-Kang rules, Gaussian membership functions, and product inference. As an application domain, we chose a difficult task of power control in wireless transmitters, characterized by delayed rewards and a high degree of stochasticity. To the best of our knowledge, no reinforcement learning algorithms have been previously applied to this task. Our simulation results show that the ACFRL algorithm consistently converges in this domain to a locally optimal policy. --- paper_title: Continuous-Time Adaptive Critics paper_content: A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error builds up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume. --- paper_title: Technical Update: Least-Squares Temporal Difference Learning paper_content: TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it may make inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto (1996, Machine learning, 22:1–3, 33–57) eliminates all stepsize parameters and improves data efficiency. This paper updates Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting new algorithm is shown to be a practical, incremental formulation of supervised linear regression. Third, it presents a novel and intuitive interpretation of LSTD as a model-based reinforcement learning technique. --- paper_title: Reinforcement Learning: A Survey paper_content: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment.
The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem paper_content: In this paper we discuss an online algorithm based on policy iteration for learning the continuous-time (CT) optimal control solution with infinite horizon cost for nonlinear systems with known dynamics. That is, the algorithm learns online in real-time the solution to the optimal control design HJ equation. This method finds in real-time suitable approximations of both the optimal cost and the optimal control policy, while also guaranteeing closed-loop stability. We present an online adaptive algorithm implemented as an actor/critic structure which involves simultaneous continuous-time adaptation of both actor and critic neural networks. We call this 'synchronous' policy iteration. A persistence of excitation condition is shown to guarantee convergence of the critic to the actual optimal value function. Novel tuning algorithms are given for both critic and actor networks, with extra nonstandard terms in the actor tuning law being required to guarantee closed-loop dynamical stability. The convergence to the optimal controller is proven, and the stability of the system is also guaranteed. Simulation examples show the effectiveness of the new algorithm. --- paper_title: A Distributed Actor-Critic Algorithm and Applications to Mobile Sensor Network Coordination Problems paper_content: We introduce and establish the convergence of a distributed actor-critic method that orchestrates the coordination of multiple agents solving a general class of a Markov decision problem. 
The method leverages the centralized single-agent actor-critic algorithm of and uses a consensus-like algorithm for updating agents' policy parameters. As an application and to validate our approach we consider a reward collection problem as an instance of a multi-agent coordination problem in a partially known environment and subject to dynamical changes and communication constraints. --- paper_title: Reinforcement Learning and Dynamic Programming Using Function Approximators paper_content: From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments. --- paper_title: Natural Actor-Critic paper_content: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. 
The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Natural Actor-Critic for Road Traffic Optimisation paper_content: Current road-traffic optimisation practice around the world is a combination of hand tuned policies with a small degree of automatic adaption. Even state-of-the-art research controllers need good models of the road traffic, which cannot be obtained directly from existing sensors. We use a policy-gradient reinforcement learning approach to directly optimise the traffic signals, mapping currently deployed sensor observations to control signals. Our trained controllers are (theoretically) compatible with the traffic system used in Sydney and many other cities around the world. We apply two policy-gradient methods: (1) the recent natural actor-critic algorithm, and (2) a vanilla policy-gradient algorithm for comparison. Along the way we extend natural-actor critic approaches to work for distributed and online infinite-horizon problems. --- paper_title: Natural Actor-Critic paper_content: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. 
We show that actor improvements with natural policy gradients are particularly appealing as these are independent of coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm. --- paper_title: Policy Gradient Methods for Reinforcement Learning with Function Approximation paper_content: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Reinforcement Learning: A Survey paper_content: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. 
The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Optimality of Reinforcement Learning Algorithms with Linear Function Approximation paper_content: There are several reinforcement learning algorithms that yield approximate solutions for the problem of policy evaluation when the value function is represented with a linear function approximator. In this paper we show that each of the solutions is optimal with respect to a specific objective function. Moreover, we characterise the different solutions as images of the optimal exact value function under different projection operations. The results presented here will be useful for comparing the algorithms in terms of the error they achieve relative to the error of the optimal approximate solution. --- paper_title: Finite-Sample Analysis of LSTD paper_content: In this paper we consider the problem of policy evaluation in reinforcement learning, i.e., learning the value function of a fixed policy, using the least-squares temporal-difference (LSTD) learning algorithm. We report a finite-sample analysis of LSTD. We first derive a bound on the performance of the LSTD solution evaluated at the states generated by the Markov chain and used by the algorithm to learn an estimate of the value function. This result is general in the sense that no assumption is made on the existence of a stationary distribution for the Markov chain. We then derive generalization bounds in the case when the Markov chain possesses a stationary distribution and is β-mixing. --- paper_title: Feature-based methods for large scale dynamic programming paper_content: Summary form only given. 
We develop a methodological framework and present a few different ways in which dynamic programming and compact representations can be combined to solve large scale stochastic control problems. In particular, we develop algorithms that employ two types of feature-based compact representations; that is, representations that involve feature extraction and a relatively simple approximation architecture. We prove the convergence of these algorithms and provide bounds on the approximation error. As an example, one of these algorithms is used to generate a strategy for the game of Tetris. Furthermore, we provide a counter-example illustrating the difficulties of integrating compact representations with dynamic programming, which exemplifies the shortcomings of certain simple approaches. --- paper_title: Adaptive linear quadratic control using policy iteration paper_content: In this paper we present the stability and convergence results for dynamic programming-based reinforcement learning applied to linear quadratic regulation (LQR). The specific algorithm we analyze is based on Q-learning and it is proven to converge to an optimal controller provided that the underlying system is controllable and a particular signal vector is persistently excited. This is the first convergence result for DP-based reinforcement learning algorithms for a continuous problem. --- paper_title: An analysis of reinforcement learning with function approximation paper_content: We address the problem of computing the optimal Q-function in Markov decision problems with infinite state-space. We analyze the convergence properties of several variations of Q-learning when combined with function approximation, extending the analysis of TD-learning in (Tsitsiklis & Van Roy, 1996a) to stochastic control settings. We identify conditions under which such approximate methods converge with probability 1. We conclude with a brief discussion on the general applicability of our results and compare them with several related works. --- paper_title: Residual Algorithms: Reinforcement Learning with Function Approximation paper_content: ABSTRACT A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties. 
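The residual-algorithms abstract above contrasts direct (semi-gradient) updates with gradient descent on the mean squared Bellman residual. As a rough illustrative sketch only (not code from the cited paper), and assuming a linear value function V(s) = w·phi(s) evaluated under a fixed policy, the two update rules can be written as:

import numpy as np

def direct_td_update(w, phi_s, phi_s_next, reward, gamma, alpha):
    # Direct (semi-gradient) TD(0): only the current state's features carry the gradient.
    delta = reward + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    return w + alpha * delta * phi_s

def residual_gradient_update(w, phi_s, phi_s_next, reward, gamma, alpha):
    # Residual gradient: true gradient of 0.5 * delta**2, which also
    # differentiates through the bootstrapped next-state value.
    delta = reward + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    return w - alpha * delta * (gamma * phi_s_next - phi_s)

# Hypothetical transition with three-dimensional features.
w = np.zeros(3)
phi_s, phi_s_next = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(residual_gradient_update(w, phi_s, phi_s_next, reward=1.0, gamma=0.95, alpha=0.1))

The direct form is usually faster but can diverge with function approximation, while the residual-gradient form trades speed for the convergence guarantee described in the abstract.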
--- paper_title: Non-parametric policy gradients: a unified treatment of propositional and relational domains paper_content: Policy gradient approaches are a powerful instrument for learning how to interact with the environment. Existing approaches have focused on propositional and continuous domains only. Without extensive feature engineering, it is difficult - if not impossible - to apply them within structured domains, in which e.g. there is a varying number of objects and relations among them. In this paper, we describe a non-parametric policy gradient approach - called NPPG - that overcomes this limitation. The key idea is to apply Friedmann's gradient boosting: policies are represented as a weighted sum of regression models grown in an stage-wise optimization. Employing off-the-shelf regression learners, NPPG can deal with propositional, continuous, and relational domains in a unified way. Our experimental results show that it can even improve on established results. --- paper_title: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning paper_content: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms. --- paper_title: Relative Entropy Policy Search paper_content: Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. --- paper_title: Infinite-Horizon Policy-Gradient Estimation paper_content: Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. 
In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward --- paper_title: Policy Gradient Methods for Reinforcement Learning with Function Approximation paper_content: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. --- paper_title: A stochastic reinforcement learning algorithm for learning real-valued functions paper_content: Most of the research in reinforcement learning has been on problems with discrete action spaces. However, many control problems require the application of continuous control signals. In this paper, we present a stochastic reinforcement learning algorithm for learning functions with continuous outputs using a connectionist network. We define stochastic units that compute their real-valued outputs as a function of random activations generated using the normal distribution. Learning takes place by using our algorithm to adjust the two parameters of the normal distribution so as to increase the probability of producing the optimal real value for each input pattern. The performance of the algorithm is studied by using it to learn tasks of varying levels of difficulty. Further, as an example of a potential application, we present a network incorporating these stochastic real-valued units that learns to perform an underconstrained positioning task using a simulated 3 degree-of-freedom robot arm.
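Several of the abstracts above (Williams's REINFORCE, the policy-gradient theorem, and the stochastic real-valued units driven by a normal distribution) rest on the same likelihood-ratio idea: scale the score grad log pi(a|s) by a return signal. A minimal sketch for a Gaussian policy with a linear mean is given below; the batch interface, baseline and step size are illustrative assumptions, not details taken from those papers.

import numpy as np

def gaussian_reinforce_update(theta, sigma, states, actions, returns, alpha):
    # REINFORCE for a Gaussian policy a ~ N(theta . s, sigma**2) with a linear mean:
    # grad log pi(a|s) w.r.t. theta is ((a - theta . s) / sigma**2) * s.
    baseline = np.mean(returns)                 # simple variance-reducing baseline
    grad = np.zeros_like(theta)
    for s, a, ret in zip(states, actions, returns):
        grad += ((a - np.dot(theta, s)) / sigma**2) * s * (ret - baseline)
    return theta + alpha * grad / len(states)

# Hypothetical batch of episodes summarized as (state, action, return) triples.
rng = np.random.default_rng(1)
theta = np.zeros(2)
states = [rng.normal(size=2) for _ in range(10)]
actions = [float(np.dot(theta, s) + rng.normal(scale=0.5)) for s in states]
returns = [-(a - 1.0) ** 2 for a in actions]    # toy return favoring actions near 1.0
print(gaussian_reinforce_update(theta, 0.5, states, actions, returns, alpha=0.1))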
--- paper_title: PILCO: A Model-Based and Data-Efficient Approach to Policy Search paper_content: In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark paper_content: In this paper, we evaluate different versions from the three main kinds of model-free policy gradient methods, i.e., finite difference gradients, 'vanilla' policy gradients and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart pole regulator benchmark we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both plant and algorithms; thus, the results in this paper can be reevaluated, reused and new algorithms can be inserted with ease --- paper_title: Reinforcement learning of motor skills with policy gradients paper_content: Autonomous learning is one of the hallmarks of human and animal behavior, and understanding the principles of learning will be crucial in order to achieve true autonomy in advanced machines like humanoid robots. In this paper, we examine learning of complex motor skills with human-like limbs. While supervised learning can offer useful tools for bootstrapping behavior, e.g., by learning from demonstration, it is only reinforcement learning that offers a general approach to the final trial-and-error improvement that is needed by each individual acquiring a skill. 
Neither neurobiological nor machine learning studies have, so far, offered compelling results on how reinforcement learning can be scaled to the high-dimensional continuous state and action spaces of humans or humanoids. Here, we combine two recent research developments on learning motor control in order to achieve this scaling. First, we interpret the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning. Second, we combine motor primitives with the theory of stochastic policy gradient learning, which currently seems to be the only feasible framework for reinforcement learning for humanoids. We evaluate different policy gradient methods with a focus on their applicability to parameterized motor primitives. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic, outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm. --- paper_title: Policy Search in Kernel Hilbert Space paper_content: Much recent work in reinforcement learning and stochastic optimal control has focused on algorithms that search directly through a space of policies rather than building approximate value functions. Policy search has numerous advantages: it does not rely on the Markov assumption, domain knowledge may be encoded in a policy, the policy may require less representational power than a value-function approximation, and stable and convergent algorithms are well-understood. In contrast with value-function methods, however, existing approaches to policy search have heretofore focused entirely on parametric approaches. This places fundamental limits on the kind of policies that can be represented. In this work, we show how policy search (with or without the additional guidance of value-functions) in a Reproducing Kernel Hilbert Space gives a simple and rigorous extension of the technique to non-parametric settings. In particular, we investigate a new class of algorithms which generalize REINFORCE-style likelihood ratio methods to yield both online and batch techniques that perform gradient search in a function space of policies. Further, we describe the computational tools that allow efficient implementation. Finally, we apply our new techniques towards interesting reinforcement learning problems. --- paper_title: A Multi-agent Reinforcement Learning using Actor-Critic methods paper_content: This paper investigates a new algorithm in Multi-agent Reinforcement Learning. We propose a multi-agent learning algorithm that extends single-agent actor-critic methods to the multi-agent setting. To realize the algorithm, we introduced the value of an agent's temporal best-response strategy instead of the value of an equilibrium. So, our algorithm uses linear programming to compute Q values. When there are multiple Nash equilibria in the game, a mixed equilibrium is reached. Our learning algorithm works within the very general framework of n-player, general-sum stochastic games, and learns both the game structure and its associated optimal policy.
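The Episodic Natural Actor-Critic mentioned above premultiplies the ordinary policy gradient by the inverse Fisher information matrix of the policy. Purely to illustrate that step (this is not the authors' implementation; the score vectors and returns below are synthetic and the regularizer is an added assumption), one natural-gradient update could be sketched as:

import numpy as np

def natural_gradient_step(score_vectors, returns, theta, alpha, reg=1e-3):
    # Vanilla gradient estimate: average of (grad log pi) * return.
    scores = np.asarray(score_vectors)
    g = np.mean(scores * np.asarray(returns)[:, None], axis=0)
    # Empirical Fisher information matrix built from the same score vectors.
    fisher = scores.T @ scores / len(scores) + reg * np.eye(len(theta))
    # Natural gradient: solve F x = g instead of using g directly.
    return theta + alpha * np.linalg.solve(fisher, g)

# Hypothetical data: 20 episodes, 3 policy parameters.
rng = np.random.default_rng(2)
scores = rng.normal(size=(20, 3))
returns = rng.normal(size=20)
print(natural_gradient_step(scores, returns, theta=np.zeros(3), alpha=0.05))

Because the Fisher matrix is built from the policy's own score function, the resulting update direction does not depend on the chosen parameterization, which is the coordinate-invariance property the Natural Actor-Critic abstracts emphasize.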
--- paper_title: Learning to predict by the methods of temporal differences paper_content: This article introduces a class of incremental learning procedures specialized for prediction-that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. --- paper_title: Natural Actor-Critic paper_content: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm. --- paper_title: Neuronlike adaptive elements that can solve difficult learning control problems paper_content: It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neurolike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences. 
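The two abstracts above, temporal-difference learning and the ASE/ACE pole-balancing architecture, describe the basic actor-critic loop: the critic forms a TD error from temporally successive predictions, and the same error reinforces the actor. A compressed tabular sketch of that loop follows; the five-state chain environment, step sizes and softmax actor are made up for illustration and are not the original cart-pole setup (which also uses eligibility traces).

import numpy as np

class ToyChain:
    # Hypothetical 5-state chain; action 1 moves right, action 0 moves left,
    # and reaching the right end gives reward 1 and ends the episode.
    def __init__(self):
        self.n_states, self.n_actions = 5, 2
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + 1, self.n_states - 1) if a == 1 else max(self.s - 1, 0)
        done = self.s == self.n_states - 1
        return self.s, (1.0 if done else 0.0), done

def actor_critic_episode(env, V, prefs, alpha_v, alpha_p, gamma, rng):
    # Tabular actor-critic: the critic's TD error drives both the value update
    # (ACE-like) and the action-preference update (ASE-like).
    s, done = env.reset(), False
    while not done:
        probs = np.exp(prefs[s] - prefs[s].max())
        probs /= probs.sum()                      # softmax (Gibbs) actor
        a = rng.choice(env.n_actions, p=probs)
        s_next, r, done = env.step(a)
        td_error = r + (0.0 if done else gamma * V[s_next]) - V[s]
        V[s] += alpha_v * td_error
        prefs[s, a] += alpha_p * td_error
        s = s_next
    return V, prefs

rng = np.random.default_rng(3)
env = ToyChain()
V, prefs = np.zeros(env.n_states), np.zeros((env.n_states, env.n_actions))
for _ in range(200):
    V, prefs = actor_critic_episode(env, V, prefs, 0.1, 0.1, 0.95, rng)
print(np.round(V, 2))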
--- paper_title: Continuous-Time Adaptive Critics paper_content: A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error builds up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume. --- paper_title: Technical Update: Least-Squares Temporal Difference Learning paper_content: TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it may make inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto (1996, Machine learning, 22:1–3, 33–57) eliminates all stepsize parameters and improves data efficiency. This paper updates Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting new algorithm is shown to be a practical, incremental formulation of supervised linear regression. Third, it presents a novel and intuitive interpretation of LSTD as a model-based reinforcement learning technique. --- paper_title: Gradient Descent for General Reinforcement Learning paper_content: A simple learning rule is derived, the VAPS algorithm, which can be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open problems, define several new approaches to reinforcement learning, and unify different approaches to reinforcement learning under a single theory. These algorithms all have guaranteed convergence, and include modifications of several existing algorithms that were known to fail to converge on simple MDPs. These include Q-learning, SARSA, and advantage learning. In addition to these value-based algorithms it also generates pure policy-search reinforcement-learning algorithms, which learn optimal policies without learning a value function. In addition, it allows policy-search and value-based algorithms to be combined, thus unifying two very different approaches to reinforcement learning into a single Value and Policy Search (VAPS) algorithm. And these algorithms converge for POMDPs without requiring a proper belief state. Simulation results are given, and several areas for future research are discussed.
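The LSTD abstract above replaces stepsize-driven TD updates with a least-squares solve. Under the usual linear parameterization V(s) ≈ w·phi(s), a batch LSTD(0) estimate can be sketched as below; the features and transitions are hypothetical, and this is a generic illustration rather than code from the cited paper.

import numpy as np

def lstd0(transitions, gamma, reg=1e-6):
    # Batch LSTD(0): accumulate A = sum phi (phi - gamma * phi')^T and b = sum phi * r,
    # then solve A w = b for the value-function weights (reg keeps A invertible).
    k = len(transitions[0][0])
    A = reg * np.eye(k)
    b = np.zeros(k)
    for phi, r, phi_next in transitions:
        A += np.outer(phi, phi - gamma * phi_next)
        b += phi * r
    return np.linalg.solve(A, b)

# Hypothetical sampled transitions (phi(s), reward, phi(s')).
rng = np.random.default_rng(4)
transitions = [(rng.normal(size=4), rng.normal(), rng.normal(size=4)) for _ in range(50)]
print(np.round(lstd0(transitions, gamma=0.9), 3))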
--- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: An adaptive optimal controller for discrete-time Markov environments paper_content: This paper describes an adaptive controller for discrete-time stochastic environments. The controller receives the environment's current state and a reward signal which indicates the desirability of that state. In response, it selects an appropriate control action and notes its effect. The cycle repeats indefinitely. The control environments to be tackled include the well-known n -armed bandit problem, and the adaptive controller comprises an ensemble of n -armed bandit controllers, suitably interconnected. The design of these constituent elements is not discussed. It is shown that, under certain conditions, the controller's actions eventually become optimal for the particular control task with which it is faced, in the sense that they maximize the expected reward obtained in the future. --- paper_title: Residual Algorithms: Reinforcement Learning with Function Approximation paper_content: ABSTRACT A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties. 
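One of the abstracts above builds its adaptive controller from an ensemble of n-armed bandit learners. As a generic illustration of that building block only (the reward probabilities, exploration rate and horizon are invented, and this is not the interconnection scheme from the cited paper), a sample-average bandit learner with epsilon-greedy selection can be sketched as:

import numpy as np

def epsilon_greedy_bandit(reward_probs, steps, epsilon, rng):
    # Sample-average action-value estimates with epsilon-greedy action selection.
    n_arms = len(reward_probs)
    q_est, counts = np.zeros(n_arms), np.zeros(n_arms)
    for _ in range(steps):
        if rng.random() < epsilon:
            a = int(rng.integers(n_arms))              # explore
        else:
            a = int(np.argmax(q_est))                  # exploit
        r = float(rng.random() < reward_probs[a])      # Bernoulli reward
        counts[a] += 1
        q_est[a] += (r - q_est[a]) / counts[a]         # incremental sample average
    return q_est

rng = np.random.default_rng(5)
print(np.round(epsilon_greedy_bandit([0.2, 0.5, 0.8], steps=2000, epsilon=0.1, rng=rng), 2))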
--- paper_title: Average cost temporal-difference learning paper_content: We describe a variant of temporal-difference learning that approximates average and differential costs of an irreducible aperiodic Markov chain. Approximations are comprised of linear combinations of fixed basis functions whose weights are incrementally updated during a single endless trajectory of the Markov chain. We present results concerning convergence and the limit of convergence. We also provide a bound on the resulting approximation error that exhibits an interesting dependence on the "mixing time" of the Markov chain. The results parallel previous work by the authors (1997), involving approximations of discounted cost-to-go. --- paper_title: An actor-critic method using Least Squares Temporal Difference learning paper_content: In this paper, we use a Least Squares Temporal Difference (LSTD) algorithm in an actor-critic framework where the actor and the critic operate concurrently. That is, instead of learning the value function or policy gradient of a fixed policy, the critic carries out its learning on one sample path while the policy is slowly varying. Convergence of such a process has previously been proven for the first order TD algorithms, TD(λ) and TD(1). However, the conversion to the more powerful LSTD turns out not straightforward, because some conditions on the stepsize sequences must be modified for the LSTD case. We propose a solution and prove the convergence of the process. Furthermore, we apply the LSTD actor-critic to an application of intelligently dispatching forklifts in a warehouse. --- paper_title: Linear Least-Squares algorithms for temporal difference learning paper_content: We introduce two new temporal diffence (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Squares TD (RLS TD). Although these new TD algorithms require more computation per time-step than do Sutton's TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce theTD error variance of a Markov chain, ωTD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on ωTD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters. --- paper_title: Policy Gradient Methods for Reinforcement Learning with Function Approximation paper_content: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. 
Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. --- paper_title: Adaptive linear quadratic control using policy iteration paper_content: In this paper we present the stability and convergence results for dynamic programming-based reinforcement learning applied to linear quadratic regulation (LQR). The specific algorithm we analyze is based on Q-learning and it is proven to converge to an optimal controller provided that the underlying system is controllable and a particular signal vector is persistently excited. This is the first convergence result for DP-based reinforcement learning algorithms for a continuous problem. --- paper_title: Stable Function Approximation in Dynamic Programming paper_content: The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of approximate value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment. --- paper_title: Natural Actor-Critic paper_content: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm. --- paper_title: Neuronlike adaptive elements that can solve difficult learning control problems paper_content: It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. 
It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neurolike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences. --- paper_title: Policy Gradient Methods for Reinforcement Learning with Function Approximation paper_content: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. --- paper_title: Feature-based methods for large scale dynamic programming paper_content: Summary form only given. We develop a methodological framework and present a few different ways in which dynamic programming and compact representations can be combined to solve large scale stochastic control problems. In particular, we develop algorithms that employ two types of feature-based compact representations; that is, representations that involve feature extraction and a relatively simple approximation architecture. We prove the convergence of these algorithms and provide bounds on the approximation error. As an example, one of these algorithms is used to generate a strategy for the game of Tetris. Furthermore, we provide a counter-example illustrating the difficulties of integrating compact representations with dynamic programming, which exemplifies the shortcomings of certain simple approaches. --- paper_title: Reinforcement Learning for Humanoid Robotics paper_content: Reinforcement learning offers one of the most general framework to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three different categories, i.e., greedy methods, ‘vanilla’ policy gradient methods, and natural gradient methods. We discuss that greedy methods are not likely to scale into the domain humanoid robotics as they are problematic when used with function approximation. 
‘Vanilla’ policy gradient methods on the other hand have been successfully applied on real-world robots including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and for learning nonlinear dynamic motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement learning for truly high-dimensionally continuous state-action systems. --- paper_title: Some studies in machine learning using the game of checkers paper_content: Two machine-learning procedures have been investigated in some detail using the game of checkers. Enough work has been done to verify the fact that a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program. Further-more, it can learn to do this in a remarkably short period of time (8 or 10 hours of machine-playing time) when given only the rules of the game, a sense of direction, and a redundant and incomplete list of parameters which are thought to have something to do with the game, but whose correct signs and relative weights are unknown and unspecified. The principles of machine learning verified by these experiments are, of course, applicable to many other situations. --- paper_title: A stochastic reinforcement learning algorithm for learning real-valued functions paper_content: Abstract Most of the research in reinforcement learning has been on problems with discrete action spaces. However, many control problems require the application of continuous control signals. In this paper, we present a stochastic reinforcement learning algorithm for learning functions with continuous outputs using a connectionist network. We define stochastic units that compute their real-valued outputs as a function of random activations generated using the normal distribution. Learning takes place by using our algorithm to adjust the two parameters of the normal distribution so as to increase the probability of producing the optimal real value for each input pattern. The performance of the algorithm is studied by using it to learn tasks of varying levels of difficulty. Further, as an example of a potential application, we present a network incorporating these stochastic real-valued units that learns to perform an underconstrained positioning task using a simulated 3 degree-of-freedom robot arm. --- paper_title: An adaptive optimal controller for discrete-time Markov environments paper_content: This paper describes an adaptive controller for discrete-time stochastic environments. The controller receives the environment's current state and a reward signal which indicates the desirability of that state. In response, it selects an appropriate control action and notes its effect. The cycle repeats indefinitely. 
The control environments to be tackled include the well-known n -armed bandit problem, and the adaptive controller comprises an ensemble of n -armed bandit controllers, suitably interconnected. The design of these constituent elements is not discussed. It is shown that, under certain conditions, the controller's actions eventually become optimal for the particular control task with which it is faced, in the sense that they maximize the expected reward obtained in the future. --- paper_title: Residual Algorithms: Reinforcement Learning with Function Approximation paper_content: ABSTRACT A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such as a sigmoidal multilayer perceptron, a radial-basis-function system, a memory-based learning system, or even a linear function-approximation system. A new class of algorithms, residual gradient algorithms, is proposed, which perform gradient descent on the mean squared Bellman residual, guaranteeing convergence. It is shown, however, that they may learn very slowly in some cases. A larger class of algorithms, residual algorithms, is proposed that has the guaranteed convergence of the residual gradient algorithms, yet can retain the fast learning speed of direct algorithms. In fact, both direct and residual gradient algorithms are shown to be special cases of residual algorithms, and it is shown that residual algorithms can combine the advantages of each approach. The direct, residual gradient, and residual forms of value iteration, Q-learning, and advantage learning are all presented. Theoretical analysis is given explaining the properties these algorithms have, and simulation results are given that demonstrate these properties. --- paper_title: A fuzzy Actor-Critic reinforcement learning network paper_content: One of the difficulties encountered in the application of reinforcement learning methods to real-world problems is their limited ability to cope with large-scale or continuous spaces. In order to solve the curse of the dimensionality problem, resulting from making continuous state or action spaces discrete, a new fuzzy Actor-Critic reinforcement learning network (FACRLN) based on a fuzzy radial basis function (FRBF) neural network is proposed. The architecture of FACRLN is realized by a four-layer FRBF neural network that is used to approximate both the action value function of the Actor and the state value function of the Critic simultaneously. The Actor and the Critic networks share the input, rule and normalized layers of the FRBF network, which can reduce the demands for storage space from the learning system and avoid repeated computations for the outputs of the rule units. Moreover, the FRBF network is able to adjust its structure and parameters in an adaptive way with a novel self-organizing approach according to the complexity of the task and the progress in learning, which ensures an economic size of the network. Experimental studies concerning a cart-pole balancing control illustrate the performance and applicability of the proposed FACRLN. --- paper_title: Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation paper_content: The problem of finding a root of the multivariate gradient equation that arises in function minimization is considered. 
When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. The paper presents an SA algorithm that is based on a simultaneous perturbation gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz type procedures. Theory and numerical experience indicate that the algorithm can be significantly more efficient than the standard algorithms in large-dimensional problems. --- paper_title: Neuronlike adaptive elements that can solve difficult learning control problems paper_content: It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neuronlike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences. --- paper_title: Policy Gradient Methods for Reinforcement Learning with Function Approximation paper_content: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. --- paper_title: An actor-critic algorithm with function approximation for discounted cost constrained Markov decision processes paper_content: We develop in this article the first actor-critic reinforcement learning algorithm with function approximation for a problem of control under multiple inequality constraints. We consider the infinite horizon discounted cost framework in which both the objective and the constraint functions are suitable expected policy-dependent discounted sums of certain sample path functions. We apply the Lagrange multiplier method to handle the inequality constraints.
Our algorithm makes use of multi-timescale stochastic approximation and incorporates a temporal difference (TD) critic and an actor that makes a gradient search in the space of policy parameters using efficient simultaneous perturbation stochastic approximation (SPSA) gradient estimates. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal policy. --- paper_title: Application of actor-critic learning to adaptive state space construction paper_content: To apply reinforcement learning to complicated and continuous systems, an adaptive control scheme based on normalized radial basis functions (NRBF) under the actor-critic structure is proposed. The state value function and the action-state value function are approximated by the same normalized radial basis function neural network. Taking adaptivity and computational efficiency into account, the input layer and hidden layer of the NRBF network are shared by the actor and the critic. The units of the hidden layer can be adaptively added and deleted according to task requirements during the learning process. The method was applied to the balancing of an inverted pendulum, and the simulation results in the paper illustrate the validity of the proposed algorithm. --- paper_title: A convergent actor-critic-based FRL algorithm with application to power management of wireless transmitters paper_content: This paper provides the first convergence proof for fuzzy reinforcement learning (FRL) as well as experimental results supporting our analysis. We extend the work of Konda and Tsitsiklis, who presented a convergent actor-critic (AC) algorithm for a general parameterized actor. In our work we prove that a fuzzy rulebase actor satisfies the necessary conditions that guarantee the convergence of its parameters to a local optimum. Our fuzzy rulebase uses Takagi-Sugeno-Kang rules, Gaussian membership functions, and product inference. As an application domain, we chose the difficult task of power control in wireless transmitters, characterized by delayed rewards and a high degree of stochasticity. To the best of our knowledge, no reinforcement learning algorithms have been previously applied to this task. Our simulation results show that the ACFRL algorithm consistently converges in this domain to a locally optimal policy. --- paper_title: Q-learning paper_content: Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q-values can be changed each iteration, rather than just one. --- paper_title: Feature-based methods for large scale dynamic programming paper_content: Summary form only given.
We develop a methodological framework and present a few different ways in which dynamic programming and compact representations can be combined to solve large scale stochastic control problems. In particular, we develop algorithms that employ two types of feature-based compact representations; that is, representations that involve feature extraction and a relatively simple approximation architecture. We prove the convergence of these algorithms and provide bounds on the approximation error. As an example, one of these algorithms is used to generate a strategy for the game of Tetris. Furthermore, we provide a counter-example illustrating the difficulties of integrating compact representations with dynamic programming, which exemplifies the shortcomings of certain simple approaches. --- paper_title: Technical Update: Least-Squares Temporal Difference Learning paper_content: TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it may make inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto (1996, Machine Learning, 22:1–3, 33–57) eliminates all stepsize parameters and improves data efficiency. This paper updates Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting new algorithm is shown to be a practical, incremental formulation of supervised linear regression. Third, it presents a novel and intuitive interpretation of LSTD as a model-based reinforcement learning technique. --- paper_title: Reinforcement Learning for Humanoid Robotics paper_content: Reinforcement learning offers one of the most general framework to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three different categories, i.e., greedy methods, ‘vanilla’ policy gradient methods, and natural gradient methods. We discuss that greedy methods are not likely to scale into the domain humanoid robotics as they are problematic when used with function approximation. ‘Vanilla’ policy gradient methods on the other hand have been successfully applied on real-world robots including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and for learning nonlinear dynamic motor primitives for humanoid robot control.
It offers a promising route for the development of reinforcement learning for truly high-dimensionally continuous state-action systems. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Importance sampling actor-critic algorithms paper_content: Importance sampling (IS) and actor-critic are two methods which have been used to reduce the variance of gradient estimates in policy gradient optimization methods. We show how IS can be used with temporal difference methods to estimate a cost function parameter for one policy using the entire history of system interactions incorporating many different policies. The resulting algorithm is then applied to improving gradient estimates in a policy gradient optimization. The empirical results demonstrate a 20-40 /spl times/ reduction in variance over the IS estimator for an example queueing problem, resulting in a similar factor of improvement in convergence for a gradient search. --- paper_title: A fuzzy reinforcement learning approach to power control in wireless transmitters paper_content: We address the issue of power-controlled shared channel access in wireless networks supporting packetized data traffic. We formulate this problem using the dynamic programming framework and present a new distributed fuzzy reinforcement learning algorithm (ACFRL-2) capable of adequately solving a class of problems to which the power control problem belongs. Our experimental results show that the algorithm converges almost deterministically to a neighborhood of optimal parameter values, as opposed to a very noisy stochastic convergence of earlier algorithms. The main tradeoff facing a transmitter is to balance its current power level with future backlog in the presence of stochastically changing interference. Simulation experiments demonstrate that the ACFRL-2 algorithm achieves significant performance gains over the standard power control approach used in CDMA2000. Such a large improvement is explained by the fact that ACFRL-2 allows transmitters to learn implicit coordination policies, which back off under stressful channel conditions as opposed to engaging in escalating "power wars". 
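Several entries in this group pair a TD critic with a likelihood-ratio actor and, in the importance-sampling variant above, reweight updates computed from off-policy data. The fragment below is an illustrative one-step actor-critic for tabular states with a softmax policy; it is not any specific algorithm from these papers, and the importance weight `rho` (π(a|s)/b(a|s) under a behaviour policy b, or 1.0 on-policy) is included only to show where such a correction would enter.

```python
import numpy as np

def softmax(prefs):
    prefs = prefs - prefs.max()            # shift for numerical stability
    e = np.exp(prefs)
    return e / e.sum()

def actor_critic_step(theta, v, s, a, r, s_next, done,
                      gamma=0.99, alpha_v=0.1, alpha_pi=0.01, rho=1.0):
    """One-step actor-critic update for tabular states and a softmax policy.

    theta: (n_states, n_actions) array of policy preferences
    v:     (n_states,) array of state values (the critic)
    rho:   importance weight pi(a|s) / b(a|s); use 1.0 when acting on-policy
    """
    target = r if done else r + gamma * v[s_next]
    delta = target - v[s]                          # TD error, serves as the advantage signal
    v[s] += alpha_v * rho * delta                  # critic moves toward the (weighted) TD target
    grad_log = -softmax(theta[s])                  # d/d theta[s, :] of log pi(a | s)
    grad_log[a] += 1.0
    theta[s] += alpha_pi * rho * delta * grad_log  # actor ascends the estimated policy gradient
    return delta
```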
--- paper_title: An actor-critic method using Least Squares Temporal Difference learning paper_content: In this paper, we use a Least Squares Temporal Difference (LSTD) algorithm in an actor-critic framework where the actor and the critic operate concurrently. That is, instead of learning the value function or policy gradient of a fixed policy, the critic carries out its learning on one sample path while the policy is slowly varying. Convergence of such a process has previously been proven for the first order TD algorithms, TD(λ) and TD(1). However, the conversion to the more powerful LSTD turns out not straightforward, because some conditions on the stepsize sequences must be modified for the LSTD case. We propose a solution and prove the convergence of the process. Furthermore, we apply the LSTD actor-critic to an application of intelligently dispatching forklifts in a warehouse. --- paper_title: Linear Least-Squares algorithms for temporal difference learning paper_content: We introduce two new temporal diffence (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Squares TD (RLS TD). Although these new TD algorithms require more computation per time-step than do Sutton's TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce theTD error variance of a Markov chain, ωTD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on ωTD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters. --- paper_title: Why natural gradient? paper_content: Gradient adaptation is a useful technique for adjusting a set of parameters to minimize a cost function. While often easy to implement, the convergence speed of gradient adaptation can be slow when the slope of the cost function varies widely for small changes in the parameters. In this paper, we outline an alternative technique, termed natural gradient adaptation, that overcomes the poor convergence properties of gradient adaptation in many cases. The natural gradient is based on differential geometry and employs knowledge of the Riemannian structure of the parameter space to adjust the gradient search direction. Unlike Newton's method, natural gradient adaptation does not assume a locally-quadratic cost function. Moreover, for maximum likelihood estimation tasks, natural gradient adaptation is asymptotically Fisher-efficient. A simple example illustrates the desirable properties of natural gradient adaptation.
--- paper_title: Learning to predict by the methods of temporal differences paper_content: This article introduces a class of incremental learning procedures specialized for prediction-that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. --- paper_title: Natural Gradient Works Efficiently in Learning paper_content: When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed. --- paper_title: Non-parametric policy gradients: a unified treatment of propositional and relational domains paper_content: Policy gradient approaches are a powerful instrument for learning how to interact with the environment. Existing approaches have focused on propositional and continuous domains only. Without extensive feature engineering, it is difficult - if not impossible - to apply them within structured domains, in which e.g. there is a varying number of objects and relations among them. In this paper, we describe a non-parametric policy gradient approach - called NPPG - that overcomes this limitation.
The key idea is to apply Friedmann's gradient boosting: policies are represented as a weighted sum of regression models grown in an stage-wise optimization. Employing off-the-shelf regression learners, NPPG can deal with propositional, continuous, and relational domains in a unified way. Our experimental results show that it can even improve on established results. --- paper_title: Reinforcement Learning for Humanoid Robotics paper_content: Reinforcement learning offers one of the most general framework to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three different categories, i.e., greedy methods, ‘vanilla’ policy gradient methods, and natural gradient methods. We discuss that greedy methods are not likely to scale into the domain humanoid robotics as they are problematic when used with function approximation. ‘Vanilla’ policy gradient methods on the other hand have been successfully applied on real-world robots including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and for learning nonlinear dynamic motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement learning for truly high-dimensionally continuous state-action systems. --- paper_title: Natural Gradient Works Efficiently in Learning paper_content: When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed. --- paper_title: A Generalized Natural Actor-Critic Algorithm paper_content: Policy gradient Reinforcement Learning (RL) algorithms have received substantial attention, seeking stochastic policies that maximize the average (or discounted cumulative) reward. In addition, extensions based on the concept of the Natural Gradient (NG) show promising learning efficiency because these regard metrics for the task. 
Though there are two candidate metrics, Kakade's Fisher Information Matrix (FIM) for the policy (action) distribution and Morimura's FIM for the state-action joint distribution, but all RL algorithms with NG have followed Kakade's approach. In this paper, we describe a generalized Natural Gradient (gNG) that linearly interpolates the two FIMs and propose an efficient implementation for the gNG learning based on a theory of the estimating function, the generalized Natural Actor-Critic (gNAC) algorithm. The gNAC algorithm involves a near optimal auxiliary function to reduce the variance of the gNG estimates. Interestingly, the gNAC can be regarded as a natural extension of the current state-of-the-art NAC algorithm [1], as long as the interpolating parameter is appropriately selected. Numerical experiments showed that the proposed gNAC algorithm can estimate gNG efficiently and outperformed the NAC algorithm. --- paper_title: Incremental Natural Actor-Critic Algorithms paper_content: We present four new reinforcement learning algorithms based on actor-critic and natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms. --- paper_title: Natural Actor-Critic paper_content: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm. 
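The natural actor-critic entries above replace the ordinary policy gradient g with the natural gradient F⁻¹g, where F is the Fisher information of the policy. A naive Monte-Carlo version of that correction, built directly from sampled score vectors ∇θ log π(a|s), is sketched below; the cited papers obtain the same direction far more efficiently (for example, as the weights of a compatible-feature critic), so this is purely didactic, and the ridge term is an assumption added for numerical safety.

```python
import numpy as np

def natural_gradient(score_vectors, advantages, ridge=1e-3):
    """Estimate the natural policy gradient F^{-1} g from samples.

    score_vectors: (N, d) array, row i = grad_theta log pi(a_i | s_i)
    advantages:    (N,) array of advantage (or return) estimates
    Returns the natural-gradient direction in parameter space.
    """
    scores = np.asarray(score_vectors, dtype=float)
    adv = np.asarray(advantages, dtype=float)
    g = scores.T @ adv / len(adv)          # vanilla likelihood-ratio gradient estimate
    F = scores.T @ scores / len(adv)       # Fisher information estimate E[score score^T]
    F += ridge * np.eye(F.shape[0])        # regularize before solving
    return np.linalg.solve(F, g)
```

A typical usage sketch would collect (score, advantage) pairs from rollouts, call `natural_gradient`, and take a small ascent step along the returned direction.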
--- paper_title: Non-parametric policy gradients: a unified treatment of propositional and relational domains paper_content: Policy gradient approaches are a powerful instrument for learning how to interact with the environment. Existing approaches have focused on propositional and continuous domains only. Without extensive feature engineering, it is difficult - if not impossible - to apply them within structured domains, in which e.g. there is a varying number of objects and relations among them. In this paper, we describe a non-parametric policy gradient approach - called NPPG - that overcomes this limitation. The key idea is to apply Friedmann's gradient boosting: policies are represented as a weighted sum of regression models grown in an stage-wise optimization. Employing off-the-shelf regression learners, NPPG can deal with propositional, continuous, and relational domains in a unified way. Our experimental results show that it can even improve on established results. --- paper_title: Basis expansion in natural actor critic methods paper_content: In reinforcement learning, the aim of the agent is to find a policy that maximizes its expected return. Policy gradient methods try to accomplish this goal by directly approximating the policy using a parametric function approximator; the expected return of the current policy is estimated and its parameters are updated by steepest ascent in the direction of the gradient of the expected return with respect to the policy parameters. In general, the policy is defined in terms of a set of basis functions that capture important features of the problem. Since the quality of the resulting policies directly depend on the set of basis functions, and defining them gets harder as the complexity of the problem increases, it is important to be able to find them automatically. In this paper, we propose a new approach which uses cascade-correlation learning architecture for automatically constructing a set of basis functions within the context of Natural Actor-Critic (NAC) algorithms. Such basis functions allow more complex policies be represented, and consequently improve the performance of the resulting policies. We also present the effectiveness of the method empirically. --- paper_title: Fitted natural actor-critic: A new algorithm for continuous state-action MDPs paper_content: In this paper we address reinforcement learning problems with continuous state-action spaces. We propose a new algorithm, fitted natural actor-critic(FNAC), that extends the work in [1] to allow for general function approximation and data reuse. We combine the natural actor-critic architecture [1] with a variant of fitted value iteration using importance sampling. The method thus obtained combines the appealing features of both approaches while overcoming their main weaknesses: the use of a gradient-based actor readily overcomes the difficulties found in regression methods with policy optimization in continuous action-spaces; in turn, the use of a regression-based critic allows for efficient use of data and avoids convergence problems that TD-based critics often exhibit. We establish the convergence of our algorithm and illustrate its application in a simple continuous space, continuous action problem. --- paper_title: Natural gradient actor-critic algorithms using random rectangular coarse coding paper_content: Learning performance of natural gradient actor-critic algorithms is outstanding especially in high-dimensional spaces than conventional actor-critic algorithms. 
However, representation issues for stochastic policies or value functions remain, because actor-critic approaches require these representations to be designed carefully. The author has proposed random rectangular coarse coding, which is very simple and well suited to approximating Q-values in high-dimensional state-action spaces. This paper presents a quantitative analysis of the random coarse coding compared with regular-grid approaches, and proposes a new approach that combines the natural gradient actor-critic with the random rectangular coarse coding. --- paper_title: Technical Update: Least-Squares Temporal Difference Learning paper_content: TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it may make inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto (1996, Machine Learning, 22:1–3, 33–57) eliminates all stepsize parameters and improves data efficiency. This paper updates Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from λ = 0 to arbitrary values of λ; at the extreme of λ = 1, the resulting new algorithm is shown to be a practical, incremental formulation of supervised linear regression. Third, it presents a novel and intuitive interpretation of LSTD as a model-based reinforcement learning technique. --- paper_title: Reinforcement Learning for Humanoid Robotics paper_content: Reinforcement learning offers one of the most general framework to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three different categories, i.e., greedy methods, ‘vanilla’ policy gradient methods, and natural gradient methods. We discuss that greedy methods are not likely to scale into the domain humanoid robotics as they are problematic when used with function approximation. ‘Vanilla’ policy gradient methods on the other hand have been successfully applied on real-world robots including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and for learning nonlinear dynamic motor primitives for humanoid robot control.
--- paper_title: Natural Gradient Works Efficiently in Learning paper_content: When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed. --- paper_title: Adaptive linear quadratic control using policy iteration paper_content: In this paper we present the stability and convergence results for dynamic programming-based reinforcement learning applied to linear quadratic regulation (LQR). The specific algorithm we analyze is based on Q-learning and it is proven to converge to an optimal controller provided that the underlying system is controllable and a particular signal vector is persistently excited. This is the first convergence result for DP-based reinforcement learning algorithms for a continuous problem. --- paper_title: A stochastic reinforcement learning algorithm for learning real-valued functions paper_content: Abstract Most of the research in reinforcement learning has been on problems with discrete action spaces. However, many control problems require the application of continuous control signals. In this paper, we present a stochastic reinforcement learning algorithm for learning functions with continuous outputs using a connectionist network. We define stochastic units that compute their real-valued outputs as a function of random activations generated using the normal distribution. Learning takes place by using our algorithm to adjust the two parameters of the normal distribution so as to increase the probability of producing the optimal real value for each input pattern. The performance of the algorithm is studied by using it to learn tasks of varying levels of difficulty. Further, as an example of a potential application, we present a network incorporating these stochastic real-valued units that learns to perform an underconstrained positioning task using a simulated 3 degree-of-freedom robot arm. --- paper_title: An RLS-Based Natural Actor-Critic Algorithm for Locomotion of a Two-Linked Robot Arm paper_content: Recently, actor-critic methods have drawn much interests in the area of reinforcement learning, and several algorithms have been studied along the line of the actor-critic strategy. This paper studies an actor-critic type algorithm utilizing the RLS(recursive least-squares) method, which is one of the most efficient techniques for adaptive signal processing, together with natural policy gradient. In the actor part of the studied algorithm, we follow the strategy of performing parameter update via the natural gradient method, while in its update for the critic part, the recursive least-squares method is employed in order to make the parameter estimation for the value functions more efficient. 
The studied algorithm was applied to locomotion of a two-linked robot arm, and showed better performance compared to the conventional stochastic gradient ascent algorithm. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: A new natural policy gradient by stationary distribution metric paper_content: The parameter space of a statistical learning machine has a Riemannian metric structure in terms of its objective function. [1] Amari proposed the concept of "natural gradient" that takes the Riemannian metric of the parameter space into account. Kakade [2] applied it to policy gradient reinforcement learning, called a natural policy gradient (NPG). Although NPGs evidently depend on the underlying Riemannian metrics, careful attention was not paid to the alternative choice of the metric in previous studies. In this paper, we propose a Riemannian metric for the joint distribution of the state-action, which is directly linked with the average reward, and derive a new NPG named "Natural State-action Gradient"(NSG). Then, we prove that NSG can be computed by fitting a certain linear model into the immediate reward function. In numerical experiments, we verify that the NSG learning can handle MDPs with a large number of states, for which the performances of the existing (N)PG methods degrade. --- paper_title: Derivatives of Logarithmic Stationary Distributions for Policy Gradient Reinforcement Learning paper_content: Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate γ for the value functions close to 1, these algorithms do not permit γ to be set exactly at γ = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution through backward Markov chain formulation and a temporal difference learning framework. 
A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting γ = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that these can improve the performances of existing PG methods. --- paper_title: Reinforcement learning of motor skills with policy gradients paper_content: Autonomous learning is one of the hallmarks of human and animal behavior, and understanding the principles of learning will be crucial in order to achieve true autonomy in advanced machines like humanoid robots. In this paper, we examine learning of complex motor skills with human-like limbs. While supervised learning can offer useful tools for bootstrapping behavior, e.g., by learning from demonstration, it is only reinforcement learning that offers a general approach to the final trial-and-error improvement that is needed by each individual acquiring a skill. Neither neurobiological nor machine learning studies have, so far, offered compelling results on how reinforcement learning can be scaled to the high-dimensional continuous state and action spaces of humans or humanoids. Here, we combine two recent research developments on learning motor control in order to achieve this scaling. First, we interpret the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning. Second, we combine motor primitives with the theory of stochastic policy gradient learning, which currently seems to be the only feasible framework for reinforcement learning for humanoids. We evaluate different policy gradient methods with a focus on their applicability to parameterized motor primitives. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm. --- paper_title: Policy Search in Kernel Hilbert Space paper_content: Much recent work in reinforcement learning and stochastic optimal control has focused on algorithms that search directly through a space of policies rather than building approximate value functions. Policy search has numerous advantages: it does not rely on the Markov assumption, domain knowledge may be encoded in a policy, the policy may require less representational power than a value-function approximation, and stable and convergent algorithms are well-understood. In contrast with value-function methods, however, existing approaches to policy search have heretofore focused entirely on parametric approaches. This places fundamental limits on the kind of policies that can be represented. In this work, we show how policy search (with or without the additional guidance of value-functions) in a Reproducing Kernel Hilbert Space gives a simple and rigorous extension of the technique to non-parametric settings. In particular, we investigate a new class of algorithms which generalize REINFORCE-style likelihood ratio methods to yield both online and batch techniques that perform gradient search in a function space of policies. Further, we describe the computational tools that allow efficient implementation. 
Finally, we apply our new techniques towards interesting reinforcement learning problems. --- paper_title: Natural Actor-Critic for Road Traffic Optimisation paper_content: Current road-traffic optimisation practice around the world is a combination of hand tuned policies with a small degree of automatic adaption. Even state-of-the-art research controllers need good models of the road traffic, which cannot be obtained directly from existing sensors. We use a policy-gradient reinforcement learning approach to directly optimise the traffic signals, mapping currently deployed sensor observations to control signals. Our trained controllers are (theoretically) compatible with the traffic system used in Sydney and many other cities around the world. We apply two policy-gradient methods: (1) the recent natural actor-critic algorithm, and (2) a vanilla policy-gradient algorithm for comparison. Along the way we extend natural-actor critic approaches to work for distributed and online infinite-horizon problems. --- paper_title: Actor-Critic--Type Learning Algorithms for Markov Decision Processes paper_content: Algorithms for learning the optimal policy of a Markov decision process (MDP) based on simulated transitions are formulated and analyzed. These are variants of the well-known "actor-critic" (or "adaptive critic") algorithm in the artificial intelligence literature. Distributed asynchronous implementations are considered. The analysis involves two time scale stochastic approximations. --- paper_title: Natural Actor-Critic paper_content: In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm. --- paper_title: A fuzzy Actor-Critic reinforcement learning network paper_content: One of the difficulties encountered in the application of reinforcement learning methods to real-world problems is their limited ability to cope with large-scale or continuous spaces. In order to solve the curse of the dimensionality problem, resulting from making continuous state or action spaces discrete, a new fuzzy Actor-Critic reinforcement learning network (FACRLN) based on a fuzzy radial basis function (FRBF) neural network is proposed. The architecture of FACRLN is realized by a four-layer FRBF neural network that is used to approximate both the action value function of the Actor and the state value function of the Critic simultaneously. 
The Actor and the Critic networks share the input, rule and normalized layers of the FRBF network, which can reduce the demands for storage space from the learning system and avoid repeated computations for the outputs of the rule units. Moreover, the FRBF network is able to adjust its structure and parameters in an adaptive way with a novel self-organizing approach according to the complexity of the task and the progress in learning, which ensures an economic size of the network. Experimental studies concerning a cart-pole balancing control illustrate the performance and applicability of the proposed FACRLN. --- paper_title: Real-time learning: a ball on a beam paper_content: In the Real-Time Learning Laboratory at GTE Laboratories, machine learning algorithms are being implemented on hardware testbeds. A modified connectionist actor-critic system has been applied to a ball balancing task. The system learns to balance a ball on a beam in less than 5 min and maintains the balance. A ball can roll along a few inches of a track on a flat metal beam, which an electric motor can rotate. A computer learning system running on a PC senses the position of the ball and the angular position of the beam. The system learns to prevent the ball from reaching either end of the beam. The system has shown to be robust through sensor noise and mechanical changes; it has also generated many interesting questions for future research. > --- paper_title: Reinforcement Learning for Resource Allocation in LEO Satellite Networks paper_content: In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibited as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements --- paper_title: Reinforcement learning applications in dynamic pricing of retail markets paper_content: In this paper, we investigate the use of reinforcement learning (RL) techniques to the problem of determining dynamic prices in an electronic retail market. As representative models, we consider a single seller market and a two seller market, and formulate the dynamic pricing problem in a setting that easily generalizes to markets with more than two sellers. We first formulate the single seller dynamic pricing problem in the RL framework and solve the problem using the Q-learning algorithm through simulation. 
Next we model the two seller dynamic pricing problem as a Markovian game and formulate the problem in the RL framework. We solve this problem using actor-critic algorithms through simulation. We believe our approach to solving these problems is a promising way of setting dynamic prices in multi-agent environments. We illustrate the methodology with two illustrative examples of typical retail markets. --- paper_title: An RLS-Based Natural Actor-Critic Algorithm for Locomotion of a Two-Linked Robot Arm paper_content: Recently, actor-critic methods have drawn much interests in the area of reinforcement learning, and several algorithms have been studied along the line of the actor-critic strategy. This paper studies an actor-critic type algorithm utilizing the RLS(recursive least-squares) method, which is one of the most efficient techniques for adaptive signal processing, together with natural policy gradient. In the actor part of the studied algorithm, we follow the strategy of performing parameter update via the natural gradient method, while in its update for the critic part, the recursive least-squares method is employed in order to make the parameter estimation for the value functions more efficient. The studied algorithm was applied to locomotion of a two-linked robot arm, and showed better performance compared to the conventional stochastic gradient ascent algorithm. --- paper_title: Reinforcement Learning: A Survey paper_content: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. --- paper_title: Reinforcement learning for a biped robot based on a CPG-actor-critic method paper_content: Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on the rhythmic movements controlled by CPG. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a stochastic policy gradient algorithm presented recently. We apply this method to an automatic acquisition problem of control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot to not only walk stably but also adapt to environmental changes. 
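Several of the abstracts above (the fuzzy actor-critic network, the ball-on-beam controller, the satellite routing and retail-pricing applications, and the CPG-actor-critic) build on the same basic loop: a critic estimates a value function and produces a temporal-difference (TD) error, and an actor adjusts policy parameters in the direction indicated by that error. The following minimal Python sketch is an illustration of that loop only, not the algorithm of any single cited paper; the environment interface (reset/step returning next state, reward, done flag) and the feature map are assumptions made for the example.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def one_step_actor_critic(env, features, n_features, n_actions,
                          alpha_v=0.1, alpha_pi=0.01, gamma=0.99,
                          episodes=500):
    """Minimal one-step TD actor-critic with a linear critic and a
    softmax (Gibbs) actor. `features(state)` maps a state to a vector
    of length n_features; `env` is an assumed Gym-like interface whose
    step() returns (next_state, reward, done)."""
    w = np.zeros(n_features)                    # critic: V(s) ~ w . phi(s)
    theta = np.zeros((n_actions, n_features))   # actor: preferences h(s, a)

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            phi = features(s)
            pi = softmax(theta @ phi)                 # action probabilities
            a = np.random.choice(n_actions, p=pi)
            s_next, r, done = env.step(a)
            phi_next = features(s_next)

            # TD error computed by the critic
            v = w @ phi
            v_next = 0.0 if done else w @ phi_next
            delta = r + gamma * v_next - v

            # Critic: semi-gradient TD(0) update
            w += alpha_v * delta * phi

            # Actor: policy-gradient step, using the TD error as a
            # baseline-corrected estimate of the advantage
            grad_log_pi = -np.outer(pi, phi)
            grad_log_pi[a] += phi
            theta += alpha_pi * delta * grad_log_pi

            s = s_next
    return w, theta
```

The same scalar TD error drives both updates because it serves as an estimate of the advantage of the sampled action; the cited papers differ mainly in how the critic is estimated and in how the actor step is preconditioned.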
--- paper_title: Two steps natural actor critic learning for underwater cable tracking paper_content: This paper proposes a field application of a high-level Reinforcement Learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The underwater vehicle ICTINEUAUV learns to perform a visual based cable tracking task in a two step learning process. First, a policy is computed by means of simulation where a hydrodynamic model of the vehicle simulates the cable following task. Once the simulated results are accurate enough, in a second step, the learned-in-simulation policy is transferred to the vehicle where the learning procedure continues in a real environment, improving the initial policy. The natural actor-critic (NAC) algorithm has been selected to solve the problem in both steps. This algorithm aims to take advantage of policy gradient and value function techniques for fast convergence. Actor's policy gradient gives convergence guarantees under function approximation and partial observability while critic's value function reduces variance of the estimates update improving the convergence process. --- paper_title: Learning Control Under Extreme Uncertainty paper_content: A peg-in-hole insertion task is used as an example to illustrate the utility of direct associative reinforcement learning methods for learning control under real-world conditions of uncertainty and noise. Task complexity due to the use of an unchamfered hole and a clearance of less than 0.2mm is compounded by the presence of positional uncertainty of magnitude exceeding 10 to 50 times the clearance. Despite this extreme degree of uncertainty, our results indicate that direct reinforcement learning can be used to learn a robust reactive control strategy that results in skillful peg-in-hole insertions. --- paper_title: Biped dynamic walking using reinforcement learning paper_content: This paper presents some results from a study of biped dynamic walking using reinforcement learning. During this study a hardware biped robot was built, a new reinforcement learning algorithm as well as a new learning architecture were developed. The biped learned dynamic walking without any previous knowledge about its dynamic model. The self scaling reinforcement (SSR) learning algorithm was developed in order to deal with the problem of reinforcement learning in continuous action domains. The learning architecture was developed in order to solve complex control problems. It uses different modules that consist of simple controllers and small neural networks. The architecture allows for easy incorporation of new modules that represent new knowledge, or new requirements for the desired task. --- paper_title: An actor-critic method using Least Squares Temporal Difference learning paper_content: In this paper, we use a Least Squares Temporal Difference (LSTD) algorithm in an actor-critic framework where the actor and the critic operate concurrently. That is, instead of learning the value function or policy gradient of a fixed policy, the critic carries out its learning on one sample path while the policy is slowly varying. Convergence of such a process has previously been proven for the first order TD algorithms, TD(λ) and TD(1). However, the conversion to the more powerful LSTD turns out not straightforward, because some conditions on the stepsize sequences must be modified for the LSTD case. We propose a solution and prove the convergence of the process. 
Furthermore, we apply the LSTD actor-critic to an application of intelligently dispatching forklifts in a warehouse. --- paper_title: Relative Entropy Policy Search paper_content: Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. --- paper_title: Natural gradient actor-critic algorithms using random rectangular coarse coding paper_content: Learning performance of natural gradient actor-critic algorithms is outstanding especially in high-dimensional spaces than conventional actor-critic algorithms. However, representation issues of stochastic policies or value functions are remaining because the actor-critic approaches need to design it carefully. The author has proposed random rectangular coarse coding, that is very simple and suited for approximating Q-values in high-dimensional state-action space. This paper shows a quantitative analysis of the random coarse coding comparing with regular-grid approaches, and presents a new approach that combines the natural gradient actor-critic with the random rectangular coarse coding. --- paper_title: Policy Search for Motor Primitives in Robotics paper_content: Many motor skills in humanoid robotics can be learned using parametrized motor primitives as done in imitation learning. However, most interesting motor learning problems are high-dimensional reinforcement learning problems often beyond the reach of current methods. In this paper, we extend previous work on policy learning from the immediate reward case to episodic reinforcement learning. We show that this results in a general, common framework also connected to policy gradient methods and yielding a novel algorithm for policy learning that is particularly well-suited for dynamic motor primitives. The resulting algorithm is an EM-inspired algorithm applicable to complex motor learning tasks. We compare this algorithm to several well-known parametrized policy search methods and show that it outperforms them. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task using a real Barrett WAM™ robot arm. --- paper_title: Continuous-Time Adaptive Critics paper_content: A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. 
Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error build up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume ---
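The LSTD-based and RLS-based critics referenced above replace the stochastic TD update with a least-squares estimate computed from a batch (or running sums) of transitions. A hedged sketch of the batch LSTD(0) solution for linear value-function weights is given below; the transition container and the feature map are assumed interfaces for the example, not those of the cited papers.

```python
import numpy as np

def lstd_value_weights(transitions, features, n_features,
                       gamma=0.99, reg=1e-3):
    """Least-squares temporal-difference (LSTD(0)) estimate of linear
    value-function weights from transitions (s, r, s_next, done).
    `features(s)` is an assumed user-supplied feature map."""
    A = reg * np.eye(n_features)   # small ridge term keeps A invertible
    b = np.zeros(n_features)
    for s, r, s_next, done in transitions:
        phi = features(s)
        phi_next = np.zeros(n_features) if done else features(s_next)
        A += np.outer(phi, phi - gamma * phi_next)
        b += phi * r
    return np.linalg.solve(A, b)   # w such that V(s) ~ w . features(s)
```

In an actor-critic arrangement these weights would be re-estimated, or updated recursively, while the policy changes slowly, as discussed in the LSTD actor-critic abstract above.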
Title: A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients
Section 1: Introduction
Description 1: Introduce the basic concepts of reinforcement learning (RL), the significance of actor-critic methods, and the motivation for this survey.
Section 2: Markov Decision Processes
Description 2: Introduce the fundamental ideas of Markov decision processes (MDPs) and their role in reinforcement learning, covering both discounted and average reward settings.
Section 3: Actor-Critic in the Context of Reinforcement Learning
Description 3: Explain the three groups of RL methods: critic-only, actor-only, and actor-critic methods, along with the policy gradient theorem.
Section 4: Standard Gradient Actor-Critic Algorithms
Description 4: Survey actor-critic algorithms that use the standard gradient, including those in both discounted and average reward settings and their variations.
Section 5: Natural Gradient Actor-Critic Algorithms
Description 5: Explain the concept of the natural gradient and describe actor-critic algorithms that employ natural policy gradients. Distinguish between discounted and average reward settings.
Section 6: Applications
Description 6: Provide references and brief descriptions of practical applications of actor-critic algorithms in various fields such as robotics, logistics, traffic control, and finance.
Section 7: Discussion and Outlook
Description 7: Discuss the general insights from the survey, including guidelines on choosing among different RL methods, and suggest directions for future research.
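For reference, the two results that Sections 3 and 5 of the outline above revolve around are usually stated as follows; this is the standard textbook form, not the notation of any particular cited paper.

```latex
% Policy gradient theorem (stationary-distribution form)
\nabla_\theta J(\theta)
  = \mathbb{E}_{s \sim d^{\pi_\theta},\; a \sim \pi_\theta(\cdot \mid s)}
    \left[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \right]

% Natural policy gradient: precondition by the Fisher information matrix
F(\theta) = \mathbb{E}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\,
                               \nabla_\theta \log \pi_\theta(a \mid s)^{\top} \right],
\qquad
\widetilde{\nabla}_\theta J(\theta) = F(\theta)^{-1}\, \nabla_\theta J(\theta)
```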
A survey of sketch-based 3-D modeling techniques
14
--- paper_title: In Search for an Ideal Computer-Assisted Drawing System paper_content: Diagram drawing with conventional computer-assisted drawing (CAD) editors often tends to take a considerable amount of time despite their seeming ease of use. We analyzed the problems of such systems focusing on the problem of cognitive overload, and observed that 1) the necessity of a cognitive planning process in current CAD systems causes the problems and that 2) reducing the overload can lead to fundamental improvement in overall drawing efficiency. We conducted an experiment to verify these observations by comparing a typical drawing system and our prototype drawing system called Interactive Beautification, which combines the ease of freehand drawing and the precision of traditional drawing editors by extracting various constraints in input strokes and generating the desired diagrams automatically. Results show that a significant amount of time is spent on the cognitive planning process, and that reducing such planning time with Interactive Beautification can significantly improve the efficiency of CAD. --- paper_title: Drawing as a means to design reasoning paper_content: We investigate the functions of drawing in design and how, based on these functions, a computational sketching environment might support design reasoning. Design, like all problem solving activities, involves reasoning: making decisions, expressing ideas, verifying and evaluating proposals, and ultimately, taking action. For designers, drawing is a vehicle for design reasoning, and therefore the spontaneous marks made on paper during sketching form a partial record of the designer’s thinking. Most designers sketch early design ideas with a pencil on paper: sketching is still the quickest and most direct means to produce visual representations of ideas. The ambiguity of freehand sketching allows multiple interpretations and thus stimulates the production of more design alternatives. The linked acts of drawing and looking invite designers to recognize new interpretations of the alternatives they propose. By drawing and looking, designers find visual analogies, remember relevant examples, and discover new shapes based on previously unrecognized geometric configurations in their sketches. Visual representations such as freehand sketches and concept diagrams seem to play a significant role in design problem solving. Design reasoning is accompanied by, and we might say, embedded in, the act of drawing. Sketching with paper and pencil supports ambiguity, imprecision, and incremental formalization of ideas as well as rapid exploration of alternatives. --- paper_title: Sketching with a low-latency electronic ink drawing tablet paper_content: Drawing on paper is an experience which is still unmatched by any input device for drawing into a computer in terms of accuracy, dexterity and general pleasantness of use. This paper describes a paper-like drawing tablet which uses electronic ink as its output medium with stylus-based touchpanel input. The device mimics the experience of drawing in a manner which can be adjusted to approach the feel of different kinds of paper. We discuss further some basic issues which need to be addressed in managing interfacing to such a device, specifically the avoidance of the legacy of mouse-oriented point-and-click interfaces which have influenced GUI design for so long.
We see a sketch-based model for interaction, based on free-form curve drawing, as being the way forward but new interaction models are required. The tablet is initially intended to serve as an input-device for cartoon drawing and editing, so the product of any sketching process has to be presented to the rest of the animation data-path in terms of a conventional curve model, here Bezier chains. We discuss models for achieving this without having to resort to legacy curve-editing techniques which have no counterpart in drawing on paper or in the repertoire of the traditional animator. Potential uses of these interaction techniques go well beyond supporting the cartoon drawing application. --- paper_title: Calligraphic Interfaces: Mixed Metaphors for Design paper_content: CAD systems have yet to become usable at the early stages of product ideation, where precise shape definitions and sometimes even design intentions are not fully developed. To overcome these limitations, new approaches, which we call Calligraphic Interfaces, use sketching as the main organizing paradigm. Such applications rely on continuous input modalities rather than discrete interactions characteristic of WIMP interfaces. However, replacing direct manipulation by sketching alone poses very interesting challenges. While the temptation to follow the paper-and-pencil metaphor is great, free-hand sketch recognition remains an elusive goal. Further, using gestures to enter commands and sketches to draw shapes requires users to learn a command set – sketches do not enjoy the self-disclosing characteristics of menus. Moreover, the imprecise nature of interactions presents additional problems that are difficult to address using present-day techniques. --- paper_title: Free-form Sketching with Variational Implicit Surfaces paper_content: With the advent of sketch-based methods for shape construction, there's a new degree of power available in the rapid creation of approximate shapes.
Sketch [Zeleznik, 1996] showed how a gesture-based modeler could be used to simplify conventional CSG-like shape creation. Teddy [Igarashi, 1999] extended this to more free-form models, getting much of its power from its "inflation" operation (which converted a simple closed curve in the plane into a 3D shape whose silhouette, from the current point of view, was that curve on the view plane) and from an elegant collection of gestures for attaching additional parts to a shape, cutting a shape, and deforming it. But despite the powerful collection of tools in Teddy, the underlying polygonal representation of shapes intrudes on the results in many places. In this paper, we discuss our preliminary efforts at using variational implicit surfaces [Turk, 2000] as a representation in a free-form modeler. We also discuss the implementation of several operations within this context, and a collection of user-interaction elements that work well together to make modeling interesting hierarchies simple. These include "stroke inflation" via implicit functions, blob-merging, automatic hierarchy construction, and local surface modification via silhouette oversketching. We demonstrate our results by creating several models. --- paper_title: Cascading recognizers for ambiguous calligraphic interaction paper_content: Throughout the last decade many approaches have been made to the problem of developing CAD systems that are usable in the early stages of product ideation. Although most of these approaches rely on some kind of drawing paradigm and on the paper-and-pencil metaphor, only a few of them deal with the ambiguity that is inherent to natural languages in general and to sketching in particular. Also the paper-and-pencil metaphor has not in most cases been fully accomplished, since many gesture-based interfaces resort to secondary buttons and modifier keys in order to make command strokes easier to differentiate from their geometry instantiating counterparts.
In this paper we describe the architecture of GIDeS++, a sketch-based 3D modeling system that approaches these problems in three different ways: by dealing with ambiguity and exploring it to the user's benefit; by reducing the command set and thus minimizing the cognitive load on the user; and by cascading different types of gesture recognizers, which allows interaction to resort only to the button located on the tip of an electronic stylus. --- paper_title: A suggestive interface for 3D drawing paper_content: This paper introduces a new type of interface for 3D drawings that improves the usability of gestural interfaces and augments typical command-based modeling systems. In our suggestive interface, the user gives hints about a desired operation to the system by highlighting related geometric components in the scene. The system then infers possible operations based on the hints and presents the results of these operations as small thumbnails. The user completes the editing operation simply by clicking on the desired thumbnail. The hinting mechanism lets the user specify geometric relations among graphical components in the scene, and the multiple thumbnail suggestions make it possible to define many operations with relatively few distinct hint patterns. The suggestive interface system is implemented as a set of suggestion engines working in parallel, and is easily extended by adding customized engines. Our prototype 3D drawing system, Chateau, shows that a suggestive interface can effectively support construction of various 3D drawings. --- paper_title: Correlation-based reconstruction of a 3D object from a single freehand sketch paper_content: We propose a new approach for reconstructing a three-dimensional object from a single two-dimensional freehand line drawing depicting it. A sketch is essentially a noisy projection of a 3D object onto an arbitrary 2D plane. Reconstruction is the inverse projection of the sketched geometry from two dimensions back into three dimensions. While humans can do this reverse-projection remarkably easily and almost without being aware of it, this process is mathematically indeterminate and is very difficult to emulate computationally. Here we propose that the ability of humans to perceive a previously unseen 3D object from a single sketch is based on simple 2D-3D geometrical correlations that are learned from visual experience. We demonstrate how a simple correlation system that is exposed to many object-sketch pairs eventually learns to perform the inverse projection successfully for unseen objects. Conversely, we show how the same correlation data can be used to gauge the understandability of synthetically generated projections of given 3D objects. Using these principles we demonstrate for the first time a completely automatic conversion of a single freehand sketch into a physical solid object. These results have implications for bidirectional human-computer communication of 3D graphic concepts, and might also shed light on the human visual system. --- paper_title: Gestalt Isomorphism and the Primacy of Subjective Conscious Experience: A Gestalt Bubble Model paper_content: A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. 
In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this picture-in-the-head concept of perceptual representation are shown to be ill founded. --- paper_title: On seeing things paper_content: The importance of effective task representations in the design of programs intended to exhibit sophisticated behaviour manifests itself in the area of Picture Interpretation as the so-called 'Linguistic Approach'. A brief survey of Pattern Description Languages leads up to an analysis of a simple letter recognition task from which it is argued that at least two types of description of the pattern must be utilised if any significant pattern generalisation is to be achieved, and in general that all picture interpretation tasks involve descriptions in two domains. Further support for this viewpoint is provided by a characterization of the problem of interpreting line diagrams as pictures of three-dimensional scenes, in which the form of these descriptions and of their interrelation by an algorithm is described in detail. The paper concludes by relating these ideas to the distinction between syntax and semantics, and the concept of denotation. --- paper_title: Impossible Objects as Nonsense Sentences paper_content: To every 3-dimensional scene there correspond as many 2-dimensional pictures as there are possible vantage points for the camera. It is, however, possible to construct pictures for which there is no corresponding scene containing physically-realizable objects. Pictures of such 'impossible objects' can be useful in giving insight into the constraints or grammatical rules associated with the 'language' of pictures, just as nonsense sentences can be useful in illustrating the rules of other languages. Impossible objects have been used by psychologists (Penrose and Penrose 1958) to create visual illusions which successfully challenge the ability of our perceptual systems to synthesize a 3-dimensional world from 2-dimensional information. The incompatibilities among the various portions of pictures of these objects are a novel way of testing our picture analysis procedures. The purpose of this paper is to demonstrate some possible decision procedures and to test them on pictures of both possible and impossible objects.
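The two references above on line-drawing interpretation (Clowes' "On seeing things" and Huffman's impossible objects) underlie the junction-labelling approach used by several of the reconstruction systems that follow. The skeleton below is only an illustration of the constraint-filtering idea: the actual trihedral junction catalog (the admissible label tuples for L, arrow, fork and T junctions) must be supplied by the caller and is deliberately not hardcoded here, and the bookkeeping for occlusion-arrow orientation at the two ends of an edge is omitted.

```python
def label_line_drawing(junctions, edges, catalog):
    """Hedged sketch of Huffman/Clowes-style line labelling as constraint
    filtering. `junctions` maps junction id -> (type, ordered edge ids);
    `edges` is a list of edge ids; `catalog` maps a junction type to the
    set of admissible label tuples over the placeholder alphabet
    '+', '-', '<', '>' (convex, concave, occluding). Returns per-edge
    candidate label sets after local consistency filtering."""
    candidates = {j: set(catalog[jtype]) for j, (jtype, _) in junctions.items()}
    edge_labels = {e: set('+-<>') for e in edges}

    changed = True
    while changed:
        changed = False
        # Restrict edge labels to those supported by some junction labelling.
        for j, (jtype, jedges) in junctions.items():
            for pos, e in enumerate(jedges):
                allowed = {lab[pos] for lab in candidates[j]}
                if not edge_labels[e] <= allowed:
                    edge_labels[e] &= allowed
                    changed = True
        # Drop junction labellings inconsistent with current edge label sets.
        for j, (jtype, jedges) in junctions.items():
            keep = {lab for lab in candidates[j]
                    if all(lab[pos] in edge_labels[e]
                           for pos, e in enumerate(jedges))}
            if keep != candidates[j]:
                candidates[j] = keep
                changed = True
    return edge_labels
```

A drawing for which some edge's candidate set becomes empty admits no consistent labelling, which is essentially how "impossible objects" are rejected.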
--- paper_title: Sketching a virtual environment: modeling using line-drawing interpretation paper_content: Here we demonstrate the direct input to computer of a handdrawn perspective sketch to create a virtual environment. We either start with a photograph of a real environment or an existing VRML model, and then use a mouse or pen pad to sketch line drawings onto the scene. Visual clues and constraints from the existing background and line drawing, as well as heuristics for form recognition are used to build a 3D optimization problem. We use a multiple objective genetic algorithm to find a viable solution to the problem, and VRML output is generated, either for re-entry to the system or use in another system. Our software is currently available compiled for either a PC running Windows 98/NT or an SGI machine running IRIX 6.x. --- paper_title: Creating solid models from single 2D sketches paper_content: We describe a method of constructing a B-rep solid model from a single hidden-line removed sketch view of a 3D object. The main steps of our approach are as follows. The sketch is first tidied in 2D (to remove digitisation errors). Line labelling is used to deduce the initial topology of the object and to locate hidden faces. Constraints are then produced from the line labelling and features in the drawing (such as probable symmetry) involving the unknown face coefficients and point depths. A least squares solution is found to the linear system and any grossly incompatible equations are rejected. Vertices are recalculated as the intersections of the faces to ensure we have a reconstructible solid. Any incomplete faces are then completed as far as possible from neighbouring faces, producing a solid model from the initial sketch, if successful. The current software works for polyhedral objects with trihedral vertices. CR Descriptors: I.3.5 [Computer Graphics]: Computational Geometry and Object Modelling Geometric algorithms, languages and systems; I.2.10 [Artificial Intelligence]: Vision and Scene Understanding Perceptual reasoning; I.3.6 [Computer Graphics]: Methodology and Techniques Interaction techniques; J.6 [Computer Applications]: CAE CAD. --- paper_title: Can Machines Interpret Line Drawings? paper_content: Engineering design would be easier if a computer could interpret initial concept drawings. We outline an approach for automated interpretation of line drawings of polyhedra, and summarise what is already possible, what developments can be expected in the near future, and which areas remain problematic. We illustrate this with particular reference to our own system, RIBALD, summarising the published state of the art, and discussing recent unpublished improvements to RIBALD. In general, successful interpretation depends on two factors: the number of lines, and whether or not the drawing can be classified as a member of special shape class (e.g. an extrusion or normalon). The state-of-the-art achieves correct interpretation of extrusions of any size and most normalons of 20—30 lines, but drawings of only 10—20 lines can be problematic for unclassified objects.Despite successes, there are caseswhere the desired interpretation is obvious to a human but cannot be determined by currently-available algorithms. We give examples both of our successes and of typical caseswhere human skill cannot be replicated. --- paper_title: Intuitive Shape Modeling by Shading Design paper_content: Shading has a great impact to the human perception of 3D objects. 
Thus, in order to create or to deform a 3D object, it seems natural to manipulate its perceived shading. This paper presents a new solution for the software implementation of this idea. Our approach is based on the ability of a user to coarsely draw a shading, under different lighting directions. With this intuitive process, users can create or edit a height field (locally or globally) that will correspond to the drawn shading values. Moreover, we present the possibility of editing the shading intensity by means of a specular reflectance model. --- paper_title: Image-based object editing paper_content: We examine the problem of editing complex 3D objects. We convert the problem of editing a 3D object of arbitrary size and surface properties to a problem of editing a 2D image. We allow the user to specify edits in both geometry and surface properties from any view and at any resolution they find convenient, regardless of the interactive rendering capability of their computer. We use specially-constrained shape from shading algorithms to convert a shaded image specified by the user into a 3D geometry. --- paper_title: Sculpting: an interactive volumetric modeling technique paper_content: We present a new interactive modeling technique based on the notion of sculpting a solid material. A sculpting tool is controlled by a 3D input device and the material is represented by voxel data; the tool acts by modifying the values in the voxel array, much as a "paint" program's "paintbrush" modifies bitmap values. The voxel data is converted to a polygonal surface using a "marching-cubes" algorithm; since the modifications to the voxel data are local, we accelerate this computation by an incremental algorithm and accelerate the display by using a special data structure for determining which polygons must be redrawn in a particular screen region. We provide a variety of tools: one that cuts away material, one that adds material, a "sandpaper" tool, a "heat gun," etc. The technique provides an intuitive direct interaction, as if the user were working with clay or wax. The models created are free-form and may have complex topology; however, they are not precise, so the technique is appropriate for modeling a boulder or a tooth but not for modeling a crankshaft. --- paper_title: The BlobTree -- Warping, blending and boolean operations in an implicit surface modelling system paper_content: Automatic blending has characterized the major advantage of implicit surface modeling systems. Recently, the introduction of deformations based on space warping and boolean operations between primitives has increased the usefulness of such systems. We propose a further enhancement which will greatly enhance the range of models that can be easily and intuitively defined with a skeletal implicit surface system. We describe a hierarchical method which allows arbitrary compositions of models that make use of blending, warping and boolean operations. We call this structure the BlobTree. Blending and space warping are treated in the same way as union, difference and intersection, i.e. as nodes in the BlobTree. The traversal of the BlobTree is described along with two rendering algorithms: a polygonizer and a ray tracer. We present some examples of interesting models which can be made easily using our approach that would be very difficult to represent with conventional systems.
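To make the implicit-surface composition described in the Sculpting and BlobTree abstracts concrete, here is a small illustrative sketch, not the cited systems' code: skeletal primitives produce scalar fields, blending sums the fields, and CSG-style union and difference are expressed with max/min against an iso-value. The Gaussian falloff is an assumption; the cited work uses other kernels.

```python
import numpy as np

def point_primitive(center, radius):
    """Skeletal point primitive with a Gaussian falloff (an assumption;
    systems like the BlobTree use other kernels, e.g. Wyvill's polynomial)."""
    c = np.asarray(center, dtype=float)
    return lambda p: np.exp(-np.sum((np.asarray(p, dtype=float) - c) ** 2) / radius ** 2)

def blend(*children):
    """Smooth blend: sum the children's field values."""
    return lambda p: sum(f(p) for f in children)

def union(a, b):
    """CSG-style union on field values (max)."""
    return lambda p: max(a(p), b(p))

def difference(a, b, iso=0.5):
    """CSG-style difference: keep a, carve out b (one common convention)."""
    return lambda p: min(a(p), 2 * iso - b(p))

# Example tree: two blended blobs unioned with a third, then carved.
field = difference(
    union(blend(point_primitive((0.0, 0.0, 0.0), 1.0),
                point_primitive((0.8, 0.0, 0.0), 1.0)),
          point_primitive((0.0, 1.5, 0.0), 0.7)),
    point_primitive((0.4, 0.2, 0.0), 0.5))

# A point is inside the surface when the field exceeds the iso-value.
print(field((0.4, 0.0, 0.0)) > 0.5)
```

Polygonizing or ray tracing the iso-surface of `field`, as the BlobTree abstract describes, is a separate step not shown here.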
--- paper_title: Adaptively sampled distance fields: a general representation of shape for computer graphics paper_content: Adaptively Sampled Distance Fields (ADFs) are a unifying representation of shape that integrate numerous concepts in computer graphics including the representation of geometry and volume data and a broad range of processing operations such as rendering, sculpting, level-of-detail management, surface offsetting, collision detection, and color gamut correction. Its structure is uncomplicated and direct, but is especially effective for quality reconstruction of complex shapes, e.g., artistic and organic forms, precision parts, volumes, high order functions, and fractals. We characterize one implementation of ADFs, illustrating its utility on two diverse applications: 1) artistic carving of fine detail, and 2) representing and rendering volume data and volumetric effects. Other applications are briefly presented. --- paper_title: Practical volumetric sculpting paper_content: We present here a sculpture metaphor for rapid shape-prototyping. The sculpted shape is the iso-surface of a scalar field spatially sampled. The user can deposit material wherever he desires in space and then iteratively refine it using a freeform tool. The tool shape can be designed inside the application. The tool can add, remove, paint or smooth material, or it can be used as a stamp to make prints on an existing shape. The user can move the scene and/or the tool with a Spacemouse (6D input device), or a 2D mouse using virtual trackball. We also focussed on the rendering quality, exploiting lighting variations, and environment textures that simulate high quality highlights on the surface. Both greatly enhance the shape estimation, which is in our opinion a crucial step during the design process. We also explored some stereo configurations to improve the user's perception of the spatial relationships between the elements of the scene. Our current implementation based on GLUT allows the application to run both on UNIX-based systems, such as IRIX and Linux, and on Windows systems. We obtain interactive response times, strongly related to the size of the tool. The performances issues and limitations are discussed. --- paper_title: Surface drawing: creating organic 3D shapes with the hand and tangible tools paper_content: Surface Drawing is a system for creating organic 3D shapes in a manner which supports the needs and interests of artists. This medium facilitates the early stages of creative design which many 3D modeling programs neglect. Much like traditional media such as line drawing and painting, Surface Drawing lets users construct shapes through repeated marking. In our case, the hand is used to mark 3D space in a semi-immersive virtual environment. The interface is completed with tangible tools to edit and manipulate models. We introduce the use of tongs to move and scale 3D shapes and demonstrate a magnet tool which is comfortably held without restricting hand motion. We evaluated our system through collaboration with artists and designers, exhibition before hundreds of users, our own extensive exploration of the medium, and an informal user study. Response was especially positive from users with an artistic background. --- paper_title: Volume sculpting paper_content: We present a modeling technique based on the metaphor of interactively sculpting complex 3D objects from a solid material, such as a block of wood or marble. 
The 3D model is represented in a 3D raster of voxels where each voxel stores local material property information such as color and texture. Sculpting is done by moving 3D voxel-based tools within the model. The affected regions are indicated directly on the 2D projected image of the 3D model. By reducing the complex operations between the 3D tool volume and the 3D model down to primitive voxel-by-voxel operations, coupled with the utilization of a localized ray casting for image updating, our sculpting tool achieves real-time interaction. Furthermore, volume sampling techniques and volume manipulations are employed to ensure that the process of sculpting does not introduce aliasing into the models. --- paper_title: A painting interface for interactive surface deformations paper_content: A long-standing challenge in geometric modeling is providing a natural, intuitive interface for making local deformations to 3D surfaces. Previous approaches have provided either interactive manipulation or physical simulation to control surface deformations. In this paper, we investigate combining these two approaches with a painting interface that gives the user direct, local control over a physical simulation. The paint a user applies to the model defines its instantaneous surface velocity, the user can effect surface deformations. We have found that this painting metaphor gives the user direct, local control over surface deformations for several applications: creating new models, removing noise from existing models, and adding geometric texture to an existing surface at multiple scales. --- paper_title: Wires: a geometric deformation technique paper_content: Finding effective interactive deformation techniques for complex geometric objects continues to be a challenging problem in modeling and animation. We present an approach that is inspired by armatures used by sculptors, in which wire curves give definition to an object and shape its deformable features. We also introduce domain curves that define the domain of deformation about an object. A wire together with a collection of domain curves provide a new basis for an implicit modeling primitive. Wires directly reflect object geometry, and as such they provide a coarse geometric representation of an object that can be created through sketching. Furthermore, the aggregate deformation from several wires is easy to define. We show that a single wire is an appealing direct manipulation deformation technique; we demonstrate that the combination of wires and domain curves provide a new way to outline the shape of an implicit volume in space; and we describe techniques for the aggregation of deformations resulting from multiple wires, domain curves and their interaction with each other and other deformation techniques. The power of our approach is illustrated using applications of animating figures with flexible articulations, modeling wrinkled surfaces and stitching geometry together. --- paper_title: Prototype Modeling from Sketched Silhouettes based on Convolution Surfaces paper_content: This paper presents a hybrid method for creating three-dimensional shapes by sketching silhouette curves. Given a silhouette curve, we approximate its medial axis as a set of line segments, and convolve a linearly weighted kernel along each segment. By summing the fields of all segments, an analytical convolution surface is obtained. 
The resulting generic shape has circular cross-section, but can be conveniently modified via sketched profile or shape parameters of a spatial transform. New components can be similarly designed by sketching on different projection planes. The convolution surface model lends itself to smooth merging between the overlapping components. Our method overcomes several limitations of previous sketch-based systems, including designing objects of arbitrary genus, objects with semi-sharp features, and the ability to easily generate variants of shapes. --- paper_title: A sketching interface for modeling the internal structures of 3D shapes paper_content: This paper presents a sketch-based modeling system for creating objects that have internal structures. The user input consists of hand-drawn sketches and the system automatically generates a volumetric model. The volumetric representation solves any self-intersection problems and enables the creation of models with a variety of topological structures, such as a torus or a hollow sphere. To specify internal structures, our system allows the user to cut the model temporarily and apply modeling operations to the exposed face. In addition, the user can draw multiple contours in the Create or Sweep stages. Our system also allows automatic rotation of the model so that the user does not need to perform frequent manual rotations. Our system is much simpler to implement than a surface-oriented system because no complicated mesh editing code is required. We observed that novice users could quickly create a variety of objects using our system. --- paper_title: An interface for sketching 3D curves paper_content: The ability to specify nonplanar 3D curves is of fundamental importance in 3D modeling and animation systems. Effective techniques for specifying such curves using 2D input devices are desirable, but existing methods typically require the user to edit the curve from several viewpoints. We present a novel method for specifying 3D curves with 2D input from a single viewpoint.
The user first draws the curve as it appears from the current viewpoint, and then draws its shadow on the floor plane. The system correlates the curve with its shadow to compute the curve's 3D shape. This method is more natural than existing methods in that it leverages skills that many artists and designers have developed from work with pencil and paper. --- paper_title: A suggestive interface for image guided 3D sketching paper_content: We present an image guided pen-based suggestive interface for sketching 3D wireframe models. Rather than starting from a blank canvas, existing 2D images of similar objects serve as a guide to the user. Image based filters enable attraction, smoothing, and resampling of input curves, and allows for their selective application using pinning and gluing techniques. New input strokes also invoke suggestions of relevant geometry that can be used, reducing the need to explicitly draw all parts of the new model. All suggestions appear in-place with the model being built, in the user's focal attention space. A curve matching algorithm seamlessly augments basic suggestions with more complex ones from a database populated with previously used geometry. The interface also incorporates gestural command input, and interaction techniques for camera controls that enable smooth transitions between orthographic and perspective views. --- paper_title: Interaction techniques for 3D modeling on large displays paper_content: We present an alternate interface for 3D modeling for use on large scale displays. The interface integrates several concepts specifically selected and enhanced for large scale interaction. These include 2D construction planes spatially integrated in a 3D volume, enhanced orthographic views, smooth transitions between 2D and 3D views, tape drawing as the primary curve and line creation technique, visual viewpoint markers, and continuous two-handed interaction. --- paper_title: A multi-layered architecture for sketch-based interaction within virtual environments paper_content: Abstract In this article, we describe a multi-layered architecture for sketch-based interaction within virtual environments. Our architecture consists of eight hierarchically arranged layers that are described by giving examples of how they are implemented and how they interact. Focusing on table-like projection systems (such as Virtual Tables or Responsive Workbenches) as human-centered output-devices, we show examples of how to integrate parts or all of the architecture into existing domain-specific applications — rather than realizing new general sketch applications — to make sketching an integral part of the next-generation human–computer interface. --- paper_title: Harold: a world made of drawings paper_content: The problem of interactively creating 3D scenes from 2D input is a compelling one, and recent progress has been exciting. We present our system, Harold, which combines ideas from existing techniques and introduces new concepts to make an interactive system for creating 3D worlds. The interface paradigm in Harold is drawing: all objects are created simply by drawing them with a 2D input device. Most of the 3D objects in Harold are collections of planar strokes that are reoriented in a view-dependent way as the camera moves through the world. Virtual worlds created in Harold are rendered with a stroke-based system so that a world will maintain a hand-drawn appearance as the user navigates through it.
Harold is not suitable for representing certain classes of 3D objects, especially geometrically regular or extremely asymmetric objects. However, Harold supports a large enough class of objects that a user can rapidly create expressive and visually rich 3D worlds. --- paper_title: Drawing for Illustration and Annotation in 3D paper_content: We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model. --- paper_title: Creating principal 3D curves with digital tape drawing paper_content: Previous systems have explored the challenges of designing an interface for automotive styling which combine the metaphor of 2D drawing using physical tape with the simultaneous creation and management of a corresponding virtual 3D model. These systems have been limited to only 2D planar curves while typically the principal characteristic curves of an automotive design are three dimensional and non-planar. We present a system which addresses this limitation. Our system allows a designer to construct these non-planar 3D curves by drawing a series of 2D curves using the 2D tape drawing technique and interaction style. These results are generally applicable to the interface design of 3D modeling applications and also to the design of arm's length interaction on large scale display systems --- paper_title: FreeDrawer: a free-form sketching system on the responsive workbench paper_content: A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can interactively be deformed. --- paper_title: 3D Sketching with profile curves paper_content: In recent years, 3D sketching has gained popularity as an efficient alternative to conventional 3D geometric modeling for rapid prototyping, as it allows the user to intuitively generate a large range of different shapes. In this paper, we present some sketching interactions for 3D modeling, based on a set of two different bidimensional sketches (profile curve and silhouette curve). By using these two sketches and combining them with a gesture grammar, a very large variety of shapes (including shapes with topological holes) can be easily produced by our interactive modeling environment. 
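Several of the entries above reduce 3D curve input to 2D strokes that are projected onto a positioned drawing plane (for example the annotation system that lets the user draw on an explicitly positioned plane, and FreeDrawer's curve drawing on the Responsive Workbench). As a rough illustration of that step, the sketch below lifts a 2D stroke onto a drawing plane by ray-plane intersection; the camera model, function names, and numbers are illustrative assumptions, not code from any of the cited systems.

```python
import numpy as np

def ray_plane_intersect(origin, direction, plane_point, plane_normal):
    """Return the point where a ray hits the plane, or None if (near) parallel."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

def project_stroke(stroke_2d, eye, view_to_world, plane_point, plane_normal):
    """Lift 2D stroke samples (normalized image coords) onto a 3D drawing plane.

    stroke_2d     : (N, 2) array of points in [-1, 1] x [-1, 1]
    eye           : (3,) camera position in world space
    view_to_world : function mapping a normalized image point to a world-space
                    ray direction (depends on the camera model; assumed given)
    """
    curve_3d = []
    for u, v in stroke_2d:
        d = view_to_world(np.array([u, v]))
        d = d / np.linalg.norm(d)
        hit = ray_plane_intersect(eye, d, plane_point, plane_normal)
        if hit is not None:
            curve_3d.append(hit)
    return np.array(curve_3d)

# Toy usage: a pinhole camera at the origin looking down -z, drawing plane z = -5.
fov = np.tan(np.radians(30.0))
view_to_world = lambda p: np.array([p[0] * fov, p[1] * fov, -1.0])
stroke = np.stack([np.linspace(-0.5, 0.5, 20),
                   0.2 * np.sin(np.linspace(0.0, np.pi, 20))], axis=1)
curve = project_stroke(stroke, np.zeros(3), view_to_world,
                       plane_point=np.array([0.0, 0.0, -5.0]),
                       plane_normal=np.array([0.0, 0.0, 1.0]))
print(curve.shape)  # (20, 3): every stroke sample now lies on the drawing plane
```

The same intersection routine also covers the shadow-correlation idea from the 3D curve sketching entry, since the shadow stroke is simply projected onto the floor plane instead of a free plane.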
--- paper_title: Sketch-based modeling with few strokes paper_content: We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system. ---
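The two entries just above both turn sketched profile strokes into surfaces (circular cross-sections grown around a silhouette's medial axis, and "rotational blending" surfaces). A minimal stand-in for that family of constructions is a plain surface of revolution swept from a sketched profile polyline, shown below with numpy; it is only meant to make the idea concrete and does not reproduce either paper's actual surface definition.

```python
import numpy as np

def surface_of_revolution(profile_xy, n_theta=64):
    """Sweep a 2D profile polyline (x = radius, y = height) around the y-axis.

    profile_xy : (N, 2) array of sketched profile samples, with x >= 0
    returns    : (N, n_theta, 3) grid of surface points
    """
    radii = profile_xy[:, 0]
    heights = profile_xy[:, 1]
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Outer products give one ring of the surface per profile sample.
    x = radii[:, None] * np.cos(theta)[None, :]
    z = radii[:, None] * np.sin(theta)[None, :]
    y = np.repeat(heights[:, None], n_theta, axis=1)
    return np.stack([x, y, z], axis=-1)

# Toy usage: a vase-like profile drawn as a few (radius, height) samples.
profile = np.array([[0.2, 0.0], [0.5, 0.3], [0.35, 0.6], [0.6, 1.0], [0.1, 1.2]])
grid = surface_of_revolution(profile)
print(grid.shape)  # (5, 64, 3)
```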
Title: A survey of sketch-based 3-D modeling techniques Section 1: Introduction Description 1: Provide an overview of the increasing demand for 3-D modeling, the challenges with traditional modeling techniques, and the emergence of sketch-based modeling as an intuitive solution. Section 2: The practice of sketching Description 2: Define sketching, discuss its importance in the creative process, and explain the components of traditional sketching such as feedback, overdrawing, and incremental refinement. Section 3: Why sketch-based modeling? Description 3: Explain the reasons for using sketching as a basis for 3-D modeling, touching on its importance in the creative process and its applicability to 3-D modeling interfaces. Section 4: Sketch input Description 4: Discuss the hardware and software requirements for sketch-based modeling, focusing on digitizing tablets, line creation, and issues related to overdrawing. Section 5: Sketch-based modeling methods Description 5: Provide an overview of various sketch-based modeling techniques, categorized into seven classifications with detailed examples of each. Section 5.1: Gesture created primitives Description 5.1: Explain how drawing input is interpreted as gestures for creating basic shapes and the evolution of this approach with examples. Section 5.2: Reconstruction Description 5.2: Discuss methods for directly interpreting users' drawings into 3-D shapes, touching on the challenges and techniques like Huffman-Clowes line labeling. Section 5.3: Height-fields and shape from shading Description 5.3: Explain how shading information is used to infer 3-D geometry and the related techniques such as height-fields and shape from shading. Section 5.4: Deformation and sculpture Description 5.4: Describe techniques that involve deforming 3-D models as if sculpting, including global and local deformations, and related applications. Section 5.5: Blobby inflation Description 5.5: Provide an overview of systems that use inflation techniques to create 3-D models from 2-D silhouette drawings, with examples like Teddy. Section 5.6: Contour curves and drawing surfaces Description 5.6: Discuss methods for using 2-D drawing input to create 3-D curves and surfaces, including the use of drawing planes and fully 3-D space curves. Section 5.7: Stroke based constructions Description 5.7: Describe systems that use strokes to create and manipulate 3-D surfaces and volumes, providing examples of parametric surfaces and procedural modeling methods. Section 6: Discussion Description 6: Summarize the observations from previous sections, including the role of 2-D input in 3-D modeling, the expressivity and ambiguity of sketching, and the challenges and prospects of sketch-based modeling interfaces. Section 7: Conclusion Description 7: Conclude by summarizing the key points discussed in the paper, emphasizing the potential future directions for sketch-based 3-D modeling techniques.
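The outline's "Blobby inflation" subsection (Section 5.5) covers Teddy-style systems that inflate a closed 2D silhouette into a rounded 3D shape. A toy version of inflation treats the silhouette as a height field whose elevation grows with distance from the boundary; the sketch below uses SciPy's Euclidean distance transform and illustrates the general idea rather than Teddy's chordal-axis construction.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def inflate_silhouette(mask, scale=1.0):
    """Turn a binary silhouette (2D array of 0/1) into a rounded height field.

    Height at each interior pixel grows with the square root of the distance
    to the boundary, which gives the pillow-like cross-section typical of
    sketch "inflation" operators.
    """
    dist = distance_transform_edt(mask)   # 0 outside, grows toward the medial axis
    return scale * np.sqrt(dist)          # sqrt rounds off the profile

# Toy usage: inflate a filled disc of radius 20 pixels.
yy, xx = np.mgrid[-32:32, -32:32]
disc = (xx**2 + yy**2 <= 20**2).astype(np.uint8)
height = inflate_silhouette(disc)
print(height.max())  # tallest point sits near the disc centre
```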
Texture image analysis and texture classification methods - A review
12
--- paper_title: Texture features for browsing and retrieval of image data paper_content: Image content based retrieval is emerging as an important research area with application to digital libraries and multimedia databases. The focus of this paper is on the image processing aspects and in particular using texture information for browsing and retrieval of large image data. We propose the use of Gabor wavelet features for texture analysis and provide a comprehensive experimental evaluation. Comparisons with other multiresolution texture features using the Brodatz texture database indicate that the Gabor features provide the best pattern retrieval accuracy. An application to browsing large air photos is illustrated. --- paper_title: Grey level co-occurrence matrix and its application to seismic data paper_content: Christoph Georg Eichkitz, John Davies, Johannes Amtmann, Marcellus Gregor Schreilechner and Paul de Groot demonstrate how grey level co-occurrence matrix can be adapted to work on 3D imaging of seismic data. ::: ::: Texture analysis is the extraction of textural features from images (Tuceryan and Jain, 1998). The meaning of texture varies, depending on the area of science in which it is used. In general, texture refers to the physical character of an object or the appearance of an image. In image analysis, texture is defined as a function of the spatial variation in intensities of pixels (Tuceryan and Jain, 1998). Seismic texture refers to the magnitude and variability of neighbouring amplitudes at sample locations and is physically related to the distribution of scattering objects (geological texture) within a small volume at the corresponding subsurface location (Gao, 2008). Four principal methods have been developed for the analysis of seismic texture (Figure 1). These are texture classification, segmentation, synthesis, and shape. --- paper_title: Texture features for browsing and retrieval of image data paper_content: Image content based retrieval is emerging as an important research area with application to digital libraries and multimedia databases. The focus of this paper is on the image processing aspects and in particular using texture information for browsing and retrieval of large image data. We propose the use of Gabor wavelet features for texture analysis and provide a comprehensive experimental evaluation. Comparisons with other multiresolution texture features using the Brodatz texture database indicate that the Gabor features provide the best pattern retrieval accuracy. An application to browsing large air photos is illustrated. --- paper_title: Grey level co-occurrence matrix and its application to seismic data paper_content: Christoph Georg Eichkitz, John Davies, Johannes Amtmann, Marcellus Gregor Schreilechner and Paul de Groot demonstrate how grey level co-occurrence matrix can be adapted to work on 3D imaging of seismic data. ::: ::: Texture analysis is the extraction of textural features from images (Tuceryan and Jain, 1998). The meaning of texture varies, depending on the area of science in which it is used. In general, texture refers to the physical character of an object or the appearance of an image. In image analysis, texture is defined as a function of the spatial variation in intensities of pixels (Tuceryan and Jain, 1998). 
Seismic texture refers to the magnitude and variability of neighbouring amplitudes at sample locations and is physically related to the distribution of scattering objects (geological texture) within a small volume at the corresponding subsurface location (Gao, 2008). Four principal methods have been developed for the analysis of seismic texture (Figure 1). These are texture classification, segmentation, synthesis, and shape. --- paper_title: Measuring intra-urban poverty using land cover and texture metrics derived from remote sensing data paper_content: Abstract This paper contributes empirical evidence about the usefulness of remote sensing imagery to quantify the degree of poverty at the intra-urban scale. This concept is based on two premises: first, that the physical appearance of an urban settlement is a reflection of the society; and second, that the people who reside in urban areas with similar physical housing conditions have similar social and demographic characteristics. We use a very high spatial resolution (VHR) image from one of the most socioeconomically divergent cities in the world, Medellin (Colombia), to extract information on land cover composition using per-pixel classification and on urban texture and structure using an automated tool for texture and structure feature extraction at object level. We evaluate the potential of these descriptors to explain a measure of poverty known as the Slum Index. We found that these variables explain up to 59% of the variability in the Slum Index. Similar approaches could be used to lower the cost of socioeconomic surveys by developing an econometric model from a sample and applying that model to the rest of the city and to perform intercensal or intersurvey estimates of intra-urban Slum Index maps. --- paper_title: Multispectral Texture Features from Visible and Near-Infrared Synthetic Face Images for Face Recognition paper_content: Recently, high-performance face recognition has attracted research attention in real-world scenarios. Thanks to the advances in sensor technology, face recognition system equipped with multiple sensors has been widely researched. Among them, face recognition system with near-infrared imagery has been one important research topic. In this paper, complementary effect resided in face images captured by nearinfrared and visible rays is exploited by combining two distinct spectral images (i.e., face images captured by near-infrared and visible rays). We propose a new texture feature (i.e., multispectral texture feature) extraction method with synthesized face images to achieve high-performance face recognition with illumination-invariant property. The experimental results show that the proposed method enhances the discriminative power of features thanks the complementary effect. --- paper_title: Particle filter based on joint color texture histogram for object tracking paper_content: Particle filter has grown to be a standard tool for solving visual tracking problems in real world applications. One of the critical tasks in object tracking is the tracking of fast-moving objects in complex environments, which contain cluttered background and scale change. In this paper, a new tracking algorithm is presented by using the joint color texture histogram to represent a target and then applying it to particle filter algorithm called PFJCTH. The texture features of the object are extracted by using the local binary pattern (LBP) technique to represent the object. 
The proposed algorithm extracts effectively the edge and corner features in the target region, which characterize better and represent more robustly the target. The experiments showed that this new proposed algorithm produces excellent tracking results and outperforms other tracking algorithms. --- paper_title: Breast cancer detection using MRF-based probable texture feature and decision-level fusion-based classification using HMM on thermography images paper_content: Breast cancer is one of the major causes of death for women in the last decade. Thermography is a breast imaging technique that can detect cancerous masses much faster than the conventional mammography technology. In this paper, a breast cancer detection algorithm based on asymmetric analysis as primitive decision and decision-level fusion by using Hidden Markov Model (HMM) is proposed. In this decision structure, by using primitive decisions obtained from extracted features from left and right breasts and also asymmetric analysis, final decision is determined by a new application of HMM. For this purpose, a novel texture feature based on Markov Random Field (MRF) model that is named MRF-based probable texture feature and another texture feature based on a new scheme in Local Binary Pattern (LBP) of the images are extracted. In the MRF-based probable texture feature, we try to capture breast texture information by using proper definition of neighborhood system and clique and also determination of new potential functions. Ultimately, our proposed breast cancer detection algorithm is evaluated on a variety dataset of thermography images and false negative rate of 8.3% and false positive rate of 5% are obtained on test image dataset. We propose a two-stage breast cancer detection algorithm by decision-level fusion.We tried to improve false accept of previous algorithms by our proposed algorithm.We used Hidden Markov Model as a fusion algorithm to fuse primitive decisions.We propose a novel texture feature based on Markov Random Field model.To extract color and edge information of images, we modified Local Binary Pattern. --- paper_title: Genetic based LBP feature extraction and selection for facial recognition paper_content: This paper presents a novel approach to LBP feature extraction. Unlike other LBP feature extraction methods, we evolve the number, position, and the size of the areas of feature extraction. The approach described in this paper also attempts to minimize the number of areas as well as the size in an effort to reduce the total number of features needed for LBP-based face recognition. In addition to reducing the number of features by 63%, our approach also increases recognition accuracy from an average of 99.04% to 99.84%. --- paper_title: Local Derivative Pattern Versus Local Binary Pattern: Face Recognition With High-Order Local Pattern Descriptor paper_content: This paper proposes a novel high-order local pattern descriptor, local derivative pattern (LDP), for face recognition. LDP is a general framework to encode directional pattern features based on local derivative variations. The nth-order LDP is proposed to encode the (n-1)th -order local derivative direction variations, which can capture more detailed information than the first-order local pattern used in local binary pattern (LBP). 
Different from LBP encoding the relationship between the central point and its neighbors, the LDP templates extract high-order local information by encoding various distinctive spatial relationships contained in a given local region. Both gray-level images and Gabor feature images are used to evaluate the comparative performances of LDP and LBP. Extensive experimental results on FERET, CAS-PEAL, CMU-PIE, Extended Yale B, and FRGC databases show that the high-order LDP consistently performs much better than LBP for both face identification and face verification under various conditions. --- paper_title: Face Recognition Using Local Binary Patterns (LBP) paper_content: The face of a human being conveys a lot of information about identity and emotional state of the person. Face recognition is an interesting and challenging problem, and impacts important applications in many areas such as identification for law enforcement, authentication for banking and security system access, and personal identification among others. In our research work mainly consists of three parts, namely face representation, feature extraction and classification. Face representation represents how to model a face and determines the successive algorithms of detection and recognition. The most useful and unique features of the face image are extracted in the feature extraction phase. In the classification the face image is compared with the images from the database. In our research work, we empirically evaluate face recognition which considers both shape and texture information to represent face images based on Local Binary Patterns for person-independent face recognition. The face area is first divided into small regions from which Local Binary Patterns (LBP), histograms are extracted and concatenated into a single feature vector. This feature vector forms an efficient representation of the face and is used to measure similarities between images. --- paper_title: Image texture as a remotely sensed measure of vegetation structure paper_content: Abstract Ecologists commonly collect data on vegetation structure, which is an important attribute for characterizing habitat. However, measuring vegetation structure across large areas is logistically difficult. Our goal was to evaluate the degree to which sample-point pixel values and image texture of remotely sensed data are associated with vegetation structure in a North American grassland–savanna–woodland mosaic. In the summers of 2008–2009 we collected vegetation structure measurements at 193 sample points from which we calculated foliage-height diversity and horizontal vegetation structure at Fort McCoy Military Installation, Wisconsin, USA. We also calculated sample-point pixel values and first- and second-order image texture measures, from two remotely sensed data sources: an infrared air photo (1-m resolution) and a Landsat TM satellite image (30-m resolution). We regressed foliage-height diversity against, and correlated horizontal vegetation structure with, sample-point pixel values and texture measures within and among habitats. Within grasslands, savanna, and woodland habitats, sample-point pixel values and image texture measures explained 26–60% of foliage-height diversity. Similarly, within habitats, sample-point pixel values and image texture measures were correlated with 40–70% of the variation of horizontal vegetation structure. 
Among habitats, the mean of the texture measure ‘second-order contrast’ from the air photo explained 79% of the variation in foliage-height diversity while ‘first-order variance’ from the air photo was correlated with 73% of horizontal vegetation structure. Our results suggest that sample-point pixel values and image texture measures calculated from remotely sensed data capture components of foliage-height diversity and horizontal vegetation structure within and among grassland, savanna, and woodland habitats. Vegetation structure, which is a key component of animal habitat, can thus be mapped using remotely sensed data. --- paper_title: Fabric defect detection based on GLCM and Gabor filter: A comparison paper_content: Abstract Fabric defect detection has been an active area of research since a long time and still a robust system is needed which can fulfill industrial requirements. A robust automatic fabric defect detection system (FDDS) would results in quality products and more revenues. Many different approaches and method have been tried to implement FDDS. Most of them are based on two approaches, one is statistical like gray level co-occurrence (GLCM) and other is transform based like Gabor filter. This paper presents a new scheme for automated FDDS implementation using GLCM and also compare it with Gabor filter approach. GLCM texture statistics are extracted and plotted against the inter-pixel distance of GLCM as signal graph. The non-defective fabric image information is compared with the test fabric image. In Gabor filter based approach, a bank of Gabor filter with different scales and orientations is generated and fabric images are filtered with convolution mask. The generated magnitude responses are compared for defect decision. In our implementation of both approaches in same environment, the GLCM approach produces higher defect detection accuracies than Gabor filter approach and more computationally efficient. --- paper_title: Textural Features for Image Classification paper_content: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependancies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispecial imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelpipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications. 
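The last entry above (Haralick's textural features) and the fabric-defect paper both build on gray-level co-occurrence statistics. The sketch below constructs a co-occurrence matrix for a single pixel displacement and evaluates three of the commonly used statistics (contrast, energy, homogeneity); it follows standard textbook definitions rather than the paper's full set of fourteen features, and the toy image is made up.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel displacement (dx, dy).

    image : 2D array of integers already quantized to the range [0, levels).
    Returns a (levels, levels) matrix of symmetric co-occurrence probabilities.
    """
    h, w = image.shape
    counts = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            i = image[y, x]
            j = image[y + dy, x + dx]
            counts[i, j] += 1.0
    counts = counts + counts.T          # count each pair in both directions
    return counts / counts.sum()

def haralick_subset(p):
    """Three of the classic co-occurrence statistics."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)                         # a.k.a. angular second moment
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}

# Toy usage: quantize a random "texture" to 8 gray levels and measure it.
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) * 8).astype(int)
features = haralick_subset(glcm(img, dx=1, dy=0))
print(features)
```

Repeating the computation for several displacements and orientations, then averaging or concatenating the statistics, gives the kind of feature vector the fabric-defect comparison feeds to its decision stage.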
--- paper_title: Breast cancer detection using MRF-based probable texture feature and decision-level fusion-based classification using HMM on thermography images paper_content: Breast cancer is one of the major causes of death for women in the last decade. Thermography is a breast imaging technique that can detect cancerous masses much faster than the conventional mammography technology. In this paper, a breast cancer detection algorithm based on asymmetric analysis as primitive decision and decision-level fusion by using Hidden Markov Model (HMM) is proposed. In this decision structure, by using primitive decisions obtained from extracted features from left and right breasts and also asymmetric analysis, final decision is determined by a new application of HMM. For this purpose, a novel texture feature based on Markov Random Field (MRF) model that is named MRF-based probable texture feature and another texture feature based on a new scheme in Local Binary Pattern (LBP) of the images are extracted. In the MRF-based probable texture feature, we try to capture breast texture information by using proper definition of neighborhood system and clique and also determination of new potential functions. Ultimately, our proposed breast cancer detection algorithm is evaluated on a variety dataset of thermography images and false negative rate of 8.3% and false positive rate of 5% are obtained on test image dataset. We propose a two-stage breast cancer detection algorithm by decision-level fusion.We tried to improve false accept of previous algorithms by our proposed algorithm.We used Hidden Markov Model as a fusion algorithm to fuse primitive decisions.We propose a novel texture feature based on Markov Random Field model.To extract color and edge information of images, we modified Local Binary Pattern. --- paper_title: A comparative study of texture measures with classification based on featured distributions paper_content: This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in diering images. The algorithm was rst proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Object recognition from local scale-invariant features paper_content: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. 
Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds. --- paper_title: Distinctive Image Features from Scale-Invariant Keypoints paper_content: The Scale-Invariant Feature Transform (or SIFT) algorithm is a highly robust method to extract and consequently match distinctive invariant features from images. These features can then be used to reliably match objects in diering images. The algorithm was rst proposed by Lowe [12] and further developed to increase performance resulting in the classic paper [13] that served as foundation for SIFT which has played an important role in robotic and machine vision in the past decade. --- paper_title: Texture Classification using Curvelet Transform paper_content: Texture classification has played an important role in many real life applications. Now, classification based on wavelet transform is being very popular. Wavelets are very effective in representing objects with isolated point singularities, but failed to represent line singularities. Recently, ridgelet transform which deal effectively with line singularities in 2-D is introduced. But images often contain curves rather than straight lines, so curvelet transform is designed to handle it. It allows representing edges and other singularities along lines in a more efficient way when compared with other transforms. In this paper, the issue of texture classification based on curvelet transform has been analyzed. One group feature vector can be constructed by the mean and variance of the curvelet statistical features, which are derived from the sub-bands of the curvelet decomposition and are used for classification. Experimental results show that this approach allows obtaining high degree of success rate in classification. --- paper_title: Texture classification using Gabor filters paper_content: Abstract An unsupervised texture classification scheme is proposed in this paper. The texture features are based on the image local spectrum which is obtained by a bank of Gabor filters. The fuzzy clustering algorithm is used for unsupervised classification. In many applications, this algorithm depends on assumptions made about the number of subgroups present in the data. Therefore we discuss ideas behind cluster validity measures and propose a method for choosing the optimal number of clusters. --- paper_title: Texture classification using ridgelet transform paper_content: Texture classification has long been an important research topic in image processing. Classification based on the wavelet transform has become very popular. Wavelets are very effective in representing objects with isolated point singularities, but failed to represent line singularities. Recently, a ridgelet transform which deals effectively with line singularities in 2-D is introduced. It allows representing edges and other singularities along lines in a more efficient way. 
In this paper, the issue of texture classification based on a ridgelet transform has been analyzed. Features are derived from sub-bands of the ridgelet decomposition and are used for classification for a data set containing 20 texture images. Experimental results show that this approach allows to obtain a high degree of success in classification. --- paper_title: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression paper_content: A three-layered neural network is described for transforming two-dimensional discrete signals into generalized nonorthogonal 2-D Gabor representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations, which provide a complete image description in terms of locally windowed 2-D spectral coordinates embedded within global 2-D spatial coordinates. In the present neural network approach, based on interlaminar interactions involving two layers with fixed weights and one layer with adjustable weights, the network finds coefficients for complete conjoint 2-D Gabor transforms without restrictive conditions. In wavelet expansions based on a biologically inspired log-polar ensemble of dilations, rotations, and translations of a single underlying 2-D Gabor wavelet template, image compression is illustrated with ratios up to 20:1. Also demonstrated is image segmentation based on the clustering of coefficients in the complete 2-D Gabor transform. > --- paper_title: Unsupervised texture segmentation using Gabor filters paper_content: Abstract This paper presents a texture segmentation algorithm inspired by the multi-channel filtering theory for visual information processing in the early stages of human visual system. The channels are characterized by a bank of Gabor filters that nearly uniformly covers the spatial-frequency domain, and a systematic filter selection scheme is proposed, which is based on reconstruction of the input image from the filtered images. Texture features are obtained by subjecting each (selected) filtered image to a nonlinear transformation and computing a measure of “energy” in a window around each pixel. A square-error clustering algorithm is then used to integrate the feature images and produce a segmentation. A simple procedure to incorporate spatial information in the clustering process is proposed. A relative index is used to estimate the “true” number of texture categories. --- paper_title: Multi-scale gray level co-occurrence matrices for texture description paper_content: Abstract Texture information plays an important role in image analysis. Although several descriptors have been proposed to extract and analyze texture, the development of automatic systems for image interpretation and object recognition is a difficult task due to the complex aspects of texture. Scale is an important information in texture analysis, since a same texture can be perceived as different texture patterns at distinct scales. Gray level co-occurrence matrices (GLCM) have been proved to be an effective texture descriptor. This paper presents a novel strategy for extending the GLCM to multiple scales through two different approaches, a Gaussian scale-space representation, which is constructed by smoothing the image with larger and larger low-pass filters producing a set of smoothed versions of the original image, and an image pyramid, which is defined by sampling the image both in space and scale. 
The performance of the proposed approach is evaluated by applying the multi-scale descriptor on five benchmark texture data sets and the results are compared to other well-known texture operators, including the original GLCM, that even though faster than the proposed method, is significantly outperformed in accuracy. --- paper_title: Binary Gabor pattern: An efficient and robust descriptor for texture classification paper_content: In this paper, we present a simple yet efficient and effective multi-resolution approach to gray-scale and rotation invariant texture classification. Given a texture image, we at first convolve it with J Gabor filters sharing the same parameters except the parameter of orientation. Then by binarizing the obtained responses, we can get J bits at each location. Then, each location can be assigned a unique integer, namely “rotation invariant binary Gabor pattern (BGP ri )”, formed from J bits associated with it using some rule. The classification is based on the image's histogram of its BGP ri s at multiple scales. Using BGP ri , there is no need for a pre-training step to learn a texton dictionary, as required in methods based on clustering such as MR8. Extensive experiments conducted on the CUReT database demonstrate the overall superiority of BGP ri over the other state-of-the-art texture representation methods evaluated. The Matlab source codes are publicly available at http://sse.tongji.edu.cn/linzhang/IQA/BGP/BGP.htm --- paper_title: Local spiking pattern and its application to rotation- and illumination-invariant texture classification paper_content: Abstract Automatic classification of texture images is an important and challenging task in the applications of image analysis and scene understanding. In this paper, we focus on the problem of the classification of texture images acquired under various rotation and illumination conditions and propose a new local image descriptor which is named local spiking pattern (LSP). Specifically, the proposed LSP uses a 2-dimensional neural network, which is made up of a series of interconnected spiking neurons, to generate binary images by iteration. The binary images are then encoded to generate discriminative feature vectors. In classification phase, we use a nearest neighborhood classifier to achieve supervised classification. Finally, LSP is evaluated by comparison with some state-of-the-art local image descriptors. Experimental results on Outex texture database show that LSP outperforms most of the other local image descriptors in the noiseless case and shows high robustness when texture images are distorted by salt & pepper noise. --- paper_title: Local spiking pattern and its application to rotation- and illumination-invariant texture classification paper_content: Abstract Automatic classification of texture images is an important and challenging task in the applications of image analysis and scene understanding. In this paper, we focus on the problem of the classification of texture images acquired under various rotation and illumination conditions and propose a new local image descriptor which is named local spiking pattern (LSP). Specifically, the proposed LSP uses a 2-dimensional neural network, which is made up of a series of interconnected spiking neurons, to generate binary images by iteration. The binary images are then encoded to generate discriminative feature vectors. In classification phase, we use a nearest neighborhood classifier to achieve supervised classification. 
Finally, LSP is evaluated by comparison with some state-of-the-art local image descriptors. Experimental results on Outex texture database show that LSP outperforms most of the other local image descriptors in the noiseless case and shows high robustness when texture images are distorted by salt & pepper noise. --- paper_title: Color Texture Classification Approach Based on Combination of Primitive Pattern Units and Statistical Features paper_content: Texture classification became one of the problems which has been paid much attention on by image processing scientists since late 80s. Consequently, since now many different methods have been proposed to solve this problem. In most of these methods the researchers attempted to describe and discriminate textures based on linear and non-linear patterns. The linear and non-linear patterns on any window are based on formation of Grain Components in a particular order. Grain component is a primitive unit of morphology that most meaningful information often appears in the form of occurrence of that. The approach which is proposed in this paper could analyze the texture based on its grain components and then by making grain components histogram and extracting statistical features from that would classify the textures. Finally, to increase the accuracy of classification, proposed approach is expanded to color images to utilize the ability of approach in analyzing each RGB channels, individually. Although, this approach is a general one and it could be used in different applications, the method has been tested on the stone texture and the results can prove the quality of approach. --- paper_title: SVM-PSO based rotation-invariant image texture classification in SVD and DWT domains paper_content: The paper presents a new image classification technique which first extracts rotation-invariant image texture features in singular value decomposition (SVD) and discrete wavelet transform (DWT) domains. Subsequently, it exploits a support vector machine (SVM) to perform image texture classification. For convenience, it is called the SRITCSD method hereafter. First, the method applies the SVD to enhance image textures of an image. Then, it extracts the texture features in the DWT domain of the SVD version of the image. Also, the SRITCSD method employs the SVM to serve as a multiclassifier for image texture features. Meanwhile, the particle swarm optimization (PSO) algorithm is utilized to optimize the SRITCSD method, which is exploited to select a nearly optimal combination of features and a set of parameters utilized in the SVM. The experimental results demonstrate that the SRITCSD method can achieve satisfying results and outperform other existing methods under considerations here. The following structure displays the conceptual design of the SRITCSD method for image texture classification. More specifically, it depicts the structure of the training phase and the testing phase for the SRITCSD method. In the training phase, the DWT-FE component denotes the feature-extraction scheme applied for a DWT version image. The feature set, f_DWT^j, in Eq. (8) is computed via feeding the DWT-FE component with a DWT version image. Let I_SVD^j represent that image I_j is enhanced via the SVD. Another feature set, f_SVD,DWT^j, in Eq. (11) is calculated via feeding the DWT-FE component with a DWT version of I_SVD^j.
Also, the SVM performs as a multiclassifier with respect to a set of training patterns which are constructed using image texture features, f_DWT^j and f_SVD,DWT^j. Meanwhile, the PSO algorithm is employed to optimize the SRITCSD method, which selects the nearly optimal combination of features and a set of parameters utilized in the SVM. In the testing phase of the SRITCSD method, two feature sets, f_DWT^q and f_SVD,DWT^q, are computed for a query image I_q. The classification result can be obtained via feeding the trained SVM model with f_SVD,DWT^q to estimate which category the image I_q belongs to. The paper presents an image classification technique which extracts rotation-invariant image texture features in singular value decomposition (SVD) and discrete wavelet transform (DWT) domains. First, the method applies the SVD to enhance image textures of an image. Then, it extracts the texture features in the DWT domain of the SVD version of the image. Also, the SVM serves as a multiclassifier for image texture features. Meanwhile, the particle swarm optimization (PSO) algorithm is exploited to select a nearly optimal combination of features and a set of parameters utilized in the SVM. The experimental results demonstrate that the method can achieve satisfying results and outperform other existing methods.
--- paper_title: Support Vector Method for Multivariate Density Estimation paper_content: A new method for multivariate density estimation is developed based on the Support Vector Method (SVM) solution of inverse ill-posed problems. The solution has the form of a mixture of densities. This method with Gaussian kernels compared favorably to both Parzen's method and the Gaussian Mixture Model method. For synthetic data we achieve more accurate estimates for densities of 2, 6, 12, and 40 dimensions. --- paper_title: Large margin dags for multiclass classification paper_content: We present a new learning architecture: the Decision Directed Acyclic Graph (DDAG), which is used to combine many two-class classifiers into a multiclass classifier. For an N-class problem, the DDAG contains N(N - 1)/2 classifiers, one for each pair of classes. We present a VC analysis of the case when the node classifiers are hyperplanes; the resulting bound on the test error depends on N and on the margin achieved at the nodes, but not on the dimension of the space. This motivates an algorithm, DAGSVM, which operates in a kernel-induced feature space and uses two-class maximal margin hyperplanes at each decision-node of the DDAG. The DAGSVM is substantially faster to train and evaluate than either the standard algorithm or Max Wins, while maintaining comparable accuracy to both of these algorithms. --- paper_title: Support-Vector Networks paper_content: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
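The DDAG entry above combines N(N-1)/2 pairwise classifiers by walking a decision list that eliminates one candidate class per comparison, so a prediction needs only N-1 evaluations. The sketch below renders that evaluation rule in a few lines; the pairwise decision functions are hypothetical stand-ins for trained two-class SVMs over texture feature vectors.

```python
def ddag_predict(x, classes, pairwise):
    """Evaluate a Decision DAG over pairwise two-class classifiers.

    classes  : list of class labels, e.g. ["brick", "grass", "sand"]
    pairwise : dict mapping (a, b) -> decision function f(x) that returns a
               positive score when x looks like class a, negative for class b.
    Each comparison rules out one class; after N-1 tests one class remains,
    so only N-1 of the N(N-1)/2 classifiers are ever evaluated per query.
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        score = pairwise[(a, b)](x) if (a, b) in pairwise else -pairwise[(b, a)](x)
        if score >= 0:
            remaining.pop()      # rule out b, keep a
        else:
            remaining.pop(0)     # rule out a, keep b
    return remaining[0]

# Toy usage with hand-written decision functions standing in for trained SVMs.
pairwise = {
    ("brick", "grass"): lambda x: x[0] - x[1],
    ("brick", "sand"):  lambda x: x[0] - x[2],
    ("grass", "sand"):  lambda x: x[1] - x[2],
}
print(ddag_predict([0.1, 0.9, 0.3], ["brick", "grass", "sand"], pairwise))  # grass
```

In practice the pairwise decision functions would come from a library SVM trained one-versus-one on the extracted texture features.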
--- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. ---
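The Long Short-Term Memory entry describes multiplicative gates that control reads and writes to a slowly changing cell state (the "constant error carousel"). One forward step of the now-standard LSTM cell makes that gating concrete; note this is the later variant with a forget gate rather than the exact 1997 architecture, and the shapes and weights below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x, h_prev : input and previous hidden state, shapes (d_in,) and (d_hid,)
    c_prev    : previous cell state, shape (d_hid,)
    W, U, b   : dicts keyed by 'i', 'f', 'o', 'g' holding per-gate input
                weights (d_hid, d_in), recurrent weights (d_hid, d_hid),
                and biases (d_hid,).
    """
    pre = {k: W[k] @ x + U[k] @ h_prev + b[k] for k in "ifog"}
    i = sigmoid(pre["i"])        # input gate: what to write into the cell
    f = sigmoid(pre["f"])        # forget gate: what to keep in the cell
    o = sigmoid(pre["o"])        # output gate: what to expose as hidden state
    g = np.tanh(pre["g"])        # candidate cell update
    c = f * c_prev + i * g       # gated, mostly-linear cell update
    h = o * np.tanh(c)
    return h, c

# Toy usage with random weights.
rng = np.random.default_rng(0)
d_in, d_hid = 4, 3
W = {k: rng.normal(size=(d_hid, d_in)) for k in "ifog"}
U = {k: rng.normal(size=(d_hid, d_hid)) for k in "ifog"}
b = {k: np.zeros(d_hid) for k in "ifog"}
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_hid), np.zeros(d_hid), W, U, b)
print(h.shape, c.shape)  # (3,) (3,)
```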
``` Title: Texture Image Analysis and Texture Classification Methods - A Review Section 1: INTRODUCTION Description 1: Provide an overview of texture analysis, including the definition of textures, its recognition in human and machine vision, and the main areas of texture analysis. Section 2: Challenges in Texture Images Description 2: Discuss the major challenges in texture image analysis such as rotation, noise, scale, viewpoint, and intensity of light. Section 3: Feature Extraction Method for Categorizing Textures Description 3: Explain the two-step process of texture classification—feature extraction and classification phase. Include a description of various feature extraction methods. Section 4: Classification Phase Description 4: Describe the classification stage including machine learning algorithms used to select the appropriate class for each image. Section 5: Texture Analysis Application Description 5: Highlight different practical applications of texture analysis in various domains such as face detection, tracking objects in videos, product quality diagnostics, medical image analysis, remote sensing, and vegetation. Section 6: Paper Organization Description 6: Provide an overview of the paper's organization and the main sections to be covered. Section 7: Texture Classification Methods Description 7: Discuss the four main categories of texture classification methods—statistical, structural, model-based, and transform methods. Provide details and examples of each method. Section 8: Combinational State-of-the-Art Texture Analysis Algorithms Description 8: Review several combined methods for texture analysis that integrate multiple approaches for improved effectiveness. Section 9: Benchmark Texture Image Datasets Description 9: Introduce various standard datasets used for evaluating the efficiency of texture classification algorithms. Section 10: Classification Phase in Texture Classification Methods Description 10: Provide detailed descriptions of various machine learning classifiers used in the classification phase, including K-Nearest Neighbor, Support Vector Machine, Naive Bayes Classifier, Decision Tree, Artificial Neural Networks, and other classifiers. Section 11: RESULTS Description 11: Summarize the properties of texture databases, and methods with their advantages, disadvantages, accuracy, and dataset of each method in tabular form. Section 12: CONCLUSIONS Description 12: Conclude the study by summarizing the methods for texture analysis and discussing the main challenges each new method aims to overcome. ```
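The outline's classification-phase section lists K-Nearest Neighbor among the classifiers applied to extracted texture features. A bare-bones 1-NN rule over feature vectors, such as the co-occurrence statistics sketched earlier, is shown below; the class names and feature values are invented for illustration.

```python
import numpy as np

def nearest_neighbor_classify(train_X, train_y, query):
    """Assign the label of the closest training feature vector (1-NN, Euclidean)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    return train_y[int(np.argmin(dists))]

# Toy usage: three texture classes described by (contrast, energy, homogeneity).
train_X = np.array([[2.1, 0.10, 0.55],   # "canvas"
                    [0.4, 0.45, 0.90],   # "plaster"
                    [5.3, 0.05, 0.30]])  # "gravel"
train_y = np.array(["canvas", "plaster", "gravel"])
print(nearest_neighbor_classify(train_X, train_y, np.array([0.5, 0.40, 0.85])))  # plaster
```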
A Short Survey On Memory Based Reinforcement Learning
19
--- paper_title: Decision Making and Reward in Frontal Cortex: Complementary Evidence From Neurophysiological and Neuropsychological Studies paper_content: Patients with damage to the prefrontal cortex (PFC)—especially the ventral and medial parts of PFC—often show a marked inability to make choices that meet their needs and goals. These decisionmaking impairments often reflect both a deficit in learning concerning the consequences of a choice, as well as deficits in the ability to adapt future choices based on experienced value of the current choice. Thus, areas of PFC must support some value computations that are necessary for optimal choice. However, recent frameworks of decision making have highlighted that optimal and adaptive decision making does not simply rest on a single computation, but a number of different value computations may be necessary. Using this framework as a guide, we summarize evidence from both lesion studies and single-neuron physiology for the representation of different value computations across PFC areas. --- paper_title: Sequence to Sequence Learning with Neural Networks paper_content: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier. --- paper_title: Deep Recurrent Q-Learning for Partially Observable MDPs paper_content: Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. 
The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes. --- paper_title: Playing Atari with Deep Reinforcement Learning paper_content: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. --- paper_title: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks paper_content: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. --- paper_title: Speech recognition with deep recurrent neural networks paper_content: Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. 
The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score. --- paper_title: Reinforcement Learning: An Introduction paper_content: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning. --- paper_title: Playing Atari with Deep Reinforcement Learning paper_content: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. --- paper_title: Trust Region Policy Optimization paper_content: In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters. --- paper_title: Actor-critic algorithms paper_content: In this article, we propose and analyze a class of actor-critic algorithms. 
These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence. --- paper_title: Hippocampal Contributions to Control: The Third Way paper_content: Recent experimental studies have focused on the specialization of different neural structures for different types of instrumental behavior. Recent theoretical work has provided normative accounts for why there should be more than one control system, and how the output of different controllers can be integrated. Two particular controllers have been identified, one associated with a forward model and the prefrontal cortex and a second associated with computationally simpler, habitual, actor-critic methods and part of the striatum. We argue here for the normative appropriateness of an additional, but so far marginalized control system, associated with episodic memory, and involving the hippocampus and medial temporal cortices. We analyze in depth a class of simple environments to show that episodic control should be useful in a range of cases characterized by complexity and inferential noise, and most particularly at the very early stages of learning, long before habitization has set in. We interpret data on the transfer of control from the hippocampus to the striatum in the light of this hypothesis. --- paper_title: Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory paper_content: Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. Our account of this suggests that memories are first stored via synaptic changes in the hippocampal system; that these changes support reinstatement of recent memories in the neocortex; that neocortical synapses change a little on each reinstatement; and that remote memory is based on accumulated neocortical changes. Models that learn via adaptive changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems. Psychological Review, in press --- paper_title: Playing Atari with Deep Reinforcement Learning paper_content: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. 
We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. --- paper_title: Incremental multi-step Q-learning paper_content: This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic programming-based reinforcement learning method, with the TD(λ) return estimation process, which is typically used in actor-critic learning, another well-known dynamic programming-based reinforcement learning method. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm is demonstrated through computer simulations of the standard benchmark control problem of learning to balance a pole on a cart. --- paper_title: Neural Episodic Control paper_content: Deep reinforcement learning methods attain super-human performance in a wide range of environments. Such methods are grossly inefficient, often taking orders of magnitudes more data than humans to achieve reasonable performance. We propose Neural Episodic Control: a deep reinforcement learning agent that is able to rapidly assimilate new experiences and act upon them. Our agent uses a semi-tabular representation of the value function: a buffer of past experience containing slowly changing state representations and rapidly updated estimates of the value function. We show across a wide range of environments that our agent learns significantly faster than other state-of-the-art, general purpose deep reinforcement learning agents. --- paper_title: Actor-critic algorithms paper_content: In this article, we propose and analyze a class of actor-critic algorithms. These are two-time-scale algorithms in which the critic uses temporal difference learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction, based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with Polish state and action spaces. We state and prove two results regarding their convergence. --- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. 
In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. --- paper_title: Memory Augmented Control Networks paper_content: Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory. But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan. To mitigate these challenges we introduce the Memory Augmented Control Network (MACN). The proposed network architecture consists of three main parts. The first part uses convolutions to extract features and the second part uses a neural network-based planning module to pre-plan in the environment. The third part uses a network controller that learns to store those specific instances of past information that are necessary for planning. The performance of the network is evaluated in discrete grid world environments for path planning in the presence of simple and complex obstacles. We show that our network learns to plan and can generalize to new environments. --- paper_title: Learning Options in Reinforcement Learning paper_content: Temporally extended actions (e.g., macro actions) have proven very useful for speeding up learning, ensuring robustness and building prior knowledge into AI systems. The options framework (Precup, 2000; Sutton, Precup & Singh, 1999) provides a natural way of incorporating such actions into reinforcement learning systems, but leaves open the issue of how good options might be identified. In this paper, we empirically explore a simple approach to creating options. The underlying assumption is that the agent will be asked to perform different goal-achievement tasks in an environment that is otherwise the same over time. Our approach is based on the intuition that states that are frequently visited on system trajectories, could prove to be useful subgoals (e.g., McGovern & Barto, 2001; Iba, 1989). We propose a greedy algorithm for identifying subgoals based on state visitation counts. We present empirical studies of this approach in two gridworld navigation tasks. One of the environments we explored contains bottleneck states, and the algorithm indeed finds these states, as expected. The second environment is an empty gridworld with no obstacles. Although the environment does not contain any obvious subgoals, our approach still finds useful options, which essentially allow the agent to explore the environment more quickly. --- paper_title: Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update paper_content: We propose Episodic Backward Update (EBU) - a novel deep reinforcement learning algorithm with a direct value propagation. In contrast to the conventional use of the experience replay with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state to its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate directly through all transitions of the sampled episode. 
We theoretically prove the convergence of the EBU method and experimentally demonstrate its performance in both deterministic and stochastic environments. Especially in 49 games of Atari 2600 domain, EBU achieves the same mean and median human normalized performance of DQN by using only 5% and 10% of samples, respectively. --- paper_title: Hippocampal Contributions to Control: The Third Way paper_content: Recent experimental studies have focused on the specialization of different neural structures for different types of instrumental behavior. Recent theoretical work has provided normative accounts for why there should be more than one control system, and how the output of different controllers can be integrated. Two particular controllers have been identified, one associated with a forward model and the prefrontal cortex and a second associated with computationally simpler, habitual, actor-critic methods and part of the striatum. We argue here for the normative appropriateness of an additional, but so far marginalized control system, associated with episodic memory, and involving the hippocampus and medial temporal cortices. We analyze in depth a class of simple environments to show that episodic control should be useful in a range of cases characterized by complexity and inferential noise, and most particularly at the very early stages of learning, long before habitization has set in. We interpret data on the transfer of control from the hippocampus to the striatum in the light of this hypothesis. --- paper_title: Playing Atari with Deep Reinforcement Learning paper_content: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. --- paper_title: Playing Atari with Deep Reinforcement Learning paper_content: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. --- paper_title: Curiosity-Driven Exploration by Self-Supervised Prediction paper_content: In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. 
Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL --- paper_title: Episodic Curiosity through Reachability paper_content: Rewards are sparse in the real world and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. --- paper_title: Asynchronous Methods for Deep Reinforcement Learning paper_content: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input. --- paper_title: Playing Atari with Deep Reinforcement Learning paper_content: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. 
The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. ---
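Several of the abstracts above (DQN, Q(λ)-learning, Neural Episodic Control) build on the one-step Q-learning update. The sketch below shows that update in its classic tabular form; the environment interface (`reset`/`step` returning `(next_state, reward, done)` and an `n_actions` attribute) is an assumed Gym-style convention, not part of any cited paper.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """One-step tabular Q-learning with epsilon-greedy exploration.

    Assumes a Gym-style environment: env.reset() -> state,
    env.step(action) -> (next_state, reward, done), plus an env.n_actions attribute.
    """
    Q = defaultdict(float)                      # Q[(state, action)] -> value estimate

    def greedy(state):
        return max(range(env.n_actions), key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)
            else:
                action = greedy(state)
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in range(env.n_actions))
            target = reward if done else reward + gamma * best_next
            Q[(state, action)] += alpha * (target - Q[(state, action)])  # TD update
            state = next_state
    return Q
```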
```
Title: A Short Survey On Memory Based Reinforcement Learning
Section 1: INTRODUCTION
Description 1: Introduce the topic of memory-based reinforcement learning, describing the significance, recent advancements, and challenges in the field.
Section 2: BACKGROUND
Description 2: Provide an overview of fundamental concepts such as Markov Decision Processes (MDP), value functions, Q-learning, and their extensions into deep reinforcement learning.
Section 3: DEEP Q-NETWORK
Description 3: Explain the concept of Deep Q-Networks (DQN) and how they integrate neural networks with Q-learning for approximating action-value functions.
Section 4: POLICY GRADIENT ALGORITHMS
Description 4: Discuss policy gradient methods, including the basics of policy optimization and popular algorithms like Actor-Critic, TRPO, and PPO.
Section 5: EPISODIC MEMORY
Description 5: Introduce the concept of episodic memory and its significance in reinforcement learning, backed by biological inspirations from human cognition.
Section 6: MEMORY MODULES
Description 6: Review different memory architectures proposed for reinforcement learning, such as Model Free Episodic Control, Neural Episodic Control, and others.
Section 7: MODEL FREE EPISODIC CONTROL
Description 7: Detail the methods and contributions of Model Free Episodic Control as an early work involving external memory in reinforcement learning.
Section 8: NEURAL EPISODIC CONTROL
Description 8: Describe the end-to-end architecture of Neural Episodic Control and its key components like Differential Neural Dictionary.
Section 9: MASKED EXPERIENCE MEMORY
Description 9: Explain the Masked Experience Memory module and how it uses attention mechanisms for read and write operations in memory.
Section 10: INTEGRATING EPISODIC MEMORY WITH RESERVOIR SAMPLING
Description 10: Discuss the integration of episodic memory with reservoir sampling techniques and how these improve reinforcement learning efficiency.
Section 11: NEURAL MAP
Description 11: Present the Neural Map memory module, specifically designed for episodic decision making in 3D partially observable environments.
Section 12: MEMORY, RL, AND INFERENCE NETWORK (MERLIN)
Description 12: Highlight MERLIN, a unified system combining external memory, reinforcement learning, and variational inference.
Section 13: MEMORY AUGMENTED CONTROL NETWORK
Description 13: Review the Memory Augmented Control Network, which addresses partially observable environments with sparse rewards through hierarchical planning.
Section 14: USAGE OF EPISODIC MEMORY
Description 14: Describe various algorithms that heavily depend on episodic memory to accomplish specific reinforcement learning tasks.
Section 15: EPISODIC BACKWARD UPDATE
Description 15: Explain the Episodic Backward Update method and its application in training Deep Q-Networks with backward updates using episodic memory.
Section 16: EPISODIC MEMORY DEEP Q-NETWORKS
Description 16: Discuss improvements to standard DQN through the incorporation of episodic memory, resulting in faster reward propagation and sample efficiency.
Section 17: EPISODIC CURIOSITY THROUGH REACHABILITY
Description 17: Introduce the concept of episodic curiosity for environments with sparse rewards, detailing how states that require effort to reach are rewarded.
Section 18: STANDARD TESTING ENVIRONMENTS
Description 18: Outline common testing environments used in the field, such as the Arcade Learning Environment, maze environments, and the Concentration game.
Section 19: CONCLUSIONS
Description 19: Summarize the survey, discussing the importance of external memory in reinforcement learning and future directions for research.
```
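Sections 7 and 8 of this outline cover episodic control. The sketch below captures the general idea behind such methods: store the best discounted return observed for each state-action pair and estimate values by nearest-neighbour lookup. It is an illustrative simplification, not the exact algorithm of the Model-Free Episodic Control or Neural Episodic Control papers; the key representation and the choice of k are placeholders.

```python
import numpy as np

class EpisodicValueTable:
    """Episodic-control-style table: for each action, remember the best discounted
    return obtained from stored states, and estimate values by k-NN lookup.
    A sketch of the idea only; real agents use learned or random-projection keys."""

    def __init__(self, n_actions, k=5):
        self.n_actions = n_actions
        self.k = k
        self.keys = [[] for _ in range(n_actions)]     # state embeddings per action
        self.returns = [[] for _ in range(n_actions)]  # best return per stored key

    def write(self, action, key, episodic_return):
        keys, rets = self.keys[action], self.returns[action]
        for i, stored in enumerate(keys):              # keep the max return per key
            if np.allclose(stored, key):
                rets[i] = max(rets[i], episodic_return)
                return
        keys.append(np.asarray(key, dtype=float))
        rets.append(float(episodic_return))

    def estimate(self, action, key):
        keys, rets = self.keys[action], self.returns[action]
        if not keys:
            return 0.0
        dist = np.linalg.norm(np.stack(keys) - np.asarray(key, dtype=float), axis=1)
        nearest = np.argsort(dist)[: self.k]
        return float(np.mean([rets[i] for i in nearest]))

    def act(self, key):
        return int(np.argmax([self.estimate(a, key) for a in range(self.n_actions)]))
```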
Smart Home Survey on Security and Privacy
8
--- paper_title: Lightweight and Secure Session-Key Establishment Scheme in Smart Home Environments paper_content: The proliferation of current wireless communications and information technologies have been altering humans lifestyle and social interactions—the next frontier is the smart home environments or spaces. A smart home consists of low capacity devices (e.g., sensors) and wireless networks, and therefore, all working together as a secure system that needs an adequate level of security. This paper introduces lightweight and secure session key establishment scheme for smart home environments. To establish trust among the network, every sensor and control unit uses a short authentication token and establishes a secure session key. The proposed scheme provides important security attributes including prevention of various popular attacks, such as denial-of-service and eavesdropping attacks. The preliminary evaluation and feasibility tests are demonstrated by the proof-of-concept implementation. In addition, the proposed scheme attains both computation efficiency and communication efficiency as compared with other schemes from the literature. --- paper_title: Verifiable Round-Robin Scheme for Smart Homes paper_content: Advances in sensing, networking, and actuation technologies have resulted in the IoT wave that is expected to revolutionize all aspects of modern society. This paper focuses on the new challenges of privacy that arise in IoT in the context of smart homes. Specifically, the paper focuses on preventing the user's privacy via inferences through channel and in-home device activities. We propose a method for securely scheduling the devices while decoupling the device and channels activities. The proposed solution avoids any attacks that may reveal the coordinated schedule of the devices, and hence, also, assures that inferences that may compromise individual's privacy are not leaked due to device and channel level activities. Our experiments also validate the proposed approach, and consequently, an adversary cannot infer device and channel activities by just observing the network traffic. --- paper_title: Efficient Identification and Signatures for Smart Cards paper_content: We present an efficient interactive identification scheme and a related signature scheme that are based on discrete logarithms and which are particularly suited for smart cards. Previous cryptoschemes, based on the discrete logarithm, have been proposed by El Gamal (1985), Chaum, Evertse, Graaf (1988), Beth (1988) and Gunter (1989). The new scheme comprises the following novel features. --- paper_title: Time-based coordination in geo-distributed cyber-physical systems paper_content: Emerging Cyber-Physical Systems (CPS) such as connected vehicles and smart cities span large geographical areas. These systems are increasingly distributed and interconnected. Hence, a hierarchy of cloudlet and cloud deployments will be key to enable scaling, while simultaneously hosting the intelligence behind these systems. Given that CPS applications are often safety-critical, existing techniques focus on reducing latency to provide real-time performance. While low latency is useful, a shared and precise notion of time is key to enabling coordinated action in distributed CPS. In this position paper, we argue for a global Quality of Time (QoT)-based architecture, centered around a shared virtualized notion of time, based on the timeline abstraction [1]. 
Our architecture allows applications to specify their QoT requirements, while exposing timing uncertainty to the application. The timeline abstraction with the associated knowledge of QoT enables scalable geo-distributed coordination in CPS, while providing avenues for fault tolerance and graceful degradation in the face of adversity. --- paper_title: New Directions in Cryptography paper_content: Two kinds of contemporary developments in cryptography are examined. Widening applications of teleprocessing have given rise to a need for new types of cryptographic systems, which minimize the need for secure key distribution channels and supply the equivalent of a written signature. This paper suggests ways to solve these currently open problems. It also discusses how the theories of communication and computation are beginning to provide the tools to solve cryptographic problems of long standing. --- paper_title: Charm: a framework for rapidly prototyping cryptosystems paper_content: We describe Charm, an extensible framework for rapidly prototyping cryptographic systems. Charm provides a number of features that explicitly support the development of new protocols, including support for modular composition of cryptographic building blocks, infrastructure for developing interactive protocols, and an extensive library of re-usable code. Our framework also provides a series of specialized tools that enable different cryptosystems to interoperate. We implemented over 40 cryptographic schemes using Charm, including some new ones that, to our knowledge, have never been built in practice. This paper describes our modular architecture, which includes a built-in benchmarking module to compare the performance of Charm primitives to existing C implementations. We show that in many cases our techniques result in an order of magnitude decrease in code size, while inducing an acceptable performance impact. Lastly, the Charm framework is freely available to the research community and to date, we have developed a large, active user base. --- paper_title: Schnorr-Like Identification Scheme Resistant to Malicious Subliminal Setting of Ephemeral Secret paper_content: In this paper we propose a modification of the Schnorr Identification Scheme (IS), which is immune to malicious subliminal setting of ephemeral secret. We introduce a new strong security model in which, during the query stage, we allow the adversary verifier to set random values used on the prover side in the commitment phase. We define the IS scheme to be secure if such a setting will not enable the adversary to impersonate the prover later on. Subsequently we prove the security of the modified Schnorr IS in our strong model. We assume the proposition is important for scenarios in which we do not control the production process of the device on which the scheme is implemented, and where the erroneous pseudo-random number generators make such attacks possible. --- paper_title: Provably Secure and Practical Identification Schemes and Corresponding Signature Schemes paper_content: This paper presents a three-move interactive identification scheme and proves it to be as secure as the discrete logarithm problem. This provably secure scheme is almost as efficient as the Schnorr identification scheme, while the Schnorr scheme is not provably secure. 
This paper also presents another practical identification scheme which is proven to be as secure as the factoring problem and is almost as efficient as the Guillou-Quisquater identification scheme: the Guillou-Quisquater scheme is not provably secure. We also propose practical digital signature schemes based on these identification schemes. The signature schemes are almost as efficient as the Schnorr and Guillou-Quisquater signature schemes, while the security assumptions of our signature schemes are weaker than those of the Schnorr and Guillou-Quisquater signature schemes. This paper also gives a theoretically generalized result: a three-move identification scheme can be constructed which is as secure as the random-self-reducible problem. Moreover, this paper proposes a variant which is proven to be as secure as the difficulty of solving both the discrete logarithm problem and the specific factoring problem simultaneously. Some other variants such as an identity-based variant and an elliptic curve variant are also proposed. --- paper_title: From identification to signatures via the Fiat-Shamir transform: Minimizing assumptions for security and forward-security paper_content: The Fiat-Shamir paradigm for transforming identification schemes into signature schemes has been popular since its introduction because it yields efficient signature schemes, and has been receiving renewed interest of late as the main tool in deriving forward-secure signature schemes. We find minimal (meaning necessary and sufficient) conditions on the identification scheme to ensure security of the signature scheme in the random oracle model, in both the usual and the forward-secure cases. Specifically we show that the signature scheme is secure (resp. forward-secure) against chosen-message attacks in the random oracle model if and only if the underlying identification scheme is secure (resp. forward-secure) against impersonation under passive (i.e., eavesdropping only) attacks, and has its commitments drawn at random from a large space. An extension is proven incorporating a random seed into the Fiat-Shamir transform so that the commitment space assumption may be removed. --- paper_title: Wallet Databases with Observers paper_content: Previously there have been essentially only two models for computers that people can use to handle ordinary consumer transactions: (1) the tamper-proof module, such as a smart card, that the person cannot modify or probe; and (2) the personal workstation whose inner working is totally under control of the individual. The first part of this article argues that a particular combination of these two kinds of mechanism can overcome the limitations of each alone, providing both security and correctness for organizations as well as privacy and even anonymity for individuals. Then it is shown how this combined device, called a wallet, can carry a database containing personal information. The construction presented ensures that no single part of the device (i.e. neither the tamper-proof part nor the workstation) can learn the contents of the database -- this information can only be recovered by the two parts together. ---
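The identification-scheme references above center on three-move protocols of the Schnorr type. The toy sketch below walks through one round of Schnorr identification (commit, challenge, respond, verify). The group parameters are deliberately tiny and completely insecure; they only make the arithmetic easy to follow.

```python
import secrets

# One round of three-move Schnorr identification over a toy subgroup of Z_23*
# (p = 23, q = 11, g = 4 generates the order-q subgroup). These parameters are
# far too small to be secure; they only make the flow concrete.
p, q, g = 23, 11, 4

x = secrets.randbelow(q - 1) + 1     # prover's long-term secret key
y = pow(g, x, p)                     # corresponding public key

k = secrets.randbelow(q - 1) + 1     # prover: fresh ephemeral secret
t = pow(g, k, p)                     # prover -> verifier: commitment t = g^k

c = secrets.randbelow(q)             # verifier -> prover: random challenge

s = (k + c * x) % q                  # prover -> verifier: response

# Verifier accepts iff g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verifier accepts the prover")
```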
```
Title: Smart Home Survey on Security and Privacy
Section 1: INTRODUCTION
Description 1: Describe the rapid growth of IoT in smart homes, the typical structure of a Home Area Network (HAN), and its associated security and privacy challenges.
Section 2: SECURITY THREATS AND GOALS
Description 2: Discuss the vulnerabilities of existing HANs, present a taxonomy of threats, and provide an overview of basic security requirements necessary for proximity-based communication in HANs.
Section 3: Proximity Model
Description 3: Explain the proximity communication model based on space and time dimensions. Discuss different device interaction scenarios categorized by time and space, along with corresponding security solutions.
Section 4: Key Generation
Description 4: Describe general methods for secure key management, such as key pre-sharing, key evolution, and PUF-enabled key-store, and classify device communication into categories based on these methods.
Section 5: Device-to-Device Wireless (D2DWL) Connectivity
Description 5: Elaborate on the security adaptations for D2D wireless connectivity, including reactive authentication and distance-bounding protocols.
Section 6: Device-to-Device Wired (D2DW) Connectivity
Description 6: Present the security solutions for D2D wired connectivity, including the application of symmetric and asymmetric key protocols like ISO-KE, SIGMA, and TLS.
Section 7: Owner-to-Cloud (O2C) Connectivity
Description 7: Discuss the security considerations for O2C connectivity, including secure storage, query processing, and authentication. Illustrate protocols like Schnorr and Okamoto Identification Schemes, Pedersen's Commitment Scheme, proxy re-encryption, and homomorphic encryption.
Section 8: CONCLUSION
Description 8: Summarize the study of security protocols for proximity-based communication models in HANs, and highlight the suitability of different security protocols based on time and space dimensions.
```
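Section 7 of this outline mentions Pedersen's commitment scheme among the owner-to-cloud building blocks. Below is a toy sketch of committing and opening in an order-11 subgroup of Z_23*; in practice the second generator h must be chosen so that nobody knows log_g(h), and the parameters must be cryptographically large, so this is illustration only.

```python
import secrets

# Pedersen commitment over a toy order-11 subgroup of Z_23* (illustration only,
# the parameters are far too small to be secure). Commit to m with blinding r as
# C = g^m * h^r mod p; r hides m, and binding relies on log_g(h) being unknown.
p, q = 23, 11
g, h = 4, 9                      # two elements of the order-q subgroup

def commit(m, r=None):
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m % q, p) * pow(h, r, p)) % p, r

def open_check(C, m, r):
    return C == (pow(g, m % q, p) * pow(h, r, p)) % p

C, r = commit(7)
assert open_check(C, 7, r) and not open_check(C, 8, r)
print("commitment opens correctly to m = 7")
```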
A Selective Overview of Deep Learning
13
--- paper_title: ImageNet Large Scale Visual Recognition Challenge paper_content: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements. --- paper_title: Neural Ordinary Differential Equations paper_content: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. 
--- paper_title: How Well Can Generative Adversarial Networks (GAN) Learn Densities: A Nonparametric View paper_content: We study in this paper the rate of convergence for learning densities under the Generative Adversarial Networks (GANs) framework, borrowing insights from nonparametric statistics. We introduce an improved GAN estimator that achieves a faster rate, through leveraging the level of smoothness in the target density and the evaluation metric, which in theory remedies the mode collapse problem reported in the literature. A minimax lower bound is constructed to show that when the dimension is large, the exponent in the rate for the new GAN estimator is near optimal. One can view our results as answering in a quantitative way how well GAN learns a wide range of densities with different smoothness properties, under a hierarchy of evaluation metrics. As a byproduct, we also obtain improved bounds for GAN with deeper ReLU discriminator network. --- paper_title: Clinically applicable deep learning for diagnosis and referral in retinal disease paper_content: The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting. --- paper_title: Mastering the game of Go without human knowledge paper_content: A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo. 
--- paper_title: Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations paper_content: We propose a new algorithm for solving parabolic partial differential equations (PDEs) and backward stochastic differential equations (BSDEs) in high dimension, by making an analogy between the BSDE and reinforcement learning with the gradient of the solution playing the role of the policy function, and the loss function given by the error between the prescribed terminal condition and the solution of the BSDE. The policy function is then approximated by a neural network, as is done in deep reinforcement learning. Numerical results using TensorFlow illustrate the efficiency and accuracy of the proposed algorithms for several 100-dimensional nonlinear PDEs from physics and finance such as the Allen-Cahn equation, the Hamilton-Jacobi-Bellman equation, and a nonlinear pricing model for financial derivatives. --- paper_title: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation paper_content: Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output. This method provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system. --- paper_title: Nonparametric regression using deep neural networks with ReLU activation function paper_content: Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$-factors) under a general composition assumption on the regression function. 
The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for ReLU activation function the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates. --- paper_title: Robust Estimation and Generative Adversarial Nets paper_content: Robust estimation under Huber's $\epsilon$-contamination model has become an important topic in statistics and theoretical computer science. Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning. Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning. This connection opens the door of computing robust estimators using tools developed for training GANs. In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's $\epsilon$-contamination model. Interestingly, the hidden layers for the neural net structure in the discriminator class is shown to be necessary for robust estimation. --- paper_title: Sliced Inverse Regression for Dimension Reduction paper_content: Abstract Modern advances in computing power have greatly widened scientists' scope in gathering and investigating information from many variables, information which might have been ignored in the past. Yet to effectively scan a large pool of variables is not an easy task, although our ability to interact with data has been much enhanced by recent innovations in dynamic graphics. In this article, we propose a novel data-analytic tool, sliced inverse regression (SIR), for reducing the dimension of the input variable x without going through any parametric or nonparametric model-fitting process. This method explores the simplicity of the inverse view of regression; that is, instead of regressing the univariate output variable y against the multivariate x, we regress x against y. Forward regression and inverse regression are connected by a theorem that motivates this method. The theoretical properties of SIR are investigated under a model of the form, $y = f(\beta_1 x, \ldots, \beta_K x, \epsilon)$, where the $\beta_k$'s are the unknown... --- paper_title: The DeepTune framework for modeling and characterizing neurons in visual cortex area V4 paper_content: Deep neural network models have recently been shown to be effective in predicting single neuron responses in primate visual cortex areas V4. Despite their high predictive accuracy, these models are generally difficult to interpret. This limits their applicability in characterizing V4 neuron function. 
Here, we propose the DeepTune framework as a way to elicit interpretations of deep neural network-based models of single neurons in area V4. V4 is a midtier visual cortical area in the ventral visual pathway. Its functional role is not yet well understood. Using a dataset of recordings of 71 V4 neurons stimulated with thousands of static natural images, we build an ensemble of 18 neural network-based models per neuron that accurately predict its response given a stimulus image. To interpret and visualize these models, we use a stability criterion to form optimal stimuli (DeepTune images) by pooling the 18 models together. These DeepTune images not only confirm previous findings on the presence of diverse shape and texture tuning in area V4, but also provide rich, concrete and naturalistic characterization of receptive fields of individual V4 neurons. The population analysis of DeepTune images for 71 neurons reveals how different types of curvature tuning are distributed in V4. In addition, it also suggests strong suppressive tuning for nearly half of the V4 neurons. Though we focus exclusively on the area V4, the DeepTune framework could be applied more generally to enhance the understanding of other visual cortex areas. --- paper_title: Projection Pursuit Regression paper_content: Abstract A new method for nonparametric multiple regression is presented. The procedure models the regression surface as a sum of general smooth functions of linear combinations of the predictor variables in an iterative manner. It is more general than standard stepwise and stagewise regression procedures, does not require the definition of a metric in the predictor space, and lends itself to graphical interpretation. --- paper_title: Understanding Neural Networks Through Deep Visualization paper_content: Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup. --- paper_title: Fisher Lecture: Dimension Reduction in Regression paper_content: Beginning with a discussion of R. A. Fisher's early written remarks that relate to dimension reduction, this article revisits principal components as a reductive method in regression, develops several model-based extensions and ends with descriptions of general approaches to model-based and model-free dimension reduction in regression. 
It is argued that the role for principal components and related methodology may be broader than previously seen and that the common practice of conditioning on observed values of the predictors may unnecessarily limit the choice of regression methodology. --- paper_title: The Marginal Value of Adaptive Gradient Methods in Machine Learning paper_content: Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks. --- paper_title: Visualizing the Loss Landscape of Neural Nets paper_content: Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers. --- paper_title: Understanding Neural Networks Through Deep Visualization paper_content: Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. 
Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup. --- paper_title: Learning internal representations by error propagation paper_content: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion --- paper_title: Neural Network Learning: Theoretical Foundations paper_content: This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics. --- paper_title: The DeepTune framework for modeling and characterizing neurons in visual cortex area V4 paper_content: Deep neural network models have recently been shown to be effective in predicting single neuron responses in primate visual cortex areas V4. Despite their high predictive accuracy, these models are generally difficult to interpret. This limits their applicability in characterizing V4 neuron function. Here, we propose the DeepTune framework as a way to elicit interpretations of deep neural network-based models of single neurons in area V4. V4 is a midtier visual cortical area in the ventral visual pathway. Its functional role is not yet well understood. Using a dataset of recordings of 71 V4 neurons stimulated with thousands of static natural images, we build an ensemble of 18 neural network-based models per neuron that accurately predict its response given a stimulus image. To interpret and visualize these models, we use a stability criterion to form optimal stimuli (DeepTune images) by pooling the 18 models together. These DeepTune images not only confirm previous findings on the presence of diverse shape and texture tuning in area V4, but also provide rich, concrete and naturalistic characterization of receptive fields of individual V4 neurons. The population analysis of DeepTune images for 71 neurons reveals how different types of curvature tuning are distributed in V4. In addition, it also suggests strong suppressive tuning for nearly half of the V4 neurons. Though we focus exclusively on the area V4, the DeepTune framework could be applied more generally to enhance the understanding of other visual cortex areas. --- paper_title: Neural Network Learning: Theoretical Foundations paper_content: This important work describes recent theoretical advances in the study of artificial neural networks. 
It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics. --- paper_title: Gradient-Based Learning Applied to Document Recognition paper_content: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day. --- paper_title: Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Visual Pattern Recognition paper_content: A neural network model, called a “neocognitron”, is proposed for a mechanism of visual pattern recognition. It is demonstrated by computer simulation that the neocognitron has characteristics similar to those of visual systems of vertebrates. --- paper_title: Gradient-Based Learning Applied to Document Recognition paper_content: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. 
This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day. --- paper_title: Deep Learning and Its Applications in Biomedicine paper_content: Abstract Advances in biological and medical technologies have been providing us explosive volumes of biological and physiological data, such as medical images , electroencephalography, genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning -based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural network and deep learning. We then describe two main components of deep learning, i.e. , deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives for the future directions in the field of deep learning. --- paper_title: Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling paper_content: Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that was designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we explore LSTM RNN architectures for large scale acoustic modeling in speech recognition. We recently showed that LSTM RNNs are more effective than DNNs and conventional RNNs for acoustic modeling, considering moderately-sized models trained on a single machine. Here, we introduce the first distributed training of LSTM RNNs using asynchronous stochastic gradient descent optimization on a large cluster of machines. We show that a two-layer deep LSTM RNN where each LSTM layer has a linear recurrent projection layer can exceed state-of-the-art speech recognition performance. This architecture makes more effective use of model parameters than the others considered, converges quickly, and outperforms a deep feed forward neural network having an order of magnitude more parameters. 
Index Terms: Long Short-Term Memory, LSTM, recurrent neural network, RNN, speech recognition, acoustic modeling. --- paper_title: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation paper_content: Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output. This method provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system. --- paper_title: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation paper_content: In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. --- paper_title: Long Short-Term Memory paper_content: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). 
Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms. --- paper_title: Network In Network paper_content: We propose a novel deep network structure called "Network In Network" (NIN) to enhance model discriminability for local patches within the receptive field. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to abstract the data within the receptive field. We instantiate the micro neural network with a multilayer perceptron, which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner to a CNN; they are then fed into the next layer. Deep NIN can be implemented by stacking multiple of the above-described structures. With enhanced local modeling via the micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers. We demonstrated state-of-the-art classification performance with NIN on CIFAR-10 and CIFAR-100, and reasonable performance on the SVHN and MNIST datasets. --- paper_title: Going deeper with convolutions paper_content: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer deep network, the quality of which is assessed in the context of classification and detection. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size paper_content: Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet). --- paper_title: Identity Mappings in Deep Residual Networks paper_content: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 % error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers. --- paper_title: Densely Connected Convolutional Networks paper_content: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. 
DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL . --- paper_title: Efficient learning of sparse representations with an energy-based model paper_content: We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model produces "stroke detectors" when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps. --- paper_title: Extracting and composing robust features with denoising autoencoders paper_content: Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite. --- paper_title: Regression Shrinkage and Selection Via the Lasso paper_content: SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. 
The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described. --- paper_title: Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties paper_content: Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of ... --- paper_title: Generative Adversarial Nets paper_content: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. --- paper_title: Improved Techniques for Training GANs paper_content: We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes. --- paper_title: Generative Moment Matching Networks paper_content: We consider the problem of learning deep generative models from data. 
We formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database. --- paper_title: Stability and generalization paper_content: We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification. --- paper_title: f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization paper_content: Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models. --- paper_title: Circuit Complexity and Neural Networks paper_content: Computers and computation the discrete neuron the Boolean neuron alternating circuits small, shallow alternating circuits threshold circuits cyclic networks probabilistic neural networks. --- paper_title: Deep Learning and Its Applications in Biomedicine paper_content: Abstract Advances in biological and medical technologies have been providing us explosive volumes of biological and physiological data, such as medical images , electroencephalography, genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning -based algorithms show great promise in extracting features and learning patterns from complex data. 
The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives for the future directions in the field of deep learning. --- paper_title: Random Approximants and Neural Networks paper_content: Let $D$ be a set with a probability measure $\mu$, $\mu(D) = 1$, and let $K$ be a compact subset of $L_q(D, \mu)$, $1 \le q < \infty$. For $f \in L_q$, $n = 1, 2, \dots$, let $\varepsilon_n(f, K) = \inf \|f - g_n\|_q$, where the infimum is taken over all $g_n$ of the form $g_n = \sum_{i=1}^{n} a_i \varphi_i$, with arbitrary $\varphi_i \in K$ and $a_i \in \mathbb{R}$. It is shown that for [formula], under some mild restrictions, $\varepsilon_n(f, K) \le C_q \varepsilon_n(K) n^{-1/2}$, where $\varepsilon_n(K) \to 0$ as $n \to \infty$. This fact is used to estimate the errors of certain neural net approximations. For the latter, also the lower estimates of errors are given. --- paper_title: Approximation theory of the MLP model in neural networks paper_content: In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and practical of the many neural network models. Mathematically it is also one of the simpler models. Nonetheless the mathematics of this model is not well understood, and many of these problems are approximation-theoretic in character. Most of the research we will discuss is of very recent vintage. We will report on what has been done and on various unanswered questions. We will not be presenting practical (algorithmic) methods. We will, however, be exploring the capabilities and limitations of this model. --- paper_title: Universal approximation bounds for superpositions of a sigmoidal function paper_content: Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order $O(1/n)$, where $n$ is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with $n$ terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order $(1/n)^{2/d}$ uniformly for functions satisfying the same smoothness assumption, where $d$ is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings. --- paper_title: Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval paper_content: This paper considers the problem of solving systems of quadratic equations, namely, recovering an object of interest \(x^{\natural} \in \mathbb{R}^{n}\) from m quadratic equations/samples \(y_{i} = (a_{i}^{\top} x^{\natural})^{2}, 1 \le i \le m\).
This problem, also dubbed as phase retrieval, spans multiple domains including physical sciences and machine learning. We investigate the efficacy of gradient descent (or Wirtinger flow) designed for the nonconvex least squares problem. We prove that under Gaussian designs, gradient descent—when randomly initialized—yields an \(\epsilon \)-accurate solution in \(O\big (\log n+\log (1/\epsilon )\big )\) iterations given nearly minimal samples, thus achieving near-optimal computational and sample complexities at once. This provides the first global convergence guarantee concerning vanilla gradient descent for phase retrieval, without the need of (i) carefully-designed initialization, (ii) sample splitting, or (iii) sophisticated saddle-point escaping schemes. All of these are achieved by exploiting the statistical models in analyzing optimization algorithms, via a leave-one-out approach that enables the decoupling of certain statistical dependency between the gradient descent iterates and the data. --- paper_title: On the Near Optimality of the Stochastic Approximation of Smooth Functions by Neural Networks paper_content: We consider the problem of approximating the Sobolev class of functions by neural networks with a single hidden layer, establishing both upper and lower bounds. The upper bound uses a probabilistic approach, based on the Radon and wavelet transforms, and yields similar rates to those derived recently under more restrictive conditions on the activation function. Moreover, the construction using the Radon and wavelet transforms seems very natural to the problem. Additionally, geometrical arguments are used to establish lower bounds for two types of commonly used activation functions. The results demonstrate the tightness of the bounds, up to a factor logarithmic in the number of nodes of the neural network. --- paper_title: Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review paper_content: The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represent an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures. --- paper_title: The Power of Depth for Feedforward Neural Networks paper_content: We show that there is a simple (approximately radial) function on $\reals^d$, expressible by a small 3-layer feedforward neural networks, which cannot be approximated by any 2-layer network, to more than a certain constant accuracy, unless its width is exponential in the dimension. The result holds for virtually all known activation functions, including rectified linear units, sigmoids and thresholds, and formally demonstrates that depth -- even if increased by 1 -- can be exponentially more valuable than width for standard feedforward neural networks. Moreover, compared to related results in the context of Boolean functions, our result requires fewer assumptions, and the proof techniques and construction are very different. --- paper_title: Why does deep and cheap learning work so well? 
paper_content: We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through"cheap learning"with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various"no-flattening theorems"showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss, for example, we show that $n$ variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer. --- paper_title: Learning Functions: When Is Deep Better Than Shallow paper_content: While the universal approximation property holds both for hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with exponentially lower number of training parameters as well as VC-dimension. This theorem settles an old conjecture by Bengio on the role of depth in networks. We then define a general class of scalable, shift-invariant algorithms to show a simple and natural set of requirements that justify deep convolutional networks. --- paper_title: Nonparametric regression using deep neural networks with ReLU activation function paper_content: Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for ReLU activation function the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates. --- paper_title: The power of deeper networks for expressing natural functions paper_content: It is well-known that neural networks are universal approximators, but that deeper networks tend to be much more efficient than shallow ones. 
We shed light on this by proving that the total number of neurons $m$ required to approximate natural classes of multivariate polynomials of $n$ variables grows only linearly with $n$ for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from $1$ to $k$, the neuron requirement grows exponentially not with $n$ but with $n^{1/k}$, suggesting that the minimum number of layers required for computational tractability grows only logarithmically with $n$. --- paper_title: Why does deep and cheap learning work so well? paper_content: We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through"cheap learning"with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various"no-flattening theorems"showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss, for example, we show that $n$ variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer. --- paper_title: Nonparametric regression using deep neural networks with ReLU activation function paper_content: Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for ReLU activation function the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates. --- paper_title: A Stochastic Approximation Method paper_content: Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where a is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability. 
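The Robbins–Monro scheme summarized in the entry above is simple enough to sketch in a few lines. The following toy illustration is not taken from any of the cited papers; the response curve M, the noise model, the target level alpha, and the step-size schedule are all invented for the example. It runs the recursion x_{n+1} = x_n + a_n (α − Y_n) with noisy observations Y_n of M(x_n) and step sizes a_n = 1/n, so that the iterates drift toward the root θ of M(x) = α.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monotone response curve M(x); the experimenter only sees noisy responses.
M = lambda x: 2.0 * x + 1.0                                # unknown regression function
alpha = 5.0                                                # target level; the root is theta = 2.0
noisy_response = lambda x: M(x) + rng.normal(scale=1.0)    # Y_n: outcome of an experiment at level x

x = 0.0                                                    # arbitrary starting level x_1
for n in range(1, 10_001):
    a_n = 1.0 / n                     # step sizes with sum a_n = inf and sum a_n^2 < inf
    y = noisy_response(x)             # run one experiment at the current level
    x = x + a_n * (alpha - y)         # move toward the level where M(x) = alpha

print(f"estimate after 10000 steps: {x:.3f}  (true root theta = 2.0)")
```

Because M is increasing, an observation above α pushes the level down and one below α pushes it up, while the decaying step sizes average out the noise; this is exactly the mechanism the abstract describes.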
--- paper_title: Fisher Lecture: Dimension Reduction in Regression paper_content: Beginning with a discussion of R. A. Fisher's early written remarks that relate to dimension reduction, this article revisits principal components as a reductive method in regression, develops several model-based extensions and ends with descriptions of general approaches to model-based and model-free dimension reduction in regression. It is argued that the role for principal components and related methodology may be broader than previously seen and that the common practice of conditioning on observed values of the predictors may unnecessarily limit the choice of regression methodology. --- paper_title: A Stochastic Approximation Method paper_content: Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where a is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability. --- paper_title: A Convergence Theory for Deep Learning via Over-Parameterization paper_content: Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, the neural networks used in practice are going wider and deeper. On the theoretical side, a long line of works have been focusing on why we can train neural networks when there is only one hidden layer. The theory of multi-layer networks remains somewhat unsettled. In this work, we prove why simple algorithms such as stochastic gradient descent (SGD) can find $\textit{global minima}$ on the training objective of DNNs. We only make two assumptions: the inputs do not degenerate and the network is over-parameterized. The latter means the number of hidden neurons is sufficiently large: $\textit{polynomial}$ in $L$, the number of DNN layers and in $n$, the number of training samples. As concrete examples, on the training set and starting from randomly initialized weights, we show that SGD attains 100% accuracy in classification tasks, or minimizes regression loss in linear convergence speed $\varepsilon \propto e^{-\Omega(T)}$, with a number of iterations that only scales polynomial in $n$ and $L$. Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet). --- paper_title: Gradient Descent Finds Global Minima of Deep Neural Networks paper_content: Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show the Gram matrix is stable throughout the training process and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result. 
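The two convergence results above (training over-parameterized networks and gradient descent finding global minima) can be illustrated with a deliberately tiny experiment. The sketch below is not the construction analyzed in those papers; the dataset, width, learning rate, and number of steps are arbitrary choices made only for illustration. It trains a heavily over-parameterized two-layer ReLU network with plain full-batch gradient descent on random data (hidden weights trained, output signs fixed) and prints the training loss, which in this regime typically decreases toward zero even for random labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny synthetic regression task: n samples of dimension d, width m much larger than n.
n, d, m = 20, 10, 2000
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # unit-norm inputs, as in NTK-style analyses
y = rng.normal(size=n)                             # arbitrary (even random) labels

W = rng.normal(size=(m, d))                        # trainable hidden weights, Gaussian init
a = rng.choice([-1.0, 1.0], size=m)                # fixed output weights

def forward(W):
    H = np.maximum(X @ W.T, 0.0)                   # ReLU activations, shape (n, m)
    return H @ a / np.sqrt(m), H

lr = 0.3
for step in range(3001):
    pred, H = forward(W)
    resid = pred - y                               # residuals of the squared loss 0.5 * sum(resid**2)
    # dL/dW_r = sum_i resid_i * a_r * 1{w_r . x_i > 0} * x_i / sqrt(m)
    grad = ((resid[:, None] * (H > 0) * a[None, :]).T @ X) / np.sqrt(m)
    W -= lr * grad
    if step % 500 == 0:
        print(step, 0.5 * float(np.sum(resid ** 2)))
```

Nothing here proves anything, of course; it is simply the kind of over-parameterized, randomly initialized setting to which those global-convergence guarantees are meant to apply.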
--- paper_title: Acceleration of Stochastic Approximation by Averaging paper_content: A new recursive algorithm of stochastic approximation type with the averaging of trajectories is investigated. Convergence with probability one is proved for a variety of classical optimization and identification problems. It is also demonstrated for these problems that the proposed algorithm achieves the highest possible rate of convergence. --- paper_title: Stochastic Approximation and Recursive Algorithms and Applications paper_content: Introduction 1 Review of Continuous Time Models 1.1 Martingales and Martingale Inequalities 1.2 Stochastic Integration 1.3 Stochastic Differential Equations: Diffusions 1.4 Reflected Diffusions 1.5 Processes with Jumps 2 Controlled Markov Chains 2.1 Recursive Equations for the Cost 2.2 Optimal Stopping Problems 2.3 Discounted Cost 2.4 Control to a Target Set and Contraction Mappings 2.5 Finite Time Control Problems 3 Dynamic Programming Equations 3.1 Functionals of Uncontrolled Processes 3.2 The Optimal Stopping Problem 3.3 Control Until a Target Set Is Reached 3.4 A Discounted Problem with a Target Set and Reflection 3.5 Average Cost Per Unit Time 4 Markov Chain Approximation Method: Introduction 4.1 Markov Chain Approximation 4.2 Continuous Time Interpolation 4.3 A Markov Chain Interpolation 4.4 A Random Walk Approximation 4.5 A Deterministic Discounted Problem 4.6 Deterministic Relaxed Controls 5 Construction of the Approximating Markov Chains 5.1 One Dimensional Examples 5.2 Numerical Simplifications 5.3 The General Finite Difference Method 5.4 A Direct Construction 5.5 Variable Grids 5.6 Jump Diffusion Processes 5.7 Reflecting Boundaries 5.8 Dynamic Programming Equations 5.9 Controlled and State Dependent Variance 6 Computational Methods for Controlled Markov Chains 6.1 The Problem Formulation 6.2 Classical Iterative Methods 6.3 Error Bounds 6.4 Accelerated Jacobi and Gauss-Seidel Methods 6.5 Domain Decomposition 6.6 Coarse Grid-Fine Grid Solutions 6.7 A Multigrid Method 6.8 Linear Programming 7 The Ergodic Cost Problem: Formulation and Algorithms 7.1 Formulation of the Control Problem 7.2 A Jacobi Type Iteration 7.3 Approximation in Policy Space 7.4 Numerical Methods 7.5 The Control Problem 7.6 The Interpolated Process 7.7 Computations 7.8 Boundary Costs and Controls 8 Heavy Traffic and Singular Control 8.1 Motivating Examples --- paper_title: On the Insufficiency of Existing Momentum Schemes for Stochastic Optimization paper_content: Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Theoretically, these "fast gradient" methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact. In the stochastic case, the popular explanation for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain. This work provides a counterpoint to this belief by proving that there are simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters. These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances.
These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching. Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance. This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple-to-implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Accelerated Gradient Descent. Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD. --- paper_title: On the importance of initialization and momentum in deep learning paper_content: Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods. --- paper_title: Accelerating Stochastic Gradient Descent paper_content: There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g. Nesterov's acceleration, conjugate gradient, heavy ball) for the purposes of stochastic optimization due to their instability and error accumulation, a notion made precise in d'Aspremont 2008 and Devolder, Glineur, and Nesterov 2014. This work considers these issues for the special case of stochastic approximation for the least squares regression problem, and our main result refutes the conventional wisdom by showing that acceleration can be made robust to statistical errors. In particular, this work introduces an accelerated stochastic gradient method that provably achieves the minimax optimal statistical risk faster than stochastic gradient descent. Critical to the analysis is a sharp characterization of accelerated stochastic gradient descent as a stochastic process. We hope this characterization gives insights towards the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex optimization problems. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters.
The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Adaptive subgradient methods for online learning and stochastic optimization paper_content: We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms. --- paper_title: Imagenet classification with deep convolutional neural networks paper_content: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. --- paper_title: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift paper_content: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities.
We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters. --- paper_title: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks paper_content: Recent works have cast some light on the mystery of why deep nets fit any data and generalize despite being very overparametrized. This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al. ICLR'17]. (ii) Generalization bound independent of network size, using a data-dependent complexity measure. Our measure distinguishes clearly between random labels and true labels on MNIST and CIFAR, as shown by experiments. Moreover, recent papers require sample complexity to increase (slowly) with the size, while our sample complexity is completely independent of the network size. (iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets trained via gradient descent. The key idea is to track dynamics of training and generalization via properties of a related kernel. --- paper_title: Improving neural networks by preventing co-adaptation of feature detectors paper_content: When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition. --- paper_title: Dropout Training as Adaptive Regularization paper_content: Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems.
By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset. --- paper_title: Understanding deep learning requires rethinking generalization paper_content: Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. --- paper_title: Do CIFAR-10 Classifiers Generalize to CIFAR-10? paper_content: Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution. --- paper_title: Understanding Machine Learning: From Theory to Algorithms paper_content: Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks.
These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering. --- paper_title: On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond paper_content: We propose a generalization error bound for a general family of deep neural networks based on the depth and width of the networks, as well as the spectral norm of weight matrices. Through introducing a novel characterization of the Lipschitz properties of neural network family, we achieve a tighter generalization error bound. We further obtain a result that is free of linear dependence on norms for bounded losses. Besides the general deep neural networks, our results can be applied to derive new bounds for several popular architectures, including convolutional neural networks (CNNs), residual networks (ResNets), and hyperspherical networks (SphereNets). When achieving same generalization errors with previous arts, our bounds allow for the choice of much larger parameter spaces of weight matrices, inducing potentially stronger expressive ability for neural networks. --- paper_title: Projection Pursuit Regression paper_content: A new method for nonparametric multiple regression is presented. The procedure models the regression surface as a sum of general smooth functions of linear combinations of the predictor variables in an iterative manner. It is more general than standard stepwise and stagewise regression procedures, does not require the definition of a metric in the predictor space, and lends itself to graphical interpretation. --- paper_title: A Priori Estimates of the Population Risk for Residual Networks paper_content: Optimal a priori estimates are derived for the population risk, also known as the generalization error, of a regularized residual network model. An important part of the regularized model is the usage of a new path norm, called the weighted path norm, as the regularization term. The weighted path norm treats the skip connections and the nonlinearities differently so that paths with more nonlinearities are regularized by larger weights. The error estimates are a priori in the sense that the estimates depend only on the target function, not on the parameters obtained in the training process. The estimates are optimal, in a high dimensional setting, in the sense that both the bound for the approximation and estimation errors are comparable to the Monte Carlo error rates. A crucial step in the proof is to establish an optimal bound for the Rademacher complexity of the residual networks. Comparisons are made with existing norm-based generalization error bounds.
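As an illustrative aside (not drawn from any of the entries above or below): several of the surrounding abstracts bound generalization error through norms of the layer weight matrices. The following minimal numpy sketch computes one such proxy, the product of spectral norms across layers, for a hypothetical stack of weight matrices; the published bounds include further correction terms that are omitted here.

```python
import numpy as np

def spectral_complexity(weights):
    """Product of the spectral norms of the layer weight matrices.

    A simplified norm-based capacity proxy; the cited bounds combine this
    quantity with additional depth- and norm-dependent factors.
    """
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Hypothetical 3-layer network with random Gaussian weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 32)), rng.normal(size=(32, 32)), rng.normal(size=(32, 1))]
print(spectral_complexity(weights))
```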
--- paper_title: Norm-Based Capacity Control in Neural Networks paper_content: We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks. --- paper_title: Risk Bounds for High-dimensional Ridge Function Combinations Including Neural Networks paper_content: Let $ f^{\star} $ be a function on $ \mathbb{R}^d $ with an assumption of a spectral norm $ v_{f^{\star}} $. For various noise settings, we show that $ \mathbb{E}\|\hat{f} - f^{\star} \|^2 \leq \left(v^4_{f^{\star}}\frac{\log d}{n}\right)^{1/3} $, where $ n $ is the sample size and $ \hat{f} $ is either a penalized least squares estimator or a greedily obtained version of such using linear combinations of sinusoidal, sigmoidal, ramp, ramp-squared or other smooth ridge functions. The candidate fits may be chosen from a continuum of functions, thus avoiding the rigidity of discretizations of the parameter space. On the other hand, if the candidate fits are chosen from a discretization, we show that $ \mathbb{E}\|\hat{f} - f^{\star} \|^2 \leq \left(v^3_{f^{\star}}\frac{\log d}{n}\right)^{2/5} $. This work bridges non-linear and non-parametric function estimation and includes single-hidden layer nets. Unlike past theory for such settings, our bound shows that the risk is small even when the input dimension $ d $ of an infinite-dimensional parameterized dictionary is much larger than the available sample size. When the dimension is larger than the cube root of the sample size, this quantity is seen to improve the more familiar risk bound of $ v_{f^{\star}}\left(\frac{d\log (n/d)}{n}\right)^{1/2} $, also investigated here. --- paper_title: Neural Network Learning: Theoretical Foundations paper_content: This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik Chervonenkis dimension in large margin classification, and in real prediction.
Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics. --- paper_title: Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties paper_content: Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of ... --- paper_title: Size-Independent Sample Complexity of Neural Networks paper_content: We study the sample complexity of learning neural networks, by providing new bounds on their Rademacher complexity assuming norm constraints on the parameter matrix of each layer. Compared to previous work, these bounds have improved dependence on the network depth, and under some additional assumptions, are fully independent of the network size (both depth and width). These results are derived using some novel techniques, which may be of independent interest. --- paper_title: Neural Networks as Interacting Particle Systems: Asymptotic Convexity of the Loss Landscape and Universal Scaling of the Approximation Error paper_content: Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or"loss"function used to train the network. We show that, when the number $n$ of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of $n$. We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as $O(n^{-1})$. Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. 
Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train a neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as $d=25$. --- paper_title: Analysis of a Two-Layer Neural Network via Displacement Convexity paper_content: Fitting a function by using linear combinations of a large number $N$ of `simple' components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is non-convex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches. Here we consider the problem of learning a concave function $f$ on a compact convex domain $\Omega\subseteq {\mathbb R}^d$, using linear combinations of `bump-like' components (neurons). The parameters to be fitted are the centers of $N$ bumps, and the resulting empirical risk minimization problem is highly non-convex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over $\Omega$. Further, when the bump width $\delta$ tends to $0$, this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for $N\to\infty$, $\delta\to 0$. Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of $\delta, N$. Explaining this phenomenon, and understanding the dependence on $\delta,N$ in a quantitative manner remains an outstanding challenge. --- paper_title: Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit paper_content: We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in $\mathbb{R}^D$ (where $D$ is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension $D$. In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis.
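As an illustrative aside (not code from any of the cited works): the mean-field entries above study a wide two-layer network in the 1/N scaling, trained by online SGD, with the hidden neurons viewed as interacting particles. A minimal numpy sketch of that object is given below; the dimensions, step size, and target function are hypothetical choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 200            # input dimension, number of hidden neurons ("particles"); both arbitrary
lr, steps = 0.05, 20000

# Mean-field parameterization: f(x) = (1/N) * sum_i a_i * tanh(w_i . x)
W = rng.normal(size=(N, d))
a = rng.normal(size=N)

def f(x):
    return float(np.mean(a * np.tanh(W @ x)))

# Hypothetical target: a small fixed two-layer network.
W_star = rng.normal(size=(3, d))
def f_star(x):
    return float(np.sum(np.tanh(W_star @ x)))

for _ in range(steps):
    x = rng.normal(size=d)              # fresh sample each step: one-pass ("online") SGD
    err = f(x) - f_star(x)              # derivative of the loss 0.5 * err**2 w.r.t. the prediction
    h = np.tanh(W @ x)
    grad_a = err * h / N                                            # d(0.5*err^2)/da_i
    grad_W = (err * a * (1.0 - h**2) / N)[:, None] * x[None, :]     # d(0.5*err^2)/dw_i
    # Step sizes are scaled by N, the usual time rescaling in mean-field analyses.
    a -= lr * N * grad_a
    W -= lr * N * grad_W

xs = rng.normal(size=(1000, d))
print("test MSE:", np.mean([(f(x) - f_star(x)) ** 2 for x in xs]))
```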
--- paper_title: On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport paper_content: Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension. --- paper_title: A mean field view of the landscape of two-layer neural networks paper_content: Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that-in a suitable scaling limit-SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for "averaging out" some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD. --- paper_title: High probability generalization bounds for uniformly stable algorithms with nearly optimal rate paper_content: Algorithmic stability is a classical approach to understanding and analysis of the generalization error of learning algorithms. A notable weakness of most stability-based generalization bounds is that they hold only in expectation. Generalization with high probability has been established in a landmark paper of Bousquet and Elisseeff (2002) albeit at the expense of an additional $\sqrt{n}$ factor in the bound. Specifically, their bound on the estimation error of any $\gamma$-uniformly stable learning algorithm on $n$ samples and range in $[0,1]$ is $O(\gamma \sqrt{n \log(1/\delta)} + \sqrt{\log(1/\delta)/n})$ with probability $\geq 1-\delta$. The $\sqrt{n}$ overhead makes the bound vacuous in the common settings where $\gamma \geq 1/\sqrt{n}$. A stronger bound was recently proved by the authors (Feldman and Vondrak, 2018) that reduces the overhead to at most $O(n^{1/4})$. Still, both of these results give optimal generalization bounds only when $\gamma = O(1/n)$. 
We prove a nearly tight bound of $O(\gamma \log(n)\log(n/\delta) + \sqrt{\log(1/\delta)/n})$ on the estimation error of any $\gamma$-uniformly stable algorithm. It implies that algorithms that are uniformly stable with $\gamma = O(1/\sqrt{n})$ have essentially the same estimation error as algorithms that output a fixed function. Our result leads to the first high-probability generalization bounds for multi-pass stochastic gradient descent and regularized ERM for stochastic convex problems with nearly optimal rate, resolving open problems in prior work. Our proof technique is new and we introduce several analysis tools that might find additional applications. --- paper_title: Distribution-Free Performance Bounds for Potential Function Rules paper_content: In the discrimination problem the random variable $\theta$, known to take values in $\{1, \cdots, M\}$, is estimated from the random vector $X$. All that is known about the joint distribution of $(X, \theta)$ is that which can be inferred from a sample $(X_{1}, \theta_{1}), \cdots, (X_{n}, \theta_{n})$ of size $n$ drawn from that distribution. A discrimination rule is any procedure which determines a decision $\hat{\theta}$ for $\theta$ from $X$ and $(X_{1}, \theta_{1}), \cdots, (X_{n}, \theta_{n})$. For rules which are determined by potential functions it is shown that the mean-square difference between the probability of error for the rule and its deleted estimate is bounded by $A/\sqrt{n}$ where $A$ is an explicitly given constant depending only on $M$ and the potential function. The $O(n^{-1/2})$ behavior is shown to be the best possible for one of the most commonly encountered rules of this type. --- paper_title: Learnability, stability and uniform convergence paper_content: The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. In this paper, we consider the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases. We show that in this setting, there are non-trivial learning problems where uniform convergence does not hold, empirical risk minimization fails, and yet they are learnable using alternative mechanisms. Instead of uniform convergence, we identify stability as the key necessary and sufficient condition for learnability. Moreover, we show that the conditions for learnability in the general setting are significantly more complex than in supervised classification and regression. --- paper_title: Train faster, generalize better: Stability of stochastic gradient descent paper_content: We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice.
In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit. --- paper_title: Stability and generalization paper_content: We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification. --- paper_title: Heuristics of instability and stabilization in model selection paper_content: In model selection, usually a best predictor is chosen from a collection $\{\mu(\cdot,s)\}$ of predictors where $\mu(\cdot,s)$ is the minimum least-squares predictor in a collection $U_s$ of predictors. Here $s$ is a complexity parameter; that is, the smaller $s$, the lower dimensional/smoother the models in $U_s$. If $L$ is the data used to derive the sequence $\{\mu(\cdot,s)\}$, the procedure is called unstable if a small change in $L$ can cause large changes in $\{\mu(\cdot,s)\}$. With a crystal ball, one could pick the predictor in $\{\mu(\cdot,s)\}$ having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. The difference in prediction error between the crystal ball selection and the statistician's choice we call predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence $\{\mu(\cdot,s)\}$ and then averaging over many such predictor sequences. --- paper_title: The Implicit Bias of Gradient Descent on Separable Data paper_content: We show that gradient descent on an unregularized logistic regression problem with separable data converges to the max-margin solution. The result generalizes also to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalization to the cross entropy loss. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods. --- paper_title: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation paper_content: In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols.
The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases. --- paper_title: Learning One Convolutional Layer with Overlapping Patches paper_content: We give the first provably efficient algorithm for learning a one hidden layer convolutional network with respect to a general class of (potentially overlapping) patches. Additionally, our algorithm requires only mild conditions on the underlying distribution. We prove that our framework captures commonly used schemes from computer vision, including one-dimensional and two-dimensional"patch and stride"convolutions. Our algorithm-- $Convotron$ -- is inspired by recent work applying isotonic regression to learning neural networks. Convotron uses a simple, iterative update rule that is stochastic in nature and tolerant to noise (requires only that the conditional mean function is a one layer convolutional network, as opposed to the realizable setting). In contrast to gradient descent, Convotron requires no special initialization or learning-rate tuning to converge to the global optimum. We also point out that learning one hidden convolutional layer with respect to a Gaussian distribution and just $one$ disjoint patch $P$ (the other patches may be arbitrary) is $easy$ in the following sense: Convotron can efficiently recover the hidden weight vector by updating $only$ in the direction of $P$. --- paper_title: Risk and parameter convergence of logistic regression paper_content: The logistic loss is strictly convex and does not attain its infimum; consequently the solutions of logistic regression are in general off at infinity. This work provides a convergence analysis of gradient descent applied to logistic regression under no assumptions on the problem instance. Firstly, the risk is shown to converge at a rate $\mathcal{O}(\ln(t)^2/t)$. Secondly, the parameter convergence is characterized along a unique pair of complementary subspaces defined by the problem instance: one subspace along which strong convexity induces parameters to converge at rate $\mathcal{O}(\ln(t)^2/\sqrt{t})$, and its orthogonal complement along which separability induces parameters to converge in direction at rate $\mathcal{O}(\ln\ln(t) / \ln(t))$. --- paper_title: Implicit Bias of Gradient Descent on Linear Convolutional Networks paper_content: We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linearly fully connected networks, where gradient descent converges to the hard margin linear support vector machine solution, regardless of depth. --- paper_title: Characterizing Implicit Bias in Terms of Optimization Geometry paper_content: We study the bias of generic optimization methods, including Mirror Descent, Natural Gradient Descent and Steepest Descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. 
We ask the question of whether the global minimum (among the many possible global minima) reached by optimization algorithms can be characterized in terms of the potential or norm, and independently of hyperparameter choices such as step size and momentum. --- paper_title: Learning ReLUs via Gradient Descent paper_content: In this paper we study the problem of learning Rectified Linear Units (ReLUs) which are functions of the form $\max(0, \langle w, x\rangle)$ with $w$ denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations is fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures. --- paper_title: Approximability of Discriminators Implies Diversity in GANs paper_content: While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from lack of diversity or mode collapse. The theoretical work of Arora et al. suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL-divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes such as mixture of Gaussians, exponential families, and invertible neural networks generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence. This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence, indicating that the lack of diversity may be caused by the sub-optimality in optimization instead of statistical inefficiency. --- paper_title: Recovery Guarantees for One-hidden-layer Neural Networks paper_content: In this paper, we consider regression problems with one-hidden-layer neural networks (1NNs). We distill some properties of activation functions that lead to $\mathit{local~strong~convexity}$ in the neighborhood of the ground-truth parameters for the 1NN squared-loss objective. Most popular nonlinear activation functions satisfy the distilled properties, including rectified linear units (ReLUs), leaky ReLUs, squared ReLUs and sigmoids.
For activation functions that are also smooth, we show $\mathit{local~linear~convergence}$ guarantees of gradient descent under a resampling rule. For homogeneous activations, we show tensor methods are able to initialize the parameters to fall into the local strong convexity region. As a result, tensor initialization followed by gradient descent is guaranteed to recover the ground truth with sample complexity $ d \cdot \log(1/\epsilon) \cdot \mathrm{poly}(k,\lambda )$ and computational complexity $n\cdot d \cdot \mathrm{poly}(k,\lambda) $ for smooth homogeneous activations with high probability, where $d$ is the dimension of the input, $k$ ($k\leq d$) is the number of hidden nodes, $\lambda$ is a conditioning property of the ground-truth parameter matrix between the input layer and the hidden layer, $\epsilon$ is the targeted precision and $n$ is the number of samples. To the best of our knowledge, this is the first work that provides recovery guarantees for 1NNs with both sample complexity and computational complexity $\mathit{linear}$ in the input dimension and $\mathit{logarithmic}$ in the precision. --- paper_title: How transferable are features in deep neural networks? paper_content: Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset. --- paper_title: Local Geometry of One-Hidden-Layer Neural Networks for Logistic Regression paper_content: We study the local geometry of a one-hidden-layer fully-connected neural network where the training samples are generated from a multi-neuron logistic regression model. We prove that under Gaussian input, the empirical risk function employing quadratic loss exhibits strong convexity and smoothness uniformly in a local neighborhood of the ground truth, for a class of smooth activation functions satisfying certain properties, including sigmoid and tanh, as soon as the sample complexity is sufficiently large. 
This implies that if initialized in this neighborhood, gradient descent converges linearly to a critical point that is provably close to the ground truth without requiring a fresh set of samples at each iteration. This significantly improves upon prior results on learning shallow neural networks with multiple neurons. To the best of our knowledge, this is the first global convergence guarantee for one-hidden-layer neural networks using gradient descent over the empirical risk function without resampling at the near-optimal sampling and computational complexity. --- paper_title: On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition paper_content: We establish connections between the problem of learning a two-layer neural network and tensor decomposition. We consider a model with feature vectors $x$, $r$ hidden units with weights $w_i$ and output $y$, i.e., $y=\sum_{i=1}^r \sigma(w_i^{T} x)$, with activation functions given by low-degree polynomials. In particular, if $\sigma(x) = a_0+a_1x+a_3x^3$, we prove that no polynomial-time algorithm can outperform the trivial predictor that assigns to each example the response variable $E(y)$, when $d^{3/2}\ll r \ll d^2$. Our conclusion holds for a 'natural data distribution', namely standard Gaussian feature vectors $x$, and output distributed according to a two-layer neural network with random isotropic weights, and under a certain complexity-theoretic assumption on tensor decomposition. Roughly speaking, we assume that no polynomial-time algorithm can substantially outperform current methods for tensor decomposition based on the sum-of-squares hierarchy. We also prove generalizations of this statement for higher degree polynomial activations, and non-random weight vectors. Remarkably, several existing algorithms for learning two-layer networks with rigorous guarantees are based on tensor decomposition. Our results support the idea that this is indeed the core computational difficulty in learning such networks, under the stated generative model for the data. As a side result, we show that under this model learning the network requires accurate learning of its weights, a property that does not hold in a more general setting. --- paper_title: Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization paper_content: This paper studies noisy low-rank matrix completion: given partial and corrupted entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently. Arguably one of the most popular paradigms to tackle this problem is convex relaxation, which achieves remarkable efficacy in practice. However, the theoretical support of this approach is still far from optimal in the noisy setting, falling short of explaining the empirical success. We make progress towards demystifying the practical efficacy of convex relaxation vis-a-vis random noise. When the rank of the unknown matrix is a constant, we demonstrate that the convex programming approach achieves near-optimal estimation errors, in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss, for a wide range of noise levels. All of this is enabled by bridging convex relaxation with the nonconvex Burer-Monteiro approach, a seemingly distinct algorithmic paradigm that is provably robust against noise.
More specifically, we show that an approximate critical point of the nonconvex formulation serves as an extremely tight approximation of the convex solution, allowing us to transfer the desired statistical guarantees of the nonconvex approach to its convex counterpart. --- paper_title: Neural Tangent Kernel: Convergence and Generalization in Neural Networks paper_content: At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function $f_\theta$ follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
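To make the neural tangent kernel described in the entry above concrete, here is a minimal numpy sketch (not code from the cited paper) that evaluates the empirical NTK of a randomly initialized two-layer ReLU network as an inner product of parameter gradients; the width, input dimension, and ±1 output weights are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 4096                       # input dimension, hidden width (large enough to see concentration)
W = rng.normal(size=(m, d))           # NTK parameterization: f(x) = (1/sqrt(m)) * sum_j a_j * relu(w_j . x)
a = rng.choice([-1.0, 1.0], size=m)

def param_grad(x):
    """Gradient of f(x; theta) with respect to all parameters (a, W), flattened."""
    pre = W @ x
    h = np.maximum(pre, 0.0)
    grad_a = h / np.sqrt(m)
    grad_W = ((a * (pre > 0)) / np.sqrt(m))[:, None] * x[None, :]
    return np.concatenate([grad_a, grad_W.ravel()])

def empirical_ntk(x1, x2):
    """Theta(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> at the random initialization."""
    return float(param_grad(x1) @ param_grad(x2))

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(empirical_ntk(x1, x2), empirical_ntk(x1, x1))
```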
--- paper_title: Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval paper_content: This paper considers the problem of solving systems of quadratic equations, namely, recovering an object of interest \(\varvec{x}^{\natural }\in {\mathbb {R}}^{n}\) from m quadratic equations/samples \(y_{i}=(\varvec{a}_{i}^{\top }\varvec{x}^{\natural })^{2}, 1\le i\le m\). This problem, also dubbed as phase retrieval, spans multiple domains including physical sciences and machine learning. We investigate the efficacy of gradient descent (or Wirtinger flow) designed for the nonconvex least squares problem. We prove that under Gaussian designs, gradient descent—when randomly initialized—yields an \(\epsilon \)-accurate solution in \(O\big (\log n+\log (1/\epsilon )\big )\) iterations given nearly minimal samples, thus achieving near-optimal computational and sample complexities at once. This provides the first global convergence guarantee concerning vanilla gradient descent for phase retrieval, without the need of (i) carefully-designed initialization, (ii) sample splitting, or (iii) sophisticated saddle-point escaping schemes. All of these are achieved by exploiting the statistical models in analyzing optimization algorithms, via a leave-one-out approach that enables the decoupling of certain statistical dependency between the gradient descent iterates and the data. --- paper_title: Ideal spatial adaptation by wavelet shrinkage paper_content: With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially-adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients.
RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory which we call the oracle inequality shows that attained performance differs from ideal performance by at most a factor of approximately $2 \log n$, where $n$ is the sample size. Moreover no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor $\log^2 n$ of the performance of piecewise polynomial and variable-knot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone. --- paper_title: Deep Boltzmann machines paper_content: We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks. --- paper_title: Human-level control through deep reinforcement learning paper_content: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games.
We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. --- paper_title: Hierarchical interpretations for neural network predictions paper_content: Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables. However, the inability to effectively visualize these relationships has led to DNNs being characterized as black boxes and consequently limited their applications. To ameliorate this problem, we introduce the use of hierarchical interpretations to explain DNN predictions through our proposed method, agglomerative contextual decomposition (ACD). Given a prediction from a trained DNN, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive. Using examples from Stanford Sentiment Treebank and ImageNet, we show that ACD is effective at diagnosing incorrect predictions and identifying dataset bias. Through human experiments, we demonstrate that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN's outputs. We also find that ACD's hierarchy is largely robust to adversarial perturbations, implying that it captures fundamental aspects of the input and ignores spurious noise. ---
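The wavelet-shrinkage entry above builds estimators by shrinking empirical wavelet coefficients and shows the resulting risk is within roughly a 2 log n factor of an oracle's. As a small, purely illustrative sketch of that family of methods (not necessarily the exact RiskShrink threshold the paper derives; here the simpler universal threshold sigma * sqrt(2 log n) is used, and all names and the toy data are assumptions of this sketch), one can soft-threshold a vector of noisy coefficients as follows:

    import numpy as np

    def soft_threshold(coeffs, t):
        # Shrink each coefficient toward zero by t, zeroing anything smaller than t.
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

    def shrink_coefficients(noisy_coeffs):
        # Estimate the noise level from the coefficients themselves
        # (median absolute deviation / 0.6745 is the usual rule of thumb).
        n = noisy_coeffs.size
        sigma_hat = np.median(np.abs(noisy_coeffs - np.median(noisy_coeffs))) / 0.6745
        # Universal threshold sigma * sqrt(2 log n), the standard choice in this literature.
        t = sigma_hat * np.sqrt(2.0 * np.log(n))
        return soft_threshold(noisy_coeffs, t)

    # Toy usage: a sparse coefficient vector observed in Gaussian noise.
    rng = np.random.default_rng(0)
    signal = np.zeros(1024)
    signal[::128] = 5.0
    noisy = signal + rng.normal(scale=1.0, size=signal.size)
    denoised = shrink_coefficients(noisy)
    print("MSE after shrinkage:", np.mean((denoised - signal) ** 2),
          "MSE of raw data:", np.mean((noisy - signal) ** 2))

In a full denoising pipeline the thresholding would be applied to wavelet coefficients of the observed signal rather than to the raw samples; the sparse vector here simply stands in for such coefficients.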
Title: A Selective Overview of Deep Learning
Section 1: Introduction
Description 1: Introduce the concept and significance of learning from data, and highlight the progress and impact of deep learning.
Section 2: Intriguing new characteristics of deep learning
Description 2: Explore the unique features of deep learning, including over-parametrization, nonconvexity, depth, algorithmic regularization, and implicit prior learning.
Section 3: Towards theory of deep learning
Description 3: Discuss the theoretical background of deep learning, focusing on the approximation-estimation decomposition, representation power, and generalization error.
Section 4: Feed-forward neural networks
Description 4: Introduce the fundamentals of feed-forward neural networks, including model setup, back-propagation, and popular variants like convolutional and recurrent neural networks.
Section 5: Model setup
Description 5: Detail the architecture of feed-forward neural networks, including the mathematical formulation and choice of activation functions.
Section 6: Back-propagation in computational graphs
Description 6: Explain the back-propagation algorithm and its implementation in computational graphs for efficient training of neural networks.
Section 7: Popular models
Description 7: Describe popular deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
Section 8: Deep unsupervised learning
Description 8: Present unsupervised learning techniques in deep learning, focusing on autoencoders and generative adversarial networks (GANs).
Section 9: Representation power: approximation theory
Description 9: Delve into the approximation theory for neural networks, highlighting the benefits of depth and the universal approximation theory.
Section 10: Training deep neural nets
Description 10: Discuss methods for training deep neural networks, including challenges, stochastic gradient descent (SGD), and its variants.
Section 11: Easing numerical instability
Description 11: Address methods to alleviate numerical instabilities during training, such as ReLU activation, skip connections, and batch normalization.
Section 12: Regularization techniques
Description 12: Examine regularization techniques like weight decay and dropout to improve generalization performance of trained models.
Section 13: Generalization power
Description 13: Explore the factors influencing the generalization power of deep learning models, including algorithm-independent and algorithm-dependent controls.
Section 14: Discussion
Description 14: Highlight omitted topics and emerging theories in deep learning, and identify important directions for future research.
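Sections 4 to 6 and Section 10 of the outline above concern feed-forward networks, back-propagation, and training with stochastic gradient descent. The sketch below is a generic illustration of those ideas only: a one-hidden-layer ReLU network trained on toy data with hand-derived gradients and plain SGD. It is not code from the surveyed paper, and every name in it is made up for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy regression data: y = sin(x) observed with a little noise.
    X = rng.uniform(-3, 3, size=(256, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

    # One hidden layer with ReLU activation, squared loss.
    hidden = 32
    W1 = rng.normal(scale=0.5, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    lr = 0.05

    for step in range(2000):
        # Sample a mini-batch (the "stochastic" part of SGD).
        idx = rng.integers(0, X.shape[0], size=32)
        xb, yb = X[idx], y[idx]

        # Forward pass.
        z1 = xb @ W1 + b1            # pre-activations
        h = np.maximum(z1, 0.0)      # ReLU
        pred = h @ W2 + b2
        err = pred - yb              # gradient of the squared loss w.r.t. pred, up to 1/batch

        # Backward pass (chain rule, i.e. back-propagation).
        grad_W2 = h.T @ err / len(idx)
        grad_b2 = err.mean(axis=0)
        dh = err @ W2.T
        dz1 = dh * (z1 > 0.0)        # ReLU derivative
        grad_W1 = xb.T @ dz1 / len(idx)
        grad_b1 = dz1.mean(axis=0)

        # SGD update.
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    final_pred = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2
    print("final training MSE:", float(np.mean((final_pred - y) ** 2)))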
Gender Differences in Online Dating: What Do We Know So Far? A Systematic Literature Review
11
--- paper_title: Internet dating and respectable women: Gender expectations in an untraditional partnership and marriage market - the case of Slovenia paper_content: Some theoreticians support notions of the Internet as a media that makes the social differences of those who use it irrelevant or at least less important. The Internet is also often regarded as a medium that improves the free expression of thoughts and wishes of marginalised groups that cannot express themselves in face-to-face relationships due to several normative obstacles. The article deals with the question of gendered normativity related to expressions of femininity in the case of building of intimate romantic partnership within Internet dating. It is based on data gathered by qualitative research. 66 in-depth semi-structured interviews with 34 men and 32 women with Internet dating experiences were conducted in Slovenia in order to get insight into several sociological aspects of internet dating, among which question of gendered expectations related to partnership and family building will be discussed in article. Results show traditional expectations of gender roles are more pervasive as could be expected. Traditional normative understandings of gender were identified especially in the field of expectations related to women and womanhood and were revealed in men's hierarchical positioning of women regarding their status, in women's endeavours to present themselves as respectable and in men's disapproval of women's sexualities. --- paper_title: Marital satisfaction and break-ups differ across on-line and off-line meeting venues paper_content: Marital discord is costly to children, families, and communities. The advent of the Internet, social networking, and on-line dating has affected how people meet future spouses, but little is known about the prevalence or outcomes of these marriages or the demographics of those involved. We addressed these questions in a nationally representative sample of 19,131 respondents who married between 2005 and 2012. Results indicate that more than one-third of marriages in America now begin on-line. In addition, marriages that began on-line, when compared with those that began through traditional off-line venues, were slightly less likely to result in a marital break-up (separation or divorce) and were associated with slightly higher marital satisfaction among those respondents who remained married. Demographic differences were identified between respondents who met their spouse through on-line vs. traditional off-line venues, but the findings for marital break-up and marital satisfaction remained significant after statistically controlling for these differences. These data suggest that the Internet may be altering the dynamics and outcomes of marriage itself. --- paper_title: The Social demography of internet dating in the United States paper_content: The objective of this article is to identify the sociodemographic correlates of Internet dating net of selective processes that determine who is "at risk." We also examine the role of computer literacy, social networks, and attitudes toward Internet dating among single Internet users. Copyright (c) 2010 by the Southwestern Social Science Association. --- paper_title: Sex Differences in Social Behavior: A Social-role interpretation paper_content: Contents: The Analysis of Sex Differences in Social Behavior: A New Theory and a New Method.
Sex Differences in Helping Behavior. Sex Differences in Aggressive Behavior. Sex Differences in Other Social Behaviors. The Interpretation of Sex Differences in Social Behavior. --- paper_title: Internet dating: a British survey paper_content: Purpose – An online survey was carried out with the purpose of finding out the extent to which internet users subscribe to online dating services. The paper aims to assess users' experiences of such services and their eventual outcomes.Design/methodology/approach – Data were obtained through a self‐completion online questionnaire survey posted on the website of a leading internet research agency, utilising its online panel of c. 30,000 UK respondents.Findings – More than 3,800 online panellists responded of whom 29 per cent said they had used an online dating site. Most of these respondents (90 per cent) had spent up to £200 on internet dating in the past two years, with 70 per cent of users achieving at least one date, 43 per cent enjoying at least one sexual relationship, and 9 per cent finding a marriage partner.Research limitations/implications – Despite the limitations over sample control of self‐completion surveying, a large online sample was achieved that indicated the growing importance of the int... --- paper_title: The measurement of observer agreement for categorical data. paper_content: This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature. --- paper_title: What makes you click?—Mate preferences in online dating paper_content: We estimate mate preferences using a novel data set from an online dating service. The data set contains detailed information on user attributes and the decision to contact a potential mate after viewing his or her profile. This decision provides the basis for our preference estimation approach. A potential problem arises if the site users strategically shade their true preferences. We provide a simple test and a bias correction method for strategic behavior. The main findings are (i) There is no evidence for strategic behavior. (ii) Men and women have a strong preference for similarity along many (but not all) attributes. (iii) In particular, the site users display strong same-race preferences. Race preferences do not differ across users with different age, income, or education levels in the case of women, and differ only slightly in the case of men. For men, but not for women, the revealed same-race preferences correspond to the same-race preference stated in the users’ profile. (iv) There are gender differences in mate preferences; in particular, women have a stronger preference than men for income over physical attributes. --- paper_title: Who Visits Online Dating Sites? Exploring Some Characteristics of Online Daters paper_content: Although online dating has become an important strategy in finding a romantic partner, academic research into the antecedents of online dating is still scarce. 
The aim of this study was to investigate (a) the demographic predictors of online dating and (b) the validity of two opposite hypotheses that explain users' tendency to use the Internet for online dating: the social compensation and the rich-get-richer hypotheses. We presented 367 single Dutch Internet users between 18 and 60 years old with an online questionnaire. We found that online dating was unrelated to income and educational level. Respondents between 30 and 50 years old were the most active online daters. In support of the rich-get-richer hypothesis, people low in dating anxiety were more active online daters than people high in dating anxiety. --- paper_title: Mate selection in cyberspace: The intersection of race, gender, and education paper_content: In this article, the authors examine how race, gender, and education jointly shape interaction among heterosexual Internet daters. They find that racial homophily dominates mate-searching behavior for both men and women. A racial hierarchy emerges in the reciprocating process. Women respond only to men of similar or more dominant racial status, while nonblack men respond to all but black women. Significantly, the authors find that education does not mediate the observed racial preferences among white men and white women. White men and white women with a college degree are more likely to contact and to respond to white daters without a college degree than they are to black daters with a college degree. --- paper_title: The influence of biological and personality traits on gratifications obtained through online dating websites paper_content: Women were less likely to use online dating sites to find sexual partners.Homosexual uses: relationship, sex partner, distraction, and convenient companion.Neurotics use ODSs for identity, convenient companion, and distraction. Online dating sites (ODSs) have become popular with users trying to find partners. The purpose of this study was to determine the role that biological and personality traits play in the use of online dating websites. A cross sectional survey with 678 participants-including cohorts from college as well as the general population-provided data for this study. The Five Factor Model personality model (FFM), sexual orientation, and biological sex were utilized as antecedents to the uses of and gratifications from online dating sites. Results uncover sex and sexual orientation differences in both personality traits and gratifications sought from online dating sites. Specifically, women and homosexuals were found to be more neurotic, women were more agreeable, and homosexuals were more open to experiences. Homosexual users sought a wider range of gratifications (relationship, sex partner, distraction, and convenient companion) from online dating sites than their heterosexual counterparts. Women were less likely to use ODSs to find sexual partners, but more likely to use ODSs to be social. Those who were neurotic use dating sites to build an identity, as a convenient companion, and as a distraction. People who are open to experiences were found to use dating sites to be social. Disagreeable people use dating sites because of peer pressure and as a status symbol, and conscientious people were found to use dating sites to find a relationship. 
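Many of the studies gathered here code open-ended material into categories, and the entry above on the measurement of observer agreement for categorical data describes kappa-type statistics for chance-corrected inter-rater agreement, the same kind of statistic a systematic review typically reports for its own coding. A minimal sketch of Cohen's kappa for two raters, using invented labels, might look like this:

    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        # Chance-corrected agreement between two raters over the same items.
        labels_a = np.asarray(labels_a)
        labels_b = np.asarray(labels_b)
        categories = np.union1d(labels_a, labels_b)
        index = {c: i for i, c in enumerate(categories)}
        confusion = np.zeros((len(categories), len(categories)))
        for a, b in zip(labels_a, labels_b):
            confusion[index[a], index[b]] += 1
        n = confusion.sum()
        observed = np.trace(confusion) / n                              # p_o
        expected = (confusion.sum(1) * confusion.sum(0)).sum() / n**2   # p_e
        return (observed - expected) / (1 - expected)

    # Hypothetical example: two coders classifying 8 dating profiles.
    coder1 = ["casual", "serious", "serious", "casual", "casual", "serious", "casual", "serious"]
    coder2 = ["casual", "serious", "casual", "casual", "casual", "serious", "casual", "casual"]
    print(round(cohens_kappa(coder1, coder2), 3))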
--- paper_title: Overcoming relationship-initiation barriers: The impact of a computer-dating system on sex role, shyness, and appearance inhibitions paper_content: A survey of the users of an online computer-mediated matchmaking service showed that their communication patterns and objectives varied by their sex, shyness level, and appearance. Men generally contacted women more than vice versa, but a substantial minority of the women contacted a great number of men, suggesting that the safety and anonymity the system offered helped them break free from traditional sex role norms. More than half of the women reported starting a romantic or sexual relationship through the system, as compared to less than a third of the men, reflecting, in part, that men outnumber women on the system nearly three to one. Users who scored higher on a shyness scale were much more likely than less shy users to say they were using the system to find romance or sex, suggesting shier users employ the system as a way to overcome their inhibitions. Women who rated their own appearance as average were less likely to be contacted by men than those who rated their appearance as above average, but there was no significant difference between appearance groups concerning the likelihood of starting a romantic or sexual relationship. Intrinsic aspects of this computer-mediated matchmaking system helped some users overcome relationship-initiation barriers rooted in sex role, shyness, and appearance inhibitions. --- paper_title: Internet dating: a British survey paper_content: Purpose – An online survey was carried out with the purpose of finding out the extent to which internet users subscribe to online dating services. The paper aims to assess users' experiences of such services and their eventual outcomes.Design/methodology/approach – Data were obtained through a self‐completion online questionnaire survey posted on the website of a leading internet research agency, utilising its online panel of c. 30,000 UK respondents.Findings – More than 3,800 online panellists responded of whom 29 per cent said they had used an online dating site. Most of these respondents (90 per cent) had spent up to £200 on internet dating in the past two years, with 70 per cent of users achieving at least one date, 43 per cent enjoying at least one sexual relationship, and 9 per cent finding a marriage partner.Research limitations/implications – Despite the limitations over sample control of self‐completion surveying, a large online sample was achieved that indicated the growing importance of the int... --- paper_title: Income Attraction: An Online Dating Field Experiment paper_content: We measured gender differences in preferences for mate income ex-ante to interaction (“income attraction”) in a field experiment on one of China's largest online dating websites. To rule out unobserved factors correlated with income as the basis of attraction, we randomly assigned income levels to 360 artificial profiles and recorded the incomes of nearly 4000 “visits” to full versions of these profiles from search engine results, which displayed abbreviated versions. We found that men of all income levels visited our female profiles of different income levels at roughly equal rates. In contrast, women of all income levels visited our male profiles with higher incomes at higher rates. Surprisingly, these higher rates increased with the women's own incomes and even jumped discontinuously when the male profiles’ incomes went above that of the women's own. 
Our male profiles with the highest level of income received 10 times more visits than the lowest. This gender difference in ex-ante preferences for mate income could help explain marriage and spousal income patterns found in prior empirical studies. --- paper_title: For Love or Money? The Influence of Personal Resources and Environmental Resource Pressures on Human Mate Preferences paper_content: A growing body of evidence shows that human mating preferences, like those of other animal species, can vary geographically. For example, women living in areas with a high cost of living have been shown to seek potential mates that can provide resources (e.g., large salaries). In this study, we present data from a large (N = 2944) nationally representative (United States) sample of Internet dating profiles. The profiles allowed daters to report their own income and the minimum income they desired in a dating partner, and we analyzed these data at the level of zip code. Our analysis shows that women engage in more resource seeking than men. We also find a positive relationship between cost of living in the dater's zip code and resource seeking among both men and women. Importantly, however, this relationship disappears if one's own income is accounted for in the analysis; that is, individuals of both sexes seek mates with an income similar to their own, regardless of local resource pressures. Our data highlight the importance of considering individual characteristics when measuring the effects of environmental factors on behavior. --- paper_title: Sex Differences in the Attractiveness Halo Effect in the Online Dating Environment paper_content: The following study with 113 participants analyzes the evaluation bias effects that happen when people are confronted with a typical profile of an online dating service that contains a false photo, i.e. a photo that obviously does not portray the profile owner. It is known that profile photos have a great impact on how the profile owner is judged. As predicted by the attractiveness halo effect, an attractive photo leads to better evaluations of the displayed person than an unattractive photo. The results of this study show that even if the evaluator knows that the photo does not actually portray the profile owner, he is nevertheless influenced by the perceived attractiveness of the displayed person. But this is only the case for men judging women. Women seem to be more resistant against this automatic evaluation bias. The findings are fully in line with other empirical findings that support an evolutionary perspective, namely that men ascribe a higher value to physical attractiveness in judging women than women do in judging men. --- paper_title: Matching and sorting in online dating paper_content: Using data on user attributes and interactions from an online dating site, we estimate mate preferences, and use the Gale-Shapley algorithm to predict stable matches. The predicted matches are similar to the actual matches achieved by the dating site, and the actual matches are approximately efficient. Out-of-sample predictions of offline matches, i.e., marriages, exhibit assortative mating patterns similar to those observed in actual marriages. Thus, mate preferences, without resort to search frictions, can generate sorting in marriages. However, we underpredict some of the correlation patterns; search frictions may play a role in explaining the discrepancy.
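The matching-and-sorting entry directly above estimates mate preferences and then applies the Gale-Shapley deferred-acceptance algorithm to predict stable matches. The preference-estimation step is specific to that paper, but the matching step is standard; the sketch below is a generic textbook implementation run on made-up preference lists, not the authors' code.

    def gale_shapley(proposer_prefs, receiver_prefs):
        """Deferred acceptance: proposers propose in preference order,
        receivers hold on to their best offer so far. Returns receiver -> proposer."""
        # Rank tables let a receiver compare two proposers in O(1).
        rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
        free = list(proposer_prefs)           # proposers not yet matched
        next_choice = {p: 0 for p in proposer_prefs}
        engaged = {}                          # receiver -> proposer

        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            if r not in engaged:
                engaged[r] = p
            elif rank[r][p] < rank[r][engaged[r]]:
                free.append(engaged[r])       # current partner becomes free again
                engaged[r] = p
            else:
                free.append(p)                # rejected, will try the next receiver
        return engaged

    # Hypothetical preference lists (most preferred first).
    men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w1", "w3", "w2"]}
    women = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m2", "m3"], "w3": ["m1", "m3", "m2"]}
    print(gale_shapley(men, women))

With proposer-optimal deferred acceptance, the returned matching is stable with respect to the stated preference lists, which is exactly the property the cited paper exploits when comparing predicted and observed matches.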
( --- paper_title: What makes you click?—Mate preferences in online dating paper_content: We estimate mate preferences using a novel data set from an online dating service. The data set contains detailed information on user attributes and the decision to contact a potential mate after viewing his or her profile. This decision provides the basis for our preference estimation approach. A potential problem arises if the site users strategically shade their true preferences. We provide a simple test and a bias correction method for strategic behavior. The main findings are (i) There is no evidence for strategic behavior. (ii) Men and women have a strong preference for similarity along many (but not all) attributes. (iii) In particular, the site users display strong same-race preferences. Race preferences do not differ across users with different age, income, or education levels in the case of women, and differ only slightly in the case of men. For men, but not for women, the revealed same-race preferences correspond to the same-race preference stated in the users’ profile. (iv) There are gender differences in mate preferences; in particular, women have a stronger preference than men for income over physical attributes. --- paper_title: Homophily in online dating: when do you like someone like yourself? paper_content: Psychologists have found that actual and perceived similarity between potential romantic partners in demographics, attitudes, values, and attractiveness correlate positively with attraction and, later, relationship satisfaction. Online dating systems provide a new way for users to identify and communicate with potential partners, but the information they provide differs dramatically from what a person might glean from face-to-face interaction. An analysis of dyadic interactions of approximately 65,000 heterosexual users of an online dating system in the U.S. showed that, despite these differences, users of the system sought people like them much more often than chance would predict, just as in the offline world. The users' preferences were most strongly same-seeking for attributes related to the life course, like marital history and whether one wants children, but they also demonstrated significant homophily in self-reported physical build, physical attractiveness, and smoking habits. --- paper_title: Predicting User Replying Behavior on a Large Online Dating Site paper_content: Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. 
We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites. --- paper_title: Who's Right and Who Writes: People, Profiles, Contacts, and Replies in Online Dating paper_content: In this analysis of profiles and messaging behavior on a major online dating service, we find that, consistent with predictions of evolutionary psychology, women as compared to men state more restrictive preferences for their ideal date. Furthermore, women contact and reply to others more selectively than men. Additionally, we identify connections among messaging behavior, textual self-descriptions in dating profiles, and relationship-relevant traits such as neuroticism. --- paper_title: Conditional mate preferences: Factors influencing preferences for height paper_content: Abstract Physical stature plays an important role in human mate choice because it may signal dominance, high status, access to resources, and underlying heritable qualities. Although past research has examined overall preferences for height, we propose these preferences are modified by evolved mechanisms that consider one’s own height and prevailing social norms. We examined this proposal using samples of 2000 personal ads and 382 undergraduates. Both sexes preferred relationships where the woman was shorter when specifying the shortest acceptable, tallest acceptable, and ideal dating partner. In the personal ads sample, this norm was more strongly enforced by women than by men: 23% of men compared to only 4% of women would accept a dating relationship where the woman was taller. Preferences for the male-taller norm were less pronounced in short men and tall women, who shifted towards preferring someone closer to their own height. This limited their potential dating pool but ensured they would select a mate within the typical range of variation for height. Surprisingly, endorsement of traditional gender role norms was only weakly related to height preferences, particularly for women. These findings highlight the utility of examining how evolutionary factors, including endorsement of social norms, may influence mate preferences. --- paper_title: Revealing the 'real' me, searching for the 'actual' you: Presentations of self on an internet dating site paper_content: This paper considers the presentation of self on an internet dating site. Thirty men and thirty women were interviewed about their online dating experiences. They were asked about how they constructed their profiles and how they viewed other individuals’ profiles. What types of presentations of self led to more successful offline romantic relationships were also investigated. Additionally, gender differences were examined. In line with previous research on presentation of self online, individuals were quite strategic in their online presentations. 
However, important differences between initiating a relationship on an internet dating site and other spaces (online and offline) included the type of self disclosed as well as the depth and breadth of information individuals self-disclosed about themselves before any one-on-one conversations took place. --- paper_title: Indicating Mate Preferences by Mixing Survey and Process-generated Data. The Case of Attitudes and Behaviour in Online Mate Search paper_content: »Indicating partner preferences by mixing survey and process-generated data. The case of attitudes and behaviour in online mate search.« Web-based process-generated data is produced by social agency of users and recorded by the respective provider without any originally scientific purpose. We support our idea of advantageous applications of process-generated data by outlining a research example that uses data generated by email contacting on an online dating website for the investigation of mate preferences. This approach follows the paradigm of indicating or 'revealing' preferences by observing choosing acts. Advantages and disadvantages of this approach in comparison to the traditional 'stated preference' paradigm of survey research are discussed. Both approaches suffer different informational restrictions and induce different problems of valid inference. In conclusion we offer an outlook towards research strategies of an integration of the two quantitative paradigms. --- paper_title: Who Contacts Whom? Educational Homophily in Online Mate Selection paper_content: Data from an online dating platform are used to study the importance of education for initiating and replying to online contacts. We analyse how these patterns are influenced by educational homophily and opportunity structures. Social exchange theory and mate search theory are used to explain online mate selection behaviour. Our results show that educational homophily is the dominant mechanism in online mate choice. Similarity in education significantly increases the rate of both sending and replying to initial contacts. After controlling for the opportunity structure on the platform, the preference for similarly educated others is the most important factor, particularly among women. Our results also support the exchange theoretical idea that homophily increases with educational level. If dissimilarity contacting patterns are found, women are highly reluctant to contact partners with lower educational qualifications. Men, in contrast, do not have any problems to contact lower-qualified women. Studies of educational homogamy generally show that couples where women have a higher level of education are rare. Our study demonstrates that this is mainly the result of women's reluctance to contact lower qualified men. --- paper_title: If I'm Not Hot, Are You Hot or Not? Physical-Attractiveness Evaluations and Dating Preferences as a Function of One's Own Attractiveness paper_content: Prior research has established that people's own physical attractiveness affects their selection of romantic partners. This article provides further support for this effect and also examines a different, yet related, question: When less attractive people accept less attractive dates, do they persuade themselves that the people they choose to date are more physically attractive than others perceive them to be?
Our analysis of data from the pop- ular Web site HOTorNOT.com suggests that this is not the case: Less attractive people do not delude themselves into thinking that their dates are more physically attractive than others perceive them to be. Furthermore, the results also show that males, compared with females, are less affected by their own attractiveness when choosing whom to date. --- paper_title: Romance and the Internet: The E-Mergence of Edating paper_content: This study explores the meaning and essence of a relatively new phenomenon-electronic (EDating) in regards to its relationships and effects on gender. In our phenomenology, we look at the Internet-based form of EDating via the experiences of college-aged singles in the U.S. First, we explore the emergence of the EDating Internet dating phenomenon with all preconceived experiences aside. We denote clusters of meanings from the words participants ascribe to the dating experience. Second, we examine the implications of Internet-based dating on the more traditional offline dating. Our article addresses these issues via a series of focus group interviews. The researchers interviewed college-aged singles in a sub-urban American city over the time span of twenty months. We assume human experience can be consciously expressed and explained through narrative description. We apply the Social Exchange Theory along with sociocultural, semiotic, and humanistic perspectives to understand and interpret emerging trends. We horizonalize statements, create units of meaning, compile themes, advance description, and integrate our data to an exhaustive analysis of Internet dating. We conclude that Internet dating does not replace existing rituals associated with dating. Nevertheless, the effects of technology are visible in modern dating rituals. --- paper_title: Dispositional factors predicting use of online dating sites and behaviors related to online dating paper_content: Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. --- paper_title: The truth about lying in online dating profiles paper_content: Online dating is a popular new tool for initiating romantic relationships, although recent research and media reports suggest that it may also be fertile ground for deception. 
Unlike previous studies that rely solely on self-report data, the present study establishes ground truth for 80 online daters' height, weight and age, and compares ground truth data to the information provided in online dating profiles. The results suggest that deception is indeed frequently observed, but that the magnitude of the deceptions is usually small. As expected, deceptions differ by gender. Results are discussed in light of the Hyperpersonal model and the self-presentational tensions experienced by online dating participants. --- paper_title: Searching for Love in all the “Write” Places: Exploring Internet Personals Use by Sexual Orientation, Gender, and Age paper_content: ABSTRACT Few researchers of Internet sexual exploration have systematically compared variance of use across sexual orientations, with even fewer surveying bisexual respondents. In 2004, 15,246 individuals responded to an online survey of their use of Internet personals and adult websites. Gay men, lesbians, and bisexuals (GLBs) were more likely than heterosexuals to have exchanged correspondence, met others offline, and had sex with someone they met through personal ads. Whereas gay men and lesbians of all ages were most likely to have established a long-term relationship as a result of personals, heterosexuals over age 40 were more likely to have established a long-term relationship than younger heterosexuals. Further, compared to men, women were approximately two times as likely to have established a serious relationship as a result of personals. Qualitative findings suggest that the Internet functions not only as a means of screening for desired characteristics, but also as a shield against prejudice i... --- paper_title: First Comes Love, Then Comes Google: An Investigation of Uncertainty Reduction Strategies and Self-Disclosure in Online Dating: paper_content: This study investigates relationships between privacy concerns, uncertainty reduction behaviors, and self-disclosure among online dating participants, drawing on uncertainty reduction theory and the warranting principle. The authors propose a conceptual model integrating privacy concerns, self-efficacy, and Internet experience with uncertainty reduction strategies and amount of self-disclosure and then test this model on a nationwide sample of online dating participants ( N = 562). The study findings confirm that the frequency of use of uncertainty reduction strategies is predicted by three sets of online dating concerns—personal security, misrepresentation, and recognition—as well as self-efficacy in online dating. Furthermore, the frequency of uncertainty reduction strategies mediates the relationship between these variables and amount of self-disclosure with potential online dating partners. The authors explore the theoretical implications of these findings for our understanding of uncertainty reductio... --- paper_title: Contradictory deceptive behavior in online dating paper_content: Deceptive behavior is common in online dating because personal profiles can be easily manipulated. This study conducts two experiments to examine contradictory deceptive behavior in online dating. 
The results of Experiment 1 showed that users have lower perceptions of authenticity evaluations of daters’ self-provided photographs with strong physical attractiveness than for those with low physical attractiveness, and the authenticity perceptions of daters’ self-provided photographs have a positive relationship with the authenticity evaluation of online daters’ text-based self-presentations. Although users are suspicious of the authenticity of beautiful or handsome daters’ photographs, the results of Experiment 2 showed that people still employ higher levels of deception in self-presentations toward daters with highly attractive photographs to increase their possibilities of securing a date with those daters. The results also show that women employ higher levels of deception in self-presentation than men in online dating environments. --- paper_title: Overcoming relationship-initiation barriers: The impact of a computer-dating system on sex role, shyness, and appearance inhibitions paper_content: A survey of the users of an online computer-mediated matchmaking service showed that their communication patterns and objectives varied by their sex, shyness level, and appearance. Men generally contacted women more than vice versa, but a substantial minority of the women contacted a great number of men, suggesting that the safety and anonymity the system offered helped them break free from traditional sex role norms. More than half of the women reported starting a romantic or sexual relationship through the system, as compared to less than a third of the men, reflecting, in part, that men outnumber women on the system nearly three to one. Users who scored higher on a shyness scale were much more likely than less shy users to say they were using the system to find romance or sex, suggesting shier users employ the system as a way to overcome their inhibitions. Women who rated their own appearance as average were less likely to be contacted by men than those who rated their appearance as above average, but there was no significant difference between appearance groups concerning the likelihood of starting a romantic or sexual relationship. Intrinsic aspects of this computer-mediated matchmaking system helped some users overcome relationship-initiation barriers rooted in sex role, shyness, and appearance inhibitions. --- paper_title: Matching and sorting in online dating paper_content: Using data on user attributes and interactions from an online dating site, we estimate mate preferences, and use the Gale-Shapley algorithm to predict sta ble matches. The predicted matches are similar to the actual matches achieved by the dating site, and the actual matches are approximately efficient. Outof-sample predictions of offline matches, i.e., marriages, exhibit assortative mating patterns similar to those observed in actual marriages. Thus, mate pref erences, without resort to search frictions, can generate sorting in marriages. However, we underpredict some of the correlation patterns; search frictions may play a role in explaining the discrepancy. ( --- paper_title: Exploring gender differences in member profiles of an online dating site across 35 countries paper_content: Online communities such as forums, general purpose social networking and dating sites, have rapidly become one of the important data sources for analysis of human behavior fostering research in different scientific domains such as computer science, psychology, anthropology, and social science. 
The key component of most of the online communities and Social Networking Sites (SNS) in particular, is the user profile, which plays a role of a self-advertisement in the aggregated form. While some scientists investigate privacy implications of information disclosure, others test or generate social and behavioral hypotheses based on the information provided by users in their profiles or by interviewing members of these SNS. In this paper, we apply a number of analytical procedures on a large-scale SNS dataset of 10 million public profiles with more than 40 different attributes from one of the largest dating sites in the Russian segment of the Internet to explore similarities and differences in patterns of self-disclosure. Particularly, we build gender classification models for the residents of the 35 most active countries, and investigate differences between genders within and across countries. The results show that while Russian language and culture are unifying factors for people's interaction on the dating site, the patterns of self-disclosure are different across countries. Some geographically close countries exhibit higher similarity between patterns of self-disclosure which was also confirmed by studies on cross-cultural differences and personality traits. To the best of our knowledge, this is the first attempt to conduct a large-scale analysis of SNS profiles, emphasize gender differences on a country level, investigate patterns of self-disclosure and to provide exact rules that characterize genders within and across countries. --- paper_title: Homophily in online dating: when do you like someone like yourself? paper_content: Psychologists have found that actual and perceived similarity between potential romantic partners in demographics, attitudes, values, and attractiveness correlate positively with attraction and, later, relationship satisfaction. Online dating systems provide a new way for users to identify and communicate with potential partners, but the information they provide differs dramatically from what a person might glean from face-to-face interaction. An analysis of dyadic interactions of approximately 65,000 heterosexual users of an online dating system in the U.S. showed that, despite these differences, users of the system sought people like them much more often than chance would predict, just as in the offline world. The users' preferences were most strongly same-seeking for attributes related to the life course, like marital history and whether one wants children, but they also demonstrated significant homophily in self-reported physical build, physical attractiveness, and smoking habits. --- paper_title: Putting Your Best Face Forward: The Accuracy of Online Dating Photographs paper_content: This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters' self-reported accuracy, (b) independent judges' perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. 
The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias. --- paper_title: Mate selection in cyberspace: The intersection of race, gender, and education paper_content: In this article, the authors examine how race, gender, and education jointly shape interaction among heterosexual Internet daters. They find that racial homophily dominates mate-searching behavior for both men and women. A racial hierarchy emerges in the reciprocating process. Women respond only to men of similar or more dominant racial status, while nonblack men respond to all but black women. Significantly, the authors find that education does not mediate the observed racial preferences among white men and white women. White men and white women with a college degree are more likely to contact and to respond to white daters without a college degree than they are to black daters with a college degree. --- paper_title: Marital satisfaction and break-ups differ across on-line and off-line meeting venues paper_content: Marital discord is costly to children, families, and communities. The advent of the Internet, social networking, and on-line dating has affected how people meet future spouses, but little is known about the prevalence or outcomes of these marriages or the demographics of those involved. We addressed these questions in a nationally representative sample of 19,131 respondents who married between 2005 and 2012. Results indicate that more than one-third of marriages in America now begin on-line. In addition, marriages that began on-line, when compared with those that began through traditional off-line venues, were slightly less likely to result in a marital break-up (separation or divorce) and were associated with slightly higher marital satisfaction among those respondents who remained married. Demographic differences were identified between respondents who met their spouse through on-line vs. traditional off-line venues, but the findings for marital break-up and marital satisfaction remained significant after statistically controlling for these differences. These data suggest that the Internet may be altering the dynamics and outcomes of marriage itself. --- paper_title: An Examination of Language Use in Online Dating Profiles paper_content: This paper contributes to the study of self-presentation in online dating systems by performing a factor analysis on the text portions of online profiles. Findings include a similarity in the overall factor structures between male and female profiles, including use of tentative words by men. Contrasts between sexes were also found in a cluster analysis of the profiles using their factor scores. Finally, we also found similarities in frequent words used by the gender groups. --- paper_title: Predicting User Replying Behavior on a Large Online Dating Site paper_content: Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. 
It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites. --- paper_title: The Role of Linguistic Properties in Online Dating Communication - A Large-scale Study of Contact Initiation Messages paper_content: For people who look for a partner, online dating largely increases the pool of potential mates. At the same time, users of online dating platforms have to cope with a large number of approaches and, therefore, need to choose selectively who they decide to engage in a conversation with. Especially, since the costs of rejection are low on online dating platforms, it is a common strategy to spam others with superficial approaches. With this in mind, and in the absence of nonverbal cues, targets base their decision of whether or not to respond to a message on (a) their impression of the sender’s pictures, and (b) cues which they extract from the content of the message. The purpose of this study is to hypothesize on which linguistic properties of a message in computer-mediated communication may signal various qualities of its sender, to predict how those properties determine a target’s decision of whether to respond or to ignore an initial message. Employing the Linguistic Inquiry and Word Count (LIWC) text analysis, relevant variables are operationalized from a corpus of 167,276 initial messages of an online dating platform. Regression analysis is performed in order to test the hypotheses. Results are discussed with respect to design implications for online dating platforms. --- paper_title: The Language of Love: Sex, Sexual Orientation, and Language Use in Online Personal Advertisements paper_content: Stereotypes and biological theories suggest that psychological gender differences found in predominantly heterosexual samples are smaller or reversed among gay men and lesbians. Computerized text analysis that compares people’s language style on a wide range of dimensions from pronoun use to body references offers a multivariate personality marker to test such assumptions. 
Analysis of over 1,500 internet personal advertisements placed by heterosexual men, heterosexual women, gay men, and lesbians found little evidence that orientation alters the impact of gender on linguistic behaviors. Previously reported gender differences were replicated in the gay as well as the heterosexual advertisements studied. Main effects of sexual orientation indicated that gay people of both sexes apparently felt less need to differentiate themselves from potential mates than did heterosexual people. Virtually no crossover sexual orientation by sex interactions emerged indicating that several popular models of sexual orientation are not supported on a linguistic level. --- paper_title: Who's Right and Who Writes: People, Profiles, Contacts, and Replies in Online Dating paper_content: In this analysis of profiles and messaging behavior on a major online dating service, we find that, consistent with predictions of evolutionary psychology, women as compared to men state more restrictive preferences for their ideal date. Furthermore, women contact and reply to others more selectively than men. Additionally, we identify connections among messaging behavior, textual self-descriptions in dating profiles, and relationship-relevant traits such as neuroticism. --- paper_title: Less Is More : The Lure of Ambiguity , or Why Familiarity Breeds Contempt paper_content: The present research shows that although people believe that learning more about others leads to greater liking, more information about others leads, on average, to less liking. Thus, ambiguity--lacking information about another--leads to liking, whereas familiarity--acquiring more information--can breed contempt. This "less is more" effect is due to the cascading nature of dissimilarity: Once evidence of dissimilarity is encountered, subsequent information is more likely to be interpreted as further evidence of dissimilarity, leading to decreased liking. The authors document the negative relationship between knowledge and liking in laboratory studies and with pre- and postdate data from online daters, while showing the mediating role of dissimilarity. --- paper_title: Animal Mating Systems: A Synthesis Based on Selection Theory paper_content: Following principles used by A. J. Bateman, we identify the relationship between fecundity and mating success as the central feature in the operation of mating systems. Using selection theory from the field of quantitative genetics, we define the sexual selection gradient as the average slope of the relationship between fecundity and mating success and show how it can be estimated from data. We argue that sexual selection gradients are the key to understanding how the intensity of sexual selection is affected by mate provisioning, parental investment, and sex ratio. --- paper_title: Internet dating: a British survey paper_content: Purpose – An online survey was carried out with the purpose of finding out the extent to which internet users subscribe to online dating services. The paper aims to assess users' experiences of such services and their eventual outcomes.Design/methodology/approach – Data were obtained through a self‐completion online questionnaire survey posted on the website of a leading internet research agency, utilising its online panel of c. 30,000 UK respondents.Findings – More than 3,800 online panellists responded of whom 29 per cent said they had used an online dating site. 
Most of these respondents (90 per cent) had spent up to £200 on internet dating in the past two years, with 70 per cent of users achieving at least one date, 43 per cent enjoying at least one sexual relationship, and 9 per cent finding a marriage partner.Research limitations/implications – Despite the limitations over sample control of self‐completion surveying, a large online sample was achieved that indicated the growing importance of the int... --- paper_title: One-Way Mirrors and Weak-Signaling in Online Dating: A Randomized Field Experiment paper_content: The growing popularity of online dating sites is altering one of the most fundamental human activities of finding a date or a marriage partner. Online dating platforms offer new capabilities, such as intensive search, big-data based mate recommendations and varying levels of anonymity, whose parallels do not exist in the physical world. In this study we examine the impact of anonymity feature on matching outcomes. Based on a large scale randomized experiment in partnership with one of the largest online dating companies, we demonstrate causally that anonymity indeed lets users browse more freely, but at the same time impacts the existing social dating norms (what we call a weak signaling mechanism) and thus produces negative impact on matches. Our results show that this weak signaling is especially helpful for women, helping them overcome social frictions coming from established social norms that discourage them from making the first move in dating. © (2013) by the AIS/ICIS Administrative Office. All rights reserved. --- paper_title: Dating deception: Gender, online dating, and exaggerated self-presentation paper_content: This study examined how differences in expectations about meeting impacted the degree of deceptive self-presentation individuals displayed within the context of dating. Participants filled out personality measures in one of four anticipated meeting conditions: face-to-face, email, no meeting, and a control condition with no pretense of dating. Results indicated that, compared to baseline measures, male participants increased the amount they self-presented when anticipating a future interaction with a prospective date. Specifically, male participants emphasized their positive characteristics more if the potential date was less salient (e.g., email meeting) compared to a more salient condition (e.g., face-to-face meeting) or the control conditions. Implications for self-presentation theory, online social interaction, and online dating research will be discussed. --- paper_title: Revealing the 'real' me, searching for the 'actual' you: Presentations of self on an internet dating site paper_content: This paper considers the presentation of self on an internet dating site. Thirty men and thirty women were interviewed about their online dating experiences. They were asked about how they constructed their profiles and how they viewed other individuals’ profiles. What types of presentations of self led to more successful offline romantic relationships were also investigated. Additionally, gender differences were examined. In line with previous research on presentation of self online, individuals were quite strategic in their online presentations. 
However, important differences between initiating a relationship on an internet dating site and other spaces (online and offline) included the type of self disclosed as well as the depth of breadth of information individuals self-disclosed about themselves before any one-on-one conversations took place. --- paper_title: Partner Preferences Across the Life Span: Online Dating by Older Adults paper_content: Stereotypes of older adults as withdrawn or asexual fail to recognize that romantic relationships in later life are increasingly common. The authors analyzed 600 Internet personal ads from 4 age groups: 20-34, 40-54, 60-74, and 75+ years. Predictions from evolutionary theory held true in later life, when reproduction is no longer a concern. Across the life span, men sought physical attractiveness and offered status-related information more than women; women were more selective than men and sought status more than men. With age, men desired women increasingly younger than themselves, whereas women desired older men until ages 75 and over, when they sought men younger than themselves. --- paper_title: Self-Presentation in Online Personals: The Role of Anticipated Future Interaction, Self-Disclosure, and Perceived Success in Internet Dating paper_content: This study investigates self-disclosure in the novel context of online dating relationships. Using a national random sample of Match.com members (N = 349), the authors tested a model of relational goals, self-disclosure, and perceived success in online dating. The authors’ findings provide support for social penetration theory and the social information processing and hyperpersonal perspectives as well as highlight the positive effect of anticipated future face-to-face interaction on online self-disclosure. The authors find that perceived online dating success is predicted by four dimensions of self-disclosure (honesty, amount, intent, and valence), although honesty has a negative effect. Furthermore, online dating experience is a strong predictor of perceived success in online dating. Additionally, the authors identify predictors of strategic success versus self-presentation success. This research extends existing theory on computer-mediated communication, self-disclosure, and relational success to the i... --- paper_title: Strategic misrepresentation in online dating: The effects of gender, self-monitoring, and personality traits: paper_content: This study examines factors (including gender, self-monitoring, the big five personality traits, and demographic characteristics) that influence online dating service users’ strategic misrepresentation (i.e., the conscious and intentional misrepresentation of personal characteristics). Using data from a survey of online dating service users (N = 5,020), seven categories of misrepresentation — personal assets, relationship goals, personal interests, personal attributes, past relationships, weight, and age — were examined. The study found that men are more likely to misrepresent personal assets, relationship goals, personal interests, and personal attributes, whereas women are more likely to misrepresent weight. The study further discovered that self-monitoring (specifically other-directedness) was the strongest and most consistent predictor of misrepresentation in online dating. Agreeableness, conscientiousness, and openness also showed consistent relationships with misrepresentation. 
--- paper_title: Attractiveness and sexual behavior: Does attractiveness enhance mating success paper_content: Abstract If attractiveness is an important cue for mate choice, as proposed by evolutionary psychologists, then attractive individuals should have greater mating success than their peers. We tested this hypothesis in a large sample of adults. Facial attractiveness correlated with the number of short-term, but not long-term, sexual partners, for males, and with the number of long-term, but not short-term, sexual partners and age of first sex, for females. Body attractiveness also correlated significantly with the number of short-term, but not long-term, sexual partners, for males, and attractive males became sexually active earlier than their peers. Body attractiveness did not correlate with any sexual behavior variable for females. To determine which aspects of attractiveness were important, we examined associations between sexual behaviors and three components of attractiveness: sexual dimorphism, averageness, and symmetry. Sexual dimorphism showed the clearest associations with sexual behaviors. Masculine males (bodies, similar trend for faces) had more short-term sexual partners, and feminine females (faces) had more long-term sexual partners than their peers. Feminine females (faces) also became sexually active earlier than their peers. Average males (faces and bodies) had more short-term sexual partners and more extra-pair copulations (EPC) than their peers. Symmetric women (faces) became sexually active earlier than their peers. Given that male reproductive success depends more on short-term mating opportunities than does female reproductive success, these findings suggest that individuals of high phenotypic quality have higher mating success than their lower quality counterparts. --- paper_title: Sex differences in intra-sex variations in human mating tactics: An evolutionary approach paper_content: Abstract We assessed sex differences in the effects of physical attractiveness and earning potential on mate selection, and sex differences in preferences and motivations with regard to short-term and long-term mating. We also investigated the effect of a variable likely to produce intra-sex variations in the selection of mating tactics, self-perceived mating success. Forty-eight university students were presented with pictures and short descriptions of persons of the opposite sex varying in physical attractiveness and earning potential. Dating interest was influenced, for both sexes, by stimulus-person's physical attractiveness and earning potential, but these two characteristics interacted only for female raters. Male and female subjects showed discrepant preferences and motivations with regard to short-term and long-term mating. In addition, self-perceived mating success was related to mating tactics in males only: Males who perceived themselves as more successful, compared to males who perceived themselves as less successful, tended to prefer and to more often select short-term mating. This effect was maximized when the stimulus person was very attractive and of high earning potential. These results confirm sex differences in mating preferences, strongly suggest a proximal factor of tactic selection, and suggest that males' mating strategies may be more variable than females'. ---
Title: Gender Differences in Online Dating: What Do We Know So Far? A Systematic Literature Review Section 1: Introduction Description 1: Introduce the significance and context of online dating and outline the aim and importance of the study. Section 2: Theoretical Foundations Description 2: Discuss the theoretical background, including social role, self-construal, and evolutionary theories as they pertain to gender differences in online dating. Section 3: Methodology Description 3: Detail the methods used in selecting and analyzing the literature, including criteria for inclusion and the process of data collection. Section 4: Daters' Characteristics Description 4: Summarize the user characteristics collected, focusing on differences between male and female users. Section 5: Motivation Description 5: Explore the initial motives for engaging in online dating for males and females and how these motives align with mating theory. Section 6: Preferences Description 6: Analyze the preferences of men and women in selecting potential partners on online dating platforms. Section 7: Disclosure Description 7: Examine the patterns of self-disclosure by male and female daters on online dating platforms. Section 8: Misrepresentation Description 8: Discuss the tendencies of misrepresentation among male and female users and the types of information often falsified. Section 9: Interaction Description 9: Investigate the nature of interactions between male and female users on online dating platforms, including initiation and response patterns. Section 10: Outcome Description 10: Summarize the outcomes of online dating interactions, focusing on gender differences in meeting offline and relationship success. Section 11: Concluding Remarks Description 11: Provide a summary of key findings and their implications for theory and practice, and suggest avenues for future research.
A Survey on Chordal Rings, N2R and Other Related Topologies
6
--- paper_title: Improving Topological Routing in N2R Networks paper_content: Topological routing is basically table free, and allows for very fast restoration and thus a high level of reliability in communication. It has already been satisfactorily proposed for some regular structures such as Grid or Honeycomb. An initial proposal has also been developed for the N2R structures. This paper proposes a modification of this previous algorithm, and in addition two other alternatives. The three options are systematically analyzed in terms of executing time and path distances, showing that trade-offs are needed in order to determine which algorithm is best for a given case. Also, the possible practical applications the methods could have, are discussed for different traffic scenarios. --- paper_title: Self-dual configurations and regular graphs paper_content: A rotary internal combustion engine including a plurality of segregated chambers arranged in communicating relation with one another such that compression, combustion and expansion of the gases used to rotate the engine may take place in succession, step-like manner. Each of the chambers have mounted therein a rotary pump which may be in the form of a pair of multi-lobed intermeshing rotors mounted on two parallel drive shafts. Alternately, the rotary pumps may be in the form of a single rotor mounted on a single drive shaft which extends through each of the successively arranged chambers so as to position the rotor in movable working relation with the interior of the cylinder. --- paper_title: Traffic load on interconnection lines of generalized double ring network structures paper_content: Generalized double ring (N2R) network structures possess a number of good properties, but being not planar they are hard to physically embed in communication networks. However, if some of the lines, the interconnection lines, are implemented by wireless technologies, the remaining structure consists of two planar rings, which are easily embedded by fiber or other wired solutions. It is shown that for large N2R structures, the interconnection lines carry notably lower loads than the other lines if shortest-path routing is used, and the effects of two other routing schemes are explored, leading to lower load on interconnection lines at the price of larger efficient average distance and diameter --- paper_title: Five Novel Selection Policies for N2R Network Structures paper_content: This paper shows how 5 new selection policies can be applied to N2R structures. For each number of nodes, a selection policy determines which topology is chosen. Compared to approaches taken previously, the policies proposed in this paper allow us to choose structures which are significantly easier to implement, while having only slightly longer distances. The 5 policies reflect different trade-offs between distances and ease of implementation, and two of them explore the potentials of using N2R(p; q; r) instead of N2R(p; q) structures. --- paper_title: Describing N2R Properties Using Ideal Graphs paper_content: N2R structures are a subset of Generalized Petersen Graphs and a potentially good option to be used for implementing degree three networks. The previous works on these structures verify that N2R are better than other degree three regular topologies such as Double Rings or Honeycomb in terms of path distances. 
The cost of this good performance is a more complex and non-planar interconnection configuration; it is complex to find analytical models to be used, for example for describing topological parameters. This paper proposes and verifies the possibility of approximating real N2R graphs to optimal and ideal graphs, which are much easier to model, obtaining fairly accurate results. A main result of the paper is a simple formula for approximating average distance and diameter given the number of nodes in an N2R graph. --- paper_title: Using Different Chord Lengths in Degree Three Chordal Rings and N2R Topologies paper_content: Degree Three Chordal Rings and N2R Topologies are useful for physical and optical network topologies due to the combination of short distances, regularity and low degrees. In this paper we show how distances in terms of average distances and diameters can be significantly decreased by using chords of different lengths. These topologies are slightly less symmetric than the traditional ones, but the distances are virtually the same no matter which node in a given topology they are measured from. ---
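The N2R structures described in the reference list above are straightforward to reproduce computationally. The following sketch assumes the common definition of N2R(p, q) as two p-node rings, an outer ring with step 1 and an inner ring with chord length q, joined by one interconnection line per node pair; the function names and parameter values are illustrative, not taken from the cited papers. It measures the average distance and diameter with breadth-first search, the two metrics the abstracts use to compare chord lengths.

```python
from collections import deque

def n2r_edges(p, q):
    """Edge list of an N2R(p, q) graph: outer ring 0..p-1 with step 1,
    inner ring p..2p-1 with chord length q, plus one spoke per node pair."""
    edges = []
    for j in range(p):
        edges.append((j, (j + 1) % p))          # outer ring
        edges.append((p + j, p + (j + q) % p))  # inner ring
        edges.append((j, p + j))                # interconnection line
    return edges

def distance_stats(p, q):
    """Average distance and diameter computed by BFS from every node."""
    n = 2 * p
    adj = [[] for _ in range(n)]
    for u, v in n2r_edges(p, q):
        adj[u].append(v)
        adj[v].append(u)
    total, diameter = 0, 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist)
        diameter = max(diameter, max(dist))
    return total / (n * (n - 1)), diameter

if __name__ == "__main__":
    # Same number of nodes (2p = 32), different chord lengths.
    for q in (1, 2, 3, 5):
        avg, diam = distance_stats(16, q)
        print(f"N2R(16,{q}): average distance {avg:.3f}, diameter {diam}")
```

Whether the three-parameter notation N2R(p; q; r) used in one of the abstracts adds a further chord parameter cannot be inferred from the text alone, so the sketch covers only the two-parameter case.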
Title: A Survey on Chordal Rings, N2R and Other Related Topologies Section 1: Introduction Description 1: Describe the significance and the increasing dependence of society on communication infrastructures, and provide an overview of the role of network topologies in ensuring connectivity, reliability, and quality. Section 2: Basic notation and definitions Description 2: Define the basic terms and parameters used in network topologies, such as graphs, nodes, edges, path length, distance, and node degree, along with metrics for evaluating topologies. Section 3: N2R Description 3: Discuss the Network of 2 Rings (N2R) topology, its structure, properties, and various studies about its distances, traffic distribution, routing schemes, and reliability. Section 4: Chordal Rings Description 4: Explain the chordal ring topology, including its definition, structure with chords, and degree, and summarize related studies on distances, routing, and comparative analysis with N2R. Section 5: Further works Description 5: Highlight collaborative efforts between the two universities on extending the work on CR and N2R topologies, proposing new variants, analyzing their characteristics, and studying their reliability and routing mechanisms. Section 6: Conclusion and outlook Description 6: Provide a summary of the key findings from the survey, and discuss the potential future directions in the study and implementation of these network topologies.
Event-based Vision: A Survey
16
--- paper_title: A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS paper_content: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 Lx illuminance. --- paper_title: A 128 X 128 120db 30mw asynchronous vision sensor that responds to relative intensity change paper_content: A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128×128-pixel chip has 120dB illumination operating range and consumes 30mW. Pixels respond in <100 µs at 1klux scene illumination with <10% contrast-threshold FPN. --- paper_title: A 64x64 aer logarithmic temporal derivative silicon retina paper_content: Real time artificial vision is traditionally limited to the frame rate. In many scenarios most frames contain information redundant both within and across frames. Here we report on the development of an Address-Event Representation (AER) [1] silicon retina chip `TMPDIFF’ that generates events corresponding to changes in log intensity. The resulting address-events are output asynchronously on a shared digital bus. This chip responds with high temporal and low spatial resolution, analogous to the biological magnocellular pathway. It has 64x64 pixels, each with 2 outputs (ON and OFF), which are communicated off-chip on a 13-bit digital bus. It is fabricated in a 0.35u 4M 2P process and occupies an area of (3.3 mm). Each (40u) pixel has 28 transistors and 3 capacitors and uses a self-clocked switched-capacitor design to limit response FPN. Dynamic operating range is at least 5 decades and minimum scene illumination with f/1.4 lens is less than 10 lux. Chip power consumption is 7mW. --- paper_title: A 240x180 120dB 10mW 12us‐latency sparse output vision sensor for mobile applications paper_content: This paper proposes a CMOS vision sensor that combines event-driven asynchronous readout of temporal contrast with synchronous frame-based active pixel sensor (APS) readout of intensity.
The image frames can be used for scene content analysis and the temporal contrast events can be used to track fast moving objects, to adjust the frame rate, or to guide a region of interest readout. Therefore the sensor is suitable for mobile applications because it allows low latency at low data rate and low system-level power consumption. Sharing the photodiode for both readout types allows a compact pixel design that is 60% smaller than a comparable design. The 240x180 sensor has a power consumption of 10mW. It is built in 0.18um technology with 18.5um pixels. The temporal contrast pathway has a minimum latency of 12us, a dynamic range of 120dB, 12% contrast detection threshold and 3.5% contrast matching. The APS readout has 55dB dynamic range with 1% FPN. --- paper_title: 64x64 Event-Driven Logarithmic Temporal Derivative Silicon Retina paper_content: Real time artificial vision is traditionally limited to the frame rate. In many scenarios most frames contain information redundant both within and across frames. Here we report on the development of an Address-Event Representation (AER) [1] silicon retina chip `TMPDIFF’ that generates events corresponding to changes in log intensity. The resulting address-events are output asynchronously on a shared digital bus. This chip responds with high temporal and low spatial resolution, analogous to the biological magnocellular pathway. It has 64x64 pixels, each with 2 outputs (ON and OFF), which are communicated off-chip on a 13-bit digital bus. It is fabricated in a 0.35u 4M 2P process and occupies an area of (3.3 mm). Each pixel has 28 transistors and 3 capacitors and uses a self-clocked switched-capacitor design to limit response FPN. Dynamic operating range is at least 5 decades and minimum scene illumination with f/1.4 lens is less than 10 lux. --- paper_title: A burst-mode word-serial address-event link-I: transmitter design paper_content: We present a transmitter for a scalable multiple-access inter-chip link that communicates binary activity between two-dimensional arrays fabricated in deep submicrometer CMOS. Transmission is initiated by active cells but cells are not read individually. An entire row is read in parallel; this increases communication capacity with integration density. Access is random but not inequitable. A row is not reread until all those waiting are serviced; this increases parallelism as more of its cells become active in the mean time. Row and column addresses identify active cells but they are not transmitted simultaneously. The row address is followed sequentially by a column address for each active cell; this cuts pad count in half without sacrificing capacity. We synthesized an asynchronous implementation by performing a series of program decompositions, starting from a high-level description. Links using this design have been implemented successfully in three generations of submicrometer CMOS technology. --- paper_title: A 128 X 128 120db 30mw asynchronous vision sensor that responds to relative intensity change paper_content: A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128×128-pixel chip has 120dB illumination operating range and consumes 30mW.
Pixels respond in <100 µs at 1klux scene illumination with <10% contrast-threshold FPN. --- paper_title: A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS paper_content: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 Lx illuminance. --- paper_title: Live demonstration: A 768 × 640 pixels 200Meps dynamic vision sensor paper_content: We demonstrate a high resolution Dynamic Vision Sensor (DVS) with 768 × 640 pixels, and 200Meps (events per second) high speed readout. The sensor has a dual-channel synchronous interface and can operate at 100 MHz. It has a few unique features, namely three-in-one (coordinate, brightness and time stamp) event packet, capability of producing full-array picture-on-demand [1] and on-chip optical flow computation. The sensor will find broad applications in real-time machine vision. --- paper_title: Activity-driven, event-based vision sensors paper_content: The four chips [1–4] presented in the special session on "Activity-driven, event-based vision sensors" quickly output compressed digital data in the form of events. These sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers. The digital sensor output is easily interfaced to conventional digital post processing, where it reduces the latency and cost of post processing compared to imagers. The asynchronous data could spawn a new area of DSP that breaks from conventional Nyquist rate signal processing. This paper reviews the rationale and history of this event-based approach, introduces sensor functionalities, and gives an overview of the papers in this session. The paper concludes with a brief discussion on open questions. --- paper_title: Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output paper_content: State-of-the-art image sensors suffer from significant limitations imposed by their very principle of operation. These sensors acquire the visual information as a series of “snapshot” images, recorded at discrete points in time.
Visual information gets time quantized at a predetermined frame rate which has no relation to the dynamics present in the scene. Furthermore, each recorded frame conveys the information from all pixels, regardless of whether this information, or a part of it, has changed since the last frame had been acquired. This acquisition method limits the temporal resolution, potentially missing important information, and leads to redundancy in the recorded image data, unnecessarily inflating data rate and volume. Biology is leading the way to a more efficient style of image acquisition. Biological vision systems are driven by events happening within the scene in view, and not, like image sensors, by artificially created timing and control signals. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer being imposed externally to an array of pixels but the decision making is transferred to the single pixel that handles its own information individually. In this paper, recent developments in bioinspired, neuromorphic optical sensing and artificial vision are presented and discussed. It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency. Demanding vision tasks such as real-time 3-D mapping, complex multiobject tracking, or fast visual feedback loops for sensory-motor action, tasks that often pose severe, sometimes insurmountable, challenges to conventional artificial vision systems, are in reach using bioinspired vision sensing and processing techniques. --- paper_title: A 240x180 120dB 10mW 12us‐latency sparse output vision sensor for mobile applications paper_content: This paper proposes a CMOS vision sensor that combines event-driven asynchronous readout of temporal contrast with synchronous frame-based active pixel sensor (APS) readout of intensity. The image frames can be used for scene content analysis and the temporal contrast events can be used to track fast moving objects, to adjust the frame rate, or to guide a region of interest readout. Therefore the sensor is suitable for mobile applications because it allows low latency at low data rate and low system-level power consumption. Sharing the photodiode for both readout types allows a compact pixel design that is 60% smaller than a comparable design. The 240x180 sensor has a power consumption of 10mW. It is built in 0.18um technology with 18.5um pixels. The temporal contrast pathway has a minimum latency of 12us, a dynamic range of 120dB, 12% contrast detection threshold and 3.5% contrast matching. The APS readout has 55dB dynamic range with 1% FPN. --- paper_title: Fun with asynchronous vision sensors and processing paper_content: This paper provides a personal perspective on our group's efforts in building event-based vision sensors, algorithms, and applications over the period 2002-2012. Some recent advances from other groups are also briefly described.
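Several of the sensor abstracts above describe the same pixel-level principle: an ON or OFF event is emitted whenever the log intensity at a pixel changes by more than a contrast threshold since that pixel's last event. The sketch below is a minimal frame-to-event converter built on that principle, in the spirit of the software emulators cited later in this list; the threshold value and function name are illustrative assumptions, not the circuit or parameters of any particular sensor.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.15, eps=1e-3):
    """Convert a sequence of intensity frames into DVS-style events.

    An event (t, x, y, polarity) is generated whenever the log intensity
    of a pixel changes by more than `threshold` relative to the log
    intensity memorised at that pixel's last event. Timestamps are taken
    from the frame times, so the microsecond timing of a real sensor is
    not reproduced here.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel memory
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        log_new = np.log(frame.astype(np.float64) + eps)
        diff = log_new - log_ref
        for polarity, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), polarity) for x, y in zip(xs, ys))
            # Update the memorised level only where events fired,
            # mimicking the per-pixel reset after each event.
            log_ref[mask] = log_new[mask]
    events.sort(key=lambda e: e[0])
    return events
```

Real pixels update their reference to the exact threshold-crossing level rather than to the next sampled value, so this sketch slightly over-resets; it is meant only to make the event-generation idea concrete.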
--- paper_title: Accelerated frame-free time-encoded multi-step imaging paper_content: This paper presents a frame-free time-domain imaging approach designed to alleviate the non-ideality of finite exposure measurement time (intrinsic to all integrating imagers), limiting the temporal resolution of the ATIS asynchronous time-based image sensor concept. The method uses the time-domain correlated double sampling (TCDS) and change detection circuitry already present in the data-driven autonomous ATIS pixels and does not involve any additional data to be transmitted by the sensor, but is entirely based on the data available in normal operation. Three consecutive exposure estimation / measurement steps apply different trade-offs between measurement speed, accuracy and noise. The early estimates yield between 10 and 100 times faster pixel updates than the standard full-swing integrating exposure measurement operation. The results from the three individual measurement steps can be used separately or in combination, enabling event-driven asynchronous high-speed imaging at moderate light levels. --- paper_title: A QVGA 143dB dynamic range asynchronous address-event PWM dynamic image sensor with lossless pixel-level video compression paper_content: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame had been acquired, which is usually not long ago. This method obviously leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data. Acquisition and handling of these dispensable data consume valuable resources; sophisticated and resource-hungry video compression methods have been developed to deal with these data. --- paper_title: Time-derivative adaptive silicon photoreceptor array paper_content: We designed and tested a two-dimensional silicon receptor array constructed from pixels that temporally high-pass filter the incident image. There are no surround interactions in the array; all pixels operate independently except for their correlation due to the input image. The high- pass output signal is computed by sampling the output of an adaptive, high-gain, logarithmic photoreceptor during the scanout of the array. After a pixel is sampled, the output of the pixel is reset to a fixed value. An interesting capacitive coupling mechanism results in a controllable high-pass filtering operation. The resulting array has very low offsets. The computation that the array performs may be useful for time-domain image processing, for example, motion computation. --- paper_title: Neuromorphic sensory systems paper_content: Biology provides examples of efficient machines which greatly outperform conventional technology. Designers in neuromorphic engineering aim to construct electronic systems with the same efficient style of computation. This task requires a melding of novel engineering principles with knowledge gleaned from neuroscience. We discuss recent progress in realizing neuromorphic sensory systems which mimic the biological retina and cochlea, and subsequent sensor processing. 
The main trends are the increasing number of sensors and sensory systems that communicate through asynchronous digital signals analogous to neural spikes; the improved performance and usability of these sensors; and novel sensory processing methods which capitalize on the timing of spikes from these sensors. Experiments using these sensors can impact how we think the brain processes sensory information. --- paper_title: A Low Power, Fully Event-Based Gesture Recognition System paper_content: We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS). The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5% out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions. --- paper_title: End-to-End Learning of Driving Models from Large-Scale Video Datasets paper_content: Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held out sequences across diverse conditions. --- paper_title: CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing– Learning–Actuating System for High-Speed Visual Object Recognition and Tracking paper_content: This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. 
It has four custom mixed-signal AER chips, five custom digital AER interface components, 45 k neurons (spiking cells), up to 5 M synapses, performs 12 G synaptic operations per second, and achieves millisecond object recognition and tracking latencies. --- paper_title: A pencil balancing robot using a pair of AER dynamic vision sensors paper_content: Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. The PC updates its estimate of the pencil's location and angle in 3d space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD-controller adjusts X-Y-position and velocity of the table to maintain the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table. The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chop-stick, or rod for many minutes. Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at intervals below millisecond ranges. Controlling at normal image sensor sample rates (e.g. 60 Hz) results in too long latencies for a stable control loop. --- paper_title: On-board real-time optic-flow for miniature event-based vision sensors paper_content: This paper presents a novel, drastically simplified method to compute optic flow on a miniaturized embedded vision system, suitable on-board of miniaturized indoor flying robots. Estimating optic flow is a common technique for robotic motion stabilization in systems without ground contact, such as unmanned aerial vehicles (UAVs). Because of high computing power requirements to process video camera data, most optic flow algorithms are implemented off-board on PCs or on dedicated hardware, connected through tethered or wireless links. Here, in contrast, we present a miniaturized stand-alone embedded system that utilizes a novel neuro-biologically inspired event-based vision sensor (DVS) to extract optic flow on-board in real-time with minimal computing requirements. The DVS provides asynchronous events that resemble temporal contrast changes at individual pixel level, instead of full image frames at regular time intervals. Such a representation provides high temporal resolution while simultaneously reducing the amount of data to be processed. We present a simple algorithm to extract optic flow information from such event-based vision data, which is sufficiently efficient in terms of data storage and processing power to be executed on an embedded 32bit ARM7 microcontroller in real-time. The developed stand-alone system is small, lightweight and energy efficient, and is ready to serve as sensor for ego motion estimates based on optic flow in autonomous UAVs. 
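The on-board optic-flow work just above extracts flow directly from per-pixel event timestamps rather than from frames. One common formulation of that idea, not necessarily the exact algorithm of the cited paper, fits a local plane to the map of most recent event times (the so-called time surface); the plane's spatial gradient is then inverted to obtain normal flow. The neighbourhood size and names below are illustrative.

```python
import numpy as np

def normal_flow_from_time_surface(time_surface, x, y, half_window=2):
    """Estimate normal optic flow at pixel (x, y) in pixels per second.

    `time_surface[y, x]` holds the timestamp (seconds) of the most recent
    event at each pixel, or NaN if none has arrived yet. A plane
    t = a*x + b*y + c is least-squares fitted to the neighbourhood; the
    gradient (a, b) of the event times is inverted to obtain velocity.
    """
    h, w = time_surface.shape
    xs, ys, ts = [], [], []
    for yy in range(max(0, y - half_window), min(h, y + half_window + 1)):
        for xx in range(max(0, x - half_window), min(w, x + half_window + 1)):
            t = time_surface[yy, xx]
            if not np.isnan(t):
                xs.append(xx)
                ys.append(yy)
                ts.append(t)
    if len(ts) < 4:
        return None  # too few recent events to fit a plane
    A = np.column_stack([xs, ys, np.ones(len(ts))])
    (a, b, _), *_ = np.linalg.lstsq(A, np.asarray(ts), rcond=None)
    grad_sq = a * a + b * b
    if grad_sq < 1e-12:
        return None  # flat time surface: no measurable motion
    # Velocity points along the time-surface gradient with magnitude 1/|grad|.
    return a / grad_sq, b / grad_sq
```

Because only the component of motion along the local intensity gradient produces events, this estimate is normal flow, which is consistent with the ego-motion use case described in the abstract.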
--- paper_title: Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor paper_content: Dynamic and active pixel vision sensors (DAVISs) are a new type of sensor that combine a frame-based intensity readout with an event-based temporal contrast readout. This paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data. The algorithm performs an online optimization of the event decoding in real time. Example scenes were recorded by the 240×180 pixel sensor at sub-Hz frame rates and successfully decompressed yielding an equivalent frame rate of 2kHz. A quantitative analysis of the compression quality resulted in an average pixel error of 0.5DN intensity resolution for non-saturating stimuli. The system exhibits an adaptive compression ratio which depends on the activity in a scene; for stationary scenes it can go up to 1862. The low data rate and power consumption of the proposed video compression system make it suitable for distributed sensor networks. --- paper_title: Vision: Human And Electronic paper_content: The series of lectures on the process of vision in both human and electronic systems was based predominantly on a number of publications in scattered parts of the literature. Several of these papers are reproduced here and serve, at least, the convenience of juxtaposition. --- paper_title: Toward real-time particle tracking using an event-based dynamic vision sensor paper_content: Optically based measurements in high Reynolds number fluid flows often require high-speed imaging techniques. These cameras typically record data internally and thus are limited by the amount of onboard memory available. A novel camera technology for use in particle tracking velocimetry is presented in this paper. This technology consists of a dynamic vision sensor in which pixels operate in parallel, transmitting asynchronous events only when relative changes in intensity of approximately 10% are encountered with a temporal resolution of 1 μs. This results in a recording system whose data storage and bandwidth requirements are about 100 times smaller than a typical high-speed image sensor. Post-processing times of data collected from this sensor also increase to about 10 times faster than real time. We present a proof-of-concept study comparing this novel sensor with a high-speed CMOS camera capable of recording up to 2,000 fps at 1,024 × 1,024 pixels. Comparisons are made in the ability of each system to track dense (ρ >1 g/cm3) particles in a solid–liquid two-phase pipe flow. Reynolds numbers based on the bulk velocity and pipe diameter up to 100,000 are investigated. --- paper_title: Fast sensory motor control based on event-based hybrid neuromorphic-procedural system paper_content: Fast sensory-motor processing is challenging when using traditional frame-based cameras and computers. Here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal. The system consists of a 128×128 retina that asynchronously reports scene reflectance changes, a laptop PC, and a servo motor controller. Components are interconnected by USB. The retina looks down onto the field in front of the goal.
Moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal. The ball's position and velocity are used to control the servo motor. Running under Windows XP, the reaction latency is 2.8±0.5 ms at a CPU load of 1 million events per second (Meps), although fast balls only create ~30 keps. This system demonstrates the advantages of hybrid event-based sensory motor processing. --- paper_title: A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications paper_content: Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). The achievement is possible through the adoption of an in-pixel preamplification stage. This preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing calcium sensitive green fluorescent protein GCaMP6f. --- paper_title: Temperature and Parasitic Photocurrent Effects in Dynamic Vision Sensors paper_content: The effect of temperature and parasitic photocurrent on event-based dynamic vision sensors (DVS) is important because of their application in uncontrolled robotic, automotive, and surveillance applications. This paper considers the temperature dependence of DVS threshold temporal contrast (TC), dark current, and background activity caused by junction leakage. New theory shows that if bias currents have a constant ratio, then ideally the DVS threshold TC is temperature independent, but the presence of temperature dependent junction leakage currents causes nonideal behavior at elevated temperature. Both measured photodiode dark current and leakage induced event activity follow Arrhenius activation. This paper also defines a new metric for parasitic photocurrent quantum efficiency and measures the sensitivity of DVS pixels to parasitic photocurrent. --- paper_title: Live demonstration: Behavioural emulation of event-based vision sensors paper_content: This demonstration shows how an inexpensive high frame-rate USB camera is used to emulate existing and proposed activity-driven event-based vision sensors.
A PS3-Eye camera which runs at a maximum of 125 frames/second with colour QVGA (320×240) resolution is used to emulate several event-based vision sensors, including a Dynamic Vision Sensor (DVS), a colour-change sensitive DVS (cDVS), and a hybrid vision sensor with DVS+cDVS pixels. The emulator is integrated into the jAER software project for event-based real-time vision and is used to study use cases for future vision sensor designs. --- paper_title: Authors’ Reply to Comment on “Temperature and Parasitic Photocurrent Effects in Dynamic Vision Sensors” paper_content: We thank the reviewers for their careful analysis of [1], especially for spotting two errors in the formulas for inferring temporal contrast effects of leak and parasitic photocurrent. The revised results increase the values of the inferred parasitic leak currents by a factor of about 11×. --- paper_title: A Dynamic Vision Sensor With 1% Temporal Contrast Sensitivity and In-Pixel Asynchronous Delta Modulator for Event Encoding paper_content: A dynamic vision sensor (DVS) encodes temporal contrast (TC) of light intensity into address-events that are asynchronously transmitted for subsequent processing. This paper describes a DVS with improved TC sensitivity and event encoding. To enhance the TC sensitivity, each pixel employs a common-gate photoreceptor for low output noise and a capacitively-coupled programmable gain amplifier for continuous-time signal amplification without sacrificing the intra-scene dynamic range. A proposed in-pixel asynchronous delta modulator (ADM) better preserves signal integrity in event encoding compared with self-timed reset (STR) used in previous DVSs. A 60 × 30 prototype sensor array with a 31.2 µm pixel pitch was fabricated in a 1P6M 0.18 µm CMOS technology. It consumes 720 µW at a 100k event/s output rate. Measurements show that a 1% TC sensitivity with a 35% relative standard deviation is achieved and that the in-pixel ADM is up to 3.5 times less susceptible to signal loss than STR in terms of event number. These improvements can facilitate the application of DVSs in areas like optical neuroimaging which is demonstrated in a simulated experiment. --- paper_title: O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors paper_content: Neuromorphic vision sensors are an emerging technology inspired by how the retina processes images. A neuromorphic vision sensor only reports when a pixel value changes rather than continuously outputting the value every frame as is done in an "ordinary" Active Pixel Sensor (APS). This move from a continuously sampled system to an asynchronous event driven one effectively allows for much faster sampling rates; it also fundamentally changes the sensor interface. In particular, these sensors are highly sensitive to noise, as any additional event reduces the bandwidth, and thus effectively lowers the sampling rate. In this work we introduce a novel spatiotemporal filter with O(N) memory complexity for reducing background activity noise in neuromorphic vision sensors. Our design consumes 10× less memory and has 100× reduction in error compared to previous designs. Our filter is also capable of recovering real events and can pass up to 180% more real events.
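The noise-filtering abstracts above revolve around the same basic spatiotemporal test: an event is treated as background activity if no nearby pixel has fired recently. A minimal baseline version of such a filter is sketched below; the time window, neighbourhood radius, and class name are illustrative, and the memory layout is the simple one-timestamp-per-pixel map used by earlier background-activity filters, not the reduced-memory scheme of the cited O(N)-space design.

```python
import numpy as np

class BackgroundActivityFilter:
    """Pass an event only if some nearby pixel fired within `dt` seconds.

    Stores one 'last event time' per pixel, the simple baseline that
    reduced-memory designs improve upon.
    """

    def __init__(self, width, height, dt=10e-3, radius=1):
        self.last_ts = np.full((height, width), -np.inf)
        self.dt = dt
        self.radius = radius

    def process(self, t, x, y):
        """Return True if the event at (x, y, t) is judged to be signal."""
        r = self.radius
        y0, y1 = max(0, y - r), min(self.last_ts.shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(self.last_ts.shape[1], x + r + 1)
        # The pixel's own history also counts as support in this simple version.
        is_signal = bool(np.any(t - self.last_ts[y0:y1, x0:x1] <= self.dt))
        self.last_ts[y, x] = t  # update the timestamp map after the decision
        return is_signal
```

Events rejected by the filter can either be dropped or routed to a separate stream, depending on whether later processing stages want to re-examine them; the evaluation methodology in the abstract above measures exactly this signal/noise trade-off as the parameters vary.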
--- paper_title: Analysis of Encoding Degradation in Spiking Sensors Due to Spike Delay Variation paper_content: Spiking sensors such as the silicon retina and cochlea encode analog signals into massively parallel asynchronous spike train output where the information is contained in the precise spike timing. The variation of the spike timing that arises from spike transmission degrades signal encoding quality. Using the signal-to-distortion ratio (SDR) metric with nonlinear spike train decoding based on frame theory, two particular sources of delay variation including comparison delay T_DC and queueing delay T_DQ are evaluated on two encoding mechanisms which have been used for implementations of silicon array spiking sensors: asynchronous delta modulation and self-timed reset. As specific examples, T_DC is obtained from a 2T current-mode comparator, and T_DQ is obtained from an M/D/1 queue for 1-D sensors like the silicon cochlea and an M^X/D/1 queue for 2-D sensors like the silicon retina. Quantitative relations between the SDR and the circuit and system parameters of spiking sensors are established. The analysis method presented in this work will be useful for future specifications-guided designs of spiking sensors. --- paper_title: Low-latency event-based visual odometry paper_content: The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline of a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide the grayscale value but only changes in the luminance; and because the output is composed of a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera to provide the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion. --- paper_title: Event-based Camera Pose Tracking using a Generative Event Model paper_content: Event-based vision sensors mimic the operation of the biological retina and they represent a major paradigm shift from traditional cameras. Instead of providing frames of intensity measurements synchronously, at artificially chosen rates, event-based cameras provide information on brightness changes asynchronously, when they occur. Such non-redundant pieces of information are called "events". These sensors overcome some of the limitations of traditional cameras (response time, bandwidth and dynamic range) but require new methods to deal with the data they output. We tackle the problem of event-based camera localization in a known environment, without additional sensing, using a probabilistic generative event model in a Bayesian filtering framework. Our main contribution is the design of the likelihood function used in the filter to process the observed events.
Based on the physical characteristics of the sensor and on empirical evidence of the Gaussian-like distribution of spiked events with respect to the brightness change, we propose to use the contrast residual as a measure of how well the estimated pose of the event-based camera and the environment explain the observed events. The filter allows for localization in the general case of six degrees-of-freedom motions. --- paper_title: Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras. --- paper_title: Simultaneous mosaicing and tracking with an event camera paper_content: © 2014. The copyright of this document resides with its authors. An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantages of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering. --- paper_title: Comparison of spike encoding schemes in asynchronous vision sensors: Modeling and design paper_content: Two in-pixel encoding mechanisms to convert analog input to spike output for vision sensors are modeled and compared with the consideration of feedback delay: one is feedback and reset (FAR), and the other is feedback and subtract (FAS). 
MATLAB simulations of linear signal reconstruction from spike trains generated by the two encoders show that FAR in general has a lower signal-to-distortion ratio (SDR) compared to FAS due to signal loss during the reset phase and hold period, and the SDR merit of FAS increases as the quantization bit number and input signal frequency increases. A 500 µm2 in-pixel circuit implementation of FAS using asynchronous switched capacitors in a UMC 0.18µm 1P6M process is described, and the post-layout simulation results are given to verify the FAS encoding mechanism. --- paper_title: Frame-free dynamic digital vision paper_content: Conventional image sensors produce massive amounts of redundant data and are limited in temporal resolution by the frame rate. This paper reviews our recent breakthrough in the development of a high- performance spike-event based dynamic vision sensor (DVS) that discards the frame concept entirely, and then describes novel digital methods for efficient low-level filtering and feature extraction and high-level object tracking that are based on the DVS spike events. These methods filter events, label them, or use them for object tracking. Filtering reduces the number of events but improves the ratio of informative events. Labeling attaches additional interpretation to the events, e.g. orientation or local optical flow. Tracking uses the events to track moving objects. Processing occurs on an event-by-event basis and uses the event time and identity as the basis for computation. A common memory object for filtering and labeling is a spatial map of most recent past event times. Processing methods typically use these past event times together with the present event in integer branching logic to filter, label, or synthesize new events. These methods are straightforwardly computed on serial digital hardware, resulting in a new event- and timing-based approach for visual computation that efficiently integrates a neural style of computation with digital hardware. All code is open- sourced in the jAER project (jaer.wiki.sourceforge.net). --- paper_title: Evaluating noise filtering for event-based asynchronous change detection image sensors paper_content: Bio-inspired Address Event Representation change detection image sensors, also known as silicon retinae, have matured to the point where they can be purchased commercially, and are easily operated by laymen. Noise is present in the output of these sensors, and improved noise filtering will enhance performance in many applications. A novel approach is proposed for quantifying the quality of data received from a silicon retina, and quantifying the performance of different noise filtering algorithms. We present a test rig which repetitively records printed test patterns, along with a method for averaging over repeated recordings to estimate the likelihood of an event being signal or noise. The calculated signal and noise probabilities are used to quantitatively compare the performance of 8 different filtering algorithms while varying each filter's parameters. We show how the choice of best filter and parameters varies as a function of the stimulus, particularly the temporal rate of change of intensity for a pixel, especially when the assumption of sharp temporal edges is not valid. --- paper_title: Continuous-time Intensity Estimation Using Event Cameras paper_content: Event cameras provide asynchronous, data-driven measurements of local temporal contrast over a large dynamic range with extremely high temporal resolution. 
Conventional cameras capture low-frequency reference intensity information. These two sensor modalities provide complementary information. We propose a computationally efficient, asynchronous filter that continuously fuses image frames and events into a single high-temporal-resolution, high-dynamic-range image state. In absence of conventional image frames, the filter can be run on events only. We present experimental results on high-speed, high-dynamic-range sequences, as well as on new ground truth datasets we generate to demonstrate the proposed algorithm outperforms existing state-of-the-art methods. --- paper_title: Integration of dynamic vision sensor with inertial measurement unit for electronically stabilized event-based vision paper_content: Neuromorphic spike event-based dynamic vision sensors (DVS) offer the possibility of fast, computationally efficient visual processing for navigation in mobile robotics. To extract motion parallax cues relating to 3D scene structure, the uninformative camera rotation must be removed from the visual input to allow the un-blurred features and informative relative optical flow to be analyzed. Here we describe the integration of an inertial measurement unit (IMU) with a 240×180 pixel DVS. The algorithm for electronic stabilization of the visual input against camera rotation is described. Examples are presented showing the stabilization performance of the system. --- paper_title: Toward real-time particle tracking using an event-based dynamic vision sensor paper_content: Optically based measurements in high Reynolds number fluid flows often require high-speed imaging techniques. These cameras typically record data internally and thus are limited by the amount of onboard memory available. A novel camera technology for use in particle tracking velocimetry is presented in this paper. This technology consists of a dynamic vision sensor in which pixels operate in parallel, transmitting asynchronous events only when relative changes in intensity of approximately 10% are encountered with a temporal resolution of 1 μs. This results in a recording system whose data storage and bandwidth requirements are about 100 times smaller than a typical high-speed image sensor. Post-processing times of data collected from this sensor also increase to about 10 times faster than real time. We present a proof-of-concept study comparing this novel sensor with a high-speed CMOS camera capable of recording up to 2,000 fps at 1,024 × 1,024 pixels. Comparisons are made in the ability of each system to track dense (ρ >1 g/cm3) particles in a solid–liquid two-phase pipe flow. Reynolds numbers based on the bulk velocity and pipe diameter up to 100,000 are investigated. --- paper_title: Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison paper_content: Back side illumination has become standard image sensor technology owing to its superior quantum efficiency and fill factor. A direct comparison of front and back side illumination (FSI and BSI) used in event-based dynamic and active pixel vision sensors (DAVIS) is interesting because of the potential of BSI to greatly increase the small 20% fill factor of these complex pixels. This brief compares identically designed front and back illuminated DAVIS silicon retina vision sensors. They are compared in term of quantum efficiency (QE), leak activity and modulation transfer function (MTF). 
The BSI DAVIS achieves a peak QE of 93%, compared with a peak QE of 24% for the FSI DAVIS, but has a reduced MTF due to pixel crosstalk and parasitic photocurrent. Significant “leak events” in the BSI DAVIS limit its use to controlled illumination scenarios without very bright light sources. Effects of parasitic photocurrent and modulation transfer functions with and without IR cut filters are also reported. --- paper_title: Temperature and Parasitic Photocurrent Effects in Dynamic Vision Sensors paper_content: The effect of temperature and parasitic photocurrent on event-based dynamic vision sensors (DVS) is important because of their application in uncontrolled robotic, automotive, and surveillance applications. This paper considers the temperature dependence of DVS threshold temporal contrast (TC), dark current, and background activity caused by junction leakage. New theory shows that if bias currents have a constant ratio, then ideally the DVS threshold TC is temperature independent, but the presence of temperature dependent junction leakage currents causes nonideal behavior at elevated temperature. Both measured photodiode dark current and leakage-induced event activity follow Arrhenius activation. This paper also defines a new metric for parasitic photocurrent quantum efficiency and measures the sensitivity of DVS pixels to parasitic photocurrent. --- paper_title: Integration of dynamic vision sensor with inertial measurement unit for electronically stabilized event-based vision paper_content: Neuromorphic spike event-based dynamic vision sensors (DVS) offer the possibility of fast, computationally efficient visual processing for navigation in mobile robotics. To extract motion parallax cues relating to 3D scene structure, the uninformative camera rotation must be removed from the visual input to allow the un-blurred features and informative relative optical flow to be analyzed. Here we describe the integration of an inertial measurement unit (IMU) with a 240×180 pixel DVS. The algorithm for electronic stabilization of the visual input against camera rotation is described. Examples are presented showing the stabilization performance of the system. --- paper_title: Color separation in an active pixel cell imaging array using a triple-well structure paper_content: A color separation apparatus for a silicon imager that exploits the different light absorption lengths of different wavelength bands (400-490 nm, 490-575 nm, 575-700 nm) in silicon. A preferred imaging array is based on a triple-well structure that forms a three-color pixel sensor, so that all three primary colors (RGB) are measured at the same pixel location, eliminating color confusion between pixels. --- paper_title: A Microbolometer Asynchronous Dynamic Vision Sensor for LWIR paper_content: In this paper, a novel event-based dynamic IR vision sensor is presented. The device combines an uncooled microbolometer array with biology-inspired (“neuromorphic”) readout circuitry to implement an asynchronous, “spiking” vision sensor for the 8-15 µm thermal infrared spectral range. The sensor's autonomous pixels independently respond to changes in thermal IR radiation and communicate detected variations in the form of asynchronous “address-events.” The 64×64 pixel ROIC chip has been fabricated in a 0.35 µm 2P4M standard CMOS process, covers about 4×4 mm2 of silicon area and consumes 8 mW of power. An amorphous silicon (a-Si) microbolometer array has been processed on top of the ROIC and contacted to the pixel circuits.
We discuss the bolometer detector properties, describe the pixel circuits and the implemented sensor architecture, and show measurement results of the readout circuits. Subsequently, a DFT-based approach to the characterization of asynchronous, spiking sensor arrays is discussed and applied. Test results and analysis of sensitivity, bandwidth, and noise of the fabricated IR sensor prototype are presented. --- paper_title: A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications paper_content: Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). The achievement is possible through the adoption of an in-pixel preamplification stage. This preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing calcium sensitive green fluorescent protein GCaMP6f. --- paper_title: Design of an RGBW color VGA rolling and global shutter dynamic and active-pixel vision sensor paper_content: This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome event-generating dynamic vision sensor pixels and 5-transistor active pixel sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling or global shutter RGBW coded VGA resolution frames and asynchronous monochrome QVGA resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18um CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20um × 20um. The chip die measures 8mm × 6.2mm. --- paper_title: A Dynamic Vision Sensor With 1% Temporal Contrast Sensitivity and In-Pixel Asynchronous Delta Modulator for Event Encoding paper_content: A dynamic vision sensor (DVS) encodes temporal contrast (TC) of light intensity into address-events that are asynchronously transmitted for subsequent processing. This paper describes a DVS with improved TC sensitivity and event encoding.
To enhance the TC sensitivity, each pixel employs a common-gate photoreceptor for low output noise and a capacitively-coupled programmable gain amplifier for continuous-time signal amplification without sacrificing the intra-scene dynamic range. A proposed in-pixel asynchronous delta modulator (ADM) better preserves signal integrity in event encoding compared with self-timed reset (STR) used in previous DVSs. A 60 × 30 prototype sensor array with a 31.2 µm pixel pitch was fabricated in a 1P6M 0.18 µm CMOS technology. It consumes 720 µW at a 100k event/s output rate. Measurements show that a 1% TC sensitivity with a 35% relative standard deviation is achieved and that the in-pixel ADM is up to 3.5 times less susceptible to signal loss than STR in terms of event number. These improvements can facilitate the application of DVSs in areas like optical neuroimaging, which is demonstrated in a simulated experiment. --- paper_title: Eyeing the camera: Into the next century paper_content: In the two centuries of photography, there has been a wealth of invention and innovation aimed at capturing a realistic and pleasing full-color two-dimensional representation of a scene. In this paper, we look back at the historical milestones of color photography and bring into focus a fascinating parallelism between the evolution of chemical based color imaging starting over a century ago, and the evolution of electronic photography which continues today. The second part of our paper is dedicated to a technical discussion of the new Foveon X3 multilayer color image sensor; what could be described as a new more advanced species of camera sensor technology. The X3 technology is compared to other competing sensor technologies; we compare spectral sensitivities using one of many possible figures of merit. Finally we show and describe how, like the human visual system, the Foveon X3 sensor has an inherent luminance-chrominance behavior which results in higher image quality using fewer image pixels. --- paper_title: Color temporal contrast sensitivity in dynamic vision sensors paper_content: This paper introduces the first simulations and measurements of event data obtained from the first Dynamic and Active Vision Sensors (DAVIS) with RGBW color filters. The absolute quantum efficiency spectral responses of the RGBW photodiodes were measured, the behavior of the color-sensitive DVS pixels was simulated and measured, and reconstruction through color events interpolation was developed. --- paper_title: CED: Color Event Camera Dataset paper_content: Event cameras are novel, bio-inspired visual sensors, whose pixels output asynchronous and independent timestamped spikes at local intensity changes, called 'events'. Event cameras offer advantages over conventional frame-based cameras in terms of latency, high dynamic range (HDR) and temporal resolution. Until recently, event cameras have been limited to outputting events in the intensity channel, however, recent advances have resulted in the development of color event cameras, such as the Color-DAVIS346. In this work, we present and release the first Color Event Camera Dataset (CED), containing 50 minutes of footage with both color frames and events. CED features a wide variety of indoor and outdoor scenes, which we hope will help drive forward event-based vision research. We also present an extension of the event camera simulator ESIM that enables simulation of color events.
Finally, we present an evaluation of three state-of-the-art image reconstruction methods that can be used to convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to visualise the event stream, and for use in downstream vision applications. --- paper_title: A Bio-Inspired AER Temporal Tri-Color Differentiator Pixel Array paper_content: This article investigates the potential of a bio-inspired vision sensor with pixels that detect transients between three primary colors. The in-pixel color processing is inspired by the retinal color opponency found in mammalian retinas. Color transitions in a pixel are represented by voltage spikes, which are akin to a neuron's action potential. These spikes are conveyed off-chip by the Address Event Representation (AER) protocol. To achieve sensitivity to three different color spectra within the visual spectrum, each pixel has three stacked photodiodes at different depths in the silicon substrate. The sensor has been fabricated in the standard TSMC 90 nm CMOS technology. A post-processing method to decode events into color transitions has been proposed and implemented as a custom interface to display real-time color changes in the visual scene. Experimental results are provided. Color transitions can be detected at high speed (up to 2.7 kHz). The sensor has a dynamic range of 58 dB and a power consumption of 22.5 mW. This type of sensor can be of use in industrial, robotics, automotive and other applications where essential information is contained in transient emissions shifts within the visual spectrum. --- paper_title: Self-timed vertacolor dichromatic vision sensor for low power pattern detection paper_content: This paper proposes a simple focal plane pattern detector architecture using a novel pixel sensor based on the dichromatic vertacolor structure. Additionally, the sensor transfers dichromatic intensity values using a self-timed time-to-first-spike scheme, which provides high dynamic range imaging. The intensity information is transmitted using the address event representation protocol. The spectral information is sampled automatically at each intensity reading in a ratioed way that maintains high dynamic range. A test chip consisting of 20 pixels has been fabricated in 1.5 um 2P 2M CMOS and characterized. The combined pattern detector/imager core consumes 45 uA at 5 V supply voltage. --- paper_title: Dichromatic spectral measurement circuit in vanilla CMOS paper_content: The circuit described in this paper uses a "verta-color" stacked two-diode structure to measure relative long and short wavelength spectral content. The p-type source-drain to nwell forms the top diode and the nwell-psubstrate diode forms the bottom diode. The circuit output is a digital PWM signal whose frequency encodes absolute intensity and whose duty cycle encodes the relative photodiode current. This signal is formed by a self-timed circuit that alternately discharges the top and bottom photodiodes. This circuit was fabricated in a standard 3M 2P 0.5 µm CMOS process. Monochromatic stimulation shows that the duty cycle varies between 50% and 7% as the photon wavelength is varied between 400 nm and 750 nm. The output frequency is 150 Hz at incident irradiance of 1.7 W/m2. Chip-to-chip variation of PWM duty cycle and frequency is about 1% measured over 5 chips. Power consumption is 20 µW. A modified version of this circuit could form the basis for simple color vision sensors built in widely-available vanilla CMOS.
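The dichromatic pixel circuit just described outputs a single PWM waveform in which the oscillation frequency encodes absolute intensity and the duty cycle encodes the relative top/bottom photodiode current, i.e., the long- versus short-wavelength content. As a rough illustration of how such an output could be decoded off-chip, the following Python sketch estimates both quantities from a list of edge timestamps; the alternating rising/falling edge ordering and the direct use of the duty cycle as a spectral-ratio proxy are assumptions made here for illustration, not details taken from the cited paper.

def decode_pwm(edge_times):
    """Estimate (frequency_hz, duty_cycle) from alternating PWM edge timestamps (seconds)."""
    if len(edge_times) < 3:
        raise ValueError("need at least one full PWM period (3 edges)")
    highs, periods = [], []
    # Pair edges: rising edges at even indices, falling edges at odd indices (assumed ordering).
    for i in range(0, len(edge_times) - 2, 2):
        rise, fall, next_rise = edge_times[i], edge_times[i + 1], edge_times[i + 2]
        highs.append(fall - rise)          # high time of one cycle
        periods.append(next_rise - rise)   # full cycle duration
    period = sum(periods) / len(periods)
    duty = sum(highs) / sum(periods)
    return 1.0 / period, duty              # frequency ~ intensity, duty ~ spectral ratio

# Example: a synthetic 150 Hz signal with 30% duty cycle.
edges, t = [], 0.0
for _ in range(5):
    edges += [t, t + 0.3 / 150.0]
    t += 1.0 / 150.0
edges.append(t)
freq, duty = decode_pwm(edges)
print(f"~{freq:.0f} Hz (intensity proxy), duty {duty:.2f} (spectral-ratio proxy)")

Because the period itself carries the intensity estimate, no fixed sampling clock is needed in this reading, which mirrors the self-timed nature of the pixel described in the abstract.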
--- paper_title: Real-time classification and sensor fusion with a spiking deep belief network paper_content: Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. --- paper_title: HFirst: A Temporal Approach to Object Recognition paper_content: This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new more difficult 36 class character recognition task. --- paper_title: Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification paper_content: Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents.
These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications. --- paper_title: HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition paper_content: This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy. --- paper_title: Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets paper_content: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." 
These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. --- paper_title: Simultaneous localization and mapping for event-based vision systems paper_content: We propose a novel method for vision based simultaneous localization and mapping (vSLAM) using a biologically inspired vision sensor that mimics the human retina. The sensor consists of a 128x128 array of asynchronously operating pixels, which independently emit events upon a temporal illumination change. Such a representation generates small amounts of data with high temporal precision; however, most classic computer vision algorithms need to be reworked as they require full RGB(-D) images at fixed frame rates. Our presented vSLAM algorithm operates on individual pixel events and generates high-quality 2D environmental maps with precise robot localizations. We evaluate our method with a state-of-the-art marker-based external tracking system and demonstrate real-time performance on standard computing hardware. --- paper_title: A Low Power, Fully Event-Based Gesture Recognition System paper_content: We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS). The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5% out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions. --- paper_title: SLAYER: Spike Layer Error Reassignment in Time paper_content: Configuring deep Spiking Neural Networks (SNNs) is an exciting research avenue for low power spike event based computation. However, the spike generation function is non-differentiable and therefore not directly compatible with the standard error backpropagation algorithm. 
In this paper, we introduce a new general backpropagation mechanism for learning synaptic weights and axonal delays which overcomes the problem of non-differentiability of the spike function and uses a temporal credit assignment policy for backpropagating error to preceding layers. We describe and release a GPU accelerated software implementation of our method which allows training both fully connected and convolutional neural network (CNN) architectures. Using our software, we compare our method against existing SNN based learning approaches and standard ANN to SNN conversion techniques and show that our method achieves state of the art performance for an SNN on the MNIST, NMNIST, DVS Gesture, and TIDIGITS datasets. --- paper_title: Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing paper_content: Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs) without a notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and many networks here are close to maximum performance after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and using a new weight normalization method to help regulate firing rates. Our method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition. --- paper_title: Low-latency event-based visual odometry paper_content: The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensors (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline of a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide the grayscale value but only changes in the luminance; and because the output is composed by a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera to provide the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. 
We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion. --- paper_title: Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras. --- paper_title: Simultaneous mosaicing and tracking with an event camera paper_content: An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering. --- paper_title: Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas paper_content: We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing.
In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naive users. --- paper_title: Spike time based unsupervised learning of receptive fields for event-driven vision paper_content: Event-driven vision sensors have the potential to support a new generation of efficient and robust robots. This requires the development of a new computational framework that exploits not only the spatial information, like in the traditional frame-based approach, but also the temporal content of the sensory data. We propose a method for unsupervised learning of filters for the processing of the visual signal from event-driven sensors. This method exploits the temporal coincidence of events generated by each object in a spatial location of the visual field. The approach is based on a modification of Spike Timing Dependent Plasticity that takes into account the specific implementation on the robot and the characteristics of the used sensor. It gives rise to oriented spatial filters that are very similar to the receptive fields observed in the primary visual cortex and traditionally used in bio-inspired hierarchical structures for object recognition, as well as to novel curved spatial structures. Using mutual information measure we provide a quantitative evidence that such curved spatial filters provide more information than equivalent oriented Gabor filters and can be an important aspect for object recognition in robotic applications. --- paper_title: Low-latency visual odometry using event-based feature tracks paper_content: New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional camera and an event-based sensor in the same pixel array. These sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. 
In this paper, we present a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks. Features are first detected in the grayscale frames and then tracked asynchronously using the stream of events. The features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping. We show that our method successfully tracks the 6-DOF motion of the sensor in natural scenes. This is the first work on event-based visual odometry with the DAVIS sensor using feature tracks. --- paper_title: Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera paper_content: We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data. --- paper_title: Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing paper_content: Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. --- paper_title: Continuous-time Intensity Estimation Using Event Cameras paper_content: Event cameras provide asynchronous, data-driven measurements of local temporal contrast over a large dynamic range with extremely high temporal resolution. Conventional cameras capture low-frequency reference intensity information. These two sensor modalities provide complementary information. We propose a computationally efficient, asynchronous filter that continuously fuses image frames and events into a single high-temporal-resolution, high-dynamic-range image state. In absence of conventional image frames, the filter can be run on events only. 
We present experimental results on high-speed, high-dynamic-range sequences, as well as on new ground truth datasets we generate to demonstrate the proposed algorithm outperforms existing state-of-the-art methods. --- paper_title: Simultaneous Optical Flow and Intensity Estimation from an Event Camera paper_content: Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms may not at all be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur. --- paper_title: HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition paper_content: This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy. --- paper_title: DDD17: End-To-End DAVIS Driving Dataset paper_content: Event cameras, such as dynamic vision sensors (DVS), and dynamic and active-pixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. 
The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in end-to-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car's on-board diagnostics interface. As an example application, we performed a preliminary end-to-end learning study of using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data. --- paper_title: Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars paper_content: Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras. --- paper_title: Real-Time Pose Estimation for Event Cameras with Stacked Spatial LSTM Networks paper_content: We present a new method to estimate the 6DOF pose of the event camera solely based on the event stream. Our method first creates the event image from a list of events that occurs in a very short time interval, then a Stacked Spatial LSTM Network (SP-LSTM) is used to learn and estimate the camera pose. Our SP-LSTM comprises a CNN to learn deep features from the event images and a stack of LSTM to learn spatial dependencies in the image features space. We show that the spatial dependency plays an important role in the pose estimation task and the SP-LSTM can effectively learn that information. The experimental results on the public dataset show that our approach outperforms recent methods by a substantial margin. Overall, our proposed method reduces about 6 times the position error and 3 times the orientation error over the state of the art. The source code and trained models will be released. --- paper_title: Asynchronous Corner Detection and Tracking for Event Cameras in Real Time paper_content: The recent emergence of bioinspired event cameras has opened up exciting new possibilities in high-frequency tracking, bringing robustness to common problems in traditional vision, such as lighting changes and motion blur. 
In order to leverage these attractive attributes of the event cameras, research has been focusing on understanding how to process their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event stream, essentially forming frames of events grouped according to their timestamps, we have yet to exploit the full power of these cameras. In this spirit, this letter proposes a new, purely event-based corner detector, and a novel corner tracker, demonstrating that it is possible to detect corners and track them directly on the event stream in real time. Evaluation on benchmarking datasets reveals a significant boost in the number of detected corners and the repeatability of such detections over the state of the art even in challenging scenarios with the proposed approach while enabling more than a 4× speed-up when compared to the most efficient algorithm in the literature. The proposed pipeline detects and tracks corners at a rate of more than 7.5 million events per second, promising great impact in high-speed applications. --- paper_title: Steering a predator robot using a mixed frame/event-driven convolutional neural network paper_content: This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center and non-visible. After off-line training on labeled data, the network is imported on the on-board Summit XL robot which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% (depending on evaluation criteria) are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input.
The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain. --- paper_title: Adaptive Time-Slice Block-Matching Optical Flow Algorithm for Dynamic Vision Sensors paper_content: Dynamic Vision Sensors (DVS) output asynchronous log intensity change events. They have potential applications in high-speed robotics, autonomous cars and drones. The precise event timing, sparse output, and wide dynamic range of the events are well suited for optical flow, but conventional optical flow (OF) algorithms are not well matched to the event stream data. This paper proposes an event-driven OF algorithm called adaptive block-matching optical flow (ABMOF). ABMOF uses time slices of accumulated DVS events. The time slices are adaptively rotated based on the input events and OF results. Compared with other methods such as gradient-based OF, ABMOF can efficiently be implemented in compact logic circuits. We developed both ABMOF and Lucas-Kanade (LK) algorithms using our adapted slices. Results shows that ABMOF accuracy is comparable with LK accuracy on natural scene data including sparse and dense texture, high dynamic range, and fast motion exceeding 30,000 pixels per second. --- paper_title: Event-Based Visual Inertial Odometry paper_content: Event-based cameras provide a new visual sensing model by detecting changes in image intensity asynchronously across all pixels on the camera. By providing these events at extremely high rates (up to 1MHz), they allow for sensing in both high speed and high dynamic range situations where traditional cameras may fail. In this paper, we present the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit, to provide accurate metric tracking of a cameras full 6dof pose. Our algorithm is asynchronous, and provides measurement updates at a rate proportional to the camera velocity. The algorithm selects features in the image plane, and tracks spatiotemporal windows around these features within the event stream. An Extended Kalman Filter with a structureless measurement model then fuses the feature tracks with the output of the IMU. The camera poses from the filter are then used to initialize the next step of the tracker and reject failed tracks. We show that our method successfully tracks camera motion on the Event-Camera Dataset in a number of challenging situations. --- paper_title: PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper_content: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. 
By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: Event-based feature tracking with probabilistic data association paper_content: Asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking. The few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models. Such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks. In this paper, we introduce a novel soft data association modeled with probabilities. The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. We show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras. --- paper_title: Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames.
We then combine these feature tracks in a keyframe-based visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera's 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s. --- paper_title: Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras paper_content: This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly. --- paper_title: Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion paper_content: In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes. --- paper_title: Unsupervised Learning of Dense Optical Flow and Depth from Sparse Event Data paper_content: In this work we present unsupervised learning of depth and motion from sparse event data generated by a Dynamic Vision Sensor (DVS). To tackle this low-level vision task, we use a novel encoder-decoder neural network architecture that aggregates multi-level features and addresses the problem at multiple resolutions. A feature decorrelation technique is introduced to improve the training of the network.
A non-local sparse smoothness constraint is used to alleviate the challenge of data sparsity. Our work is the first that generates dense depth and optical flow information from sparse event data. Our results show significant improvements upon previous works that used deep learning for flow estimation from both images and events. --- paper_title: End-to-End Learning of Representations for Asynchronous Event-Based Data paper_content: Event cameras are vision sensors that record asynchronous streams of per-pixel brightness changes, referred to as "events". They have appealing advantages over frame-based cameras for computer vision, including high temporal resolution, high dynamic range, and no motion blur. Due to the sparse, non-uniform spatiotemporal layout of the event signal, pattern recognition algorithms typically aggregate events into a grid-based representation and subsequently process it by a standard vision pipeline, e.g., Convolutional Neural Network (CNN). In this work, we introduce a general framework to convert event streams into grid-based representations through a sequence of differentiable operations. Our framework comes with two main advantages: (i) allows learning the input event representation together with the task dedicated network in an end to end manner, and (ii) lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones. Empirically, we show that our approach to learning the event representation end-to-end yields an improvement of approximately 12% on optical flow estimation and object recognition over state-of-the-art methods. --- paper_title: Live demonstration: Convolutional neural network driven by dynamic vision sensor playing RoShamBo paper_content: This demonstration presents a convolutional neural network (CNN) playing “RoShamBo” (“rock-paper-scissors”) against human opponents in real time. The network is driven by dynamic and active-pixel vision sensor (DAVIS) events, acquired by accumulating events into fixed event-number frames. --- paper_title: Continuous-Time Trajectory Estimation for Event-based Vision Sensors paper_content: Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), do not output a sequence of video frames like standard cameras, but a stream of asynchronous events. An event is triggered when a pixel detects a change of brightness in the scene. An event contains the location, sign, and precise timestamp of the change. The high dynamic range and temporal resolution of the DVS, which is in the order of micro-seconds, make this a very promising sensor for high-speed applications, such as robotics and wearable computing. However, due to the fundamentally different structure of the sensor's output, new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. In this paper, we address ego-motion estimation for an event-based vision sensor using a continuous-time framework to directly integrate the information conveyed by the sensor. The DVS pose trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines and it is optimized according to the observed events. We evaluate our method using datasets acquired from sensor-in-the-loop simulations and onboard a quadrotor performing flips. The results are compared to the ground truth, showing the good performance of the proposed technique. 
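Several of the entries above (the discretized event volume, the fixed-event-count frames used in the RoShamBo demonstration, and the end-to-end learned grid representations) rely on converting an asynchronous event stream into a dense tensor before feeding it to a network. The following minimal NumPy sketch illustrates two such conversions; it is an illustrative example only, not code from any of the cited papers, and the (x, y, t, p) event layout and function names are assumptions:

```python
import numpy as np

def events_to_count_frame(x, y, p, height, width):
    """Accumulate a batch of events into a 2-channel count image
    (one channel per polarity), as done when feeding event frames to a CNN."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(frame, (p.astype(int), y.astype(int), x.astype(int)), 1.0)  # p assumed in {0, 1}
    return frame

def events_to_voxel_grid(x, y, t, p, num_bins, height, width):
    """Discretize an event stream into a spatio-temporal voxel grid,
    spreading each event's polarity over the two nearest temporal bins."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (num_bins - 1)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    pol = 2.0 * p.astype(np.float32) - 1.0          # map {0, 1} -> {-1, +1}
    xi, yi = x.astype(int), y.astype(int)
    np.add.at(grid, (left, yi, xi), pol * (1.0 - w_right))
    np.add.at(grid, (right, yi, xi), pol * w_right)
    return grid

# Example with four synthetic events on a 4x4 sensor:
x = np.array([0, 1, 2, 3]); y = np.array([0, 1, 2, 3])
t = np.array([0.00, 0.01, 0.02, 0.03]); p = np.array([1, 0, 1, 1])
print(events_to_voxel_grid(x, y, t, p, num_bins=3, height=4, width=4).shape)  # (3, 4, 4)
```

A voxel grid built this way from a window of events and B temporal bins yields a B x H x W tensor that can be passed directly to a standard convolutional network.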
--- paper_title: Fast event-based corner detection paper_content: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel-level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state-of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a microsecond per event) and reduces the event rate by a factor of 10 to 20. --- paper_title: Frame-free dynamic digital vision paper_content: Conventional image sensors produce massive amounts of redundant data and are limited in temporal resolution by the frame rate. This paper reviews our recent breakthrough in the development of a high-performance spike-event based dynamic vision sensor (DVS) that discards the frame concept entirely, and then describes novel digital methods for efficient low-level filtering and feature extraction and high-level object tracking that are based on the DVS spike events. These methods filter events, label them, or use them for object tracking. Filtering reduces the number of events but improves the ratio of informative events. Labeling attaches additional interpretation to the events, e.g. orientation or local optical flow. Tracking uses the events to track moving objects. Processing occurs on an event-by-event basis and uses the event time and identity as the basis for computation. A common memory object for filtering and labeling is a spatial map of most recent past event times. Processing methods typically use these past event times together with the present event in integer branching logic to filter, label, or synthesize new events. These methods are straightforwardly computed on serial digital hardware, resulting in a new event- and timing-based approach for visual computation that efficiently integrates a neural style of computation with digital hardware. All code is open-sourced in the jAER project (jaer.wiki.sourceforge.net). --- paper_title: EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed.
We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges, which naturally provide semi-dense geometric information without any pre-processing operation, and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras paper_content: We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset, EV-IMO, which includes accurate pixel-wise motion masks, egomotion and ground truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline using a low-parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast-moving objects simultaneously in the camera field of view. The objects and the camera are tracked by the VICON motion capture system. By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that our approach far surpasses its rivals and is well suited for scene-constrained robotics applications.
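The contrast-maximization idea described in the entry above (finding the point trajectories that best align the events by maximizing the contrast of an image of warped events) can be made concrete with a short sketch. The snippet below is a simplified, brute-force version that assumes a single constant optical-flow vector for the whole event window; the cited works optimize richer motion models with gradient-based methods, and all function names here are hypothetical:

```python
import numpy as np

def warped_event_image(x, y, t, flow, height, width, t_ref=0.0):
    """Warp events to a reference time along a candidate constant flow (vx, vy)
    and accumulate them into an image of warped events (IWE)."""
    vx, vy = flow
    xw = np.round(x - (t - t_ref) * vx).astype(int)
    yw = np.round(y - (t - t_ref) * vy).astype(int)
    valid = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    iwe = np.zeros((height, width), dtype=np.float32)
    np.add.at(iwe, (yw[valid], xw[valid]), 1.0)
    return iwe

def contrast(iwe):
    """Variance of the IWE: sharper (better motion-compensated) images score higher."""
    return float(np.var(iwe))

def estimate_flow_by_grid_search(x, y, t, height, width, v_range, steps=41):
    """Brute-force contrast maximization over a 2D grid of candidate flows."""
    candidates = np.linspace(-v_range, v_range, steps)
    best, best_score = (0.0, 0.0), -np.inf
    for vx in candidates:
        for vy in candidates:
            score = contrast(warped_event_image(x, y, t, (vx, vy), height, width))
            if score > best_score:
                best, best_score = (vx, vy), score
    return best
```

The variance objective used here corresponds to one of the focus loss functions surveyed in the next entry; other objectives (gradient magnitude, Laplacian magnitude) can be dropped in by replacing the contrast function.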
--- paper_title: Focus Is All You Need: Loss Functions for Event-Based Vision paper_content: Event cameras are novel vision sensors that output pixel-level brightness changes ("events") instead of traditional video frames. These asynchronous sensors offer several advantages over traditional cameras, such as, high temporal resolution, very high dynamic range, and no motion blur. To unlock the potential of such sensors, motion compensation methods have been recently proposed. We present a collection and taxonomy of twenty two objective functions to analyze event alignment in motion compensation approaches. We call them focus loss functions since they have strong connections with functions used in traditional shape-from-focus applications. The proposed loss functions allow bringing mature computer vision tools to the realm of event cameras. We compare the accuracy and runtime performance of all loss functions on a publicly available dataset, and conclude that the variance, the gradient and the Laplacian magnitudes are among the best loss functions. The applicability of the loss functions is shown on multiple tasks: rotational motion, depth and optical flow estimation. The proposed focus loss functions allow to unlock the outstanding properties of event cameras. --- paper_title: Low-latency visual odometry using event-based feature tracks paper_content: New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional camera and an event-based sensor in the same pixel array. These sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. In this paper, we present a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks. Features are first detected in the grayscale frames and then tracked asynchronously using the stream of events. The features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping. We show that our method successfully tracks the 6-DOF motion of the sensor in natural scenes. This is the first work on event-based visual odometry with the DAVIS sensor using feature tracks. --- paper_title: Events-To-Video: Bringing Modern Computer Vision to Event Cameras paper_content: Event cameras are novel sensors that report brightness changes in the form of asynchronous "events" instead of intensity frames. They have significant advantages over conventional cameras: high temporal resolution, high dynamic range, and no motion blur. Since the output of event cameras is fundamentally different from conventional cameras, it is commonly accepted that they require the development of specialized algorithms to accommodate the particular nature of events. In this work, we take a different view and propose to apply existing, mature computer vision techniques to videos reconstructed from event data. We propose a novel, recurrent neural network to reconstruct videos from a stream of events and train it on a large amount of simulated event data. 
Our experiments show that our approach surpasses state-of-the-art reconstruction methods by a large margin (> 20%) in terms of image quality. We further apply off-the-shelf computer vision algorithms to videos reconstructed from event data on tasks such as object classification and visual-inertial odometry, and show that this strategy consistently outperforms algorithms that were specifically designed for event data. We believe that our approach opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks. --- paper_title: Event-Based Visual Flow paper_content: This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method's adequacy with the high data sparseness and temporal resolution of event-based acquisition, which allows the computation of motion flow with microsecond accuracy and at very low computational cost. --- paper_title: EventNet: Asynchronous Recursive Event Processing paper_content: Event cameras are bio-inspired vision sensors that mimic retinas to asynchronously report per-pixel intensity changes rather than outputting an actual intensity image at regular intervals. This new paradigm of image sensor offers significant potential advantages, namely a sparse and non-redundant data representation. Unfortunately, however, most of the existing artificial neural network architectures, such as a CNN, require dense synchronous input data, and therefore, cannot make use of the sparseness of the data. We propose EventNet, a neural network designed for real-time processing of asynchronous event streams in a recursive and event-wise manner. EventNet models dependence of the output on tens of thousands of causal events recursively using a novel temporal coding scheme. As a result, at inference time, our network operates in an event-wise manner that is realized with very few sum-of-the-product operations (look-up table and temporal feature aggregation), which enables processing of one million or more events per second on a standard CPU. In experiments using real data, we demonstrated the real-time performance and robustness of our framework. --- paper_title: Address-Event Based Stereo Vision with Bio-Inspired Silicon Retina Imagers paper_content: Several industry, home, or automotive applications need 3D or at least range data of the observed environment to operate. Such applications are, e.g., driver assistance systems, home care systems, or 3D sensing and measurement for industrial production. State-of-the-art range sensors are laser range finders or laser scanners (LIDAR, light detection and ranging), time-of-flight (TOF) cameras, and ultrasonic sound sensors. All of them are embedded, which means that the sensors operate independently and have an integrated processing unit. This is advantageous because the processing power in the mentioned applications is limited and they are computationally intensive anyway.
Other benefits of embedded systems are low power consumption and a small form factor. Furthermore, embedded systems are fully customizable by the developer and can be adapted to the specific application in an optimal way. A promising alternative to the mentioned sensors is stereo vision. Classic stereo vision uses a stereo camera setup, which is built up of two cameras (stereo camera head), mounted in parallel and separated by the baseline. It captures a synchronized stereo pair consisting of the left camera's image and the right camera's image. The main challenge of stereo vision is the reconstruction of 3D information of a scene captured from two different points of view. Each visible scene point is projected on the image planes of the cameras. Pixels which represent the same scene points on different image planes correspond to each other. These correspondences can then be used to determine the three-dimensional position of the projected scene point in a defined coordinate system. In more detail, the horizontal displacement, called the disparity, is inversely proportional to the scene point's depth. With this information and the camera's intrinsic parameters (principal point and focal length), the 3D position can be reconstructed. Fig. 1 shows a typical stereo camera setup. The projections of scene point P are pl and pr. Once the correspondences are found, the disparity can be computed. --- paper_title: Bio-inspired Stereo Vision System with Silicon Retina Imagers paper_content: This paper presents a silicon retina-based stereo vision system, which is used for a pre-crash warning application for side impacts. We use silicon retina imagers for this task, because the advantages of the camera, derived from the human vision system, are high temporal resolution up to 1 ms and the handling of various lighting conditions with a dynamic range of ~120 dB. A silicon retina delivers asynchronous data which are called address events (AE). Different stereo matching algorithms are available, but these algorithms normally work with full frame images. In this paper we evaluate how the AE data from the silicon retina sensors must be adapted to work with full-frame area-based and feature-based stereo matching algorithms. --- paper_title: Cooperative computation of stereo disparity paper_content: Perhaps one of the most striking differences between a brain and today's computers is the amount of "wiring." In a digital computer the ratio of connections to components is about 3, whereas for the mammalian cortex it lies between 10 and 10,000 (1). --- paper_title: Vergence control with a neuromorphic iCub paper_content: Vergence control and tracking allow a robot to maintain an accurate estimate of a dynamic object's three dimensions, improving depth estimation at the fixation point. Brain-inspired implementations of vergence control are based on models of complex binocular cells of the visual cortex sensitive to disparity. The energy of the cells' activation provides a disparity-related signal that can be reliably used for vergence control. We implemented such a model on the neuromorphic iCub, equipped with a pair of brain-inspired vision sensors. Such sensors provide low-latency, compressed and high temporal resolution visual information related to changes in the scene.
We demonstrate the feasibility of a fully neuromorphic system for vergence control and show that this implementation works in real-time, providing fast and accurate control for a stimulus moving at up to 2 Hz and appreciably decreasing the latency associated with frame-based cameras. Additionally, thanks to the high dynamic range of the sensor, the control shows the same accuracy under very different illumination. --- paper_title: A spiking neural network architecture for visual motion estimation paper_content: Current interest in neuromorphic computing continues to drive development of sensors and hardware for spike-based computation. Here we describe a hierarchical architecture for visual motion estimation which uses a spiking neural network to exploit the sparse high temporal resolution data provided by neuromorphic vision sensors. Although spike-based computation differs from traditional computer vision approaches, our architecture is similar in principle to the canonical Lucas-Kanade algorithm. Output spikes from the architecture represent the direction of motion to the nearest 45 degrees, and the speed within a factor of √2 over the range 0.02 to 0.27 pixels/ms. --- paper_title: On the use of orientation filters for 3D reconstruction in event-driven stereo vision paper_content: The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of restrictions applied to the matching algorithm. This strategy provides a larger number of pairs of matching events, improving the final 3D reconstruction. --- paper_title: Spiking Cooperative Stereo-Matching at 2 ms Latency with Neuromorphic Hardware paper_content: We demonstrate a spiking neural network that extracts spatial depth information from a stereoscopic visual input stream. The system makes use of a scalable neuromorphic computing platform, SpiNNaker, and neuromorphic vision sensors, so-called silicon retinas, to solve the stereo matching (correspondence) problem in real-time. It dynamically fuses two retinal event streams into a depth-resolved event stream with a fixed latency of 2 ms, even at input rates as high as several 100,000 events per second. The network design is simple and portable so it can run on many types of neuromorphic computing platforms including FPGAs and dedicated silicon. --- paper_title: Event-driven sensing and processing for high-speed robotic vision paper_content: Communication presented at "BioCAS 2014", held in Lausanne (Switzerland), 22-24 October 2014 --- paper_title: CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing–Learning–Actuating System for High-Speed Visual Object Recognition and Tracking paper_content: This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union-funded project.
It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), and up to 5M synapses; it performs 12G synaptic operations per second and achieves millisecond object recognition and tracking latencies. --- paper_title: Spatial and temporal receptive fields of geniculate and cortical cells and directional selectivity paper_content: The spatio-temporal receptive fields (RFs) of cells in the macaque monkey lateral geniculate nucleus (LGN) and striate cortex (V1) have been examined and two distinct sub-populations of non-directional V1 cells have been found: those with a slow largely monophasic temporal RF, and those with a fast very biphasic temporal response. These two sub-populations are in temporal quadrature, the fast biphasic cells crossing over from one response phase to the reverse just as the slow monophasic cells reach their peak response. The two sub-populations also differ in the spatial phases of their RFs. A principal components analysis of the spatio-temporal RFs of directional V1 cells shows that their RFs could be constructed by a linear combination of two components, one of which has the temporal and spatial characteristics of a fast biphasic cell, and the other the temporal and spatial characteristics of a slow monophasic cell. Magnocellular LGN cells are fast and biphasic and lead the fast-biphasic V1 subpopulation by 7 ms; parvocellular LGN cells are slow and largely monophasic and lead the slow monophasic V1 sub-population by 12 ms. We suggest that directional V1 cells get inputs in the approximate temporal and spatial quadrature required for motion detection by combining signals from the two non-directional cortical sub-populations which have been identified, and that these sub-populations have their origins in magno and parvo LGN cells, respectively. --- paper_title: An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data paper_content: This paper introduces a novel methodology for training an event-based classifier with synthetic and raw dynamic vision sensor (DVS) data. The proposed supervised method takes advantage of the spiking activity to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with neuron leakages, a desirable feature for real-world applications, since it captures the dynamics of the spikes. We tested our method on the MNIST data set using different encodings and DVS-based data sets such as N-MNIST, MNIST-DVS, and Fast-Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on N-MNIST to date with a spiking convolutional network (97.77%), as well as 100% on the Fast-Poker-DVS data set. Moreover, by using the proposed method we were able to retrain the output layer of a spiking neural network and increase its performance by 2%, suggesting that our classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. Lastly, this work also presents a comparison between different data sets in terms of total activity and network latency.
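To make the histogram-plus-SGD training scheme of the event-driven classifier above concrete, the sketch below builds a per-pixel spike-count histogram for each recording and trains a linear softmax classifier with plain stochastic gradient descent. It is a minimal illustration under assumed data layouts (integer pixel coordinates, integer class labels in 0..K-1), not the authors' actual pipeline:

```python
import numpy as np

def event_count_histogram(x, y, height, width):
    """Flattened per-pixel spike-count histogram for one recording,
    used as a frame-domain feature vector for the classifier."""
    h = np.zeros((height, width), dtype=np.float32)
    np.add.at(h, (y.astype(int), x.astype(int)), 1.0)
    h /= max(h.sum(), 1.0)                 # normalize for total activity
    return h.ravel()

def train_softmax_sgd(features, labels, num_classes, lr=0.1, epochs=50, seed=0):
    """Plain stochastic gradient descent on a linear softmax classifier.
    features: (N, D) float array, labels: (N,) int array."""
    rng = np.random.default_rng(seed)
    w = np.zeros((features.shape[1], num_classes), dtype=np.float32)
    for _ in range(epochs):
        for i in rng.permutation(len(features)):
            logits = features[i] @ w
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            probs[labels[i]] -= 1.0        # gradient of the cross-entropy loss
            w -= lr * np.outer(features[i], probs)
    return w
```

Leaky-neuron dynamics, as mentioned in the entry, would replace the plain count histogram with an exponentially decayed accumulation, but the training loop stays the same.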
--- paper_title: Computational modelling of visual attention paper_content: Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment. First, the perceptual saliency of stimuli critically depends on the surrounding context. Second, a unique 'saliency map' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy. Third, inhibition of return, the process by which the currently attended location is prevented from being attended again, is a crucial element of attentional deployment. Fourth, attention and eye movements tightly interplay, posing computational challenges with respect to the coordinate system used to control attention. And last, scene understanding and object recognition strongly constrain the selection of attended locations. Insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention. --- paper_title: Spike time based unsupervised learning of receptive fields for event-driven vision paper_content: Event-driven vision sensors have the potential to support a new generation of efficient and robust robots. This requires the development of a new computational framework that exploits not only the spatial information, like in the traditional frame-based approach, but also the temporal content of the sensory data. We propose a method for unsupervised learning of filters for the processing of the visual signal from event-driven sensors. This method exploits the temporal coincidence of events generated by each object in a spatial location of the visual field. The approach is based on a modification of Spike Timing Dependent Plasticity that takes into account the specific implementation on the robot and the characteristics of the sensor used. It gives rise to oriented spatial filters that are very similar to the receptive fields observed in the primary visual cortex and traditionally used in bio-inspired hierarchical structures for object recognition, as well as to novel curved spatial structures. Using a mutual information measure, we provide quantitative evidence that such curved spatial filters provide more information than equivalent oriented Gabor filters and can be an important aspect for object recognition in robotic applications. --- paper_title: Neuromorphic Deep Learning Machines paper_content: An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent backpropagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations.
Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. --- paper_title: Frame-free dynamic digital vision paper_content: Conventional image sensors produce massive amounts of redundant data and are limited in temporal resolution by the frame rate. This paper reviews our recent breakthrough in the development of a high-performance spike-event based dynamic vision sensor (DVS) that discards the frame concept entirely, and then describes novel digital methods for efficient low-level filtering and feature extraction and high-level object tracking that are based on the DVS spike events. These methods filter events, label them, or use them for object tracking. Filtering reduces the number of events but improves the ratio of informative events. Labeling attaches additional interpretation to the events, e.g. orientation or local optical flow. Tracking uses the events to track moving objects. Processing occurs on an event-by-event basis and uses the event time and identity as the basis for computation. A common memory object for filtering and labeling is a spatial map of most recent past event times. Processing methods typically use these past event times together with the present event in integer branching logic to filter, label, or synthesize new events. These methods are straightforwardly computed on serial digital hardware, resulting in a new event- and timing-based approach for visual computation that efficiently integrates a neural style of computation with digital hardware. All code is open-sourced in the jAER project (jaer.wiki.sourceforge.net). --- paper_title: Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input paper_content: Computational models of visual processing often use frame-based image acquisition techniques to process a temporally changing stimulus. This approach is unlike biological mechanisms that are spike-based and independent of individual frames. The neuromorphic Dynamic Vision Sensor (DVS) [Lichtsteiner et al., 2008] provides a stream of independent visual events that indicate local illumination changes, resembling spiking neurons at a retinal level. We introduce a new approach for the modelling of cortical mechanisms of motion detection along the dorsal pathway using this type of representation. Our model combines filters with spatio-temporal tunings also found in visual cortex to yield spatio-temporal and direction specificity. We probe our model with recordings of test stimuli, articulated motion and ego-motion. We show how our approach robustly estimates optic flow and also demonstrate how this output can be used for classification purposes. --- paper_title: Spiking Elementary Motion Detector in Neuromorphic Systems paper_content: Apparent motion of the surroundings on an agent's retina can be used to navigate through cluttered environments, avoid collisions with obstacles, or track targets of interest. The pattern of apparent motion of objects (i.e., the optic flow) contains spatial information about the surrounding environment.
For a small, fast-moving agent, as used in search and rescue missions, it is crucial to estimate the distance to close-by objects to avoid collisions quickly. This estimation cannot be done by conventional methods, such as frame-based optic flow estimation, given the size, power, and latency constraints of the necessary hardware. A practical alternative makes use of event-based vision sensors. Contrary to the frame-based approach, they produce so-called events only when there are changes in the visual scene. We propose a novel asynchronous circuit, the spiking elementary motion detector (sEMD), composed of a single silicon neuron and synapse, to detect elementary motion from an event-based vision sensor. The sEMD encodes the time an object's image needs to travel across the retina into a burst of spikes. The number of spikes within the burst is proportional to the speed of events across the retina. A fast but imprecise estimate of the time-to-travel can already be obtained from the first two spikes of a burst and refined by subsequent interspike intervals. The latter encoding scheme is possible due to an adaptive nonlinear synaptic efficacy scaling. We show that the sEMD can be used to compute a collision avoidance direction in the context of robotic navigation in a cluttered outdoor environment and compared the collision avoidance direction to a frame-based algorithm. The proposed computational principle constitutes a generic spiking temporal correlation detector that can be applied to other sensory modalities (e.g., sound localization), and it provides a novel perspective to gating information in spiking neural networks. --- paper_title: A neuromorphic controller for a robotic vehicle equipped with a dynamic vision sensor paper_content: Neuromorphic electronic systems exhibit advantageous characteristics, in terms of low energy consumption and low response latency, which can be useful in robotic applications that require compact and low power embedded computing resources. However, these neuromorphic circuits still face significant limitations that make their usage challenging: these include low precision, variability of components, sensitivity to noise and temperature drifts, as well as the currently limited number of neurons and synapses that are typically emulated on a single chip. In this paper, we show how it is possible to achieve functional robot control strategies using a mixed signal analog/digital neuromorphic processor interfaced to a mobile robotic platform equipped with an event-based dynamic vision sensor. We provide a proof of concept implementation of obstacle avoidance and target acquisition using biologically plausible spiking neural networks directly emulated by the neuromorphic hardware. To our knowledge, this is the first demonstration of a working spike-based neuromorphic robotic controller in this type of hardware which illustrates the feasibility, as well as limitations, of this approach. --- paper_title: What Can Neuromorphic Event-Driven Precise Timing Add to Spike-Based Pattern Recognition? paper_content: This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. 
The use of images naturally leads to generating incorrect artificial and redundant spike timings and, more importantly, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time, pixel-individually and precisely timed, and generated only if new, previously unknown information is available (event-based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies of 30-60 Hz. The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times as reported in the retina offers considerable advantages for neuro-inspired visual computations. --- paper_title: Event-driven visual attention for the humanoid robot iCub paper_content: Fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment. Here we present a biologically inspired attention system developed for the humanoid robot iCub. It is based on input from unconventional event-driven vision sensors and an efficient computational method. The resulting system shows low latency and fast determination of the location of the focus of attention. The performance is benchmarked against a state-of-the-art artificial attention system used in robotics. Results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend. --- paper_title: A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems paper_content: Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain.
Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. --- paper_title: Neuromorphic sensory systems paper_content: Biology provides examples of efficient machines which greatly outperform conventional technology. Designers in neuromorphic engineering aim to construct electronic systems with the same efficient style of computation. This task requires a melding of novel engineering principles with knowledge gleaned from neuroscience. We discuss recent progress in realizing neuromorphic sensory systems which mimic the biological retina and cochlea, and subsequent sensor processing. The main trends are the increasing number of sensors and sensory systems that communicate through asynchronous digital signals analogous to neural spikes; the improved performance and usability of these sensors; and novel sensory processing methods which capitalize on the timing of spikes from these sensors. Experiments using these sensors can impact how we think the brain processes sensory information. --- paper_title: Differential Evolution and Bayesian Optimisation for Hyper-Parameter Selection in Mixed-Signal Neuromorphic Circuits Applied to UAV Obstacle Avoidance. paper_content: The Lobula Giant Movement Detector (LGMD) is an identified neuron of the locust that detects looming objects and triggers its escape responses. Understanding the neural principles and networks that lead to these fast and robust responses can facilitate the design of efficient obstacle avoidance strategies in robotic applications. Here we present a neuromorphic spiking neural network model of the LGMD driven by the output of a neuromorphic Dynamic Vision Sensor (DVS), which has been optimised to produce robust and reliable responses in the face of the constraints and variability of its mixed-signal analogue-digital circuits. As this LGMD model has many parameters, we use the Differential Evolution (DE) algorithm to optimise its parameter space. We also investigate the use of Self-Adaptive Differential Evolution (SADE) which has been shown to ameliorate the difficulties of finding appropriate input parameters for DE. We explore the use of two biological mechanisms: synaptic plasticity and membrane adaptivity in the LGMD. We apply DE and SADE to find parameters best suited for an obstacle avoidance system on an unmanned aerial vehicle (UAV), and show how it outperforms state-of-the-art Bayesian optimisation used for comparison. --- paper_title: Improved Cooperative Stereo Matching for Dynamic Vision Sensors with Ground Truth Evaluation paper_content: Event-based vision, as realized by bio-inspired Dynamic Vision Sensors (DVS), is gaining more and more popularity due to its combined advantages of high temporal resolution, wide dynamic range, and power efficiency. Potential applications include surveillance, robotics, and autonomous navigation under uncontrolled environment conditions.
In this paper, we deal with event-based vision for 3D reconstruction of dynamic scene content by using two stationary DVS in a stereo configuration. We focus on a cooperative stereo approach and suggest an improvement over a previously published algorithm that reduces the measured mean error by over 50 percent. An available ground truth data set for stereo event data is utilized to analyze the algorithm's sensitivity to parameter variation and for comparison with competing techniques. --- paper_title: Hierarchical models of object recognition in cortex paper_content: Visual processing in cortex is classically modeled as a hierarchy of increasingly sophisticated representations, naturally extending the model of simple to complex cells of Hubel and Wiesel. Surprisingly, little quantitative modeling has been done to explore the biological feasibility of this class of models to explain aspects of higher-level visual processing such as object recognition. We describe a new hierarchical model consistent with physiological data from inferotemporal cortex that accounts for this complex visual task and makes testable predictions. The model is based on a MAX-like operation applied to inputs to certain cortical neurons that may have a general role in cortical function. --- paper_title: ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation paper_content: Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs) are useful for many practical tasks in machine learning. Synaptic weights, as well as neuron activation functions within the deep network, are typically stored with high-precision formats, e.g., 32-bit floating point. However, since storage capacity is limited and each memory access consumes power, storage capacity and memory access are two crucial factors in these networks. Here we present a method and the ADaPTION toolbox, which extends the popular deep learning library Caffe to support training of deep CNNs with reduced numerical precision of weights and activations using fixed-point notation. ADaPTION includes tools to measure the dynamic range of weights and activations. Using the ADaPTION tools, we quantized several CNNs including VGG16 down to 16-bit weights and activations with only 0.8% drop in Top-1 accuracy. The quantization, especially of the activations, leads to an increase of up to 50% in sparsity, especially in early and intermediate layers, which we exploit to skip multiplications with zero, thus performing faster and computationally cheaper inference. --- paper_title: Modeling orientation selectivity using a neuromorphic multi-chip system paper_content: The growing interest in pulse-mode processing by neural networks is encouraging the development of hardware implementations of massively parallel, distributed networks of integrate-and-fire (I&F) neurons. We have developed a reconfigurable multi-chip neuronal system for modeling feature selectivity and applied it to oriented visual stimuli. Our system comprises a temporally differentiating imager and a VLSI competitive network of neurons which use an asynchronous address event representation (AER) for communication. Here we describe the overall system, and present experimental data demonstrating the effect of recurrent connectivity on the pulse-based orientation selectivity.
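The reduced-precision training described in the ADaPTION entry above rests on measuring the dynamic range of weights and activations and mapping them to fixed-point values. A minimal sketch of such a quantization step is given below; the bit-allocation heuristic and the function name are assumptions made for illustration, not the toolbox's actual API:

```python
import numpy as np

def quantize_fixed_point(w, total_bits=16):
    """Quantize an array to signed fixed-point, choosing a power-of-two scale
    from the measured dynamic range (round-to-nearest, saturating)."""
    max_abs = float(np.max(np.abs(w))) or 1.0
    int_bits = max(int(np.ceil(np.log2(max_abs))) + 1, 1)   # sign + integer magnitude
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(w * scale),
                -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1)
    return q / scale, frac_bits

# Example: quantize a random weight matrix and report the resulting precision.
w = np.random.randn(4, 4).astype(np.float32)
wq, frac_bits = quantize_fixed_point(w)
print(frac_bits, float(np.max(np.abs(w - wq))))   # fractional bits, max quantization error
```

The same measurement-then-scale step would be applied per layer to both weights and activations; the sparsity gain mentioned in the entry comes from small activations rounding to exact zero.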
--- paper_title: Simultaneous Optical Flow and Intensity Estimation from an Event Camera paper_content: Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms cannot be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur. --- paper_title: Spatiotemporal multiple persons tracking using Dynamic Vision Sensor paper_content: Although motion analysis has been extensively investigated in the literature and a wide variety of tracking algorithms have been proposed, the problem of tracking objects using the Dynamic Vision Sensor requires a slightly different approach. Dynamic Vision Sensors are biologically inspired vision systems that asynchronously generate events upon relative light intensity changes. Unlike conventional vision systems, the output of such a sensor is not an image (frame) but an address-event stream. Therefore, most conventional tracking algorithms are not appropriate for DVS data processing. In this paper, we introduce an algorithm for spatiotemporal tracking that is suitable for the Dynamic Vision Sensor. In particular, we address the problem of tracking multiple persons in the presence of heavy occlusions. We investigate the possibility of applying Gaussian Mixture Models for detecting, describing and tracking objects. Preliminary results show that our approach can successfully track people even when their trajectories are intersecting. --- paper_title: Toward real-time particle tracking using an event-based dynamic vision sensor paper_content: Optically based measurements in high Reynolds number fluid flows often require high-speed imaging techniques. These cameras typically record data internally and thus are limited by the amount of onboard memory available. A novel camera technology for use in particle tracking velocimetry is presented in this paper. This technology consists of a dynamic vision sensor in which pixels operate in parallel, transmitting asynchronous events only when relative changes in intensity of approximately 10% are encountered with a temporal resolution of 1 μs. This results in a recording system whose data storage and bandwidth requirements are about 100 times smaller than a typical high-speed image sensor. Post-processing of data collected from this sensor also runs about 10 times faster than real time.
We present a proof-of-concept study comparing this novel sensor with a high-speed CMOS camera capable of recording up to 2,000 fps at 1,024 × 1,024 pixels. Comparisons are made in the ability of each system to track dense (ρ > 1 g/cm³) particles in a solid–liquid two-phase pipe flow. Reynolds numbers based on the bulk velocity and pipe diameter up to 100,000 are investigated. --- paper_title: Fast sensory motor control based on event-based hybrid neuromorphic-procedural system paper_content: Fast sensory-motor processing is challenging when using traditional frame-based cameras and computers. Here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal. The system consists of a 128×128 retina that asynchronously reports scene reflectance changes, a laptop PC, and a servo motor controller. Components are interconnected by USB. The retina looks down onto the field in front of the goal. Moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal. The ball's position and velocity are used to control the servo motor. Running under Windows XP, the reaction latency is 2.8±0.5 ms at a CPU load of 1 million events per second (Meps), although fast balls only create ~30 keps. This system demonstrates the advantages of hybrid event-based sensory motor processing. --- paper_title: Asynchronous Corner Detection and Tracking for Event Cameras in Real Time paper_content: The recent emergence of bioinspired event cameras has opened up exciting new possibilities in high-frequency tracking, bringing robustness to common problems in traditional vision, such as lighting changes and motion blur. In order to leverage these attractive attributes of the event cameras, research has been focusing on understanding how to process their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event stream, essentially forming frames of events grouped according to their timestamp, we have yet to exploit the full power of these cameras. In this spirit, this letter proposes a new, purely event-based corner detector, and a novel corner tracker, demonstrating that it is possible to detect corners and track them directly on the event stream in real time. Evaluation on benchmarking datasets reveals a significant boost in the number of detected corners and the repeatability of such detections over the state of the art even in challenging scenarios with the proposed approach, while enabling more than a 4× speed-up when compared to the most efficient algorithm in the literature. The proposed pipeline detects and tracks corners at a rate of more than 7.5 million events per second, promising great impact in high-speed applications. --- paper_title: Estimation of Vehicle Speed Based on Asynchronous Data from a Silicon Retina Optical Sensor paper_content: This work presents an embedded optical sensory system for traffic monitoring and vehicle speed estimation based on a neuromorphic "silicon-retina" image sensor, and the algorithm developed for processing the asynchronous output data delivered by this sensor.
The main purpose of these efforts is to provide a flexible, compact, low-power and low-cost traffic monitoring system which is capable of determining the velocity of passing vehicles simultaneously on multiple lanes. The system and algorithm proposed exploit the unique characteristics of the image sensor with focal-plane analog preprocessing. These features include sparse asynchronous data output with high temporal resolution and low latency, high dynamic range and low power consumption. The system is able to measure velocities of vehicles in the range 20 to 300 km/h on up to four lanes simultaneously, day and night and under variable atmospheric conditions, with a resolution of 1 km/h. Results of vehicle speed measurements taken from a test installation of the system on a four-lane highway are presented and discussed. The accuracy of the speed estimate has been evaluated on the basis of calibrated light-barrier speed measurements. The speed estimation error has a standard deviation of 2.3 km/h and near zero mean --- paper_title: Fast event-based Harris corner detection exploiting the advantages of event-driven cameras paper_content: The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost. --- paper_title: Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception paper_content: The combination of spiking neural networks and event-based vision sensors holds the potential of highly efficient and high-bandwidth optical flow estimation. This paper presents the first hierarchical spiking architecture in which motion (direction and speed) selectivity emerges in an unsupervised fashion from the raw stimuli generated with an event-based camera. A novel adaptive neuron model and stable spike-timing-dependent plasticity formulation are at the core of this neural network governing its spike-based processing and learning, respectively. After convergence, the neural architecture exhibits the main properties of biological visual motion systems, namely feature extraction and local and global motion perception. 
Convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception, respectively; while global motion selectivity emerges in a final fully-connected layer. The proposed solution is validated using synthetic and real event sequences. Along with this paper, we provide the cuSNN library, a framework that enables GPU-accelerated simulations of large-scale spiking neural networks. Source code and samples are available at https://github.com/tudelft/cuSNN. --- paper_title: Bio-inspired Motion Estimation with Event-Driven Sensors paper_content: This paper presents a method for image motion estimation for event-based sensors. Accurate and fast image flow estimation still challenges Computer Vision. A new paradigm based on asynchronous event-based data provides an interesting alternative and has shown to provide good estimation at high contrast contours by estimating motion based on very accurate timing. However, these techniques still fail in regions of high-frequency texture. This work presents a simple method for locating those regions, and a novel phase-based method for event sensors that estimates more accurately these regions. Finally, we evaluate and compare our results with other state-of-the-art techniques. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain. --- paper_title: Adaptive Time-Slice Block-Matching Optical Flow Algorithm for Dynamic Vision Sensors paper_content: Dynamic Vision Sensors (DVS) output asynchronous log intensity change events. They have potential applications in high-speed robotics, autonomous cars and drones. The precise event timing, sparse output, and wide dynamic range of the events are well suited for optical flow, but conventional optical flow (OF) algorithms are not well matched to the event stream data. This paper proposes an event-driven OF algorithm called adaptive block-matching optical flow (ABMOF). ABMOF uses time slices of accumulated DVS events. 
The time slices are adaptively rotated based on the input events and OF results. Compared with other methods such as gradient-based OF, ABMOF can efficiently be implemented in compact logic circuits. We developed both ABMOF and Lucas-Kanade (LK) algorithms using our adapted slices. Results shows that ABMOF accuracy is comparable with LK accuracy on natural scene data including sparse and dense texture, high dynamic range, and fast motion exceeding 30,000 pixels per second. --- paper_title: Independent motion detection with event-driven cameras paper_content: Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both in the order of micro-seconds). As such, they have great potential for fast and low power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the predicted corner velocities from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~ 90 % and show that the method is robust to changes in speed of both the head and the target. --- paper_title: Event-Based Visual Inertial Odometry paper_content: Event-based cameras provide a new visual sensing model by detecting changes in image intensity asynchronously across all pixels on the camera. By providing these events at extremely high rates (up to 1MHz), they allow for sensing in both high speed and high dynamic range situations where traditional cameras may fail. In this paper, we present the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit, to provide accurate metric tracking of a cameras full 6dof pose. Our algorithm is asynchronous, and provides measurement updates at a rate proportional to the camera velocity. The algorithm selects features in the image plane, and tracks spatiotemporal windows around these features within the event stream. An Extended Kalman Filter with a structureless measurement model then fuses the feature tracks with the output of the IMU. The camera poses from the filter are then used to initialize the next step of the tracker and reject failed tracks. We show that our method successfully tracks camera motion on the Event-Camera Dataset in a number of challenging situations. --- paper_title: Machine learning for high-speed corner detection paper_content: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. 
Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion. --- paper_title: Robust visual tracking with a freely-moving event camera paper_content: Event cameras are a new technology that can enable low-latency, fast visual sensing in dynamic environments towards faster robotic vision as they respond only to changes in the scene and have a very high temporal resolution (< 1μs). Moving targets produce dense spatio-temporal streams of events that do not suffer from information loss “between frames”, which can occur when traditional cameras are used to track fast-moving targets. Event-based tracking algorithms need to be able to follow the target position within the spatio-temporal data, while rejecting clutter events that occur as a robot moves in a typical office setting. We introduce a particle filter with the aim to be robust to temporal variation that occurs as the camera and the target move with different relative velocities, which can lead to a loss in visual information and missed detections. The proposed system provides a more persistent tracking compared to prior state-of-the-art, especially when the robot is actively following a target with its gaze. Experiments are performed on the iCub humanoid robot performing ball tracking and gaze following. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: Event-based feature tracking with probabilistic data association paper_content: Asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking. The few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models. Such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks. In this paper, we introduce a novel soft data association modeled with probabilities.
The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. We show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras. --- paper_title: Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking paper_content: This paper presents a number of new methods for visual tracking using the output of an event-based asynchronous neuromorphic dynamic vision sensor. It allows the tracking of multiple visual features in real time, achieving an update rate of several hundred kilohertz on a standard desktop PC. The approach has been specially adapted to take advantage of the event-driven properties of these sensors by combining both spatial and temporal correlations of events in an asynchronous iterative framework. Various kernels, such as Gaussian, Gabor, combinations of Gabor functions, and arbitrary user-defined kernels, are used to track features from incoming events. The trackers described in this paper are capable of handling variations in position, scale, and orientation through the use of multiple pools of trackers. This approach avoids the N² operations per event associated with conventional kernel-based convolution operations with N × N kernels. The tracking performance was evaluated experimentally for each type of kernel in order to demonstrate the robustness of the proposed solution. --- paper_title: Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera's 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s.
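Several of the tracking entries above (the event-driven cluster tracker, the Gaussian multikernel trackers, and the probabilistic data-association tracker) share one pattern: every incoming event immediately nudges the state of a nearby tracker instead of being accumulated into a frame. The sketch below illustrates that per-event update with a single Gaussian blob tracker in plain Python/NumPy; the learning rate, gating radius, activity decay, and the (x, y, t) event format are illustrative assumptions and do not reproduce the exact formulation of any cited paper.

```python
import numpy as np

class GaussianBlobTracker:
    """Minimal event-driven blob tracker: every accepted event nudges the
    Gaussian (mean and covariance) toward itself instead of being binned
    into a frame."""

    def __init__(self, mean, alpha=0.05, gate=3.0, tau=10e-3):
        self.mean = np.asarray(mean, dtype=float)    # blob centre (x, y), in pixels
        self.cov = np.eye(2) * 25.0                  # blob shape (initially broad)
        self.alpha = alpha                           # per-event learning rate (assumed)
        self.gate = gate                             # Mahalanobis gating radius (assumed)
        self.tau = tau                               # activity decay constant in seconds
        self.activity = 0.0
        self.last_t = None

    def update(self, x, y, t):
        """Process one event; returns True if the tracker claimed it."""
        if self.last_t is not None:
            self.activity *= np.exp(-(t - self.last_t) / self.tau)
        self.last_t = t

        d = np.array([x, y], dtype=float) - self.mean
        if d @ np.linalg.inv(self.cov) @ d > self.gate ** 2:
            return False                             # too far from the blob: treat as clutter

        # Exponential moving averages pull the blob toward the event.
        self.mean = self.mean + self.alpha * d
        self.cov = (1.0 - self.alpha) * self.cov + self.alpha * np.outer(d, d)
        self.activity += 1.0
        return True


# Toy usage: noisy events drift to the right at 100 px/s; the tracker follows.
rng = np.random.default_rng(0)
tracker = GaussianBlobTracker(mean=(64.0, 64.0))
for i in range(2000):
    t = i * 1e-4                                     # one event every 100 microseconds
    cx = 64.0 + 100.0 * t                            # true centre of the moving object
    x, y = rng.normal([cx, 64.0], 2.0)               # event scattered around the centre
    tracker.update(x, y, t)
print("estimated centre:", tracker.mean)             # close to (84, 64)
```

In a full system, many such trackers would run in parallel, unclaimed events would seed new trackers or be discarded as clutter, and the activity score would decide when a tracker dies.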
--- paper_title: Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras paper_content: This letter presents a novel computationally efficient and robust pattern tracking method based on a time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly. --- paper_title: Asynchronous Event-Based Visual Shape Tracking for Stable Haptic Feedback in Microrobotics paper_content: Micromanipulation systems have recently been receiving increased attention. Teleoperated or automated micromanipulation is a challenging task due to the need for high-frequency position or force feedback to guarantee stability. In addition, the integration of sensors within micromanipulation platforms is complex. Vision is a commonly used solution for sensing; unfortunately, the update rate of the frame-based acquisition process of current available cameras cannot ensure-at reasonable costs-stable automated or teleoperated control at the microscale level, where low inertia produces highly unreachable dynamic phenomena. This paper presents a novel vision-based microrobotic system combining both an asynchronous address event representation silicon retina and a conventional frame-based camera. Unlike frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events in a manner similar to the output cells of a biological retina, enabling high update rates. This paper introduces an event-based iterative closest point algorithm to track a microgripper's position at a frequency of 4 kHz. The temporal precision of the asynchronous silicon retina is used to provide a haptic feedback to assist users during manipulation tasks, whereas the frame-based camera is used to retrieve the position of the object that must be manipulated. This paper presents the results of an experiment on teleoperating a sphere of diameter around 50 μm using a piezoelectric gripper in a pick-and-place task. --- paper_title: Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion paper_content: In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. 
This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes. --- paper_title: An Asynchronous Neuromorphic Event-Driven Visual Part-Based Shape Tracking paper_content: Object tracking is an important step in many artificial vision tasks. The current state-of-the-art implementations remain too computationally demanding for the problem to be solved in real time with high dynamics. This paper presents a novel real-time method for visual part-based tracking of complex objects from the output of an asynchronous event-based camera. This paper extends the pictorial structures model introduced by Fischler and Elschlager 40 years ago and introduces a new formulation of the problem, allowing the dynamic processing of visual input in real time at high temporal resolution using a conventional PC. It relies on the concept of representing an object as a set of basic elements linked by springs. These basic elements consist of simple trackers capable of successfully tracking a target with an ellipse-like shape at several kilohertz on a conventional computer. For each incoming event, the method updates the elastic connections established between the trackers and guarantees a desired geometric structure corresponding to the tracked object in real time. This introduces a high temporal elasticity to adapt to projective deformations of the tracked object in the focal plane. The elastic energy of this virtual mechanical system provides a quality criterion for tracking and can be used to determine whether the measured deformations are caused by the perspective projection of the perceived object or by occlusions. Experiments on real-world data show the robustness of the method in the context of dynamic face tracking. --- paper_title: Unsupervised Learning of Dense Optical Flow and Depth from Sparse Event Data paper_content: In this work we present unsupervised learning of depth and motion from sparse event data generated by a Dynamic Vision Sensor (DVS). To tackle this low level vision task, we use a novel encoder-decoder neural network architecture that aggregates multi-level features and addresses the problem at multiple resolutions. A feature decorrelation technique is introduced to improve the training of the network. A non-local sparse smoothness constraint is used to alleviate the challenge of data sparsity. Our work is the first that generates dense depth and optical flow information from sparse event data. Our results show significant improvements upon previous works that used deep learning for flow estimation from both images and events. --- paper_title: Speed Invariant Time Surface for Learning to Detect Corner Points With Event-Based Cameras paper_content: We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different compared to standard intensity images, and simple extensions of corner detection methods designed for these images do not perform well on event-based data. 
We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras ---our implementation processes up to 1.6Mev/s on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones. Our method also naturally assigns a confidence score for the corners, which can be useful for postprocessing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state-of-the-art with extensive experiments, showing better performance. --- paper_title: Block-matching optical flow for dynamic vision sensors: Algorithm and FPGA implementation paper_content: Rapid and low power computation of optical flow (OF) is potentially useful in robotics. The dynamic vision sensor (DVS) event camera produces quick and sparse output, and has high dynamic range, but conventional OF algorithms are frame-based and cannot be directly used with event-based cameras. Previous DVS OF methods do not work well with dense textured input and are designed for implementation in logic circuits. This paper proposes a new block-matching based DVS OF algorithm which is inspired by motion estimation methods used for MPEG video compression. The algorithm was implemented both in software and on FPGA. For each event, it computes the motion direction as one of 9 directions. The speed of the motion is set by the sample interval. Results show that the Average Angular Error can be improved by 30% compared with previous methods. The OF can be calculated on FPGA with 50 MHz clock in 0.2 us per event (11 clock cycles), 20 times faster than a Java software implementation running on a desktop PC. Sample data is shown that the method works on scenes dominated by edges, sparse features, and dense texture. --- paper_title: Asynchronous Neuromorphic Event-Driven Image Filtering paper_content: This paper introduces a new methodology to process asynchronously sampled image data captured by a new generation of biomimetic vision sensors. Unlike conventional cameras, these neuromorphic sensors acquire data not at fixed points in time for the entire array (frame-based) but sparse in space and time, i.e., pixel-individually and precisely timed only if new information is available (event-based). In this paper, we introduce a filtering methodology for asynchronously acquired gray-level data from an event-driven time-encoding imager. The paper first studies the properties of level-crossing sampling parameters in order to define threshold level properties and associated bandwidth needs. In a second stage, we introduce asynchronous linear and nonlinear filtering techniques. Examples are shown and examined on real data. Finally, the paper introduces a methodology to compare frame-based versus event-based computational costs. Implementations and experiments show that event-based gray-level filtering produces equivalent filtering accuracy as compared to frame-based ones. 
The main result of this work shows that, based on the number of operations to be carried out, beyond 3 frames per second (fps), event-based processing outperforms frame-based processing in terms of computational cost. --- paper_title: Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS) paper_content: Because standard cameras sample the scene at constant time intervals, they do not provide any information in the blind time between subsequent frames. However, for many high-speed robotic and vision applications, it is crucial to provide high-frequency measurement updates also during this blind time. This can be achieved using a novel vision sensor, called DAVIS, which combines a standard camera and an asynchronous event-based sensor in the same pixel array. The DAVIS encodes the visual content between two subsequent frames by an asynchronous stream of events that convey pixel-level brightness changes at microsecond resolution. We present the first algorithm to detect and track visual features using both the frames and the event data provided by the DAVIS. Features are first detected in the grayscale frames and then tracked asynchronously in the blind time between frames using the stream of events. To best take into account the hybrid characteristics of the DAVIS, features are built based on large, spatial contrast variations (i.e., visual edges), which are the source of most of the events generated by the sensor. An event-based algorithm is further presented to track the features using an iterative, geometric registration approach. The performance of the proposed method is evaluated on real data acquired by the DAVIS. --- paper_title: An iterative image registration technique with an application to stereo vision paper_content: Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system. --- paper_title: Fast event-based corner detection paper_content: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel-level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state-of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages.
Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20. --- paper_title: Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input paper_content: Computational models of visual processing often use frame-based image acquisition techniques to process a temporally changing stimulus. This approach is unlike biological mechanisms that are spike-based and independent of individual frames. The neuromorphic Dynamic Vision Sensor (DVS) [Lichtsteiner et al., 2008] provides a stream of independent visual events that indicate local illumination changes, resembling spiking neurons at a retinal level. We introduce a new approach for the modelling of cortical mechanisms of motion detection along the dorsal pathway using this type of representation. Our model combines filters with spatio-temporal tunings also found in visual cortex to yield spatio-temporal and direction specificity. We probe our model with recordings of test stimuli, articulated motion and ego-motion. We show how our approach robustly estimates optic flow and also demonstrate how this output can be used for classification purposes. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: Asynchronous event-based high speed vision for microparticle tracking paper_content: This paper presents a new high speed vision system using an asynchronous address-event representation camera. Within this framework, an asynchronous event-based real-time Hough circle transform is developed to track microspheres. The technology presented in this paper allows for a robust real-time event-based multi-object position detection at a frequency of several kHz with a low computational cost. Brownian motion is also detected within this context with both high speed and precision. The carried-out work is adapted to the automated or remote-operated microrobotic systems fulfilling their need of an extremely fast vision feedback. It is also a very promising solution to the microphysical phenomena analysis and particularly for the micro/nanoscale force measurement.
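The contrast-maximization entry above casts motion estimation as an optimization: warp the events along a candidate point trajectory and score the candidate by the contrast (e.g., the variance) of the resulting image of warped events. The toy sketch below assumes a single constant image-plane velocity and uses a brute-force grid search in place of the paper's actual optimizer; all sensor sizes and velocities are made up for illustration.

```python
import numpy as np

def warped_event_image(xs, ys, ts, v, shape):
    """Accumulate events into an image after removing a candidate motion v = (vx, vy):
    each event is shifted back along the candidate flow to the reference time ts[0]."""
    wx = np.round(xs - v[0] * (ts - ts[0])).astype(int)
    wy = np.round(ys - v[1] * (ts - ts[0])).astype(int)
    img = np.zeros(shape)
    ok = (wx >= 0) & (wx < shape[1]) & (wy >= 0) & (wy < shape[0])
    np.add.at(img, (wy[ok], wx[ok]), 1.0)
    return img

def contrast(img):
    return np.var(img)   # the "contrast" objective: variance of the warped-event image

# Toy data: a vertical edge moving at 500 px/s along x on a 64x64 sensor.
rng = np.random.default_rng(1)
ts = np.sort(rng.uniform(0.0, 0.02, 3000))                # 20 ms window of events
ys = rng.integers(0, 64, ts.size).astype(float)
xs = 10.0 + 500.0 * ts + rng.normal(0.0, 0.5, ts.size)    # edge position plus pixel noise

# Exhaustive search over candidate velocities (a real implementation would use
# gradient ascent or a coarse-to-fine scheme instead of a grid).
candidates = [(vx, 0.0) for vx in np.linspace(0.0, 1000.0, 21)]
best = max(candidates, key=lambda v: contrast(warped_event_image(xs, ys, ts, v, (64, 64))))
print("estimated velocity (px/s):", best)                 # ~ (500.0, 0.0)
```

At the correct velocity the warped events pile up on the sharp edge that generated them, which maximizes the variance of the accumulated image; wrong candidates smear the events and lower the score.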
--- paper_title: A combined corner and edge detector paper_content: The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed. --- paper_title: Spiking Optical Flow for Event-Based Sensors Using IBM's TrueNorth Neurosynaptic System paper_content: This paper describes a fully spike-based neural network for optical flow estimation from Dynamic Vision Sensor data. A low power embedded implementation of the method which combines the Asynchronous Time-based Image Sensor with IBM's TrueNorth Neurosynaptic System is presented. The sensor generates spikes with sub-millisecond resolution in response to scene illumination changes. These spike are processed by a spiking neural network running on TrueNorth with a 1 millisecond resolution to accurately determine the order and time difference of spikes from neighboring pixels, and therefore infer the velocity. The spiking neural network is a variant of the Barlow Levick method for optical flow estimation. The system is evaluated on two recordings for which ground truth motion is available, and achieves an Average Endpoint Error of 11% at an estimated power budget of under 80mW for the sensor and computation. --- paper_title: EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras paper_content: We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset - EV-IMO - which includes accurate pixel-wise motion masks, egomotion and ground truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline using a low parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. ::: Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast moving objects simultaneously in the camera field of view. The objects and the camera are tracked by the VICON motion capture system. By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that our approach far surpasses its rivals and is well suited for scene constrained robotics applications. --- paper_title: Low-latency visual odometry using event-based feature tracks paper_content: New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional camera and an event-based sensor in the same pixel array. 
These sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. In this paper, we present a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks. Features are first detected in the grayscale frames and then tracked asynchronously using the stream of events. The features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping. We show that our method successfully tracks the 6-DOF motion of the sensor in natural scenes. This is the first work on event-based visual odometry with the DAVIS sensor using feature tracks. --- paper_title: Lifetime estimation of events from Dynamic Vision Sensors paper_content: We propose an algorithm to estimate the “lifetime” of events from retinal cameras, such as a Dynamic Vision Sensor (DVS). Unlike standard CMOS cameras, a DVS only transmits pixel-level brightness changes (“events”) at the time they occur with micro-second resolution. Due to its low latency and sparse output, this sensor is very promising for high-speed mobile robotic applications. We develop an algorithm that augments each event with its lifetime, which is computed from the event's velocity on the image plane. The generated stream of augmented events gives a continuous representation of events in time, hence enabling the design of new algorithms that outperform those based on the accumulation of events over fixed, artificially-chosen time intervals. A direct application of this augmented stream is the construction of sharp gradient (edge-like) images at any time instant. We successfully demonstrate our method in different scenarios, including high-speed quadrotor flips, and compare it to standard visualization methods. --- paper_title: Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor paper_content: We present an algorithm (SOFAS) to estimate the optical flow of events generated by a dynamic vision sensor (DVS). Where traditional cameras produce frames at a fixed rate, DVSs produce asynchronous events in response to intensity changes with a high temporal resolution. Our algorithm uses the fact that events are generated by edges in the scene to not only estimate the optical flow but also to simultaneously segment the image into objects which are travelling at the same velocity. This way it is able to avoid the aperture problem which affects other implementations such as Lucas-Kanade. Finally, we show that SOFAS produces more accurate results than traditional optic flow algorithms. --- paper_title: The Representation and Matching of Pictorial Structures paper_content: The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of "goodness" of matching or detection. --- paper_title: Event-Based Visual Flow paper_content: This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. 
Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method adequacy with high data sparseness and temporal resolution of event-based acquisition that allows the computation of motion flow with microsecond accuracy and at very low computational cost. --- paper_title: Asynchronous, Photometric Feature Tracking using Events and Frames paper_content: We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low-latency. Event cameras are novel sensors that output pixel-level brightness changes, called"events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes. --- paper_title: A pencil balancing robot using a pair of AER dynamic vision sensors paper_content: Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. The PC updates its estimate of the pencil's location and angle in 3d space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD-controller adjusts X-Y-position and velocity of the table to maintain the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table. The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chop-stick, or rod for many minutes. 
Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at intervals below millisecond ranges. Controlling at normal image sensor sample rates (e.g. 60 Hz) results in too long latencies for a stable control loop. --- paper_title: Embedded Vision System for Real-Time Object Tracking using an Asynchronous Transient Vision Sensor paper_content: This paper presents an embedded vision system for object tracking applications based on a 128times128 pixel CMOS temporal contrast vision sensor. This imager asynchronously responds to relative illumination intensity changes in the visual scene, exhibiting a usable dynamic range of 120 dB and a latency of under 100 mus. The information is encoded in the form of address-event representation (AER) data. An algorithm for object tracking with 1 millisecond timestamp resolution of the AER data stream is presented. As a real-world application example, vehicle tracking for a traffic-monitoring is demonstrated in real time. The potential of the proposed algorithm for people tracking is also shown. Due to the efficient data pre-processing in the imager chip focal plane, the embedded vision system can be implemented using a low-cost, low-power digital signal processor --- paper_title: On event-based optical flow detection paper_content: Event-based sensing, i.e. the asynchronous detection of luminance changes, promises low-energy, high dynamic range, and sparse sensing. This stands in contrast to whole image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection ranging from gradient-based methods over plane-fitting to filter based methods and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion related activations. --- paper_title: DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition paper_content: Benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision (LeCun et al., 1998; Deng et al., 2009;). The challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches, and have led to great advances of the state of the art. Even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as ImageNet (Krizhevsky et al., 2012; LeCun et al., 2015). 
Neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor (DVS; Lichtsteiner et al., 2008). These sensors and their DAVIS (Dynamic and Active-pixel Vision Sensor) and ATIS (Asynchronous Time-based Image Sensor) derivatives (Brandli et al., 2014; Posch et al., 2014) are inspired by biological vision by generating streams of asynchronous events indicating local log-intensity brightness changes. They thereby greatly reduce the amount of data to be processed, and their dynamic nature makes them a good fit for domains such as optical flow, object tracking, action recognition, or dynamic scene understanding. Compared to classical computer vision, neuromorphic vision is a younger and much smaller field of research, and lacks benchmarks, which impedes the progress of the field. To address this we introduce the largest event-based vision benchmark dataset published to date, hoping to satisfy a growing demand and stimulate challenges for the community. In particular, the availability of such benchmarks should help the development of algorithms processing event-based vision input, allowing a direct fair comparison of different approaches. We have explicitly chosen mostly dynamic vision tasks such as action recognition or tracking, which could benefit from the strengths of neuromorphic vision sensors, although algorithms that exploit these features are largely missing. A major reason for the lack of benchmarks is that currently neuromorphic vision sensors are only available as R&D prototypes; see Tan et al. (2015) for an informative review. Unlabeled DVS data was made available around 2007 in the jAER project and was used for development of spike timing-based unsupervised feature learning, e.g., in Bichler et al. (2012). The first labeled and published event-based neuromorphic vision sensor benchmarks were created from the MNIST digit recognition dataset by jiggling the image on the screen (see Serrano-Gotarredona and Linares-Barranco, 2015 for an informative history) and later to reduce frame artifacts by jiggling the camera view with a pan-tilt unit (Orchard et al., 2015). These datasets automated the scene movement necessary to generate DVS output from the static images, and will be an important step forward for evaluating neuromorphic object recognition systems such as spiking deep networks (Perez-Carrasco et al., 2013; O'Connor et al., 2013; Cao et al., 2014; Diehl et al., 2015), which so far have been tested mostly on static image datasets converted into Poisson spike trains. But static image recognition is not the ideal use case for event-based vision sensors that are designed for dynamic scenes. Recently several additional DVS datasets were made available in the Frontiers research topic “Benchmarks and Challenges for Neuromorphic Engineering”; in particular for navigation using multiple sensor modalities (Barranco et al., 2016) and for developing and benchmarking DVS and DAVIS optical flow methods (Rueckauer and Delbruck, 2016). This data report summarizes a new benchmark dataset in which we converted established visual video benchmarks for object tracking, action recognition and object recognition into spiking neuromorphic datasets, recorded with the DVS output (Lichtsteiner et al., 2008) of a DAVIS camera (Berner et al., 2013; Brandli et al., 2014). This report presents our approach for sensor calibration and capture of frame-based videos into neuromorphic vision datasets with minimal human intervention.
We converted four widely used dynamic datasets: the VOT Challenge 2015 Dataset (Kristan et al., 2016), TrackingDataset3, the UCF-50 Action Recognition Dataset (Reddy and Shah, 2012), and the Caltech-256 Object Category Dataset (Griffin et al., 2006). We conclude with statistics and summaries of the datasets. --- paper_title: Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor paper_content: Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most “threatening” ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest-shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided1. --- paper_title: On-board real-time optic-flow for miniature event-based vision sensors paper_content: This paper presents a novel, drastically simplified method to compute optic flow on a miniaturized embedded vision system, suitable on-board of miniaturized indoor flying robots. Estimating optic flow is a common technique for robotic motion stabilization in systems without ground contact, such as unmanned aerial vehicles (UAVs). Because of high computing power requirements to process video camera data, most optic flow algorithms are implemented off-board on PCs or on dedicated hardware, connected through tethered or wireless links. Here, in contrast, we present a miniaturized stand-alone embedded system that utilizes a novel neuro-biologically inspired event-based vision sensor (DVS) to extract optic flow on-board in real-time with minimal computing requirements. The DVS provides asynchronous events that resemble temporal contrast changes at individual pixel level, instead of full image frames at regular time intervals. Such a representation provides high temporal resolution while simultaneously reducing the amount of data to be processed. We present a simple algorithm to extract optic flow information from such event-based vision data, which is sufficiently efficient in terms of data storage and processing power to be executed on an embedded 32bit ARM7 microcontroller in real-time. The developed stand-alone system is small, lightweight and energy efficient, and is ready to serve as sensor for ego motion estimates based on optic flow in autonomous UAVs. 
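Several entries above (Event-Based Visual Flow, the on-board optic-flow controller, and the DVS flow benchmarks) estimate motion directly from event timestamps rather than from frames: locally, the surface of most-recent timestamps is approximately planar, and the inverse of its gradient gives the normal flow. The sketch below is a minimal version of that idea, assuming a Surface of Active Events is already being maintained; the neighbourhood size, recency threshold, and least-squares plane fit are illustrative choices rather than the exact procedure of any cited implementation.

```python
import numpy as np

def flow_from_sae(sae, x, y, r=3, dt_max=0.05):
    """Normal optical flow at pixel (x, y) from a Surface of Active Events 'sae'
    (an array holding, per pixel, the timestamp of its most recent event).
    A plane t = a*u + b*v + c is least-squares fitted to recent timestamps in the
    neighbourhood; the normal flow is grad(t) / |grad(t)|^2, in pixels per second."""
    t0 = sae[y, x]
    us, vs, ts = [], [], []
    for dv in range(-r, r + 1):
        for du in range(-r, r + 1):
            t = sae[y + dv, x + du]
            if t > 0 and t0 - t <= dt_max:       # keep only pixels that fired recently
                us.append(du); vs.append(dv); ts.append(t)
    if len(ts) < 5:
        return None                              # not enough support for a plane fit
    A = np.column_stack([us, vs, np.ones(len(ts))])
    (a, b, _), *_ = np.linalg.lstsq(A, np.array(ts), rcond=None)
    g2 = a * a + b * b
    return None if g2 < 1e-12 else (a / g2, b / g2)

# Toy example: a vertical edge sweeping right at 200 px/s stamps the timestamp map.
H, W, speed = 32, 32, 200.0
sae = np.zeros((H, W))
for col in range(W):
    sae[:, col] = col / speed                    # edge reaches column 'col' at time col/speed
print("flow at (16, 16):", flow_from_sae(sae, 16, 16))   # ~ (200.0, 0.0)
```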
--- paper_title: Simultaneous Optical Flow and Intensity Estimation from an Event Camera paper_content: Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms may not at all be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur. --- paper_title: A spiking neural network architecture for visual motion estimation paper_content: Current interest in neuromorphic computing continues to drive development of sensors and hardware for spike-based computation. Here we describe a hierarchical architecture for visual motion estimation which uses a spiking neural network to exploit the sparse high temporal resolution data provided by neuromorphic vision sensors. Although spike-based computation differs from traditional computer vision approaches, our architecture is similar in principle to the canonical Lucas-Kanade algorithm. Output spikes from the architecture represent the direction of motion to the nearest 45 degrees, and the speed within a factor of √2 over the range 0.02 to 0.27 pixels/ms. --- paper_title: Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception paper_content: The combination of spiking neural networks and event-based vision sensors holds the potential of highly efficient and high-bandwidth optical flow estimation. This paper presents the first hierarchical spiking architecture in which motion (direction and speed) selectivity emerges in an unsupervised fashion from the raw stimuli generated with an event-based camera. A novel adaptive neuron model and stable spike-timing-dependent plasticity formulation are at the core of this neural network governing its spike-based processing and learning, respectively. After convergence, the neural architecture exhibits the main properties of biological visual motion systems, namely feature extraction and local and global motion perception. Convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception, respectively; while global motion selectivity emerges in a final fully-connected layer. The proposed solution is validated using synthetic and real event sequences. Along with this paper, we provide the cuSNN library, a framework that enables GPU-accelerated simulations of large-scale spiking neural networks. 
Source code and samples are available at https://github.com/tudelft/cuSNN. --- paper_title: Bio-inspired Motion Estimation with Event-Driven Sensors paper_content: This paper presents a method for image motion estimation for event-based sensors. Accurate and fast image flow estimation still challenges Computer Vision. A new paradigm based on asynchronous event-based data provides an interesting alternative and has shown to provide good estimation at high contrast contours by estimating motion based on very accurate timing. However, these techniques still fail in regions of high-frequency texture. This work presents a simple method for locating those regions, and a novel phase-based method for event sensors that estimates more accurately these regions. Finally, we evaluate and compare our results with other state-of-the-art techniques. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain. --- paper_title: Adaptive Time-Slice Block-Matching Optical Flow Algorithm for Dynamic Vision Sensors paper_content: Dynamic Vision Sensors (DVS) output asynchronous log intensity change events. They have potential applications in high-speed robotics, autonomous cars and drones. The precise event timing, sparse output, and wide dynamic range of the events are well suited for optical flow, but conventional optical flow (OF) algorithms are not well matched to the event stream data. This paper proposes an event-driven OF algorithm called adaptive block-matching optical flow (ABMOF). ABMOF uses time slices of accumulated DVS events. The time slices are adaptively rotated based on the input events and OF results. Compared with other methods such as gradient-based OF, ABMOF can efficiently be implemented in compact logic circuits. We developed both ABMOF and Lucas-Kanade (LK) algorithms using our adapted slices. Results shows that ABMOF accuracy is comparable with LK accuracy on natural scene data including sparse and dense texture, high dynamic range, and fast motion exceeding 30,000 pixels per second. 
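The adaptive block-matching method above accumulates events into two-dimensional time slices and then applies classic block matching between consecutive slices. The sketch below shows this slice-and-match core in plain Python under simplifying assumptions (a fixed slice duration, an exhaustive SAD search, and illustrative block and search sizes); the cited works additionally adapt or rotate the slices and target compact logic and FPGA implementations.

```python
import numpy as np

# Sketch: block-matching optical flow on accumulated event-count slices.
# Slice duration, block size and search range are illustrative assumptions.
WIDTH, HEIGHT = 240, 180
SLICE_DT = 0.01        # accumulate events into 10 ms count images
BLOCK = 9              # block size in pixels (odd)
SEARCH = 4             # +/- search range in the previous slice, in pixels

def accumulate_slice(events):
    """events: iterable of (x, y, t, polarity). Returns a 2D event-count image."""
    img = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    for x, y, _, _ in events:
        img[y, x] += 1.0
    return img

def block_match(prev_slice, curr_slice, x, y):
    """Estimate (vx, vy) in pixels/second at (x, y) by matching the block around
    (x, y) in the current slice against shifted blocks in the previous slice (SAD)."""
    r = BLOCK // 2
    if not (r + SEARCH <= x < WIDTH - r - SEARCH and
            r + SEARCH <= y < HEIGHT - r - SEARCH):
        return None                                    # too close to the image border
    ref = curr_slice[y - r:y + r + 1, x - r:x + r + 1]
    best, best_dx, best_dy = np.inf, 0, 0
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            cand = prev_slice[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_dx, best_dy = sad, dx, dy
    # The best-matching block sat at (x+dx, y+dy) one slice ago and is at (x, y) now,
    # so the displacement over SLICE_DT is (-dx, -dy).
    return -best_dx / SLICE_DT, -best_dy / SLICE_DT
```

In practice one would query block_match only at pixels that fired recently, rather than on a dense grid, so that the cost stays proportional to the event rate.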
--- paper_title: Asynchronous frameless event-based optical flow paper_content: This paper introduces a process to compute optical flow using an asynchronous event-based retina at high speed and low computational load. A new generation of artificial vision sensors has now started to rely on biologically inspired designs for light acquisition. Biological retinas, and their artificial counterparts, are totally asynchronous and data driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework for processing visual data using asynchronous event-based acquisition, providing a method for the evaluation of optical flow. The paper shows that current limitations of optical flow computation can be overcome by using event-based visual acquisition, where high data sparseness and high temporal resolution permit the computation of optical flow with micro-second accuracy and at very low computational cost. --- paper_title: Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion paper_content: In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes. --- paper_title: Unsupervised Learning of Dense Optical Flow and Depth from Sparse Event Data paper_content: In this work we present unsupervised learning of depth and motion from sparse event data generated by a Dynamic Vision Sensor (DVS). To tackle this low level vision task, we use a novel encoder-decoder neural network architecture that aggregates multi-level features and addresses the problem at multiple resolutions. A feature decorrelation technique is introduced to improve the training of the network. A non-local sparse smoothness constraint is used to alleviate the challenge of data sparsity. Our work is the first that generates dense depth and optical flow information from sparse event data. Our results show significant improvements upon previous works that used deep learning for flow estimation from both images and events. --- paper_title: Block-matching optical flow for dynamic vision sensors: Algorithm and FPGA implementation paper_content: Rapid and low power computation of optical flow (OF) is potentially useful in robotics. The dynamic vision sensor (DVS) event camera produces quick and sparse output, and has high dynamic range, but conventional OF algorithms are frame-based and cannot be directly used with event-based cameras. Previous DVS OF methods do not work well with dense textured input and are designed for implementation in logic circuits. This paper proposes a new block-matching based DVS OF algorithm which is inspired by motion estimation methods used for MPEG video compression. The algorithm was implemented both in software and on FPGA. 
For each event, it computes the motion direction as one of 9 directions. The speed of the motion is set by the sample interval. Results show that the Average Angular Error can be improved by 30% compared with previous methods. The OF can be calculated on FPGA with 50 MHz clock in 0.2 us per event (11 clock cycles), 20 times faster than a Java software implementation running on a desktop PC. Sample data is shown that the method works on scenes dominated by edges, sparse features, and dense texture. --- paper_title: Frame-free dynamic digital vision paper_content: Conventional image sensors produce massive amounts of redundant data and are limited in temporal resolution by the frame rate. This paper reviews our recent breakthrough in the development of a high- performance spike-event based dynamic vision sensor (DVS) that discards the frame concept entirely, and then describes novel digital methods for efficient low-level filtering and feature extraction and high-level object tracking that are based on the DVS spike events. These methods filter events, label them, or use them for object tracking. Filtering reduces the number of events but improves the ratio of informative events. Labeling attaches additional interpretation to the events, e.g. orientation or local optical flow. Tracking uses the events to track moving objects. Processing occurs on an event-by-event basis and uses the event time and identity as the basis for computation. A common memory object for filtering and labeling is a spatial map of most recent past event times. Processing methods typically use these past event times together with the present event in integer branching logic to filter, label, or synthesize new events. These methods are straightforwardly computed on serial digital hardware, resulting in a new event- and timing-based approach for visual computation that efficiently integrates a neural style of computation with digital hardware. All code is open- sourced in the jAER project (jaer.wiki.sourceforge.net). --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: Spiking Optical Flow for Event-Based Sensors Using IBM's TrueNorth Neurosynaptic System paper_content: This paper describes a fully spike-based neural network for optical flow estimation from Dynamic Vision Sensor data. A low power embedded implementation of the method which combines the Asynchronous Time-based Image Sensor with IBM's TrueNorth Neurosynaptic System is presented. 
The sensor generates spikes with sub-millisecond resolution in response to scene illumination changes. These spikes are processed by a spiking neural network running on TrueNorth with a 1 millisecond resolution to accurately determine the order and time difference of spikes from neighboring pixels, and therefore infer the velocity. The spiking neural network is a variant of the Barlow Levick method for optical flow estimation. The system is evaluated on two recordings for which ground truth motion is available, and achieves an Average Endpoint Error of 11% at an estimated power budget of under 80mW for the sensor and computation. --- paper_title: EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras paper_content: We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset - EV-IMO - which includes accurate pixel-wise motion masks, egomotion and ground truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline using a low parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast moving objects simultaneously in the camera field of view. The objects and the camera are tracked by the VICON motion capture system. By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that our approach far surpasses its rivals and is well suited for scene constrained robotics applications. --- paper_title: Event-Based Visual Flow paper_content: This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method's adequacy with high data sparseness and temporal resolution of event-based acquisition that allows the computation of motion flow with microsecond accuracy and at very low computational cost. --- paper_title: EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time paper_content: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second.
Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras. --- paper_title: Cooperative computation of stereo disparity paper_content: Perhaps one of the most striking differences between a brain and today’s computers is the amount of “wiring.” In a digital computer the ratio of connections to components is about 3, whereas for the mammalian cortex it lies between 10 and 10,000 (1). --- paper_title: Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach paper_content: This paper presents an adaptive cooperative approach towards the 3D reconstruction tailored for a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). DVS consists of self-spiking pixels that asynchronously generate events upon relative light intensity changes. These sensors have the advantage to allow simultaneously high temporal resolution (better than 10μs) and wide dynamic range (>120dB) at sparse data representation, which is not possible with frame-based cameras. In order to exploit the potential of DVS and benefit from its features, depth calculation should take into account the spatiotemporal and asynchronous aspect of data provided by the sensor. This work deals with developing an appropriate approach for the asynchronous, event-driven stereo algorithm. We propose a modification of the cooperative network in which the history of the recent activity in the scene is stored to serve as spatiotemporal context used in disparity calculation for each incoming event. The network constantly evolves in time - as events are generated. In our work, not only the spatiotemporal aspect of the data is preserved but also the matching is performed asynchronously. The results of the experiments prove that the proposed approach is well suited for DVS data and can be successfully used for our efficient passive depth camera. --- paper_title: Context-aware event-driven stereo matching paper_content: Similarity measuring plays as an import role in stereo matching, whether for visual data from standard cameras or for those from novel sensors such as Dynamic Vision Sensors (DVS). Generally speaking, robust feature descriptors contribute to designing a powerful similarity measurement, as demonstrated by classic stereo matching methods. However, the kind and representative ability of feature descriptors for DVS data are so limited that achieving accurate stereo matching on DVS data becomes very challenging. In this paper, a novel feature descriptor is proposed to improve the accuracy for DVS stereo matching. Our feature descriptor can describe the local context or distribution of the DVS data, contributing to constructing an effective similarity measurement for DVS data matching, yielding an accurate stereo matching result. 
Our method is evaluated by testing our method on groundtruth data and comparing with various standard stereo methods. Experiments demonstrate the efficiency and effectiveness of our method. --- paper_title: HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition paper_content: This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy. --- paper_title: On the use of orientation filters for 3D reconstruction in event-driven stereo vision paper_content: The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of restrictions applied to the matching algorithm. This strategy provides a larger number of pairs of matching events, improving the final 3D reconstruction. --- paper_title: EMVS: Event-based Multi-View Stereo paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. 
Our EMVS solution elegantly exploits two inherent properties of an event camera: (i) its ability to respond to scene edges—which naturally provide semidense geometric information without any pre-processing operation—and (ii) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semidense depth maps. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU. --- paper_title: Event-based stereo matching using semiglobal matching paper_content: In this article, we focus on the problem of depth estimation from a stereo pair of event-based sensors. These sensors asynchronously capture pixel-level brightness changes information (events) instead of standard intensity images at a specified frame rate. So, these sensors provide sparse data at low latency and high temporal resolution over a wide intrascene dynamic range. However, new asynchronous, event-based processing algorithms are required to process the event streams. We propose a fully event-based stereo three-dimensional depth estimation algorithm inspired by semiglobal matching. Our algorithm considers the smoothness constraints between the nearby events to remove the ambiguous and wrong matches when only using the properties of a single event or local features. Experimental validation and comparison with several state-of-the-art, event-based stereo matching methods are provided on five different scenes of event-based stereo data sets. The results show that our method can operate well in an eve... --- paper_title: DTAM: Dense tracking and mapping in real-time paper_content: DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application. --- paper_title: Asynchronous Event-based Cooperative Stereo Matching Using Neuromorphic Silicon Retinas paper_content: Biologically-inspired event-driven silicon retinas, so called dynamic vision sensors (DVS), allow efficient solutions for various visual perception tasks, e.g. surveillance, tracking, or motion detection. Similar to retinal photoreceptors, any perceived light intensity change in the DVS generates an event at the corresponding pixel. The DVS thereby emits a stream of spatiotemporal events to encode visually perceived objects that in contrast to conventional frame-based cameras, is largely free of redundant background information. 
The DVS offers multiple additional advantages, but requires the development of radically new asynchronous, event-based information processing algorithms. In this paper we present a fully event-based disparity matching algorithm for reliable 3D depth perception using a dynamic cooperative neural network. The interaction between cooperative cells applies cross-disparity uniqueness-constraints and within-disparity continuity-constraints, to asynchronously extract disparity for each new event, without any need of buffering individual events. We have investigated the algorithm's performance in several experiments; our results demonstrate smooth disparity maps computed in a purely event-based manner, even in the scenes with temporally-overlapping stimuli. --- paper_title: Spiking Cooperative Stereo-Matching at 2 ms Latency with Neuromorphic Hardware paper_content: We demonstrate a spiking neural network that extracts spatial depth information from a stereoscopic visual input stream. The system makes use of a scalable neuromorphic computing platform, SpiNNaker, and neuromorphic vision sensors, so called silicon retinas, to solve the stereo matching (correspondence) problem in real-time. It dynamically fuses two retinal event streams into a depth-resolved event stream with a fixed latency of 2 ms, even at input rates as high as several 100,000 events per second. The network design is simple and portable so it can run on many types of neuromorphic computing platforms including FPGAs and dedicated silicon. --- paper_title: Event-driven stereo matching for real-time 3D panoramic vision paper_content: This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data. --- paper_title: A Low Power, High Throughput, Fully Event-Based Stereo System paper_content: We introduce a stereo correspondence system implemented fully on event-based digital hardware, using a fully graph-based non von-Neumann computation model, where no frames, arrays, or any other such data-structures are used. This is the first time that an end-to-end stereo pipeline from image acquisition and rectification, multi-scale spatiotemporal stereo correspondence, winner-take-all, to disparity regularization is implemented fully on event-based hardware. Using a cluster of TrueNorth neurosynaptic processors, we demonstrate their ability to process bilateral event-based inputs streamed live by Dynamic Vision Sensors (DVS), at up to 2,000 disparity maps per second, producing high fidelity disparities which are in turn used to reconstruct, at low power, the depth of events produced from rapidly changing scenes. 
Experiments on real-world sequences demonstrate the ability of the system to take full advantage of the asynchronous and sparse nature of DVS sensors for low power depth reconstruction, in environments where conventional frame-based cameras connected to synchronous processors would be inefficient for rapidly moving objects. System evaluation on event-based sequences demonstrates a ~200× improvement in terms of power per pixel per disparity map compared to the closest state-of-the-art, and maximum latencies of up to 11ms from spike injection to disparity map ejection. --- paper_title: Stereo Processing by Semiglobal Matching and Mutual Information paper_content: This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (MI)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in depth evaluation of the MI-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems. --- paper_title: Semi-Dense 3D Reconstruction with a Stereo Event Camera paper_content: Event cameras are bio-inspired sensors that offer several advantages, such as low latency, high-speed and high dynamic range, to tackle challenging scenarios in computer vision. This paper presents a solution to the problem of 3D reconstruction from data captured by a stereo event-camera rig moving in a static scene, such as in the context of stereo Simultaneous Localization and Mapping. The proposed method consists of the optimization of an energy function designed to exploit small-baseline spatio-temporal consistency of events triggered across both stereo image planes. To improve the density of the reconstruction and to reduce the uncertainty of the estimation, a probabilistic depth-fusion strategy is also developed. The resulting method has no special requirements on either the motion of the stereo event-camera rig or on prior knowledge about the scene. Experiments demonstrate our method can deal with both texture-rich scenes as well as sparse scenes, outperforming state-of-the-art stereo methods based on event data image representations. --- paper_title: Realtime Time Synchronized Event-based Stereo paper_content: In this work, we propose a novel event based stereo method which addresses the problem of motion blur for a moving event camera.
Our method uses the velocity of the camera and a range of disparities to synchronize the positions of the events, as if they were captured at a single point in time. We represent these events using a pair of novel time synchronized event disparity volumes, which we show remove motion blur for pixels at the correct disparity in the volume, while further blurring pixels at the wrong disparity. We then apply a novel matching cost over these time synchronized event disparity volumes, which both rewards similarity between the volumes while penalizing blurriness. We show that our method outperforms more expensive, smoothing based event stereo methods, by evaluating on the Multi Vehicle Stereo Event Camera dataset. --- paper_title: An Event-Driven Stereo System for Real-Time 3-D 360° Panoramic Vision paper_content: A new multiperspective stereo concept for real-time 3-D panoramic vision is presented in this paper. The main contribution is a novel event-driven stereo approach enabling 3-D 360° high-dynamic-range panoramic vision for real-time application in a natural environment. This approach makes use of a sparse visual code generated by a rotating pair of dynamic vision line sensors. The use of this system allows panoramic images to be generated by the transformation of events, eliminating the need to capture a large set of images. It thereby increases the acquisition speed, which improves accuracy in dynamic scenes. This paper focuses on its 3-D reconstruction and performance analysis using such a rotating multiperspective vision system. First, a theoretical analysis of the stereo matching accuracy is performed. Second, a depth error formulation is developed, which takes motion into consideration and reveals the leverage of scene dynamics on depth estimation. In this paper, disparity is measured in time units, which allows accurate depth maps to be estimated from a moving sensor system. Third, a stereo matching workflow is presented using standard stereo image matching to assess the 3-D reconstruction accuracy. Finally, experimental results are reported on real-world sensor data, showing that the system allows for the 3-D reconstruction of high-resolution round views even under challenging illumination conditions. --- paper_title: Live demonstration: Gesture-based remote control using stereo pair of dynamic vision sensors paper_content: This demonstration shows a natural gesture interface for console entertainment devices using as input a stereo pair of dynamic vision sensors. The event-based processing of the sparse sensor output allows fluid interaction at a laptop processor load of less than 3%. --- paper_title: Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age paper_content: Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map ), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. 
This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved? --- paper_title: EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera paper_content: We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. 
It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data. --- paper_title: A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems paper_content: Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. --- paper_title: Asynchronous Event-Based Hebbian Epipolar Geometry paper_content: Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult, they can no longer be defined because of the complexity of the sensor geometry. This paper will show that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based-rather than frame-based-vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras. 
Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships. Finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor. --- paper_title: Address-Event Based Stereo Vision with Bio-Inspired Silicon Retina Imagers paper_content: Several industry, home, or automotive applications need 3D or at least range data of the observed environment to operate. Such applications are, e.g., driver assistance systems, home care systems, or 3D sensing and measurement for industrial production. State-of-the-art range sensors are laser range finders or laser scanners (LIDAR, light detection and ranging), time-of-flight (TOF) cameras, and ultrasonic sound sensors. All of them are embedded, which means that the sensors operate independently and have an integrated processing unit. This is advantageous because the processing power in the mentioned applications is limited and they are computationally intensive anyway. Other benefits of embedded systems are low power consumption and a small form factor. Furthermore, embedded systems are fully customizable by the developer and can be adapted to the specific application in an optimal way. A promising alternative to the mentioned sensors is stereo vision. Classic stereo vision uses a stereo camera setup, which is built up of two cameras (stereo camera head), mounted in parallel and separated by the baseline. It captures a synchronized stereo pair consisting of the left camera’s image and the right camera’s image. The main challenge of stereo vision is the reconstruction of 3D information of a scene captured from two different points of view. Each visible scene point is projected on the image planes of the cameras. Pixels which represent the same scene points on different image planes correspond to each other. These correspondences can then be used to determine the three dimensional position of the projected scene point in a defined coordinate system. In more detail, the horizontal displacement, called the disparity, is inversely proportional to the scene point’s depth. With this information and the camera’s intrinsic parameters (principal point and focal length), the 3D position can be reconstructed. Fig. 1 shows a typical stereo camera setup. The projections of scene point P are pl and pr. Once the correspondences are found, the disparity is calculated with --- paper_title: Dynamic stereo vision system for real-time tracking paper_content: Biologically-inspired dynamic vision sensors were introduced in 2002; they asynchronously detect significant relative light intensity changes in a scene and output them in the form of an Address-Event representation. These vision sensors capture dynamical discontinuities on-chip for a reduced data volume compared to that from intensity images. Therefore, they support detection, segmentation and tracking of moving objects in the Address-Event space by exploiting the generated events, as a reaction to intensity changes, resulting from the scene dynamics. Object tracking has been previously demonstrated and reported in scientific publications using monocular dynamic vision sensors. This paper contributes by presenting and demonstrating a tracking algorithm using the 3D sensing technology based on the stereo dynamic vision sensor. This system is capable of detecting and tracking persons within a 4m range at an effective refresh rate of the depth map of up to 200 per second.
The 3D system is evaluated for people tracking and the tests showed that up to 60k Address-Events/s can be processed for real-time tracking. --- paper_title: Event-based stereo matching approaches for frameless address event stereo data paper_content: In this paper we present different approaches of 3D stereo matching for bio-inspired image sensors. In contrast to conventional digital cameras, this image sensor, called Silicon Retina, delivers asynchronous events instead of synchronous intensity or color images. The events represent either an increase (on-event) or a decrease (off-event) of a pixel's intensity. The sensor can provide events with a time resolution of up to 1ms and it operates in a dynamic range of up to 120dB. In this work we use two silicon retina cameras as a stereo sensor setup for 3D reconstruction of the observed scene, as already known from conventional cameras. The polarity, the timestamp, and a history of the events are used for stereo matching. Due to the different information content and data type of the events, in comparison to conventional pixels, standard stereo matching approaches cannot directly be used. Thus, we developed an area-based, an event-image-based, and a time-based approach and evaluated the results achieving promising results for stereo matching based on events. --- paper_title: EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time paper_content: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras. --- paper_title: EMVS: Event-based Multi-View Stereo paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. 
Our EMVS solution elegantly exploits two inherent properties of an event camera: (i) its ability to respond to scene edges—which naturally provide semidense geometric information without any pre-processing operation—and (ii) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semidense depth maps. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU. --- paper_title: MC3D: Motion Contrast 3D Scanning paper_content: Structured light 3D scanning systems are fundamentally constrained by limited sensor bandwidth and light source power, hindering their performance in real-world applications where depth information is essential, such as industrial automation, autonomous transportation, robotic surgery, and entertainment. We present a novel structured light technique called Motion Contrast 3D scanning (MC3D) that maximizes bandwidth and light source power to avoid performance trade-offs. The technique utilizes motion contrast cameras that sense temporal gradients asynchronously, i.e., independently for each pixel, a property that minimizes redundant sampling. This allows laser scanning resolution with single-shot speed, even in the presence of strong ambient illumination, significant inter-reflections, and highly reflective surfaces. The proposed approach will allow 3D vision systems to be deployed in challenging and hitherto inaccessible real-world scenarios requiring high performance using limited power and bandwidth. --- paper_title: Simultaneous localization and mapping for event-based vision systems paper_content: We propose a novel method for vision based simultaneous localization and mapping (vSLAM) using a biologically inspired vision sensor that mimics the human retina. The sensor consists of a 128x128 array of asynchronously operating pixels, which independently emit events upon a temporal illumination change. Such a representation generates small amounts of data with high temporal precision; however, most classic computer vision algorithms need to be reworked as they require full RGB(-D) images at fixed frame rates. Our presented vSLAM algorithm operates on individual pixel events and generates high-quality 2D environmental maps with precise robot localizations. We evaluate our method with a state-of-the-art marker-based external tracking system and demonstrate real-time performance on standard computing hardware. --- paper_title: Event-based 3D SLAM with a depth-augmented dynamic vision sensor paper_content: We present the D-eDVS, a combined event-based 3D sensor, and a novel event-based full-3D simultaneous localization and mapping algorithm which works exclusively with the sparse stream of visual data provided by the D-eDVS. The D-eDVS is a combination of the established PrimeSense RGB-D sensor and a biologically inspired embedded dynamic vision sensor. Dynamic vision sensors only react to dynamic contrast changes and output data in the form of a sparse stream of events which represent individual pixel locations. We demonstrate how an event-based dynamic vision sensor can be fused with a classic frame-based RGB-D sensor to produce a sparse stream of depth-augmented 3D points.
The advantages of a sparse, event-based stream are a much smaller amount of generated data, thus more efficient resource usage, and a continuous representation of motion allowing lag-free tracking. Our event-based SLAM algorithm is highly efficient and runs 20 times faster than realtime, provides localization updates at several hundred Hertz, and produces excellent results. We compare our method against ground truth from an external tracking system and two state-of-the-art algorithms on a new dataset which we release in combination with this paper. --- paper_title: The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception paper_content: Event-based cameras are a new passive sensing modality with a number of benefits over traditional cameras, including extremely low latency, asynchronous data acquisition, high dynamic range, and very low power consumption. There has been a lot of recent interest and development in applying algorithms to use the events to perform a variety of three-dimensional perception tasks, such as feature tracking, visual odometry, and stereo depth estimation. However, the field currently lacks the wealth of labeled data that exists for traditional cameras to be used for both testing and development. In this letter, we present a large dataset with a synchronized stereo pair event based camera system, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments. From each camera, we provide the event stream, grayscale images, and inertial measurement unit (IMU) readings. In addition, we utilize a combination of IMU, a rigidly mounted lidar system, indoor and outdoor motion capture, and GPS to provide accurate pose and depth images for each camera at up to 100 Hz. For comparison, we also provide synchronized grayscale images and IMU readings from a frame-based stereo camera system. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: An Active Approach to Solving the Stereo Matching Problem using Event-Based Sensors paper_content: The problem of inferring distances from a visual sensor to objects in a scene — referred to as depth estimation — can be solved in various ways. Among those, stereo vision is a method in which two sensors observe the same scene from different viewpoints. To recover the three-dimensional coordinates of a point, its two projections — one in each view — can be used for triangulation. However, the pair of points in the two views that correspond to each other has to be found first. This is known as stereo-matching and is usually a computationally expensive operation.
Traditionally, this is performed by describing a point in the first view with some information from its surroundings, e.g. in a feature vector, and then searching for a match with a point described in a similar way in the other view. In this work, we propose a simple idea that alleviates this stereo-matching problem using an active component: a mirror-galvanometer driven laser. The laser beam is deflected by actuating two mirrors, thus creating a sequence of "light spots" in the scene. At these spots, contrast changes quickly. We capture those contrast changes by two Dynamic Vision Sensors (DVS). The high time-resolution of these sensors enables the detection of the laser-induced events in time and their matching using lightweight computation. This method enables event-based depth estimation at a high speed, low computational cost, and without exact sensor synchronization. --- paper_title: Low-latency event-based visual odometry paper_content: The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline of a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide the grayscale value but only changes in the luminance; and because the output is composed of a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera to provide the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion. --- paper_title: Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras. --- paper_title: Simultaneous mosaicing and tracking with an event camera paper_content:
An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering. --- paper_title: Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor paper_content: Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm. --- paper_title: EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation.
We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU. --- paper_title: Low-latency visual odometry using event-based feature tracks paper_content: New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional camera and an event-based sensor in the same pixel array. These sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. In this paper, we present a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks. Features are first detected in the grayscale frames and then tracked asynchronously using the stream of events. The features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping. We show that our method successfully tracks the 6-DOF motion of the sensor in natural scenes. This is the first work on event-based visual odometry with the DAVIS sensor using feature tracks. --- paper_title: Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera paper_content: We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data. --- paper_title: Interacting maps for fast visual interpretation paper_content: Biological systems process visual input using a distributed representation, with different areas encoding different aspects of the visual interpretation. While current engineering habits tempt us to think of this processing in terms of a pipelined sequence of filters and other feed-forward processing stages, cortical anatomy suggests quite a different architecture, using strong recurrent connectivity between visual areas. Here we design a network to interpret input from a neuromorphic sensor by means of recurrently interconnected areas, each of which encodes a different aspect of the visual interpretation, such as light intensity or optic flow. As each area of the network tries to be consistent with the information in neighboring areas, the visual interpretation converges towards global mutual consistency. 
Rather than applying input in a traditional feed-forward manner, the sensory input is only used to weakly influence the information flowing both ways through the middle of the network. Even with this seemingly weak use of input, this network of interacting maps is able to maintain its interpretation of the visual scene in real time, proving the viability of this interacting map approach to computation. --- paper_title: Real-time panoramic tracking for event cameras paper_content: Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset [18] and self-recorded sequences. --- paper_title: Event-based, 6-DOF pose tracking for high-speed maneuvers paper_content: In the last few years, we have witnessed impressive demonstrations of aggressive flights and acrobatics using quadrotors. However, those robots are actually blind. They do not see by themselves, but through the “eyes” of an external motion capture system. Flight maneuvers using onboard sensors are still slow compared to those attainable with motion capture systems. At the current state, the agility of a robot is limited by the latency of its perception pipeline. To obtain more agile robots, we need to use faster sensors. In this paper, we present the first onboard perception system for 6-DOF localization during high-speed maneuvers using a Dynamic Vision Sensor (DVS). Unlike a standard CMOS camera, a DVS does not wastefully send full image frames at a fixed frame rate. Conversely, similar to the human eye, it only transmits pixel-level brightness changes at the time they occur with microsecond resolution, thus, offering the possibility to create a perception pipeline whose latency is negligible compared to the dynamics of the robot. We exploit these characteristics to estimate the pose of a quadrotor with respect to a known pattern during high-speed maneuvers, such as flips, with rotational speeds up to 1,200°/s. Additionally, we provide a versatile method to capture ground-truth data using a DVS. --- paper_title: EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time paper_content: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes.
To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras. --- paper_title: Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios paper_content: Event cameras are bioinspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output only little information when the amount of motion is limited, such as in the case of almost still motion. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good lighting scenarios), but they fail severely in case of fast motions, or difficult lighting such as high dynamic range or low light scenes. In this letter, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing in a tightly coupled manner events, standard frames, and inertial measurements. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high dynamic range scenes. Videos of the experiments: http://rpg.ifi.uzh.ch/ultimateslam.html. --- paper_title: Simultaneous localization and mapping for event-based vision systems paper_content: We propose a novel method for vision based simultaneous localization and mapping (vSLAM) using a biologically inspired vision sensor that mimics the human retina. The sensor consists of a 128x128 array of asynchronously operating pixels, which independently emit events upon a temporal illumination change. Such a representation generates small amounts of data with high temporal precision; however, most classic computer vision algorithms need to be reworked as they require full RGB(-D) images at fixed frame rates. Our presented vSLAM algorithm operates on individual pixel events and generates high-quality 2D environmental maps with precise robot localizations. We evaluate our method with a state-of-the-art marker-based external tracking system and demonstrate real-time performance on standard computing hardware.
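Several of the trackers above (for example EVO's image-to-model alignment and the frame-like views used when fusing events with images and IMU data) start from the same basic building block: slicing the asynchronous event stream over a short time window and accumulating it into a 2-D event image. The sketch below only illustrates that step and is not the implementation of any of the cited systems; it assumes events are already available as NumPy arrays of pixel coordinates x, y, timestamps t and polarities p.

```python
import numpy as np

def event_frame(x, y, t, p, t0, t1, height, width):
    """Accumulate polarity-signed events with timestamps in [t0, t1) into a 2-D image."""
    m = (t >= t0) & (t < t1)
    frame = np.zeros((height, width), dtype=np.float32)
    # ON events add +1, OFF events add -1 at their pixel location.
    np.add.at(frame, (y[m], x[m]), np.where(p[m] > 0, 1.0, -1.0))
    return frame

# Toy usage with random events on a 180x240 array (a DAVIS240-like resolution).
rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 240, n)
y = rng.integers(0, 180, n)
t = np.sort(rng.uniform(0.0, 0.01, n))
p = rng.choice([-1, 1], n)
frame = event_frame(x, y, t, p, 0.0, 0.005, height=180, width=240)
```

Choosing the window length (or, alternatively, a fixed number of events per slice) trades off latency against the density of the resulting image.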
--- paper_title: Event-based 3D SLAM with a depth-augmented dynamic vision sensor paper_content: We present the D-eDVS, a combined event-based 3D sensor, and a novel event-based full-3D simultaneous localization and mapping algorithm which works exclusively with the sparse stream of visual data provided by the D-eDVS. The D-eDVS is a combination of the established PrimeSense RGB-D sensor and a biologically inspired embedded dynamic vision sensor. Dynamic vision sensors only react to dynamic contrast changes and output data in the form of a sparse stream of events which represent individual pixel locations. We demonstrate how an event-based dynamic vision sensor can be fused with a classic frame-based RGB-D sensor to produce a sparse stream of depth-augmented 3D points. The advantages of a sparse, event-based stream are a much smaller amount of generated data, thus more efficient resource usage, and a continuous representation of motion allowing lag-free tracking. Our event-based SLAM algorithm is highly efficient and runs 20 times faster than real time, provides localization updates at several hundred Hertz, and produces excellent results. We compare our method against ground truth from an external tracking system and two state-of-the-art algorithms on a new dataset which we release in combination with this paper. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases.
The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s. --- paper_title: Low-latency event-based visual odometry paper_content: The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline of a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide the grayscale value but only changes in the luminance; and because the output is composed of a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera to provide the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion. --- paper_title: Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras. --- paper_title: Simultaneous mosaicing and tracking with an event camera paper_content: An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change.
By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering. --- paper_title: Continuous-Time Trajectory Estimation for Event-based Vision Sensors paper_content: Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), do not output a sequence of video frames like standard cameras, but a stream of asynchronous events. An event is triggered when a pixel detects a change of brightness in the scene. An event contains the location, sign, and precise timestamp of the change. The high dynamic range and temporal resolution of the DVS, which is in the order of micro-seconds, make this a very promising sensor for high-speed applications, such as robotics and wearable computing. However, due to the fundamentally different structure of the sensor's output, new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. In this paper, we address ego-motion estimation for an event-based vision sensor using a continuous-time framework to directly integrate the information conveyed by the sensor. The DVS pose trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines and it is optimized according to the observed events. We evaluate our method using datasets acquired from sensor-in-the-loop simulations and onboard a quadrotor performing flips. The results are compared to the ground truth, showing the good performance of the proposed technique. --- paper_title: EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves.
Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera paper_content: We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data. --- paper_title: Event-based particle filtering for robot self-localization paper_content: We propose a novel algorithm for robot self-localization using an embedded event-based sensor. This sensor produces a stream of events at microsecond time resolution which only represents pixel-level illumination changes in a scene, as e.g. caused by perceived motion. This is in contrast to classical image sensors, which wastefully transmit redundant information at a much lower frame rate. Our method adapts the commonly used Condensation Particle Filter Tracker to such event-based sensors. It works directly with individual, highly ambiguous pixel-events and does not employ event integration over time. The lack of complete discrete sensory measurements is addressed by applying an exponential decay model for hypotheses likelihood computation. The proposed algorithm demonstrates robust performance at low computation requirements, making it suitable for implementation in embedded hardware on small autonomous robots.
We evaluate our algorithm in a simulation environment and with experimentally recorded data. --- paper_title: Asynchronous, Photometric Feature Tracking using Events and Frames paper_content: We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes. --- paper_title: Interacting maps for fast visual interpretation paper_content: Biological systems process visual input using a distributed representation, with different areas encoding different aspects of the visual interpretation. While current engineering habits tempt us to think of this processing in terms of a pipelined sequence of filters and other feed-forward processing stages, cortical anatomy suggests quite a different architecture, using strong recurrent connectivity between visual areas. Here we design a network to interpret input from a neuromorphic sensor by means of recurrently interconnected areas, each of which encodes a different aspect of the visual interpretation, such as light intensity or optic flow. As each area of the network tries to be consistent with the information in neighboring areas, the visual interpretation converges towards global mutual consistency. Rather than applying input in a traditional feed-forward manner, the sensory input is only used to weakly influence the information flowing both ways through the middle of the network. Even with this seemingly weak use of input, this network of interacting maps is able to maintain its interpretation of the visual scene in real time, proving the viability of this interacting map approach to computation. --- paper_title: Real-time panoramic tracking for event cameras paper_content: Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry.
We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset [18] and self-recorded sequences. --- paper_title: Event-based, 6-DOF pose tracking for high-speed maneuvers paper_content: In the last few years, we have witnessed impressive demonstrations of aggressive flights and acrobatics using quadrotors. However, those robots are actually blind. They do not see by themselves, but through the “eyes” of an external motion capture system. Flight maneuvers using onboard sensors are still slow compared to those attainable with motion capture systems. At the current state, the agility of a robot is limited by the latency of its perception pipeline. To obtain more agile robots, we need to use faster sensors. In this paper, we present the first onboard perception system for 6-DOF localization during high-speed maneuvers using a Dynamic Vision Sensor (DVS). Unlike a standard CMOS camera, a DVS does not wastefully send full image frames at a fixed frame rate. Conversely, similar to the human eye, it only transmits pixel-level brightness changes at the time they occur with microsecond resolution, thus, offering the possibility to create a perception pipeline whose latency is negligible compared to the dynamics of the robot. We exploit these characteristics to estimate the pose of a quadrotor with respect to a known pattern during high-speed maneuvers, such as flips, with rotational speeds up to 1,200°/s. Additionally, we provide a versatile method to capture ground-truth data using a DVS. --- paper_title: EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time paper_content: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras. --- paper_title: A Spline-Based Trajectory Representation for Sensor Fusion and Rolling Shutter Cameras paper_content: The use of multiple sensors for ego-motion estimation is an approach often used to provide more accurate and robust results. However, when representing ego-motion as a discrete series of poses, fusing information of unsynchronized sensors is not straightforward. The framework described in this paper aims to provide a unified solution for solving ego-motion estimation problems involving high-rate unsynchronized devices.
Instead of a discrete-time pose representation, we present a continuous-time formulation that makes use of cumulative cubic B-Splines parameterized in the Lie Algebra of the group SE(3). This trajectory representation has several advantages for sensor fusion: (1) it has local control, which enables sliding window implementations; (2) it is C^2 continuous, allowing predictions of inertial measurements; (3) it closely matches torque-minimal motions; (4) it has no singularities when representing rotations; (5) it easily handles measurements from multiple sensors arriving at different times when timestamps are available; and (6) it deals with rolling shutter cameras naturally. We apply this continuous-time framework to visual-inertial simultaneous localization and mapping and show that it can also be used to calibrate the entire system. --- paper_title: HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition paper_content: This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy. --- paper_title: Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars paper_content: Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras. --- paper_title: Ultimate SLAM?
Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios paper_content: Event cameras are bioinspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output only little information when the amount of motion is limited, such as in the case of almost still motion. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good lighting scenarios), but they fail severely in case of fast motions, or difficult lighting such as high dynamic range or low light scenes. In this letter, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing in a tightly coupled manner events, standard frames, and inertial measurements. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate, to the best of our knowledge, the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high dynamic range scenes. Videos of the experiments: http://rpg.ifi.uzh.ch/ultimateslam.html. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain.
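The entry above describes feeding an image-based representation of the event stream into a self-supervised network. A common encoding in that spirit stacks per-pixel event counts and most-recent timestamps for each polarity into a multi-channel image; the sketch below illustrates that general idea and is not the authors' input pipeline. It assumes NumPy arrays x, y, t, p for one slice of events.

```python
import numpy as np

def count_timestamp_image(x, y, t, p, height, width):
    """Encode an event slice as four channels: ON/OFF event counts and the most
    recent normalised ON/OFF timestamp at each pixel (a generic count-plus-
    timestamp encoding, not any specific network's code)."""
    img = np.zeros((4, height, width), dtype=np.float32)
    tn = (t - t.min()) / max(t.max() - t.min(), 1e-9)  # timestamps normalised to [0, 1]
    on = p > 0
    np.add.at(img[0], (y[on], x[on]), 1.0)             # ON event counts
    np.add.at(img[1], (y[~on], x[~on]), 1.0)           # OFF event counts
    np.maximum.at(img[2], (y[on], x[on]), tn[on])      # latest ON timestamp per pixel
    np.maximum.at(img[3], (y[~on], x[~on]), tn[~on])   # latest OFF timestamp per pixel
    return img
```

Such an image can then be passed to any convolutional architecture; the count channels capture edge density while the timestamp channels retain a coarse notion of motion direction.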
--- paper_title: Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization paper_content: We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call "dynamic marginalization". This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art. --- paper_title: Event-Based Visual Inertial Odometry paper_content: Event-based cameras provide a new visual sensing model by detecting changes in image intensity asynchronously across all pixels on the camera. By providing these events at extremely high rates (up to 1MHz), they allow for sensing in both high speed and high dynamic range situations where traditional cameras may fail. In this paper, we present the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit, to provide accurate metric tracking of a camera's full 6-DOF pose. Our algorithm is asynchronous, and provides measurement updates at a rate proportional to the camera velocity. The algorithm selects features in the image plane, and tracks spatiotemporal windows around these features within the event stream. An Extended Kalman Filter with a structureless measurement model then fuses the feature tracks with the output of the IMU. The camera poses from the filter are then used to initialize the next step of the tracker and reject failed tracks. We show that our method successfully tracks camera motion on the Event-Camera Dataset in a number of challenging situations. --- paper_title: Machine learning for high-speed corner detection paper_content: Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate.
Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations [1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: Event-based feature tracking with probabilistic data association paper_content: Asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking. The few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models. Such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks. In this paper, we introduce a novel soft data association modeled with probabilities. The association probabilities are computed in an intertwined EM scheme with the optical flow computation that maximizes the expectation (marginalization) over all associations. In addition, to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence. The computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow. We show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras. --- paper_title: Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time.
To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s. --- paper_title: Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion paper_content: In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes. --- paper_title: On-Manifold Preintegration for Real-Time Visual-Inertial Odometry paper_content: Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.
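The preintegration idea summarised above (collapsing the high-rate inertial measurements between two keyframes into a single relative-motion constraint) can be illustrated with a deliberately simplified sketch. The code below performs plain Euler integration of hypothetical gyroscope and accelerometer samples and omits everything that makes the paper's formulation rigorous: bias and noise modelling, gravity handling, and the manifold-aware Jacobians.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dts):
    """Euler-integrate gyro (rad/s) and accelerometer (m/s^2) samples between two
    keyframes, returning relative rotation, velocity and position increments in the
    first keyframe's body frame. Bias estimation, noise propagation, gravity
    compensation and analytic Jacobians are deliberately omitted in this sketch."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a, dt in zip(gyro, accel, dts):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```

In a full pipeline these increments become a single factor between consecutive keyframes, so the optimizer never has to revisit the raw IMU samples.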
--- paper_title: A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation paper_content: In this paper, we present an extended Kalman filter (EKF)-based algorithm for real-time vision-aided inertial navigation. The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature position in the state vector of the EKF and is optimal, up to linearization errors. The vision-aided inertial navigation algorithm we propose has computational complexity only linear in the number of features, and is capable of high-precision pose estimation in large-scale real-world environments. The performance of the algorithm is demonstrated in extensive experimental results, involving a camera/IMU system localizing within an urban area. --- paper_title: An iterative image registration technique with an application to stereo vision paper_content: Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification paper_content: Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind their frame-based counterparts. Two main reasons for this performance gap are: 1.
The lack of effective low-level representations and architectures for event-based object classification and 2. The absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation. --- paper_title: The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM paper_content: New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. --- paper_title: EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time paper_content: We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras. 
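The contrast-maximization entries earlier in this list (angular-velocity estimation and the unifying framework) share one core idea: warp the events according to a candidate motion, build an image of the warped events, and score the candidate by the sharpness (for example the variance) of that image. The toy sketch below applies the idea to a constant image-plane velocity with a brute-force grid search; the cited methods use rotational or more general warps with camera intrinsics and gradient-based optimisation, so this is only an illustration of the objective.

```python
import numpy as np

def contrast(v, x, y, t, height, width):
    """Warp events to the time of the first event with a constant image-plane
    velocity v (pixels/s) and return the variance of the image of warped events."""
    dt = t - t[0]
    xw = np.round(x - v[0] * dt).astype(int)
    yw = np.round(y - v[1] * dt).astype(int)
    ok = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    img = np.zeros((height, width), dtype=np.float32)
    np.add.at(img, (yw[ok], xw[ok]), 1.0)
    return img.var()

def estimate_flow(x, y, t, height, width, v_max=200.0, steps=21):
    """Coarse grid search for the velocity that maximises contrast; practical
    systems replace this with gradient-based optimisation and richer warp models."""
    best_v, best_c = (0.0, 0.0), -np.inf
    for vx in np.linspace(-v_max, v_max, steps):
        for vy in np.linspace(-v_max, v_max, steps):
            c = contrast((vx, vy), x, y, t, height, width)
            if c > best_c:
                best_v, best_c = (vx, vy), c
    return best_v
```

The correctly warped events pile up along scene edges, which is exactly why the warped-event image also serves as the motion-corrected, edge-like map mentioned in the framework's abstract.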
--- paper_title: Simultaneous Optical Flow and Intensity Estimation from an Event Camera paper_content: Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms may not at all be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur. --- paper_title: Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor paper_content: Dynamic and active pixel vision sensors (DAVISs) are a new type of sensor that combine a frame-based intensity readout with an event-based temporal contrast readout. This paper demonstrates that these sensors inherently perform high-speed, video compression in each pixel by describing the first decompression algorithm for this data. The algorithm performs an online optimization of the event decoding in real time. Example scenes were recorded by the 240×180 pixel sensor at sub-Hz frame rates and successfully decompressed yielding an equivalent frame rate of 2kHz. A quantitative analysis of the compression quality resulted in an average pixel error of 0.5DN intensity resolution for non-saturating stimuli. The system exhibits an adaptive compression ratio which depends on the activity in a scene; for stationary scenes it can go up to 1862. The low data rate and power consumption of the proposed video compression system make it suitable for distributed sensor networks. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. 
The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain. --- paper_title: Robust visual tracking with a freely-moving event camera paper_content: Event cameras are a new technology that can enable low-latency, fast visual sensing in dynamic environments towards faster robotic vision as they respond only to changes in the scene and have a very high temporal resolution (< 1μs). Moving targets produce dense spatio-temporal streams of events that do not suffer from information loss “between frames”, which can occur when traditional cameras are used to track fast-moving targets. Event-based tracking algorithms need to be able to follow the target position within the spatio-temporal data, while rejecting clutter events that occur as a robot moves in a typical office setting. We introduce a particle filter with the aim to be robust to temporal variation that occurs as the camera and the target move with different relative velocities, which can lead to a loss in visual information and missed detections. The proposed system provides a more persistent tracking compared to prior state-of-the-art, especially when the robot is actively following a target with its gaze. Experiments are performed on the iCub humanoid robot performing ball tracking and gaze following. --- paper_title: Unsupervised Event-Based Learning of Optical Flow, Depth, and Egomotion paper_content: In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes. --- paper_title: Unsupervised Learning of Dense Optical Flow and Depth from Sparse Event Data paper_content: In this work we present unsupervised learning of depth and motion from sparse event data generated by a Dynamic Vision Sensor (DVS). To tackle this low level vision task, we use a novel encoder-decoder neural network architecture that aggregates multi-level features and addresses the problem at multiple resolutions. A feature decorrelation technique is introduced to improve the training of the network. A non-local sparse smoothness constraint is used to alleviate the challenge of data sparsity. Our work is the first that generates dense depth and optical flow information from sparse event data. 
Our results show significant improvements upon previous works that used deep learning for flow estimation from both images and events. --- paper_title: Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that, because of the technological advantages of the event camera, our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras. --- paper_title: Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation paper_content: Event cameras or neuromorphic cameras mimic the human perception system as they measure the per-pixel intensity change rather than the actual intensity level. In contrast to traditional cameras, such cameras capture new information about the scene at MHz frequency in the form of sparse events. The high temporal resolution comes at the cost of losing the familiar per-pixel intensity information. In this work we propose a variational model that accurately models the behaviour of event cameras, enabling reconstruction of intensity images with arbitrary frame rate in real-time. Our method is formulated on a per-event-basis, where we explicitly incorporate information about the asynchronous nature of events via an event manifold induced by the relative timestamps of events. In our experiments we verify that solving the variational model on the manifold produces high-quality images without explicitly estimating optical flow. --- paper_title: Simultaneous mosaicing and tracking with an event camera paper_content: An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range.
Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera paper_content: We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data. --- paper_title: Events-To-Video: Bringing Modern Computer Vision to Event Cameras paper_content: Event cameras are novel sensors that report brightness changes in the form of asynchronous "events" instead of intensity frames. They have significant advantages over conventional cameras: high temporal resolution, high dynamic range, and no motion blur. Since the output of event cameras is fundamentally different from conventional cameras, it is commonly accepted that they require the development of specialized algorithms to accommodate the particular nature of events. In this work, we take a different view and propose to apply existing, mature computer vision techniques to videos reconstructed from event data. We propose a novel, recurrent neural network to reconstruct videos from a stream of events and train it on a large amount of simulated event data. Our experiments show that our approach surpasses state-of-the-art reconstruction methods by a large margin (> 20%) in terms of image quality. 
We further apply off-the-shelf computer vision algorithms to videos reconstructed from event data on tasks such as object classification and visual-inertial odometry, and show that this strategy consistently outperforms algorithms that were specifically designed for event data. We believe that our approach opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks. --- paper_title: Asynchronous, Photometric Feature Tracking using Events and Frames paper_content: We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes. --- paper_title: Interacting maps for fast visual interpretation paper_content: Biological systems process visual input using a distributed representation, with different areas encoding different aspects of the visual interpretation. While current engineering habits tempt us to think of this processing in terms of a pipelined sequence of filters and other feed-forward processing stages, cortical anatomy suggests quite a different architecture, using strong recurrent connectivity between visual areas. Here we design a network to interpret input from a neuromorphic sensor by means of recurrently interconnected areas, each of which encodes a different aspect of the visual interpretation, such as light intensity or optic flow. As each area of the network tries to be consistent with the information in neighboring areas, the visual interpretation converges towards global mutual consistency. Rather than applying input in a traditional feed-forward manner, the sensory input is only used to weakly influence the information flowing both ways through the middle of the network. Even with this seemingly weak use of input, this network of interacting maps is able to maintain its interpretation of the visual scene in real time, proving the viability of this interacting map approach to computation. --- paper_title: Direct face detection and video reconstruction from event cameras paper_content: Event cameras are emerging as a new class of cameras, to potentially rival conventional CMOS cameras, because of their high speed operation and low power consumption.
Pixels in an event camera operate in parallel and fire asynchronous spikes when individual pixels encounter a change in intensity that is greater than a pre-determined threshold. Such event-based cameras have an immense potential in battery-operated or always-on application scenarios, owing to their low power consumption. These event-based cameras can be used for direct detection from event streams, and we demonstrate this potential using face detection as an example application. We first propose and develop a patch-based model for the event streams acquired from such cameras. We demonstrate the utility and robustness of the patch-based model for event-based video reconstruction and event-based direct face detection. We are able to reconstruct images and videos at over 2,000 fps from the acquired event streams. In addition, we demonstrate the first direct face detection from event streams, highlighting the potential of these event-based cameras for power-efficient vision applications. --- paper_title: Continuous-time Intensity Estimation Using Event Cameras paper_content: Event cameras provide asynchronous, data-driven measurements of local temporal contrast over a large dynamic range with extremely high temporal resolution. Conventional cameras capture low-frequency reference intensity information. These two sensor modalities provide complementary information. We propose a computationally efficient, asynchronous filter that continuously fuses image frames and events into a single high-temporal-resolution, high-dynamic-range image state. In absence of conventional image frames, the filter can be run on events only. We present experimental results on high-speed, high-dynamic-range sequences, as well as on new ground truth datasets we generate to demonstrate the proposed algorithm outperforms existing state-of-the-art methods. --- paper_title: HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification paper_content: Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind their frame-based counterparts. Two main reasons for this performance gap are: 1. The lack of effective low-level representations and architectures for event-based object classification and 2. The absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation. 
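The HATS entry above, like HOTS later in this section, builds its descriptor from time surfaces: per-pixel maps of the most recent event timestamps, read out with an exponential decay around each incoming event. The sketch below is a minimal, single-polarity version of that readout only; the decay constant, the neighborhood radius, and the omission of the per-cell averaging and histogramming used by HATS are simplifying assumptions.

```python
import numpy as np

def update_and_read_time_surface(last_ts, event, tau=0.05, radius=3):
    """Update the per-pixel map of most recent timestamps with one event and
    return the exponentially decayed time surface of its neighborhood.
    `last_ts` is an HxW float array initialised to -np.inf; tau and radius
    are illustrative assumptions, not the parameters of HOTS or HATS."""
    x, y, t, _polarity = event
    last_ts[y, x] = t
    h, w = last_ts.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    # Pixels that fired recently decay towards 1; stale or silent pixels towards 0.
    return np.exp((last_ts[y0:y1, x0:x1] - t) / tau)

# Toy usage: a single event on a 64x48 sensor.
surface_state = np.full((48, 64), -np.inf)
surface = update_and_read_time_surface(surface_state, (10, 20, 0.004, +1))
```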
--- paper_title: The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM paper_content: New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. --- paper_title: Event-based Moving Object Detection and Tracking paper_content: Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), are ideally suited for real-time motion analysis. The unique properties encompassed in the readings of such sensors provide high temporal resolution, superior sensitivity to light and low latency. These properties provide the grounds to estimate motion efficiently and reliably in the most sophisticated scenarios, but these advantages come at a price - modern event-based vision sensors have extremely low resolution, produce a lot of noise and require the development of novel algorithms to handle the asynchronous event stream. This paper presents a new, efficient approach to object tracking with asynchronous cameras. We present a novel event stream representation which enables us to utilize information about the dynamic (temporal)component of the event stream. The 3D geometry of the event stream is approximated with a parametric model to motion-compensate for the camera (without feature tracking or explicit optical flow computation), and then moving objects that don't conform to the model are detected in an iterative process. We demonstrate our framework on the task of independent motion detection and tracking, where we use the temporal model inconsistencies to locate differently moving objects in challenging situations of very fast motion. --- paper_title: Independent motion detection with event-driven cameras paper_content: Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both in the order of micro-seconds). As such, they have great potential for fast and low power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. 
However, cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the predicted corner velocities from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~ 90 % and show that the method is robust to changes in speed of both the head and the target. --- paper_title: Robust visual tracking with a freely-moving event camera paper_content: Event cameras are a new technology that can enable low-latency, fast visual sensing in dynamic environments towards faster robotic vision as they respond only to changes in the scene and have a very high temporal resolution (< 1μs). Moving targets produce dense spatio-temporal streams of events that do not suffer from information loss “between frames”, which can occur when traditional cameras are used to track fast-moving targets. Event-based tracking algorithms need to be able to follow the target position within the spatio-temporal data, while rejecting clutter events that occur as a robot moves in a typical office setting. We introduce a particle filter with the aim to be robust to temporal variation that occurs as the camera and the target move with different relative velocities, which can lead to a loss in visual information and missed detections. The proposed system provides a more persistent tracking compared to prior state-of-the-art, especially when the robot is actively following a target with its gaze. Experiments are performed on the iCub humanoid robot performing ball tracking and gaze following. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: Event-driven ball detection and gaze fixation in clutter paper_content: The fast temporal-dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control, for example a robot catching a ball. When the event-driven iCub humanoid robot grasps an object its head and torso move, inducing camera motion, and tracked objects become no longer trivially segmented amongst the mass of background clutter. Current event-based tracking algorithms have mostly considered stationary cameras that have clean event-streams with minimal clutter. 
This paper introduces novel methods to extend the Hough-based circle detection algorithm using optical flow information that is readily extracted from the spatio-temporal event space. Results indicate the proposed directed-Hough algorithm is more robust to other moving objects and the background event-clutter. Finally, we demonstrate successful on-line robot control and gaze following on the iCub robot. --- paper_title: EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras paper_content: We present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset - EV-IMO - which includes accurate pixel-wise motion masks, egomotion and ground truth depth. Our approach is based on an efficient implementation of the SfM learning pipeline using a low parameter neural network architecture on event data. In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects. We also train a shallow network with just 40k parameters, which is able to compute depth and egomotion. Our EV-IMO dataset features 32 minutes of indoor recording with up to 3 fast moving objects simultaneously in the camera field of view. The objects and the camera are tracked by the VICON motion capture system. By 3D scanning the room and the objects, accurate depth map ground truth and pixel-wise object masks are obtained, which are reliable even in poor lighting conditions and during fast motion. We then train and evaluate our learning pipeline on EV-IMO and demonstrate that our approach far surpasses its rivals and is well suited for scene constrained robotics applications. --- paper_title: Event-Based Motion Segmentation by Motion Compensation paper_content: In contrast to traditional cameras, whose pixels have a common exposure time, event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes (called "events"), with microsecond resolution. Since events are caused by the apparent motion of objects, event-based cameras sample visual information based on the scene dynamics and are, therefore, a more natural fit than traditional cameras to acquire motion, especially at high speeds, where traditional cameras suffer from motion blur. However, distinguishing between events caused by different moving objects and by the camera's ego-motion is a challenging task. We present the first per-event segmentation method for splitting a scene into independently moving objects. Our method jointly estimates the event-object associations (i.e., segmentation) and the motion parameters of the objects (or the background) by maximization of an objective function, which builds upon recent results on event-based motion-compensation. We provide a thorough evaluation of our method on a public dataset, outperforming the state-of-the-art by as much as 10%. We also show the first quantitative evaluation of a segmentation algorithm for event cameras, yielding around 90% accuracy at 4 pixels relative displacement. --- paper_title: Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor paper_content: We present an algorithm (SOFAS) to estimate the optical flow of events generated by a dynamic vision sensor (DVS).
Where traditional cameras produce frames at a fixed rate, DVSs produce asynchronous events in response to intensity changes with a high temporal resolution. Our algorithm uses the fact that events are generated by edges in the scene to not only estimate the optical flow but also to simultaneously segment the image into objects which are travelling at the same velocity. This way it is able to avoid the aperture problem which affects other implementations such as Lucas-Kanade. Finally, we show that SOFAS produces more accurate results than traditional optic flow algorithms. --- paper_title: Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor paper_content: Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most “threatening” ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. --- paper_title: Real-time classification and sensor fusion with a spiking deep belief network paper_content: Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer.
It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. --- paper_title: HFirst: A Temporal Approach to Object Recognition paper_content: This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new more difficult 36 class character recognition task. --- paper_title: Data and Power Efficient Intelligence with Neuromorphic Learning Machines paper_content: The success of deep networks and recent industry involvement in brain-inspired computing is igniting a widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate. This review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world, human-centric tasks. We find that (1) recent work in binary deep networks and approximate gradient descent learning are strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field, block the road to major breakthroughs. We suggest that a neuromorphic learning framework, tuned specifically for the spatial and temporal constraints of the neuromorphic substrate, will help in guiding hardware algorithm co-design and deploying neuromorphic hardware for proactive learning of real-world data. --- paper_title: HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition paper_content: This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture.
Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy. --- paper_title: Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars paper_content: Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras. --- paper_title: Fast sensory motor control based on event-based hybrid neuromorphic-procedural system paper_content: Fast sensory-motor processing is challenging when using traditional frame-based cameras and computers. Here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal. The system consists of a 128×128 retina that asynchronously reports scene reflectance changes, a laptop PC, and a servo motor controller. Components are interconnected by USB. The retina looks down onto the field in front of the goal. Moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal. The ball's position and velocity are used to control the servo motor. Running under Windows XP, the reaction latency is 2.8±0.5 ms at a CPU load of 1 million events per second (Meps), although fast balls only create ~30 keps.
This system demonstrates the advantages of hybrid event-based sensory motor processing --- paper_title: Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets paper_content: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. --- paper_title: Steering a predator robot using a mixed frame/event-driven convolutional neural network paper_content: This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center and non-visible. After off-line training on labeled data, the network is imported on the on-board Summit XL robot which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% (depending on evaluation criteria) are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing. 
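The predator robot entry above drives its network with DVS "frames" that each contain a constant number of ON and OFF events, so the effective sample rate follows scene activity rather than a fixed clock. The sketch below is a minimal version of that slicing strategy, assuming a time-ordered list of (x, y, timestamp, polarity) tuples; the slice size is an illustrative value, not the one used in the cited experiments.

```python
def constant_count_slices(events, events_per_frame=5000):
    """Split a time-ordered event list into consecutive slices containing a
    fixed number of events, so the effective frame rate adapts to activity.
    Trailing events that do not fill a complete slice are ignored here."""
    for start in range(0, len(events) - events_per_frame + 1, events_per_frame):
        chunk = events[start:start + events_per_frame]
        duration = chunk[-1][2] - chunk[0][2]  # wall-clock span covered by this slice
        yield chunk, duration
```

During fast motion each slice spans a shorter interval, which is what gives such systems their data-driven sample rate of roughly 15 Hz to 240 Hz in the cited work.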
--- paper_title: A Low Power, Fully Event-Based Gesture Recognition System paper_content: We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS). The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5% out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain. --- paper_title: Active Perception With Dynamic Vision Sensors. Minimum Saccades With Optimum Recognition paper_content: Vision processing with dynamic vision sensors (DVSs) is becoming increasingly popular. This type of a bio-inspired vision sensor does not record static images. The DVS pixel activity relies on the changes in light intensity. In this paper, we introduce a platform for the object recognition with a DVS in which the sensor is installed on a moving pan-tilt unit in a closed loop with a recognition neural network. This neural network is trained to recognize objects observed by a DVS, while the pan-tilt unit is moved to emulate micro-saccades. 
We show that performing more saccades in different directions can result in having more information about the object, and therefore, more accurate object recognition is possible. However, in high-performance and low-latency platforms, performing additional saccades adds latency and power consumption. Here, we show that the number of saccades can be reduced while keeping the same recognition accuracy by performing intelligent saccadic movements, in a closed action-perception smart loop. We propose an algorithm for smart saccadic movement decisions that can reduce the number of necessary saccades to half, on average, for a predefined accuracy on the N-MNIST dataset. Additionally, we show that by replacing this control algorithm with an artificial neural network that learns to control the saccades, we can also reduce to half the average number of saccades needed for the N-MNIST recognition. --- paper_title: SLAYER: Spike Layer Error Reassignment in Time paper_content: Configuring deep Spiking Neural Networks (SNNs) is an exciting research avenue for low power spike event based computation. However, the spike generation function is non-differentiable and therefore not directly compatible with the standard error backpropagation algorithm. In this paper, we introduce a new general backpropagation mechanism for learning synaptic weights and axonal delays which overcomes the problem of non-differentiability of the spike function and uses a temporal credit assignment policy for backpropagating error to preceding layers. We describe and release a GPU accelerated software implementation of our method which allows training both fully connected and convolutional neural network (CNN) architectures. Using our software, we compare our method against existing SNN based learning approaches and standard ANN to SNN conversion techniques and show that our method achieves state of the art performance for an SNN on the MNIST, NMNIST, DVS Gesture, and TIDIGITS datasets. --- paper_title: CAVIAR: A 45k Neuron, 5M Synapse, 12G Connects/s AER Hardware Sensory–Processing– Learning–Actuating System for High-Speed Visual Object Recognition and Tracking paper_content: This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45 k neurons (spiking cells), up to 5 M synapses, performs 12 G synaptic operations per second, and achieves millisecond object recognition and tracking latencies. --- paper_title: Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas paper_content: We propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor (DVS) silicon retinas with neuromorphic event-driven postprocessing. Compared with conventional vision or 3-D sensors, the use of DVSs, which output asynchronous and sparse events in response to motion, eliminates the need to extract movements from sequences of video frames, and allows significantly faster and more energy-efficient processing. 
In addition, the rate of input events depends on the observed movements, and thus provides an additional cue for solving the gesture spotting problem, i.e., finding the onsets and offsets of gestures. We propose a postprocessing framework based on spiking neural networks that can process the events received from the DVSs in real time, and provides an architecture for future implementation in neuromorphic hardware devices. The motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the DVSs by using leaky integrate-and-fire (LIF) neurons. Adaptive thresholds of the LIF neurons achieve the segmentation of trajectories, which are then translated into discrete and finite feature vectors. The feature vectors are classified with hidden Markov models, using a separate Gaussian mixture model for spotting irrelevant transition gestures. The disparity information from stereovision is used to adapt LIF neuron parameters to achieve recognition invariant of the distance of the user to the sensor, and also helps to filter out movements in the background of the user. Exploiting the high dynamic range of DVSs, furthermore, allows gesture recognition over a 60-dB range of scene illuminance. The system achieves recognition rates well over 90% under a variety of variable conditions with static and dynamic backgrounds with naive users. --- paper_title: Event-driven embodied system for feature extraction and object recognition in robotic applications paper_content: A major challenge in robotic applications is the interaction with a dynamic environment and humans which is typically constrained by the capability of visual sensors and the computational cost of signal processing algorithms. Addressing this problem the paper presents an event-driven based embodied system for feature extraction and object recognition as a novel efficient sensory approach in robotic applications. The system is established for a mobile humanoid robot which provides the infrastructure for interfacing asynchronous vision sensors with the processing unit of the robot. By applying event-feature "mapping" the address event representation of the sensors is enhanced by additional information that can be used for object recognition. The system is presented in the context of an exemplary application in which the robot has to detect and grasp a ball in an arbitrary state of motion. --- paper_title: A Motion-Based Feature for Event-Based Pattern Recognition paper_content: This paper introduces an event-based luminance-free feature from the output of asynchronous event-based neuromorphic retinas. The feature consists in mapping into a matrix, the distribution of the optical flow along the contours of the moving objects in the visual scene. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in pixels' illumination at high temporal resolutions. The optical flow is computed at each event, and is integrated locally or globally in a speed and direction coordinate frame based grid, using speed-tuned temporal kernels. The latter ensures that the resulting feature represents equitably the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and the genericness of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition.
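The motion-based feature described in the last entry pools per-event optical flow into a speed and direction grid. The sketch below is a simplified version of that pooling step only: it assumes flow vectors have already been estimated for each event and omits the speed-tuned temporal kernels and contour selection of the original method; the bin counts and the speed normalisation are illustrative choices.

```python
import numpy as np

def flow_histogram(flow_vectors, n_speed_bins=4, n_dir_bins=8, max_speed=1.0):
    """Pool per-event optical-flow vectors (vx, vy) into a normalised
    speed x direction histogram. Bin counts and max_speed are illustrative."""
    hist = np.zeros((n_speed_bins, n_dir_bins), dtype=np.float32)
    for vx, vy in flow_vectors:
        speed = np.hypot(vx, vy)
        direction = np.arctan2(vy, vx)  # in (-pi, pi]
        s_bin = min(int(speed / max_speed * n_speed_bins), n_speed_bins - 1)
        d_bin = int((direction + np.pi) / (2 * np.pi) * n_dir_bins) % n_dir_bins
        hist[s_bin, d_bin] += 1.0
    return hist / max(hist.sum(), 1.0)  # normalise; empty input yields an all-zero feature
```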
--- paper_title: Effective sensor fusion with event-based sensors and deep network architectures paper_content: The use of spiking neuromorphic sensors with state-of-art deep networks is currently an active area of research. Still relatively unexplored are the pre-processing steps needed to transform spikes from these sensors and the types of network architectures that can produce high-accuracy performance using these sensors. This paper discusses several methods for preprocessing the spiking data from these sensors for use with various deep network architectures. The outputs of these preprocessing methods are evaluated using different networks including a deep fusion network composed of Convolutional Neural Networks and Recurrent Neural Networks, to jointly solve a recognition task using the MNIST (visual) and TIDIGITS (audio) benchmark datasets. With only 1000 visual input spikes from a spiking hardware retina, the classification accuracy of 64.5% achieved by a particular trained fusion network increases to 98.31% when combined with inputs from a spiking hardware cochlea. --- paper_title: A pencil balancing robot using a pair of AER dynamic vision sensors paper_content: Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. The PC updates its estimate of the pencil's location and angle in 3d space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD-controller adjusts X-Y-position and velocity of the table to maintain the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table. The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chop-stick, or rod for many minutes. Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at intervals below millisecond ranges. Controlling at normal image sensor sample rates (e.g. 60 Hz) results in too long latencies for a stable control loop. --- paper_title: Direct face detection and video reconstruction from event cameras paper_content: Event cameras are emerging as a new class of cameras, to potentially rival conventional CMOS cameras, because of their high speed operation and low power consumption. Pixels in an event camera operate in parallel and fire asynchronous spikes when individual pixels encounter a change in intensity that is greater than a pre-determined threshold. Such event-based cameras have an immense potential in battery-operated or always-on application scenarios, owing to their low power consumption. These event-based cameras can be used for direct detection from event streams, and we demonstrate this potential using face detection as an example application. We first propose and develop a patch-based model for the event streams acquired from such cameras. 
We demonstrate the utility and robustness of the patch-based model for event-based video reconstruction and event-based direct face detection. We are able to reconstruct images and videos at over 2,000 fps from the acquired event streams. In addition, we demonstrate the first direct face detection from event streams, highlighting the potential of these event-based cameras for power-efficient vision applications. --- paper_title: HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification paper_content: Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind their frame-based counterparts. Two main reasons for this performance gap are: (1) the lack of effective low-level representations and architectures for event-based object classification, and (2) the absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation. --- paper_title: Efficient high speed signal estimation with neuromorphic vision sensors paper_content: Recently developed neuromorphic vision sensors present a high speed, event-based alternative to conventional vision in robotic systems. We present a method for the design of simple, low computation estimators that take advantage of the remarkable properties of these sensors to provide low latency, high bandwidth sensing for feedback control of fast dynamical systems. It is shown that under certain circumstances a simple transformation of the event stream from such a sensor can allow it to be treated as an asynchronous configuration sensor, with minimal computation required to achieve high speed signal reconstruction. These results are applicable to any robotic control problems requiring high performance visual feedback control with low computation. --- paper_title: Fast sensory motor control based on event-based hybrid neuromorphic-procedural system paper_content: Fast sensory-motor processing is challenging when using traditional frame-based cameras and computers. Here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal. The system consists of a 128×128 retina that asynchronously reports scene reflectance changes, a laptop PC, and a servo motor controller. Components are interconnected by USB. The retina looks down onto the field in front of the goal.
Moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal. The ball's position and velocity are used to control the servo motor. Running under Windows XP, the reaction latency is 2.8 ± 0.5 ms at a CPU load of 1 million events per second (Meps), although fast balls only create ~30 keps. This system demonstrates the advantages of hybrid event-based sensory motor processing. --- paper_title: Implications of rational inattention paper_content: A constraint that actions can depend on observations only through a communication channel with finite Shannon capacity is shown to be able to play a role very similar to that of a signal extraction problem or an adjustment cost in standard control problems. The resulting theory looks enough like familiar dynamic rational expectations theories to suggest that it might be useful and practical, while the implications for policy are different enough to be interesting. --- paper_title: Comparison of Periodic and Event-Based Sampling for Linear State Estimation paper_content: In this paper, the state estimation problem for continuous-time linear systems with two types of sampling is considered. First, the optimal state estimator under periodic sampling is presented. Then the state estimator with event-based updates is designed, i.e., when an event occurs the estimator is updated linearly by using the measurement of output, while between the consecutive event times the estimator is updated by minimum mean-squared error criteria. The average estimation errors under both sampling schemes are compared quantitatively for first and second order systems, respectively. A numerical example is given to compare the effectiveness of the two state estimators. --- paper_title: Efficient neuromorphic optomotor heading regulation paper_content: “Neuromorphic” vision sensors are a recent development in sensing technology. They can be thought of as a camera sensor whose output is a sequence of “retinal events” rather than frames. Events are generated independently by each pixel as they detect a change in the light field. These sensors have low latency, high dynamic range (> 120 dB), and very low power consumption. Therefore, they are well suited for control applications where power is limited yet high performance is necessary. Existing computer vision algorithms that work on frames cannot be adapted to process retinal events from neuromorphic sensors, so a new class of algorithms needs to be investigated. This paper considers the problem of designing a regulator for the heading of a vehicle based on the feedback from an on-board neuromorphic sensor. It is shown that a nonlinear function of the events' retinal positions, followed by retinal integration, followed by a linear filter is a simple design that is sufficient to guarantee stability. This shows that computationally simple controllers are sufficient to control motion tasks even with the feedback from noisy and ambiguous event data, and without having to compute explicit representations for the state. --- paper_title: Stabilization of linear continuous-time systems using neuromorphic vision sensors paper_content: Recently developed neuromorphic vision sensors have become promising candidates for agile and autonomous robotic applications primarily due to their high temporal resolution and low latency.
Each pixel of this sensor independently fires an asynchronous stream of “retinal events” once a change in the light field is detected. Existing computer vision algorithms can only process periodic frames and so a new class of algorithms needs to be developed that can efficiently process these events for control tasks. In this paper, we investigate the problem of quadratically stabilizing a continuous-time linear time invariant (LTI) system using measurements from a neuromorphic sensor. We present an H∞ controller that stabilizes a continuous-time LTI system and provide the set of stabilizing neuromorphic sensor based cameras for the given system. The effectiveness of our approach is illustrated on an unstable system. --- paper_title: A Power-Performance Approach to Comparing Sensor Families, with application to comparing neuromorphic to traditional vision sensors paper_content: There is considerable freedom in choosing the sensors to be equipped on a robot. Currently many sensing technologies are available (radar, lidar, vision sensors, time-of-flight cameras, etc.). For each class, there are additional choices regarding the exact sensor parameters (spatial resolution, frame rate, etc.). Which sensor is best? In general, this question needs to be qualified. It depends on the task. In an estimation task, the answer depends on the prior for the signal. In a control task, the answer depends exactly on which are the sufficient statistics for computing the control signal. This paper shows that a further qualification needs to be made: the answer depends on the power available for sensing, even when the task is fixed. We define the “power-performance” curve as the performance attainable on a task for a given level of sensing power. We show that this approach is well suited to comparing a traditional CMOS sensor with the recently available “neuromorphic” sensors. We discuss estimation tasks with different priors for the signal. We find priors for which one sensor dominates the other and vice-versa, priors for which they are equivalent, and priors for which the answer depends on the power available. This shows that comparing sensors is a quite delicate problem. It also suggests that the optimal architecture might have more than one sensor, and would switch sensors on and off according to the performance level required instantaneously. --- paper_title: A pencil balancing robot using a pair of AER dynamic vision sensors paper_content: Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. The PC updates its estimate of the pencil's location and angle in 3d space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD-controller adjusts X-Y-position and velocity of the table to maintain the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table.
The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chop-stick, or rod for many minutes. Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at intervals below millisecond ranges. Controlling at normal image sensor sample rates (e.g. 60 Hz) results in too long latencies for a stable control loop. --- paper_title: Low-latency heading feedback control with neuromorphic vision sensors using efficient approximated incremental inference paper_content: Asynchronous neuromorphic vision sensors have unique properties that make them ideal for high speed control applications. We consider a one dimensional simplification of a more general six dimensional trajectory tracking problem for mobile platforms, and present a computationally efficient method for feedback control that takes advantage of the asynchronous, event-based nature of these sensors to provide very high bandwidth and low latency feedback. This is an important step toward application of these incredible sensors to mobile robotic systems and useful in its own right. Through experimental tests we compare sensors and show that neuromorphic vision sensors can provide good closed loop performance in terms of computation, data rate, frequency and latency, and tracking error. --- paper_title: Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor paper_content: Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most “threatening” ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. --- paper_title: A Low Power, Fully Event-Based Gesture Recognition System paper_content: We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS).
The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5% out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions. --- paper_title: A Low Power, High Throughput, Fully Event-Based Stereo System paper_content: We introduce a stereo correspondence system implemented fully on event-based digital hardware, using a fully graph-based non von-Neumann computation model, where no frames, arrays, or any other such data-structures are used. This is the first time that an end-to-end stereo pipeline from image acquisition and rectification, multi-scale spatiotemporal stereo correspondence, winner-take-all, to disparity regularization is implemented fully on event-based hardware. Using a cluster of TrueNorth neurosynaptic processors, we demonstrate their ability to process bilateral event-based inputs streamed live by Dynamic Vision Sensors (DVS), at up to 2,000 disparity maps per second, producing high fidelity disparities which are in turn used to reconstruct, at low power, the depth of events produced from rapidly changing scenes. Experiments on real-world sequences demonstrate the ability of the system to take full advantage of the asynchronous and sparse nature of DVS sensors for low power depth reconstruction, in environments where conventional frame-based cameras connected to synchronous processors would be inefficient for rapidly moving objects. System evaluation on event-based sequences demonstrates a ~200× improvement in terms of power per pixel per disparity map compared to the closest state-of-the-art, and maximum latencies of up to 11 ms from spike injection to disparity map ejection. --- paper_title: A million spiking-neuron integrated circuit with a scalable communication network and interface paper_content: This paper presents TrueNorth, a fully digital, non-von Neumann neurosynaptic processor that integrates one million programmable spiking neurons and 256 million configurable synapses organized into 4,096 parallel, event-driven cores connected by a scalable on-chip and chip-to-chip spike-routing network. Operating in real time at a power budget on the order of tens of milliwatts, the chip and its interfaces support applications such as multi-object detection and classification driven by sensory event streams.
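Several of the systems above, as well as the predator-robot and RoShamBo entries later in this list, feed a CNN with "frames" accumulated from a fixed number of DVS events, so the effective frame rate follows scene activity. The sketch below shows one plausible way to build such event-count frames; the resolution, channel layout, and function names are assumptions for illustration, not the cited systems' actual interfaces.

```python
# Hedged sketch: accumulating DVS events into fixed event-count histogram frames
# suitable as CNN input. Shapes and names are illustrative assumptions.
import numpy as np

def events_to_frame(events, height=128, width=128):
    """events: iterable of (t, x, y, polarity) tuples with polarity in {0, 1}.
    Returns a (2, height, width) array counting ON and OFF events per pixel."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for t, x, y, p in events:
        frame[int(p), int(y), int(x)] += 1.0
    return frame

def frames_from_stream(stream, n_events=2000, height=128, width=128):
    """Emit one frame every n_events events, so the effective frame rate scales
    with scene activity rather than with a fixed clock."""
    buf = []
    for ev in stream:
        buf.append(ev)
        if len(buf) == n_events:
            yield events_to_frame(buf, height, width)
            buf = []
```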
--- paper_title: Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware paper_content: Using Intel's Loihi neuromorphic research chip and ABR's Nengo Deep Learning toolkit, we analyze the inference speed, dynamic power consumption, and energy cost per inference of a two-layer neural network keyword spotter trained to recognize a single phrase. We perform comparative analyses of this keyword spotter running on more conventional hardware devices including a CPU, a GPU, Nvidia's Jetson TX1, and the Movidius Neural Compute Stick. Our results indicate that for this real-time inference application, Loihi outperforms all of these alternatives on an energy cost per inference basis while maintaining equivalent inference accuracy. Furthermore, an analysis of tradeoffs between network size, inference speed, and energy cost indicates that Loihi's comparative advantage over other low-power computing devices improves for larger networks. --- paper_title: Loihi: A Neuromorphic Manycore Processor with On-Chip Learning paper_content: Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area. This provides an unambiguous example of spike-based computation, outperforming all known conventional solutions. --- paper_title: Braindrop: A Mixed-Signal Neuromorphic Architecture With a Dynamical Systems-Based Programming Model paper_content: Braindrop is the first neuromorphic system designed to be programmed at a high level of abstraction. Previous neuromorphic systems were programmed at the neurosynaptic level and required expert knowledge of the hardware to use. In stark contrast, Braindrop’s computations are specified as coupled nonlinear dynamical systems and synthesized to the hardware by an automated procedure. This procedure not only leverages Braindrop’s fabric of subthreshold analog circuits as dynamic computational primitives but also compensates for their mismatched and temperature-sensitive responses at the network level. Thus, a clean abstraction is presented to the user. Fabricated in a 28-nm FDSOI process, Braindrop integrates 4096 neurons in $0.65~\text{mm}^{2}$. Two innovations—sparse encoding through analog spatial convolution and weighted spike-rate summation through digital accumulative thinning—cut digital traffic drastically, reducing the energy Braindrop consumes per equivalent synaptic operation to 381 fJ for typical network configurations. --- paper_title: Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios paper_content: Event cameras are bioinspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output only little information when the amount of motion is limited, such as in the case of almost still motion.
Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good lighting scenarios), but they fail severely in case of fast motions, or difficult lighting such as high dynamic range or low light scenes. In this letter, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing in a tightly coupled manner events, standard frames, and inertial measurements. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate-to the best of our knowledge-the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high dynamic range scenes. Videos of the experiments: http://rpg.ifi.uzh.ch/ultimateslam.html. --- paper_title: Vertical Landing for Micro Air Vehicles using Event-Based Optical Flow paper_content: Small flying robots can perform landing maneuvers using bio-inspired optical flow by maintaining a constant divergence. However, optical flow is typically estimated from frame sequences recorded by standard miniature cameras. This requires processing full images on-board, limiting the update rate of divergence measurements, and thus the speed of the control loop and the robot. Event-based cameras overcome these limitations by only measuring pixel-level brightness changes at microsecond temporal accuracy, hence providing an efficient mechanism for optical flow estimation. This paper presents, to the best of our knowledge, the first work integrating event-based optical flow estimation into the control loop of a flying robot. We extend an existing 'local plane fitting' algorithm to obtain an improved and more computationally efficient optical flow estimation method, valid for a wide range of optical flow velocities. This method is validated for real event sequences. In addition, a method for estimating the divergence from event-based optical flow is introduced, which accounts for the aperture problem. The developed algorithms are implemented in a constant divergence landing controller on-board of a quadrotor. Experiments show that, using event-based optical flow, accurate divergence estimates can be obtained over a wide range of speeds. This enables the quadrotor to perform very fast landing maneuvers. --- paper_title: Event-driven ball detection and gaze fixation in clutter paper_content: The fast temporal-dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control, for example a robot catching a ball. When the event-driven iCub humanoid robot grasps an object its head and torso move, inducing camera motion, and tracked objects become no longer trivially segmented amongst the mass of background clutter. Current event-based tracking algorithms have mostly considered stationary cameras that have clean event-streams with minimal clutter. This paper introduces novel methods to extend the Hough-based circle detection algorithm using optical flow information that is readily extracted from the spatio-temporal event space. 
Results indicate the proposed directed-Hough algorithm is more robust to other moving objects and the background event-clutter. Finally, we demonstrate successful on-line robot control and gaze following on the iCub robot. --- paper_title: Cooperative SLAM on small mobile robots paper_content: We present a method for simultaneous localization and mapping for robots. The focus of our work is to use multiple small mobile robots which have only limited sensing and computational resources. Each of our robots uses a laser pointer and an event-based vision sensor to compute distances to its surroundings. The acquired data is used to update an occupancy map which can be shared among many robots at the same time. Here we demonstrate initial results for our proof-of-concept implementation. It runs in real-time using a mobile robot platform with an event-based vision sensor for data acquisition. Distance estimates to objects are transferred to a remote computer to build a map of the environment. The results of our work can be used in future technical implementations, and for further investigations into cooperative mapping. --- paper_title: Fast sensory motor control based on event-based hybrid neuromorphic-procedural system paper_content: Fast sensory-motor processing is challenging when using traditional frame-based cameras and computers. Here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina, a computer, and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal. The system consists of a 128×128 retina that asynchronously reports scene reflectance changes, a laptop PC, and a servo motor controller. Components are interconnected by USB. The retina looks down onto the field in front of the goal. Moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal. The ball's position and velocity are used to control the servo motor. Running under Windows XP, the reaction latency is 2.8 ± 0.5 ms at a CPU load of 1 million events per second (Meps), although fast balls only create ~30 keps. This system demonstrates the advantages of hybrid event-based sensory motor processing. --- paper_title: Estimation of Vehicle Speed Based on Asynchronous Data from a Silicon Retina Optical Sensor paper_content: This work presents an embedded optical sensory system for traffic monitoring and vehicles speed estimation based on a neuromorphic "silicon-retina" image sensor, and the algorithm developed for processing the asynchronous output data delivered by this sensor. The main purpose of these efforts is to provide a flexible, compact, low-power and low-cost traffic monitoring system which is capable of determining the velocity of passing vehicles simultaneously on multiple lanes. The system and algorithm proposed exploit the unique characteristics of the image sensor with focal-plane analog preprocessing. These features include sparse asynchronous data output with high temporal resolution and low latency, high dynamic range and low power consumption. The system is able to measure velocities of vehicles in the range 20 to 300 km/h on up to four lanes simultaneously, day and night and under variable atmospheric conditions, with a resolution of 1 km/h. Results of vehicle speed measurements taken from a test installation of the system on a four-lane highway are presented and discussed.
The accuracy of the speed estimate has been evaluated on the basis of calibrated light-barrier speed measurements. The speed estimation error has a standard deviation of 2.3 km/h and near zero mean --- paper_title: Steering a predator robot using a mixed frame/event-driven convolutional neural network paper_content: This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center and non-visible. After off-line training on labeled data, the network is imported on the on-board Summit XL robot which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% (depending on evaluation criteria) are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing. --- paper_title: The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception paper_content: Event-based cameras are a new passive sensing modality with a number of benefits over traditional cameras, including extremely low latency, asynchronous data acquisition, high dynamic range, and very low power consumption. There has been a lot of recent interest and development in applying algorithms to use the events to perform a variety of three-dimensional perception tasks, such as feature tracking, visual odometry, and stereo depth estimation. However, there currently lacks the wealth of labeled data that exists for traditional cameras to be used for both testing and development. In this letter, we present a large dataset with a synchronized stereo pair event based camera system, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments. From each camera, we provide the event stream, grayscale images, and inertial measurement unit (IMU) readings. In addition, we utilize a combination of IMU, a rigidly mounted lidar system, indoor and outdoor motion capture, and GPS to provide accurate pose and depth images for each camera at up to 100 Hz. For comparison, we also provide synchronized grayscale images and IMU readings from a frame-based stereo camera system. --- paper_title: Event-driven ball detection and gaze fixation in clutter paper_content: The fast temporal-dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control, for example a robot catching a ball. When the event-driven iCub humanoid robot grasps an object its head and torso move, inducing camera motion, and tracked objects become no longer trivially segmented amongst the mass of background clutter. 
Current event-based tracking algorithms have mostly considered stationary cameras that have clean event-streams with minimal clutter. This paper introduces novel methods to extend the Hough-based circle detection algorithm using optical flow information that is readily extracted from the spatio-temporal event space. Results indicate the proposed directed-Hough algorithm is more robust to other moving objects and the background event-clutter. Finally, we demonstrate successful on-line robot control and gaze following on the iCub robot. --- paper_title: Event-driven embodied system for feature extraction and object recognition in robotic applications paper_content: A major challenge in robotic applications is the interaction with a dynamic environment and humans which is typically constrained by the capability of visual sensors and the computational cost of signal processing algorithms. Addressing this problem the paper presents an event-driven based embodied system for feature extraction and object recognition as a novel efficient sensory approach in robotic applications. The system is established for a mobile humanoid robot which provides the infrastructure for interfacing asynchronous vision sensors with the processing unit of the robot. By applying event-feature ”mapping” the address event representation of the sensors is enhanced by additional information that can be used for object recognition. The system is presented in the context of an exemplary application in which the robot has to detect and grasp a ball in an arbitrary state of motion. --- paper_title: Human vs. computer slot car racing using an event and frame-based DAVIS vision sensor paper_content: This paper describes an open-source implementation of an event-based dynamic and active pixel vision sensor (DAVIS) for racing human vs. computer on a slot car track. The DAVIS is mounted in "eye-of-god" view. The DAVIS image frames are only used for setup and are subsequently turned off because they are not needed. The dynamic vision sensor (DVS) events are then used to track both the human and computer controlled cars. The precise control of throttle and braking afforded by the low latency of the sensor output enables consistent outperformance of human drivers at a laptop CPU load of <3% and update rate of 666Hz. The sparse output of the DVS event stream results in a data rate that is about 1000 times smaller than from a frame-based camera with the same resolution and update rate. The scaled average lap speed of the 1/64 scale cars is about 450km/h which is twice as fast as the fastest Formula 1 lap speed. A feedbackcontroller mode allows competitive racing by slowing the computer controlled car when it is ahead of the human. In tests of human vs. computer racing the computer still won more than 80% of the races. --- paper_title: A pencil balancing robot using a pair of AER dynamic vision sensors paper_content: Balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds. This demonstration shows how a pair of spike-based silicon retina dynamic vision sensors (DVS) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil. Two DVSs view the pencil from right angles. Movements of the pencil cause spike address-events (AEs) to be emitted from the DVSs. These AEs are transmitted to a PC over USB interfaces and are processed procedurally in real time. 
The PC updates its estimate of the pencil's location and angle in 3d space upon each incoming AE, applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil. A PD-controller adjusts X-Y-position and velocity of the table to maintain the pencil balanced upright. The controller also minimizes the deviation of the pencil's base from the center of the table. The actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller. Our system can balance any small, thin object such as a pencil, pen, chop-stick, or rod for many minutes. Balancing is only possible when incoming AEs are processed as they arrive from the sensors, typically at intervals below millisecond ranges. Controlling at normal image sensor sample rates (e.g. 60 Hz) results in too long latencies for a stable control loop. --- paper_title: The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM paper_content: New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. --- paper_title: The Event-Driven Software Library for YARP—With Algorithms and iCub Applications paper_content: Event-driven (ED) cameras are an emerging technology that sample the visual signal based on changes in the signal magnitude, rather than at a fixed-rate over time. The change in paradigm results in a camera with a lower latency, that uses less power, has reduced bandwidth, and higher dynamic range. Such cameras offer many potential advantages for on-line, autonomous, robots; however the sensor data does not directly integrate with current "image-based" frameworks and software libraries. The iCub robot uses Yet Another Robot Platform (YARP) as middleware to provide modular processing and connectivity to sensors and actuators. This paper introduces a library that incorporates an event-based framework into the YARP architecture, allowing event cameras to be used with the iCub (and other YARP-based) robots. 
We describe the philosophy and methods for structuring events to facilitate processing, while maintaining low-latency and real-time operation. We also describe several processing modules made available open-source, and three example demonstrations that can be run on the neuromorphic iCub. --- paper_title: Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor paper_content: Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most “threatening” ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. --- paper_title: HFirst: A Temporal Approach to Object Recognition paper_content: This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% $\pm$ 3.5%) for a previously published four class card pip recognition task and an accuracy of 84.9% $\pm$ 1.9% for a new more difficult 36 class character recognition task. --- paper_title: A Dataset for Visual Navigation with Neuromorphic Methods paper_content: Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field.
If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to Computer Vision conventional approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages, and the real data recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets. --- paper_title: HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition paper_content: This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy. --- paper_title: Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets paper_content: Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. 
As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules. --- paper_title: Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios paper_content: Event cameras are bioinspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output only little information when the amount of motion is limited, such as in the case of almost still motion. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good lighting scenarios), but they fail severely in case of fast motions, or difficult lighting such as high dynamic range or low light scenes. In this letter, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing in a tightly coupled manner events, standard frames, and inertial measurements. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate-to the best of our knowledge-the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high dynamic range scenes. Videos of the experiments: http://rpg.ifi.uzh.ch/ultimateslam.html. --- paper_title: Asynchronous Corner Detection and Tracking for Event Cameras in Real Time paper_content: The recent emergence of bioinspired event cameras has opened up exciting new possibilities in high-frequency tracking, bringing robustness to common problems in traditional vision, such as lighting changes and motion blur. In order to leverage these attractive attributes of the event cameras, research has been focusing on understanding how to process their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event-stream essentially forming frames of events grouped according to their timestamp, we are still to exploit the power of these cameras. In this spirit, this letter proposes a new, purely event-based corner detector, and a novel corner tracker, demonstrating that it is possible to detect corners and track them directly on the event stream in real time. 
Evaluation on benchmarking datasets reveals a significant boost in the number of detected corners and the repeatability of such detections over the state of the art even in challenging scenarios with the proposed approach while enabling more than a 4$\times$ speed-up when compared to the most efficient algorithm in the literature. The proposed pipeline detects and tracks corners at a rate of more than 7.5 million events per second, promising great impact in high-speed applications. --- paper_title: Estimation of Vehicle Speed Based on Asynchronous Data from a Silicon Retina Optical Sensor paper_content: This work presents an embedded optical sensory system for traffic monitoring and vehicles speed estimation based on a neuromorphic "silicon-retina" image sensor, and the algorithm developed for processing the asynchronous output data delivered by this sensor. The main purpose of these efforts is to provide a flexible, compact, low-power and low-cost traffic monitoring system which is capable of determining the velocity of passing vehicles simultaneously on multiple lanes. The system and algorithm proposed exploit the unique characteristics of the image sensor with focal-plane analog preprocessing. These features include sparse asynchronous data output with high temporal resolution and low latency, high dynamic range and low power consumption. The system is able to measure velocities of vehicles in the range 20 to 300 km/h on up to four lanes simultaneously, day and night and under variable atmospheric conditions, with a resolution of 1 km/h. Results of vehicle speed measurements taken from a test installation of the system on a four-lane highway are presented and discussed. The accuracy of the speed estimate has been evaluated on the basis of calibrated light-barrier speed measurements. The speed estimation error has a standard deviation of 2.3 km/h and near zero mean --- paper_title: Are We Ready for Autonomous Drone Racing? The UZH-FPV Drone Racing Dataset paper_content: Despite impressive results in visual-inertial state estimation in recent years, high speed trajectories with six degree of freedom motion remain challenging for existing estimation algorithms. Aggressive trajectories feature large accelerations and rapid rotational motions, and when they pass close to objects in the environment, this induces large apparent motions in the vision sensors, all of which increase the difficulty in estimation. Existing benchmark datasets do not address these types of trajectories, instead focusing on slow speed or constrained trajectories, targeting other tasks such as inspection or driving. We introduce the UZH-FPV Drone Racing dataset, consisting of over 27 sequences, with more than 10 km of flight distance, captured on a first-person-view (FPV) racing quadrotor flown by an expert pilot. The dataset features camera images, inertial measurements, event-camera data, and precise ground truth poses. These sequences are faster and more challenging, in terms of apparent scene motion, than any existing dataset. Our goal is to enable advancement of the state of the art in aggressive motion estimation by providing a dataset that is beyond the capabilities of existing state estimation algorithms. --- paper_title: Steering a predator robot using a mixed frame/event-driven convolutional neural network paper_content: This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. 
The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center and non-visible. After off-line training on labeled data, the network is imported on the on-board Summit XL robot which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies up to 87% or 92% (depending on evaluation criteria) are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing. --- paper_title: Live demonstration: Behavioural emulation of event-based vision sensors paper_content: This demonstration shows how an inexpensive high frame-rate USB camera is used to emulate existing and proposed activity-driven event-based vision sensors. A PS3-Eye camera which runs at a maximum of 125 frames/second with colour QVGA (320×240) resolution is used to emulate several event-based vision sensors, including a Dynamic Vision Sensor (DVS), a colour-change sensitive DVS (cDVS), and a hybrid vision sensor with DVS+cDVS pixels. The emulator is integrated into the jAER software project for event-based real-time vision and is used to study use cases for future vision sensor designs. --- paper_title: EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras paper_content: Event-based cameras have shown great promise in a variety of situations where frame based cameras suffer, such as high speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand crafted algorithms. Deep learning has shown great success in providing model free solutions to many problems in the vision community, but existing networks have been developed with frame based images in mind, and there does not exist the wealth of labeled data for events as there does for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event based cameras. In particular, we introduce an image based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain. 
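EV-FlowNet, described above, feeds a network with an image-based representation built from the event stream. The hedged sketch below shows one commonly used encoding of this kind, combining per-pixel event counts with most-recent timestamps for each polarity; the exact channel layout and normalization here are assumptions for illustration, not the authors' specification.

```python
# Sketch of a count-and-timestamp event image (assumed layout, not EV-FlowNet's
# exact definition): channels are [ON count, OFF count, ON latest t, OFF latest t].
import numpy as np

def event_image(events, height, width):
    """events: array-like of shape (N, 4) with columns (t, x, y, polarity)."""
    events = np.asarray(events, dtype=np.float64)
    img = np.zeros((4, height, width), dtype=np.float32)
    if events.size == 0:
        return img
    t0, t1 = events[0, 0], events[-1, 0]
    span = max(t1 - t0, 1e-9)                # guard against a zero-length window
    for t, x, y, p in events:
        c = 0 if p > 0 else 1                # channel index chosen by polarity
        img[c, int(y), int(x)] += 1.0                    # per-pixel event count
        img[2 + c, int(y), int(x)] = (t - t0) / span     # latest timestamp, normalized
    return img
```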
--- paper_title: Event-based 3D SLAM with a depth-augmented dynamic vision sensor paper_content: We present the D-eDVS, a combined event-based 3D sensor, and a novel event-based full-3D simultaneous localization and mapping algorithm which works exclusively with the sparse stream of visual data provided by the D-eDVS. The D-eDVS is a combination of the established PrimeSense RGB-D sensor and a biologically inspired embedded dynamic vision sensor. Dynamic vision sensors only react to dynamic contrast changes and output data in the form of a sparse stream of events which represent individual pixel locations. We demonstrate how an event-based dynamic vision sensor can be fused with a classic frame-based RGB-D sensor to produce a sparse stream of depth-augmented 3D points. The advantages of a sparse, event-based stream are a much smaller amount of generated data, thus more efficient resource usage, and a continuous representation of motion allowing lag-free tracking. Our event-based SLAM algorithm is highly efficient and runs 20 times faster than real time, provides localization updates at several hundred Hertz, and produces excellent results. We compare our method against ground truth from an external tracking system and two state-of-the-art algorithms on a new dataset which we release in combination with this paper. --- paper_title: Event-Based Visual Inertial Odometry paper_content: Event-based cameras provide a new visual sensing model by detecting changes in image intensity asynchronously across all pixels on the camera. By providing these events at extremely high rates (up to 1 MHz), they allow for sensing in both high speed and high dynamic range situations where traditional cameras may fail. In this paper, we present the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit, to provide accurate metric tracking of a camera's full 6-DOF pose. Our algorithm is asynchronous, and provides measurement updates at a rate proportional to the camera velocity. The algorithm selects features in the image plane, and tracks spatiotemporal windows around these features within the event stream. An Extended Kalman Filter with a structureless measurement model then fuses the feature tracks with the output of the IMU. The camera poses from the filter are then used to initialize the next step of the tracker and reject failed tracks. We show that our method successfully tracks camera motion on the Event-Camera Dataset in a number of challenging situations. --- paper_title: The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception paper_content: Event-based cameras are a new passive sensing modality with a number of benefits over traditional cameras, including extremely low latency, asynchronous data acquisition, high dynamic range, and very low power consumption. There has been a lot of recent interest and development in applying algorithms to use the events to perform a variety of three-dimensional perception tasks, such as feature tracking, visual odometry, and stereo depth estimation. However, there currently lacks the wealth of labeled data that exists for traditional cameras to be used for both testing and development. In this letter, we present a large dataset with a synchronized stereo pair event based camera system, carried on a handheld rig, flown by a hexacopter, driven on top of a car, and mounted on a motorcycle, in a variety of different illumination levels and environments.
From each camera, we provide the event stream, grayscale images, and inertial measurement unit (IMU) readings. In addition, we utilize a combination of IMU, a rigidly mounted lidar system, indoor and outdoor motion capture, and GPS to provide accurate pose and depth images for each camera at up to 100 Hz. For comparison, we also provide synchronized grayscale images and IMU readings from a frame-based stereo camera system. --- paper_title: Accurate Angular Velocity Estimation With an Event Camera paper_content: We present an algorithm to estimate the rotational motion of an event camera. In contrast to traditional cameras, which produce images at a fixed rate, event cameras have independent pixels that respond asynchronously to brightness changes, with microsecond resolution. Our method leverages the type of information conveyed by these novel sensors (i.e., edges) to directly estimate the angular velocity of the camera, without requiring optical flow or image intensity estimation. The core of the method is a contrast maximization design. The method performs favorably against ground truth data and gyroscopic measurements from an Inertial Measurement Unit, even in the presence of very high-speed motions (close to 1000 deg/s). --- paper_title: Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization paper_content: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. We propose a novel, accurate tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions, such as high-speed motion or high dynamic range scenes. The method tracks a set of features (extracted on the image plane) through time. To achieve that, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based, visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera’s 6-DOF pose, velocity, and IMU biases. The proposed method is evaluated quantitatively on the public Event Camera Dataset [19] and significantly outperforms the state-of-the-art [28], while being computationally much more efficient: our pipeline can run much faster than real-time on a laptop and even on a smartphone processor. Furthermore, we demonstrate qualitatively the accuracy and robustness of our pipeline on a large-scale dataset, and an extremely high-speed dataset recorded by spinning an event camera on a leash at 850 deg/s. --- paper_title: Live demonstration: Convolutional neural network driven by dynamic vision sensor playing RoShamBo paper_content: This demonstration presents a convolutional neural network (CNN) playing “RoShamBo” (“rock-paper-scissors”) against human opponents in real time. The network is driven by dynamic and active-pixel vision sensor (DAVIS) events, acquired by accumulating events into fixed event-number frames. --- paper_title: Fast event-based corner detection paper_content: Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range.
They respond to pixel-level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state-of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a microsecond per event) and reduces the event rate by a factor of 10 to 20. --- paper_title: A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation paper_content: We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events, and therefore, does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple, but more importantly, it is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras. --- paper_title: Low-latency visual odometry using event-based feature tracks paper_content: New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional camera and an event-based sensor in the same pixel array. These sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. In this paper, we present a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks. Features are first detected in the grayscale frames and then tracked asynchronously using the stream of events. The features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping. We show that our method successfully tracks the 6-DOF motion of the sensor in natural scenes.
This is the first work on event-based visual odometry with the DAVIS sensor using feature tracks. --- paper_title: Asynchronous, Photometric Feature Tracking using Events and Frames paper_content: We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low-latency. Event cameras are novel sensors that output pixel-level brightness changes, called"events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes. --- paper_title: Real-time panoramic tracking for event cameras paper_content: Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset [18] and self-recorded sequences. --- paper_title: HATS: Histograms of Averaged Time Surfaces for Robust Event-Based Object Classification paper_content: Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind their frame-based counterparts. Two main reasons for this performance gap are: 1. The lack of effective low-level representations and architectures for event-based object classification and 2. The absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. 
Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation. --- paper_title: The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM paper_content: New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data. --- paper_title: Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach paper_content: This paper presents an adaptive cooperative approach towards the 3D reconstruction tailored for a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). DVS consists of self-spiking pixels that asynchronously generate events upon relative light intensity changes. These sensors have the advantage to allow simultaneously high temporal resolution (better than 10μs) and wide dynamic range (>120dB) at sparse data representation, which is not possible with frame-based cameras. In order to exploit the potential of DVS and benefit from its features, depth calculation should take into account the spatiotemporal and asynchronous aspect of data provided by the sensor. This work deals with developing an appropriate approach for the asynchronous, event-driven stereo algorithm. We propose a modification of the cooperative network in which the history of the recent activity in the scene is stored to serve as spatiotemporal context used in disparity calculation for each incoming event. The network constantly evolves in time - as events are generated. In our work, not only the spatiotemporal aspect of the data is preserved but also the matching is performed asynchronously. The results of the experiments prove that the proposed approach is well suited for DVS data and can be successfully used for our efficient passive depth camera. ---
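Several of the abstracts above (the fast corner detector and HATS in particular) operate on a per-pixel map of the most recent event timestamp, often called the Surface of Active Events or time surface. The sketch below shows the per-event update and an exponentially decayed view of the map; the (t, x, y, polarity) event layout, the 240x180 resolution and the decay constant are illustrative assumptions rather than any paper's exact parameters.

    import numpy as np

    def update_time_surface(sae, event):
        # sae has shape (2, height, width): one timestamp map per polarity.
        t, x, y, p = event
        sae[int(p), int(y), int(x)] = t
        return sae

    def decayed_view(sae, t_now, tau=50e-3):
        # Exponentially decayed time surface at time t_now (seconds);
        # pixels that never fired stay at zero.
        return np.exp(-(t_now - sae) / tau) * (sae > 0)

    # Usage sketch: sae = np.zeros((2, 180, 240)); call update_time_surface
    # for each incoming event, then decayed_view when a descriptor is needed.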
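The angular-velocity and unifying contrast-maximization abstracts above share one core computation: warp events along a candidate motion, build an image of the warped events, and score the candidate by the contrast (for example the variance) of that image. A minimal sketch for a pure image-plane velocity is given below; real implementations also handle camera intrinsics, rotations and sub-pixel accumulation, and the event layout is again an assumption.

    import numpy as np

    def contrast_of_warp(events, vx, vy, t_ref, height=180, width=240):
        # Warp each event to the reference time along the candidate
        # velocity (vx, vy) in pixels per second and accumulate an image.
        img = np.zeros((height, width), dtype=np.float32)
        for t, x, y, _ in events:
            xw = int(round(x - vx * (t - t_ref)))
            yw = int(round(y - vy * (t - t_ref)))
            if 0 <= xw < width and 0 <= yw < height:
                img[yw, xw] += 1.0
        # Higher variance means sharper, better motion-compensated edges.
        return float(img.var())

    # A grid or gradient search over (vx, vy) that maximises this score
    # recovers the motion parameters, as in the framework described above.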
Title: Event-based Vision: A Survey
Section 1: Introduction and Applications
Description 1: Introduce event-based vision and discuss various applications and the motivation behind their use.
Section 2: Principle of Operation of Event Cameras
Description 2: Explain how event cameras operate, including the technology they use and how they differ from standard cameras.
Section 3: Event Camera Types
Description 3: Describe the various types of event cameras that have been developed over the years, highlighting their characteristics and advancements.
Section 4: Advantages of Event Cameras
Description 4: Discuss the key advantages of event cameras, such as high temporal resolution, low latency, low power consumption, and high dynamic range.
Section 5: Challenges Due to the Novel Sensing Paradigm
Description 5: Outline the challenges associated with event-based vision, including the need for novel algorithms and approaches to process the sensor data effectively.
Section 6: Event Generation Model
Description 6: Present models of how events are generated by event cameras and discuss theoretical and practical considerations.
Section 7: Event Camera Availability
Description 7: Discuss the current availability of event cameras, including their cost, hardware improvements, and potential for mass adoption.
Section 8: Advanced Event Cameras
Description 8: Look into the state-of-the-art developments in event camera technology, including advanced features and experimental models.
Section 9: Event Processing Paradigms
Description 9: Review various methods and paradigms for processing the data produced by event cameras, including model-based and model-free approaches.
Section 10: Algorithms / Applications
Description 10: Discuss different algorithms and specific applications that have been developed for event-based vision, covering a wide range of tasks from feature detection to higher-level vision tasks.
Section 11: Neuromorphic Computing
Description 11: Explore the integration of event cameras with neuromorphic processors and the potential benefits of such systems.
Section 12: Algorithms / Applications
Description 12: Dive deeper into the different applications of event-based vision, such as optical flow estimation, 3D reconstruction, pose estimation, SLAM, visual-inertial odometry, and image reconstruction.
Section 13: Neuromorphic Control
Description 13: Discuss the control architectures and strategies that leverage event-based vision for real-time applications and robotic systems.
Section 14: Resources
Description 14: Provide information on available resources for event-based vision research, including software, datasets, and simulators.
Section 15: Discussion
Description 15: Summarize the current state of event-based vision research and project future trends and challenges in the field.
Section 16: Conclusion
Description 16: Conclude the paper with a summary of the key points discussed and the potential impact and future directions for event-based vision.
A Review of Towered Big-Data Service Model for Biomedical Text-Mining Databases
18
--- paper_title: GoGene: gene annotation in the fast lane paper_content: High-throughput screens such as microarrays and RNAi screens produce huge amounts of data. They typically result in hundreds of genes, which are often further explored and clustered via enriched GeneOntology terms. The strength of such analyses is that they build on high-quality manual annotations provided with the GeneOntology. However, the weakness is that annotations are restricted to process, function and location and that they do not cover all known genes in model organisms. GoGene addresses this weakness by complementing high-quality manual annotation with high-throughput text mining extracting co-occurrences of genes and ontology terms from literature. GoGene contains over 4,000,000 associations between genes and gene-related terms for 10 model organisms extracted from more than 18,000,000 PubMed entries. It covers not only process, function and location of genes, but also biomedical categories such as diseases, compounds, techniques and mutations. By bringing it all together, GoGene provides the most recent and most complete facts about genes and can rank them according to novelty and importance. GoGene accepts keywords, gene lists, gene sequences and protein sequences as input and supports search for genes in PubMed, EntrezGene and via BLAST. Since all associations of genes to terms are supported by evidence in the literature, the results are transparent and can be verified by the user. GoGene is available at http://gopubmed.org/gogene. --- paper_title: Bioinformatics: Recent Trends in Programs, Placements and Job Opportunities paper_content: Amino thiol substituted dipeptides of the formula are disclosed. These compounds are useful as hypotensive agents due to their angiotensin converting enzyme inhibition activity and depending upon the definition of X may also be useful as analgesics due to their enkephalinase inhibition activity. --- paper_title: Application of text mining in the biomedical domain. paper_content: In recent years the amount of experimental data that is produced in biomedical research and the number of papers that are being published in this field have grown rapidly. In order to keep up to date with developments in their field of interest and to interpret the outcome of experiments in light of all available literature, researchers turn more and more to the use of automated literature mining. As a consequence, text mining tools have evolved considerably in number and quality and nowadays can be used to address a variety of research questions ranging from de novo drug target discovery to enhanced biological interpretation of the results from high throughput experiments. In this paper we introduce the most important techniques that are used for text mining and give an overview of the text mining tools that are currently being used and the type of problems they are typically applied for. --- paper_title: Extraction of incremental information using query evaluator paper_content: Information Extraction is the activity of examining text for information relevant to some interest. Information extraction needs deeper analysis than simple keyword searches. The information extraction system recognizes and extracts knowledge from a massive literature, and the extracted knowledge is accumulated in a knowledge base.
Many conventional automatic information extraction approaches using Natural Language Processing and Text Mining technologies have been proposed to extract meaningful information automatically in the biomedical realm. These conventional approaches have a considerable pitfall: whenever a different extraction goal emerges or any component in the system is upgraded, extraction has to be reapplied from the beginning to the whole text collection, although only a minor part of the text collection might be affected. In this paper we have applied the Stanford dependency grammar to furnish an easy description of the grammatical relationships in a sentence. This work describes an incremental information extraction approach in which extraction needs are expressed in the form of database queries. This work aims to reduce processing time, compared to a conventional approach, when an upgraded component is installed. --- paper_title: CoPub update: CoPub 5.0 a text mining system to answer biological questions paper_content: In this article, we present CoPub 5.0, a publicly available text mining system, which uses Medline abstracts to calculate robust statistics for keyword co-occurrences. CoPub was initially developed for the analysis of microarray data, but we broadened the scope by implementing new technology and new thesauri. In CoPub 5.0, we integrated existing CoPub technology with new features, and provided a new advanced interface, which can be used to answer a variety of biological questions. CoPub 5.0 allows searching for keywords of interest and their relations to curated thesauri and provides highlighting and sorting mechanisms, using its statistics, to retrieve the most important abstracts in which the terms co-occur. It also provides a way to search for indirect relations between genes, drugs, pathways and diseases, following an ABC principle, in which A and C have no direct connection but are connected via shared B intermediates. With CoPub 5.0, it is possible to create, annotate and analyze networks using the layout and highlight options of Cytoscape web, allowing for literature-based systems biology. Finally, operations of the CoPub 5.0 Web service enable the implementation of the CoPub technology in bioinformatics workflows. CoPub 5.0 can be accessed through the CoPub portal http://www.copub.org. --- paper_title: Literature mining for the biologist: from information retrieval to biological discovery paper_content: For the average biologist, hands-on literature mining currently means a keyword search in PubMed. However, methods for extracting biomedical facts from the scientific literature have improved considerably, and the associated tools will probably soon be used in many laboratories to automatically annotate and analyse the growing number of system-wide experimental data sets. Owing to the increasing body of text and the open-access policies of many journals, literature mining is also becoming useful for both hypothesis generation and biological discovery. However, the latter will require the integration of literature and high-throughput data, which should encourage close collaborations between biologists and computational linguists. --- paper_title: A grid infrastructure for mixed bioinformatics data and text mining paper_content: Summary form only given. We present a distributed infrastructure for mixed data and text mining.
Our approach is based on extending the discovery net infrastructure, a grid-computing environment for knowledge discovery, to allow end users to construct complex distributed text and data mining workflows. We describe our architecture, data model and visual programming approach and present a number of mixed data text mining examples over biological data. --- paper_title: Evaluation of biomedical text-mining systems : Lessons learned from information retrieval paper_content: Biomedical text-mining systems have great promise for improving the efficiency and productivity of biomedical researchers. However, such systems are still not in routine use. One impediment to their development is the lack of systematic and rigorous evaluation, comparable to the approaches developed for information retrieval systems. The developers of text-mining systems need to improve both test collections for system-oriented evaluation and undertake user-oriented evaluations to determine the most effective use of their systems for their intended audience. --- paper_title: TEXT AND DATA MINING FOR BIOMEDICAL DISCOVERY paper_content: The biggest challenge for text and data mining is to truly impact the biomedical discovery process, enabling scientists to generate novel hypothesis to address the most crucial questions. Among a number of worthy submissions, we have selected six papers that exemplify advances in text and data mining methods that have a demonstrated impact on a wide range of applications. Work presented in this session includes data mining techniques applied to the discovery of 3-way genetic interactions and to the analysis of genetic data in the context of electronic medical records (EMRs), as well as an integrative approach that combines data from genetic (SNP) and transcriptomic (microarray) sources for clinical prediction. Text mining advances include a classification method to determine whether a published article contains pharmacological experiments relevant to drug-drug interactions, a fine-grained text mining approach for detecting the catalytic sites in proteins in the biomedical literature, and a method for automatically extending a taxonomy of health-related terms to integrate consumer-friendly synonyms for medical terminologies. --- paper_title: Text Mining Functional Keywords Associated with Genes paper_content: Modern experimental techniques provide the ability to gather vast amounts of biological data in a single experiment (e.g. DNA microarray experiment), making it extremely difficult for the researcher to interpret the data and form conclusions about the functions of the genes. Current approaches provide useful information that organizes or relates genes, but a major shortcoming is they either do not address specific functions of the genes or are constrained by functions predefined in other databases, which can be biased, incomplete, or out-of-date. We extended Andrade and Valencia’s method [1] to statistically mine functional keywords associated with genes from MEDLINE abstracts. The MEDLINE abstracts are analyzed statistically to score and rank keywords for each gene using a background set of words for baseline frequencies. We generally got very good functional keyword information about the genes we tested, which was confirmed by searching for the individual keywords in context. The keywords extracted by our algorithm reveal a wealth of potential functional concepts, which were not represented in existing public databases. 
We feel that this approach is general enough to apply to medical and biological literature to find other relationships: drugs vs. genes, risk-factors vs. genes, etc. --- paper_title: A UMLS-based Knowledge Acquisition Tool for Rule-based Clinical Decision Support System Development paper_content: Decision support systems in the medical field have to be easily modified by medical experts themselves. The authors have designed a knowledge acquisition tool to facilitate the creation and maintenance of a knowledge base by the domain expert and its sharing and reuse by other institutions. The Unified Medical Language System (UMLS) contains the domain entities and constitutes the relations repository from which the expert builds, through a specific browser, the explicit domain ontology. The expert is then guided in creating the knowledge base according to the pre-established domain ontology and condition-action rule templates that are well adapted to several clinical decision-making processes. Corresponding medical logic modules are eventually generated. The application of this knowledge acquisition tool to the construction of a decision support system in blood transfusion demonstrates the value of such a pragmatic methodology for the design of rule-based clinical systems that rely on the highly progressive knowledge embedded in hospital information systems. --- paper_title: Automatic classification of biomedical texts: Experiments with a hearing loss corpus paper_content: Obtaining reliable information extraction applications has been one of the main goals of the biomedical text mining community. These applications are valuable tools to biologists in their increasingly difficult task of assimilating the knowledge contained in biomedical literature. This paper presents a novel algorithm for automatic classification of biomedical texts, specifically texts about symptoms. We apply our approach to biomedical texts related to hearing loss, obtaining promising results. --- paper_title: A text feature-based approach for literature mining of lncRNA-protein interactions paper_content: Long non-coding RNAs (lncRNAs) play important roles in regulation at the transcriptional and post-transcriptional levels. Currently, knowledge of lncRNA and protein interactions (LPIs) is crucial for biomedical research related to lncRNAs. Many freshly discovered LPIs are stored in the biomedical literature. With over one million new biomedical journal articles published every year, just keeping up with the novel findings requires automatically extracting information by text mining. To address this issue, we apply a text feature-based text mining approach to efficiently extract LPIs from the biomedical literature. By employing natural language processing (NLP) technologies, the approach extracts text features from sentences that can precisely reflect real LPIs. It involves four steps: data collection, text pre-processing, structured representation, and feature extraction with model training and classification. The F-score performance of our approach achieves 79.5%, and the results indicate that the proposed approach can efficiently extract LPIs from biomedical literature. The text mining approach automatically extracts lncRNA-protein interactions from literature. The efficiency of the approach is shown in related experiments and comparison studies. Text features are extracted automatically from biomedical literature.
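The lncRNA-protein interaction abstract above follows the common supervised recipe for relation extraction: collect candidate sentences in which the two entities co-occur, turn them into features, train a classifier, and report the F-score. A minimal, generic version of that recipe with scikit-learn is sketched below; the sentences, labels and model choice are invented placeholders rather than the paper's actual features or data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy co-occurrence sentences with entity mentions masked (placeholders).
    sentences = [
        "LNCRNA binds PROTEIN and represses its activity.",
        "LNCRNA and PROTEIN were measured in the same tissue.",
        "PROTEIN interacts directly with LNCRNA in vitro.",
        "LNCRNA expression was unrelated to PROTEIN levels.",
    ]
    labels = [1, 0, 1, 0]   # 1 = describes an interaction, 0 = mere co-occurrence

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(sentences, labels)
    print(model.predict(["PROTEIN is recruited by LNCRNA to chromatin."]))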
--- paper_title: Searching association rules of traditional Chinese medicine on Ligusticum wallichii by text mining paper_content: Much useful information on Ligusticum wallichii (LW) could be obtained from published literature by text mining techniques. In this study, the data set on LW was downloaded from the Chinese BioMedical literature database (SinoMed). Then, association rules among diseases, traditional Chinese medicine (TCM) syndromes, formulae and herbs on LW were investigated by text mining techniques. These rules include TCM syndromes to diseases, formulae and combinational herbs, respectively. Diseases related to formulae including LW were mined out by executing a data slicing algorithm. Finally, the results were visually demonstrated with Cytoscape 2.8 software. The main features from the mining data were: (1) LW was frequently used in treating cerebral infarction; (2) Blood stasis due to Qi deficiency was the main syndrome in TCM clinical practice; (3) Angelica sinensis was the first herb to combine with LW according to co-occurrence frequency; (4) Associated with LW, networks of TCM syndromes-diseases, formulae-diseases, TCM syndromes-formulae, and TCM syndromes-combinational herbs were constructed. These associated networks represented a holistic thinking of Chinese medicinal therapy, which might embody association rules among diseases, syndromes, formulae and herbs on LW. --- paper_title: Legal aspects of text mining paper_content: “Text mining” covers a range of techniques that allow software to extract information from text documents. It is not a new technology, but it has recently received spotlight attention due to the emergence of Big Data. The applications of text mining are very diverse and span multiple disciplines, ranging from biomedicine to legal, business intelligence and security. From a legal perspective, text mining touches upon several areas of law, including contract law, copyright law and database law. This contribution discusses the legal issues encountered during the assembly of texts into so-called “corpora”, as well as the use of such corpora. --- paper_title: CGM: A biomedical text categorization approach using concept graph mining paper_content: Text Categorization is used to organize and manage biomedical text databases that are growing at an exponential rate. Feature representations for documents are a crucial factor for the performance of text categorization. Most of the successful existing techniques use a vector representation based on key entities extracted from the text. In this paper we investigate a new direction where we represent a document as a graph. In this representation we identify high level concepts and build a rich graph structure that contains additional concepts and relationships. We then use graph kernel techniques to perform text categorization. The results show a significant improvement in accuracy when compared to categorization based on only the extracted concepts. --- paper_title: Using association rules mining to explore pattern of Chinese medicinal formulae (prescription) in treating and preventing breast cancer recurrence and metastasis paper_content: Background: Chinese herbal medicine is increasingly widely used as a complementary approach for control of breast cancer recurrence and metastasis.
In this paper, we examined the implicit prescription patterns behind the Chinese medicinal formulae, so as to explore the Chinese medicinal compatibility patterns or rules in the treatment or control of breast cancer recurrence and metastasis. Methods: This study was based on the herbs recorded in Pharmacopoeia of the People’s Republic of China, and the literature sources from Chinese Journal Net and China Master Dissertations Full-text Database (1990 – 2010) to analyze the compatibility rule of the prescription. Each Chinese herb was listed according to the selected medicinal formulae and the added information was organized to establish a database. The frequency and the association rules of the prescription patterns were analyzed using the SPSS Clementine Data Mining System. An initial statistical analysis was carried out to categorize the herbs according to their medicinal types and dosage, natures, flavors, channel tropism, and functions. Based on the categorization, the frequencies of occurrence were computed. Results: The main prescriptive features from the selected formulae of the mining data are: (1) warm or cold herbs in the Five Properties category; sweet or bitter herbs in the Five Flavors category and with affinity to the liver meridian are the most frequently prescribed in the 96 medicinal formulae; (2) herbs with tonifying and replenishing, blood-activating and stasis-resolving, spleen-strengthening and dampness-resolving or heat-clearing and detoxicating functions that are frequently prescribed; (3) herbs with blood-tonifying, yin-tonifying, spleen-strengthening and dampness-resolving, heat-clearing and detoxicating, and blood-activating with stasis-resolving functions that are interrelated and prescribed in combination with qi-tonifying herbs. Conclusions: The results indicate that there is a close relationship between recurrence and metastasis of breast cancer and liver dysfunctions. These prescriptions focus on the herbs for nourishing the yin-blood, and emolliating and regulating the liver, which seems to be the key element in the treatment process. Meanwhile, the use of qi-tonifying and spleen-strengthening herbs also forms the basis of prescription patterns. --- paper_title: Genescene: biomedical text and data mining paper_content: To access the content of digital texts efficiently, it is necessary to provide more sophisticated access than keyword based searching. Genescene provides biomedical researchers with research findings and background relations automatically extracted from text and experimental data. These provide a more detailed overview of the information available. The extracted relations were evaluated by qualified researchers and are precise. A qualitative ongoing evaluation of the current online interface indicates that this method to search the literature is more useful and efficient than keyword based searching. --- paper_title: Biological data mining with neural networks: implementation and application of a flexible decision tree extraction algorithm to genomic problem domains paper_content: In the past, neural networks have been viewed as classification and regression systems whose internal representations were extremely difficult to interpret. It is now becoming apparent that algorithms can be designed which extract understandable representations from trained neural networks, enabling them to be used for data mining, i.e. the discovery and explanation of previously unknown relationships present in data.
This paper reviews existing algorithms for extracting comprehensible representations from neural networks and describes research to generalize and extend the capabilities of one of these algorithms. The algorithm has been generalized for application to bioinformatics datasets, including the prediction of splice site junctions in human DNA sequences. Results generated on these datasets are compared with those generated by a conventional data mining technique (C5) and conclusions drawn. --- paper_title: Overview and semantic issues of text mining paper_content: Text mining refers to the discovery of previously unknown knowledge that can be found in text collections. In recent years, the text mining field has received great attention due to the abundance of textual data. A researcher in this area is required to cope with issues originating from the particularities of natural language. This survey discusses such semantic issues along with the approaches and methodologies proposed in the existing literature. It covers syntactic matters, tokenization concerns and it focuses on the different text representation techniques, categorisation tasks and similarity measures suggested. --- paper_title: Assessment of Latent Semantic Analysis (LSA) Text Mining Algorithms for Large Scale Mapping of Patent and Scientific Publication Documents paper_content: In this study we conduct a thorough assessment of the LSA text mining method and its options (preprocessing, weighting, …) to grasp similarities between patent documents and scientific publications to develop a new method to detect direct science-technology linkages - as this is instrumental for research on topics in innovation management, e.g. anticommons issues. We want to assess effectiveness (in terms of precision and recall) and derive best practices on weighting and dimensionality reduction for application on patent data. We use LSA to derive similarity from a large set of patent and scientific publication documents (88,248 patent documents and 948,432 scientific publications) based on 40 similarity measurement variants (four weighting schemas are combined with ten levels of dimensionality reduction and the cosine metric). A thorough validation is set up to compare the performance of those measure variants (expert validation of 300 combinations plus a control set of 30,000 patents). We do not find evidence for the claims of LSA to be superior to plain cosine measures or simple common term or co-occurrence based measures in our data; dimensionality reduction only seems to approach cosine measures applied on the full vector space. We propose the combination of two measures based on the number of common terms (weighted by the minimum of the number of terms of both documents and weighted by the maximum of the number of terms of both documents respectively) as a more robust method to detect similarity between patents and publications. --- paper_title: Text Mining Methods and Techniques paper_content: In recent years, the growth of digital data has been increasing, and knowledge discovery and data mining have attracted great attention owing to the emerging need for turning such data into useful information and knowledge. The use of the information and knowledge extracted from a large amount of data benefits many applications like market analysis and business management. In many applications, databases store information in text form, so text mining is one of the most recent areas of research. Extracting the information required by users is a challenging issue.
Text mining is an important step of the knowledge discovery process. Text mining extracts hidden information from unstructured and semi-structured data. Text mining is the automatic discovery of new, previously unknown information by extracting it from different written resources. This survey paper tries to cover the text mining techniques and methods that solve these challenges. In this survey paper we discuss such successful techniques and methods to improve the effectiveness of information retrieval in text mining. The types of situations where each technology may be useful in order to help users are also discussed. --- paper_title: Automatic Extraction of Biological Information from Scientific Text: Protein-Protein Interactions paper_content: We describe the basic design of a system for automatic detection of protein-protein interactions extracted from scientific abstracts. By restricting the problem domain and imposing a number of strong assumptions which include pre-specified protein names and a limited set of verbs that represent actions, we show that it is possible to perform accurate information extraction. The performance of the system is evaluated with different cases of real-world interaction networks, including the Drosophila cell cycle control. The results obtained computationally are in good agreement with current biological knowledge and demonstrate the feasibility of developing a fully automated system able to describe networks of protein interactions with sufficient accuracy. --- paper_title: MedMiner: An Internet Text-Mining Tool for Biomedical Information, with Application to Gene Expression Profiling paper_content: The trend toward high-throughput techniques in molecular biology and the explosion of online scientific data threaten to overwhelm the ability of researchers to take full advantage of available information. This problem is particularly severe in the rapidly expanding area of gene expression experiments, for example, those carried out with cDNA microarrays or oligonucleotide chips. We present an Internet-based hypertext program, MedMiner, which filters and organizes large amounts of textual and structured information returned from public search engines like GeneCards and PubMed. We demonstrate the value of the approach for the analysis of gene expression data, but MedMiner can also be extended to other areas involving molecular genetic or pharmacological information. More generally still, MedMiner can be used to organize the information returned from any arbitrary PubMed search. --- paper_title: Text-mining and information-retrieval services for molecular biology paper_content: Text-mining in molecular biology - defined as the automatic extraction of information about genes, proteins and their functional relationships from text documents - has emerged as a hybrid discipline on the edges of the fields of information science, bioinformatics and computational linguistics. A range of text-mining applications have been developed recently that will improve access to knowledge for biologists and database annotators. --- paper_title: Knowledge discovery in biology and biotechnology texts: a review of techniques, evaluation strategies, and applications. paper_content: Arguably, the richest source of knowledge (as opposed to fact and data collections) about biology and biotechnology is captured in natural-language documents such as technical reports, conference proceedings and research articles.
The automatic exploitation of this rich knowledge base for decision making, hypothesis management (generation and testing) and knowledge discovery constitutes a formidable challenge. Recently, a set of technologies collectively referred to as knowledge discovery in text (KDT) has been advocated as a promising approach to tackle this challenge. KDT comprises three main tasks: information retrieval, information extraction and text mining. These tasks are the focus of much recent scientific research and many algorithms have been developed and applied to documents and text in biology and biotechnology. This article introduces the basic concepts of KDT, provides an overview of some of these efforts in the field of bioscience and biotechnology, and presents a framework of commonly used techniques for evaluating KDT methods, tools and systems. --- paper_title: Mining clinical attributes of genomic variants through assisted literature curation in Egas paper_content: The veritable deluge of biological data over recent years has led to the establishment of a considerable number of knowledge resources that compile curated information extracted from the literature and store it in structured form, facilitating its use and exploitation. In this article, we focus on the curation of inherited genetic variants and associated clinical attributes, such as zygosity, penetrance or inheritance mode, and describe the use of Egas for this task. Egas is a web-based platform for text-mining assisted literature curation that focuses on usability through modern design solutions and simple user interactions. Egas offers a flexible and customizable tool that allows defining the concept types and relations of interest for a given annotation task, as well as the ontologies used for normalizing each concept type. Further, annotations may be performed on raw documents or on the results of automated concept identification and relation extraction tools. Users can inspect, correct or remove automatic text-mining results, manually add new annotations, and export the results to standard formats. Egas is compatible with the most recent versions of Google Chrome, Mozilla Firefox, Internet Explorer and Safari and is available for use at https://demo.bmd-software.com/egas/. Database URL: https://demo.bmd-software.com/egas/. --- paper_title: PPInterFinder—a mining tool for extracting causal relations on human proteins from literature paper_content: One of the most common and challenging problems in biomedical text mining is to mine protein–protein interactions (PPIs) from MEDLINE abstracts and full-text research articles, because PPIs play a major role in understanding the various biological processes and the impact of proteins in diseases. We implemented PPInterFinder, a web-based text mining tool to extract human PPIs from biomedical literature. PPInterFinder uses relation keyword co-occurrences with protein names to extract information on PPIs from MEDLINE abstracts and consists of three phases. First, it identifies the relation keyword using a parser with Tregex and a relation keyword dictionary. Next, it automatically identifies the candidate PPI pairs with a set of rules related to PPI recognition. Finally, it extracts the relations by matching the sentence with a set of 11 specific patterns based on the syntactic nature of the PPI pair. We find that PPInterFinder is capable of predicting PPIs with an accuracy of 66.05% on the AIMED corpus and outperforms most of the existing systems.
Database URL: http://www.biomining-bu.in/ppinterfinder/ --- paper_title: A text-mining system for extracting metabolic reactions from full-text articles paper_content: Background: Increasingly, biological text mining research is focusing on the extraction of complex relationships relevant to the construction and curation of biological networks and pathways. However, one important category of pathway, metabolic pathways, has been largely neglected. Here we present a relatively simple method for extracting metabolic reaction information from free text that scores different permutations of assigned entities (enzymes and metabolites) within a given sentence based on the presence and location of stemmed keywords. This method extends an approach that has proved effective in the context of the extraction of protein–protein interactions. Results: When evaluated on a set of manually-curated metabolic pathways using standard performance criteria, our method performs surprisingly well. Precision and recall rates are comparable to those previously achieved for the well-known protein-protein interaction extraction task. Conclusions: We conclude that automated metabolic pathway construction is more tractable than has often been assumed, and that (as in the case of protein–protein interaction extraction) relatively simple text-mining approaches can prove surprisingly effective. It is hoped that these results will provide an impetus to further research and act as a useful benchmark for judging the performance of more sophisticated methods that are yet to be developed. --- paper_title: RLIMS-P 2.0: A Generalizable Rule-Based Information Extraction System for Literature Mining of Protein Phosphorylation Information paper_content: We introduce RLIMS-P version 2.0, an enhanced rule-based information extraction (IE) system for mining kinase, substrate, and phosphorylation site information from scientific literature. Consisting of natural language processing and IE modules, the system has integrated several new features, including the capability of processing full-text articles and generalizability towards different post-translational modifications (PTMs). To evaluate the system, sets of abstracts and full-text articles, containing a variety of textual expressions, were annotated. On the abstract corpus, the system achieved F-scores of 0.91, 0.92, and 0.95 for kinases, substrates, and sites, respectively. The corresponding scores on the full-text corpus were 0.88, 0.91, and 0.92. It was additionally evaluated on the corpus of the 2013 BioNLP-ST GE task, and achieved an F-score of 0.87 for the phosphorylation core task, improving upon the results previously reported on the corpus. Full-scale processing of all abstracts in MEDLINE and all articles in the PubMed Central Open Access Subset has demonstrated scalability for mining rich information in literature, enabling its adoption for biocuration and for knowledge discovery. The new system is generalizable and it will be adapted to tackle other major PTM types. The RLIMS-P 2.0 system is available online (http://proteininformationresource.org/rlimsp/) and the developed corpora are available from iProLINK (http://proteininformationresource.org/iprolink/). --- paper_title: Search terms and a validated brief search filter to retrieve publications on health-related values in Medline: a word frequency analysis study paper_content: Objective: Healthcare debates and policy developments are increasingly concerned with a broad range of values-related areas.
These include not only ethical, moral, religious, and other types of values ‘proper’, but also beliefs, preferences, experiences, choices, satisfaction, quality of life, etc. Research on such issues may be difficult to retrieve. This study used word frequency analysis to generate a broad pool of search terms and a brief filter to facilitate relevant searches in bibliographic databases. Methods: Word frequency analysis for ‘values terms’ was performed on citations on diabetes, obesity, dementia, and schizophrenia (Medline; 2004–2006; 4440 citations; 1 110 291 words). Concordance® and SPSS 14.0 were used. Text words and MeSH terms of high frequency and precision were compiled into a search filter. It was validated on datasets of citations on dentistry and food hypersensitivity. Results: 144 unique text words and 124 unique MeSH terms of moderate and high frequency (≥20) and very high precision (≥90%) were identified. Of these, 19 text words and seven MeSH terms were compiled into a ‘brief values filter’. In the derivation dataset, it had a sensitivity of 76.8% and precision of 86.8%. In the validation datasets, its sensitivity and precision were, respectively, 70.1% and 63.6% (food hypersensitivity) and 47.1% and 82.6% (dentistry). Conclusions: This study provided a varied pool of search terms and a simple and highly effective tool for retrieving publications on health-related values. Further work is required to facilitate access to such research and enhance its chances of being translated into practice, policy, and service improvements. --- paper_title: BANNER: An Executable Survey of Advances in Biomedical Named Entity Recognition paper_content: There has been an increasing amount of research on biomedical named entity recognition, the most basic text extraction problem, resulting in significant progress by different research teams around the world. This has created a need for a freely-available, open source system implementing the advances described in the literature. In this paper we present BANNER, an open-source, executable survey of advances in biomedical named entity recognition, intended to serve as a benchmark for the field. BANNER is implemented in Java as a machine-learning system based on conditional random fields and includes a wide survey of the best techniques recently described in the literature. It is designed to maximize domain independence by not employing brittle semantic features or rule-based processing steps, and achieves significantly better performance than existing baseline systems. It is therefore useful to developers as an extensible NER implementation, to researchers as a standard for comparing innovative techniques, and to biologists requiring the ability to find novel entities in large amounts of text. BANNER is available for download at http://banner.sourceforge.net. --- paper_title: Text mining facilitates database curation - extraction of mutation-disease associations from Bio-medical literature paper_content: Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications).
Even with dedicated effort to capture such information in biological databases, much of this information remains ‘locked’ in the unstructured text of biomedical publications. There is a substantial lag between the publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on the sentence level association extraction with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3 % for reconstructing protein mutation disease associations in curated database records. Discourse level analysis component of MutD contributed to a gain of more than 10 % in F-measure when compared against the sentence level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5 %. Our quantitative analysis reveals that MutD can effectively extract protein mutation disease associations when benchmarking based on curated database records. The analysis also demonstrates that incorporating discourse level analysis significantly improved the performance of extracting the protein-mutation-disease association. Future work includes the extension of MutD for full text articles. --- paper_title: HPIminer: A text mining system for building and visualizing human protein interaction networks and pathways paper_content: The knowledge on protein–protein interactions (PPI) and their related pathways are equally important to understand the biological functions of the living cell. Such information on human proteins is highly desirable to understand the mechanism of several diseases such as cancer, diabetes, and Alzheimer’s disease. Because much of that information is buried in biomedical literature, an automated text mining system for visualizing human PPI and pathways is highly desirable. In this paper, we present HPIminer, a text mining system for visualizing human protein interactions and pathways from biomedical literature. HPIminer extracts human PPI information and PPI pairs from biomedical literature, and visualize their associated interactions, networks and pathways using two curated databases HPRD and KEGG. To our knowledge, HPIminer is the first system to build interaction networks from literature as well as curated databases. Further, the new interactions mined only from literature and not reported earlier in databases are highlighted as new. A comparative study with other similar tools shows that the resultant network is more informative and provides additional information on interacting proteins and their associated networks. --- paper_title: Improving information retrieval using Medical Subject Headings Concepts: a test case on rare and chronic diseases. paper_content: BACKGROUND: As more scientific work is published, it is important to improve access to the biomedical literature. 
Since 2000, when Medical Subject Headings (MeSH) Concepts were introduced, the MeSH Thesaurus has been concept based. Nevertheless, information retrieval is still performed at the MeSH Descriptor or Supplementary Concept level. OBJECTIVE: The study assesses the benefit of using MeSH Concepts for indexing and information retrieval. METHODS: Three sets of queries were built for thirty-two rare diseases and twenty-two chronic diseases: (1) using PubMed Automatic Term Mapping (ATM), (2) using Catalog and Index of French-language Health Internet (CISMeF) ATM, and (3) extrapolating the MEDLINE citations that should be indexed with a MeSH Concept. RESULTS: Type 3 queries retrieve significantly fewer results than type 1 or type 2 queries (about 18,000 citations versus 200,000 for rare diseases; about 300,000 citations versus 2,000,000 for chronic diseases). CISMeF ATM also provides better precision than PubMed ATM for both disease categories. DISCUSSION: Using MeSH Concept indexing instead of ATM makes it theoretically possible to improve retrieval performance under the current indexing policy. However, using MeSH Concept information retrieval and indexing rules would be a fundamentally better approach. These modifications have already been implemented in the CISMeF search engine. --- paper_title: A hybrid named entity tagger for tagging human proteins/genes paper_content: The predominant step and pre-requisite in the analysis of scientific literature is the extraction of gene/protein names from biomedical texts. Though many taggers are available for this Named Entity Recognition (NER) task, we found that none of them achieve good state-of-the-art tagging for human genes/proteins. As most current text mining research is related to human literature, a good tagger that precisely tags human genes and proteins is highly desirable. In this paper, we propose a new hybrid approach based on (a) a machine learning algorithm (conditional random fields), (b) a set of (manually constructed) rules, and (c) a novel abbreviation identification algorithm to surmount the common errors observed in available taggers for human genes/proteins. Experimental results on the JNLPBA2004 corpus show that our domain specific approach achieves a high precision of 80.47, an F-score of 75.77, and outperforms most of the state-of-the-art systems. However, the recall of 71.60 still remains low and leaves much room for future improvement. --- paper_title: Text mining in radiology reports by statistical machine translation approach paper_content: Medical text mining has gained increasing popularity in recent years. Nowadays, large amounts of medical text data are generated daily in health institutions but are rarely consulted again, because doing so is very time consuming. In the radiology domain, most reports are in free-text format and usually unprocessed; hence it is difficult for medical professionals to access the valuable information they contain unless proper text mining is applied. Some systems exist for radiology report information retrieval, such as MedLEE, NeuRadIR, and CBIR, but very few of them make use of the text associated with images. This paper proposes a text mining system that deals with this problem using a statistical machine translation (SMT) approach. The system stores text and image features to find matching reports. An SVM classifier is used in the SMT approach to check whether an entered report is present in the database, and the system returns the stored reports most similar to the entered report.
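As a rough illustration of the report-matching step described in the radiology abstract above, the snippet below ranks stored reports against a new report with TF-IDF vectors and cosine similarity, and uses a linear SVM to assign a report category. The actual system also combines image features and a statistical machine translation model, which are not reproduced here, so this is only a hedged approximation with toy data.

```python
# Hedged sketch: matching a new radiology report against stored reports
# with TF-IDF + cosine similarity, plus a linear SVM deciding which
# known category the report belongs to. Image features and the SMT
# component of the original system are omitted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

stored_reports = [
    "No acute intracranial hemorrhage. Ventricles are normal in size.",
    "Right lower lobe consolidation consistent with pneumonia.",
    "Degenerative changes of the lumbar spine without fracture.",
]
labels = ["head_ct", "chest_xray", "spine_xray"]  # hypothetical categories

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(stored_reports)
classifier = LinearSVC().fit(X, labels)

new_report = "Consolidation in the right lower lobe, likely pneumonia."
x_new = vectorizer.transform([new_report])

# Rank stored reports by similarity and predict the report category.
similarities = cosine_similarity(x_new, X)[0]
best_match = max(range(len(stored_reports)), key=lambda i: similarities[i])
print("closest stored report:", stored_reports[best_match])
print("predicted category:", classifier.predict(x_new)[0])
```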
--- paper_title: Selecting an Ontology for Biomedical Text Mining paper_content: Text mining for biomedicine requires a significant amount of domain knowledge. Much of this information is contained in biomedical ontologies. Developers of text mining applications often look for appropriate ontologies that can be integrated into their systems, rather than develop new ontologies from scratch. However, there is often a lack of documentation of the qualities of the ontologies. A number of methodologies for evaluating ontologies have been developed, but it is difficult for users to select an ontology using these methods. In this paper, we propose a framework for selecting the most appropriate ontology for a particular text mining application. The framework comprises three components, each of which considers different aspects of the requirements that text mining applications place on ontologies. We also present an experiment based on the framework in which an ontology is chosen for a gene normalization system. --- paper_title: Bayesian information extraction network for Medline abstract paper_content: Biomedicine is a huge domain that combines a variety of research areas. MEDLINE is one of the largest biomedical databases. As a result, searching for pertinent information in MEDLINE has become a difficult task, and information extraction systems are needed to facilitate the processing and representation of data according to the user's needs. This paper applies Bayesian Networks to support information extraction based on ontological annotation from Medline. We present a tool that combines semantic and probabilistic reasoning techniques. --- paper_title: Mining and modeling linkage information from citation context for improving biomedical literature retrieval paper_content: Mining linkage information from the citation graph has been shown to be effective in identifying important literature. However, the question of how to utilize linkage information from the citation graph to facilitate literature retrieval still remains largely unanswered. In this paper, given the context of biomedical literature retrieval, we first conduct a case study in order to find out whether applying PageRank and HITS algorithms directly to the citation graph is the best way of utilizing citation linkage information for improving biomedical literature retrieval. Second, we propose a probabilistic combination framework for integrating citation information into the content-based information retrieval weighting model. Based on the observations of the case study, we present two strategies for modeling the linkage information contained in the citation graph. The proposed framework provides theoretical support for the combination of content and linkage information. Under this framework, exhaustive parameter tuning can be avoided. Extensive experiments on three TREC Genomics collections demonstrate the advantages and effectiveness of our proposed methods. --- paper_title: Towards Extracting Supporting Information About Predicted Protein-Protein Interactions paper_content: One of the goals of relation extraction is to identify protein-protein interactions (PPIs) in biomedical literature. Current systems capture binary relations as well as the direction and type of an interaction. Yet besides assisting in the curation of PPIs into databases, there has been little real-world application of these algorithms.
We describe UPSITE, a text mining tool for extracting evidence in support of a hypothesized interaction. Given a predicted PPI, UPSITE uses a binary relation detector to check whether a PPI is found in abstracts in PubMed. If it is not found, UPSITE retrieves documents relevant to each of the two proteins separately, extracts contextual information about biological events surrounding each protein, and calculates the semantic similarity of the two proteins to provide evidential support for the predicted PPI. In evaluations, relation extraction achieved an F-score of 0.88 on the HPRD50 corpus, and semantic similarity measured with angular distance was found to be statistically significant. With the development of PPI prediction algorithms, the burden of interpreting the validity and relevance of novel PPIs is on biologists. We suggest that presenting annotations of the two proteins in a PPI side-by-side, together with a score that quantifies their similarity, lessens this burden to some extent. --- paper_title: Knowledge based word-concept model estimation and refinement for biomedical text mining paper_content: Highlights: We describe a method to generate word-concept statistical models from a knowledge base. This method integrates knowledge base descriptions and corpora information. Word sense disambiguation with this method is better than state-of-the-art approaches. Ranking of citations with the model improves the performance of baseline approaches. Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus the performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method takes into account not only the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. --- paper_title: Mining Gene-centric Relationships from Literature to Support Drug Discovery paper_content: Identifying drug target candidates is an important task for early development throughout the drug discovery process. This process is supported by the development of new high-throughput technologies that enable better understanding of disease mechanism.
With the push for personalized medicine, more experimental data are produced to identify how genetics differ among individuals with respect to disease mechanism and drug response. It becomes critical to facilitate effective analysis of this large amount of biological data. In this paper, we describe our solution, which employs text mining as a technique for finding scientific information for target and biomarker discovery in the biomedical literature. Additionally, we discuss how the extracted knowledge can be an effective resource for the analysis of biological data such as next-generation sequencing data. --- paper_title: Quantifying the Impact and Extent of Undocumented Biomedical Synonymy paper_content: Synonymous relationships among biomedical terms are extensively annotated within specialized terminologies, implying that synonymy is important for practical computational applications within this field. It remains unclear, however, whether text mining actually benefits from documented synonymy and whether existing biomedical thesauri provide adequate coverage of these linguistic relationships. In this study, we examine the impact and extent of undocumented synonymy within a very large compendium of biomedical thesauri. First, we demonstrate that missing synonymy has a significant negative impact on named entity normalization, an important problem within the field of biomedical text mining. To estimate the amount of synonymy currently missing from thesauri, we develop a probabilistic model for the construction of synonym terminologies that is capable of handling a wide range of potential biases, and we evaluate its performance using the broader domain of near-synonymy among general English words. Our model predicts that over 90% of these relationships are currently undocumented, a result that we support experimentally through "crowd-sourcing." Finally, we apply our model to biomedical terminologies and predict that they are missing the vast majority (>90%) of the synonymous relationships they intend to document. Overall, our results expose the dramatic incompleteness of current biomedical thesauri and suggest the need for "next-generation," high-coverage lexical terminologies. --- paper_title: Concepts extraction for medical documents using ontology paper_content: In the biomedical domain, a large amount of unstructured information is available in the form of digital text documents. Text mining is the technique of finding interesting and useful information in such unstructured text, and it is an important task in the medical domain. It draws on information retrieval, information extraction, and natural language processing (NLP). Traditional approaches to information retrieval are based on keyword similarity. To overcome their limitations, semantic text mining aims to discover the hidden information in unstructured text and to relate the terms occurring in it. Biomedical text may appear in books, articles, literature abstracts, and so forth. Since most of this information is stored as text, this paper focuses on the role of ontology in semantic text mining using WordNet. Specifically, we present a model for extracting concepts from text documents using a linguistic ontology in the medical domain.
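To make the WordNet-based concept extraction idea above concrete, the sketch below looks up candidate terms in WordNet through NLTK and keeps the first synset as a rough "concept". This is a simplification of the paper's model (which the abstract does not specify in detail), and general-purpose WordNet covers biomedical vocabulary only partially, so the mapping shown is purely illustrative.

```python
# Illustrative sketch: mapping candidate terms from a medical document
# to WordNet synsets as coarse "concepts". Requires the NLTK WordNet
# data (nltk.download("wordnet")). Real biomedical systems would more
# likely use a domain ontology such as UMLS instead of WordNet.
from nltk.corpus import wordnet as wn

def extract_concepts(terms):
    """Return a term -> (synset name, definition) mapping for terms
    that WordNet knows about; unknown terms are simply skipped."""
    concepts = {}
    for term in terms:
        synsets = wn.synsets(term.replace(" ", "_"))
        if synsets:
            best = synsets[0]  # naive choice; no word sense disambiguation
            concepts[term] = (best.name(), best.definition())
    return concepts

candidate_terms = ["diabetes", "insulin", "hypertension", "qi deficiency"]
for term, (name, definition) in extract_concepts(candidate_terms).items():
    print(f"{term} -> {name}: {definition}")
```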
--- paper_title: Extraction of drug-disease relations from MEDLINE abstracts paper_content: Biological research literature, as in many other domains of human activity, is a rich source of knowledge. MEDLINE is a huge database of biomedical and life sciences information; it provides information in the form of abstracts and documents. However, extracting this information raises various problems related to the types of information involved, such as recognizing all terms related to the domain of the texts and the concepts associated with them, as well as identifying the types of relationships. In this context, we suggest in this paper an approach to extract disease-drug relations: in a first step, we employ Natural Language Processing techniques for preprocessing the abstracts. In a second step, we extract a set of features from the preprocessed abstracts. Finally, we extract disease-drug relations using a machine learning classifier. --- paper_title: Text Mining for Bioinformatics: State of the Art Review paper_content: The biomedical literature has been increasing at an exponential rate, and finding the useful and needed information in such a huge data set is a daunting task for users. Text mining is a powerful tool to address this problem. In this paper, we survey text mining in bioinformatics with an emphasis on its applications, and the main research directions of text mining in bioinformatics are accompanied by detailed examples. This paper addresses the need for a state-of-the-art review of the field, given the rapid development in both text mining and bioinformatics. Finally, open problems and future directions are identified. --- paper_title: Text mining describes the use of statistical and epidemiological methods in published medical research paper_content: Objective: To describe trends in the use of statistical and epidemiological methods in the medical literature over the past 2 decades. Study Design and Setting: We obtained all 1,028,786 articles from the PubMed Central Open-Access archive (retrieved May 9, 2015). We focused on 113,450 medical research articles. A Delphi panel identified 177 statistical/epidemiological methods pertinent to clinical researchers. We used a text-mining approach to determine if a specific statistical/epidemiological method was encountered in a given article. We report the proportion of articles using a specific method for the entire cross-sectional sample and also stratified into three blocks of time (1995–2005; 2006–2010; 2011–2015). Results: Numeric descriptive statistics were commonplace (96.4% of articles). Other frequently encountered method groups included statistical inferential concepts (52.9% of articles), epidemiological measures of association (53.5% of articles), methods for diagnostic/classification accuracy (40.1% of articles), hypothesis testing (28.8% of articles), ANOVA (23.2% of articles), and regression (22.6% of articles). We observed relative percent increases in the use of regression (103.0%), missing data methods (217.9%), survival analysis (147.6%), and correlated data analysis (192.2%). Conclusions: This study identified commonly encountered and emergent methods used to investigate medical research problems. Clinical researchers must be aware of the methodological landscape in their field, as statistical/epidemiological methods underpin research claims.
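The statistics-usage study above boils down to recording which articles mention which method terms and reporting the proportion of articles per term. A hedged sketch of that counting step is shown below, with made-up article texts and an abbreviated term list rather than the study's actual data.

```python
# Sketch of the counting step behind the statistics-usage study:
# for each article, record which method terms appear, then report the
# proportion of articles mentioning each term. Texts and the term list
# here are toy examples, not the study's 113,450-article corpus.
import re
from collections import Counter

method_terms = ["logistic regression", "survival analysis", "anova", "chi-square"]

articles = [
    "We fitted a logistic regression model and performed survival analysis.",
    "Group differences were assessed with ANOVA and chi-square tests.",
    "Descriptive statistics and logistic regression were reported.",
]

counts = Counter()
for text in articles:
    lowered = text.lower()
    for term in method_terms:
        # \b keeps short terms like 'anova' from matching inside longer words.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            counts[term] += 1

for term in method_terms:
    proportion = counts[term] / len(articles)
    print(f"{term}: {counts[term]}/{len(articles)} articles ({proportion:.1%})")
```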
--- paper_title: Mapping Gene/Protein Names in Free Text to Biomedical Databases paper_content: Observing that many biomedical databases have been developed and maintained independently, their records referring to the same entities may have different sets of synonyms. Integration of names pertaining to the same entity would provide a more comprehensive list of synonyms than each individual database. We have assembled BioThesaurus, a thesaurus of proteins and their corresponding genes compiled from multiple databases for all UniProtKB records. In this study, the coverage of BioThesaurus and the contribution of each individual database were assessed for several organisms. The result indicates that the coverage of BioThesaurus is over 80% for most of the organisms, with an average of 85.4%. When restricted to individual databases or resources, the percentages dropped, ranging from 3 to 30%. The study demonstrated that each individual database or resource has some synonyms not covered by other databases or resources, and a list of names compiled from multiple databases would be desirable for systems requiring high recall. --- paper_title: Intelligent Agent System for Bio-medical Literature Mining paper_content: Advances in World Wide Web technology and in bioinformatics and systems biology research have highlighted the increasing need for automatic information extraction (IE) systems to extract information from scientific literature databases. Extraction of scientific information from biomedical articles is a central task for supporting biomarker discovery efforts. In this paper, we propose an algorithm that is capable of extracting scientific information on biomarkers such as genes, genomes, diseases, alleles, cells, etc. from text by finding the focal topic of the document and extracting the most relevant properties of that topic. The topic and its properties are represented as semantic networks and then stored in a database. This IE algorithm extracts the most important biological terms and relations using statistical and pattern-matching NLP techniques. The IE tool is expected to help researchers obtain the latest information on biomarker discovery and related biomedical research advances. We show preliminary results demonstrating that the method has strong potential for biomarker discovery. --- paper_title: Identifying gene-disease associations using word proximity and similarity of Gene Ontology terms paper_content: Associating genes with diseases is an active area of research because it is useful for helping human health, with applications to clinical diagnosis and therapy. This paper proposes two methods to guide the associations between genes and diseases: (1) making use of the proximity relationship between genes and diseases and (2) utilizing GO terms shared by genes and diseases for similarity comparison. The experiments show that associations utilizing GO terms perform better than using word proximity. The results reveal that the GO terms act as a good gene-disease association feature.
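The two association signals described in the gene-disease abstract above, word proximity and shared GO terms, can be approximated very simply: score a gene-disease pair by token distance within a sentence and by set overlap of their GO annotations. The scoring functions and GO identifiers below are assumptions chosen for illustration, not the paper's exact formulas.

```python
# Hedged sketch of the two signals from the gene-disease association
# paper: (1) word proximity within a sentence and (2) similarity of
# shared GO terms. The GO annotations and scoring choices are
# illustrative assumptions, not the paper's exact method.

def proximity_score(sentence, gene, disease):
    """Inverse token distance between the gene and disease mentions."""
    tokens = [t.strip(".,;").lower() for t in sentence.split()]
    try:
        distance = abs(tokens.index(gene.lower()) - tokens.index(disease.lower()))
    except ValueError:
        return 0.0  # one of the entities is not mentioned in the sentence
    return 1.0 / distance if distance > 0 else 1.0

def go_similarity(gene_terms, disease_terms):
    """Jaccard overlap of GO term sets annotated to the gene and disease."""
    union = gene_terms | disease_terms
    return len(gene_terms & disease_terms) / len(union) if union else 0.0

sentence = "BRCA1 mutations are strongly associated with hereditary breast cancer."
gene_go = {"GO:0006281", "GO:0008630", "GO:0006974"}     # hypothetical annotations
disease_go = {"GO:0006281", "GO:0008630", "GO:0007049"}  # hypothetical annotations

print("proximity:", proximity_score(sentence, "BRCA1", "cancer"))
print("GO similarity:", go_similarity(gene_go, disease_go))
```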
--- paper_title: Automatic classification of biomedical texts: Experiments with a hearing loss corpus paper_content: Obtaining reliable information extraction applications has been one of the main goals of the biomedical text mining community. These applications are valuable tools for biologists in their increasingly difficult task of assimilating the knowledge contained in the biomedical literature. This paper presents a novel algorithm for automatic classification of biomedical texts, specifically texts about symptoms. We apply our approach to biomedical texts related to hearing loss, obtaining promising results. --- paper_title: Biomedical text mining for concept identification from traditional medicine literature paper_content: In recent years, a vast amount of biomedical literature has been produced and published. Recent developments in biomedical text mining show potential for supporting scientists in understanding new information in the existing biomedical literature, because the volume of electronically available biomedical literature is increasing massively. Automated literature mining offers one opportunity to discover different entities in the literature, and web technologies allow these entities to be stored and published in a form that researchers can reuse. The approach presented here applies text mining methodologies to automatically extract different entities from biomedical text. For this purpose, biomedical articles on Traditional Chinese Medicine are extracted from BioMed Central and PubMed Central and used as a corpus. Text mining techniques such as tokenization, sentence splitting, stemming, lemmatization, parsing, and named entity recognition are used to preprocess the corpus. Candidate terms are identified by applying the C-value algorithm. These candidate terms and existing seed/ontological terms are tagged in the corpus. By comparing the lexical and contextual profiles of candidate terms with those of existing seed/ontological terms, we identify new concepts. The identified concepts are then evaluated. --- paper_title: Mining biomedical data from hypertext documents paper_content: Data mining is a process of discovering useful information in a database and analysing the extracted information. Text mining uses many techniques of data mining and primarily deals with unstructured data, and web mining is an extension of text mining since it also deals with unstructured data. Data mining typically finds data in "static" databases containing "structured" data, whereas web mining deals with data that are "dynamic" and "unstructured". In this paper, our goal is to mine biomedical data from hypertext documents (e.g., mining data from web content) using text mining techniques with the help of a biomedical ontology. Web data repositories consist of hypertext documents, whose text is unstructured and contains Hypertext Markup Language (HTML) tags, scripting languages, images, audio, video, URLs, etc. We collect a number of documents using the Google crawler, preprocess the hypertext documents, and extract the text data. Next, we identify whether a word is a biomedical entity by using a biomedical database, the Unified Medical Language System (UMLS) Metathesaurus. The mapping of biomedical entities from the Metathesaurus is based on keyword queries. We then apply the result to re-rank the web documents and find the most relevant documents. We conclude that the more occurrences of biomedical entities a page contains, the more relevant it is, and thus we can re-rank the documents to find the most relevant ones using this text mining technique. --- paper_title: The Impact of Directionality in Predications on Text Mining paper_content: The number of publications in biomedicine is increasing enormously each year.
To help researchers digest the information in these documents, text mining tools are being developed that present co-occurrence relations between concepts. Statistical measures are used to mine interesting subsets of relations. We demonstrate how the directionality of these relations affects interestingness. Support and confidence, simple data mining statistics, are used as proxies for interestingness metrics. We first built a test bed of 126,404 directional relations extracted from biomedical abstracts, which we represent as graphs containing a central starting concept and 2 rings of associated relations. We manipulated directionality in four ways and randomly selected 100 starting concepts as a test sample for each graph type. Finally, we calculated the number of relations and their support and confidence. Variation in directionality significantly affected the number of relations as well as the support and confidence of the four graph types. --- paper_title: Disease-Disease Relationships for Rheumatic Diseases: Web-Based Biomedical Textmining and Knowledge Discovery to Assist Medical Decision Making paper_content: The MEDLINE database (Medical Literature Analysis and Retrieval System Online) contains an enormously increasing volume of biomedical articles. There is an urgent need for techniques which enable the discovery, the extraction, the integration and the use of hidden knowledge in those articles. Text mining aims at developing technologies to help cope with the interpretation of these large volumes of publications. Co-occurrence analysis is a technique applied in text mining, and the associated methodologies and statistical models are used to evaluate the significance of the relationship between entities such as disease names, drug names, and keywords in titles, abstracts or even entire publications. In this paper we present a method and an evaluation of knowledge discovery of disease-disease relationships for rheumatic diseases. This has huge medical relevance, since rheumatic diseases affect hundreds of millions of people worldwide and lead to substantial loss of functioning and mobility. In this study, we interviewed medical experts and searched the ACR (American College of Rheumatology) web site in order to select the most commonly observed rheumatic diseases for exploring disease-disease relationships. We used a web based text-mining tool to find disease names and their co-occurrence frequencies in MEDLINE articles for each disease. After finding disease names and frequencies, we normalized the names by interviewing medical experts and by utilizing biomedical resources. Frequencies are normally a good indicator of the relevance of a concept, but they tend to overestimate the importance of common concepts. We therefore also used the Pointwise Mutual Information (PMI) measure to discover the strength of a relationship. PMI provides an indication of how much more often the query and concept co-occur than would be expected by chance. After finding PMI values for each disease, we ranked these values and frequencies together. The results reveal hidden knowledge in articles regarding rheumatic diseases indexed by MEDLINE, thereby exposing relationships that can provide important additional information for medical experts and researchers for medical decision-making.
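Since the rheumatic-disease study above ranks disease pairs by Pointwise Mutual Information over MEDLINE co-occurrence counts, a minimal version of that calculation is sketched below using document-level counts; the counts are toy values and the exact counting scheme used in the paper may differ.

```python
# Minimal PMI sketch for disease-disease co-occurrence:
# PMI(a, b) = log2( p(a, b) / (p(a) * p(b)) ), with probabilities
# estimated from document-level occurrence counts. Counts are toy values.
import math

def pmi(n_docs, count_a, count_b, count_ab):
    """Pointwise mutual information of two terms from document counts."""
    if count_a == 0 or count_b == 0 or count_ab == 0:
        return float("-inf")
    p_a = count_a / n_docs
    p_b = count_b / n_docs
    p_ab = count_ab / n_docs
    return math.log2(p_ab / (p_a * p_b))

n_docs = 100_000                      # abstracts examined (toy value)
count = {"rheumatoid arthritis": 4_000, "osteoporosis": 2_500}
co_occurrence = 600                   # abstracts mentioning both (toy value)

score = pmi(n_docs, count["rheumatoid arthritis"], count["osteoporosis"], co_occurrence)
print(f"PMI(rheumatoid arthritis, osteoporosis) = {score:.2f}")
# A positive PMI means the two diseases co-occur more often than chance.
```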
--- paper_title: A Framework for Semisupervised Feature Generation and Its Applications in Biomedical Literature Mining paper_content: Feature representation is essential to machine learning and text mining. In this paper, we present a feature coupling generalization (FCG) framework for generating new features from unlabeled data. It selects two special types of features, i.e., example-distinguishing features (EDFs) and class-distinguishing features (CDFs), from the original feature set, and then generalizes EDFs into higher-level features based on their coupling degrees with CDFs in unlabeled data. The advantage is that EDFs with extreme sparsity in labeled data can be enriched by their co-occurrences with CDFs in unlabeled data, so that the performance of these low-frequency features can be greatly boosted and new information from unlabeled data can be incorporated. We apply this approach to three tasks in biomedical literature mining: gene named entity recognition (NER), protein-protein interaction extraction (PPIE), and text classification (TC) for gene ontology (GO) annotation. New features are generated from over 20 GB of unlabeled PubMed abstracts. The experimental results on BioCreative 2, the AIMED corpus, and the TREC 2005 Genomics Track show that 1) FCG can make good use of the sparse features ignored by supervised learning; 2) it improves the performance of supervised baselines by 7.8 percent, 5.0 percent, and 5.8 percent, respectively, in the three tasks; and 3) our methods achieve 89.1 and 64.5 F-score, and 60.1 normalized utility, on the three benchmark data sets. --- paper_title: A Verb-Centric Approach for Relationship Extraction in Biomedical Text paper_content: Advances in biomedical technology and research have resulted in a large number of research findings, which are primarily published in unstructured text such as journal articles. Text mining techniques have thus been employed to extract knowledge from such data. In this article we focus on the task of identifying and extracting relations between bio-entities such as green tea and breast cancer. Unlike previous work that employs heuristics such as co-occurrence patterns and handcrafted syntactic rules, we propose a verb-centric algorithm. This algorithm identifies and extracts the main verb(s) in a sentence and therefore does not require predefined rules or patterns. Using the main verb(s), it then extracts the two entities involved in a relationship. The biomedical entities are identified using a dependency parse tree by applying syntactic and linguistic features such as prepositional phrases and semantic role analysis. The proposed verb-centric approach can effectively handle complex sentence structures such as clauses and conjunctive sentences. We evaluate the algorithm on several data sets and achieve an average F-score of 0.905, which is significantly higher than that of previous work.
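A rough analogue of the verb-centric idea above can be put together with an off-the-shelf dependency parser: take the main verb of each sentence and read its subject and object subtrees as the two related entities. The sketch below uses spaCy; the model name and the simple subject/object heuristic are assumptions, and the published algorithm handles clauses, conjunctions and semantic roles far more carefully.

```python
# Hedged sketch of verb-centric relation extraction: find the ROOT verb
# of each sentence and treat its nominal subject and object subtrees as
# the two related entities. Requires spaCy and an English model, e.g.
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def verb_centric_relations(text):
    relations = []
    for sent in nlp(text).sents:
        root = sent.root                       # main verb of the sentence
        if root.pos_ != "VERB":
            continue
        subjects = [t for t in root.children if t.dep_ in ("nsubj", "nsubjpass")]
        objects = [t for t in root.children if t.dep_ in ("dobj", "attr", "pobj")]
        for subj in subjects:
            for obj in objects:
                # Use the full noun-phrase subtrees as entity strings.
                left = " ".join(t.text for t in subj.subtree)
                right = " ".join(t.text for t in obj.subtree)
                relations.append((left, root.lemma_, right))
    return relations

text = "Green tea reduces the risk of breast cancer. Aspirin inhibits COX-2."
for subject, verb, obj in verb_centric_relations(text):
    print(subject, "--", verb, "-->", obj)
```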
--- paper_title: A concept-driven biomedical knowledge extraction and visualization framework for conceptualization of text corpora paper_content: A number of techniques such as information extraction, document classification, document clustering and information visualization have been developed to ease the extraction and understanding of information embedded within text documents. However, knowledge that is embedded in natural language texts is difficult to extract using simple pattern matching techniques, and most of these methods do not help users directly understand key concepts and their semantic relationships in document corpora, which are critical for capturing their conceptual structures. The problem arises from the fact that most of the information is embedded within unstructured or semi-structured texts that computers cannot interpret very easily. In this paper, we present a novel Biomedical Knowledge Extraction and Visualization framework, BioKEVis, to identify key information components from biomedical text documents. The information components are centered on key concepts. BioKEVis applies linguistic analysis and Latent Semantic Analysis (LSA) to identify key concepts. The information component extraction principle is based on natural language processing techniques and semantic-based analysis. The system is also integrated with a biomedical named entity recognizer, ABNER, to tag genes, proteins and other entity names in the text. We have also presented a method for collating information extracted from multiple sources to generate a semantic network. The network provides distinct user perspectives, allows navigation over documents with similar information components, and is also used to provide a comprehensive view of the collection. The system stores the extracted information components in a structured repository which is integrated with a query-processing module to handle biomedical queries over text documents. We have also proposed a document ranking mechanism to present retrieved documents in order of their relevance to the user query. --- paper_title: Reconstructing transcriptional Regulatory Networks using data integration and Text Mining paper_content: Transcriptional Regulatory Networks (TRNs) are a powerful tool for representing several interactions that occur within a cell. Recent studies have provided information to help researchers in the tasks of building and understanding these networks. One of the major sources of information to build TRNs is the biomedical literature. However, due to the rapidly increasing number of scientific papers, it is quite difficult to analyse the large number of papers that have been published about this subject. This fact has heightened the importance of Biomedical Text Mining approaches in this task. Also, owing to the lack of adequate standards, as the number of databases increases, several inconsistencies concerning gene and protein names and identifiers are common. In this work, we developed an integrated approach for the reconstruction of TRNs that retrieves the relevant information from important biological databases and inserts it into a unique repository named KREN. We then applied text mining techniques over this integrated repository to build TRNs. To do so, it was necessary to create a dictionary of names and synonyms associated with these entities and also to develop an approach that retrieves all the abstracts of the related scientific papers stored on PubMed, in order to create a corpus of data about genes. Furthermore, these tasks were integrated into @Note, a software system that provides methods from the Biomedical Text Mining field, including algorithms for Named Entity Recognition (NER), extraction of relevant terms from publication abstracts, and extraction of relationships between biological entities (genes, proteins and transcription factors). Finally, we extended this tool to allow the reconstruction of Transcriptional Regulatory Networks from the scientific literature. --- paper_title: A Survey on Chemical Text Mining Techniques for Identifying Relationship Network between Drug Disease Genes and Molecules paper_content: Text mining plays an essential role in the field of Chemoinformatics in revealing unknown information. An enormous amount of biomedical information is available on the internet in the form of published articles, files, patents, etc.
As this rich source of data grows massively, it contributes widely to scientific research. Text mining is one of the most widely used techniques in the field of Natural Language Processing. Text pre-processing and data analysis techniques applied to the biomedical literature allow us to identify and investigate new theories. Finding associations between chemical entities such as drugs, diseases, genes and molecules is a new area of focus for researchers. This paper presents a study of several approaches and techniques proposed for chemical text mining to identify relationship networks for drug-disease and disease-gene associations. In this paper, we focus on a comparative analysis of various text mining techniques used for chemical literature, together with their evaluation results and observations. --- paper_title: High-Performance Biomedical Association Mining with MapReduce paper_content: MapReduce has been applied to data-intensive applications in different domains because of its simplicity, scalability and fault-tolerance. However, its uses in biomedical association mining are still very limited. In this paper, we investigate using MapReduce to efficiently mine the associations between biomedical terms extracted from a set of biomedical articles. First, biomedical terms were obtained by matching text to the Unified Medical Language System (UMLS) Metathesaurus, a biomedical vocabulary and standard database. Then we developed a MapReduce algorithm that could be used to calculate a category of interestingness measures defined on the basis of a 2x2 contingency table. This algorithm consists of two MapReduce jobs and takes a stripes approach to reduce the number of intermediate results. Experiments were conducted using Amazon Elastic MapReduce (EMR) with an input of 3610 articles retrieved from two biomedical journals. Test results indicate that our algorithm has linear scalability. --- paper_title: Collaborative semi-automatic annotation of the biomedical literature paper_content: The increasing availability of whole human genomes due to the improvements in high-throughput sequencing technologies makes the interpretation and annotation of data at a whole-genome scale more and more feasible. However, these tasks critically depend on the availability of knowledge already stored in databases or published in the scientific literature. This scenario requires new, reliable and integrative information extraction systems to be available to the biomedical community. In this work we present a hybrid approach for mining the wealth of knowledge stored in the scientific literature. This approach is based on the use of efficient text-mining tools in combination with highly accurate collaborative human curation. BioNotate-2.0 is an open source, modular, friendly tool which implements a collaborative semi-automatic annotation platform in which human and automated annotations are efficiently combined. BioNotate-2.0 also builds upon the Semantic Web, facilitating the dissemination of annotated facts into other resources and pipelines. BioNotate-2.0 allows any interested user to run their own annotation efforts or to contribute to an existing annotation project. To access and contribute to the BioNotate-2.0 annotation platform, please check: http://genome2.ugr.es/bionotate2/. BioNotate source code is available at: http://sourceforge.net/projects/bionotate/.
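The MapReduce association-mining abstract above computes interestingness measures from a 2x2 contingency table of term co-occurrence using a "stripes" aggregation. The single-machine sketch below mimics that aggregation (a per-term counter of co-occurring terms) and then derives support, confidence and lift from the resulting counts; the actual two-job Hadoop/EMR structure is not reproduced, and the documents are toy data.

```python
# Single-machine sketch of the "stripes" co-occurrence aggregation and
# the 2x2 contingency-table measures it feeds (support, confidence, lift).
# The real system runs this as two MapReduce jobs on Hadoop/EMR.
from collections import Counter, defaultdict
from itertools import combinations

documents = [                       # toy documents as sets of UMLS-like terms
    {"aspirin", "myocardial infarction", "cox-2"},
    {"aspirin", "stroke"},
    {"myocardial infarction", "stroke", "hypertension"},
    {"aspirin", "myocardial infarction"},
]

term_counts = Counter()
stripes = defaultdict(Counter)      # term -> Counter of co-occurring terms

for terms in documents:             # "map" phase: emit one stripe per term
    for term in terms:
        term_counts[term] += 1
    for a, b in combinations(sorted(terms), 2):
        stripes[a][b] += 1          # "reduce" phase would merge these Counters
        stripes[b][a] += 1

def measures(a, b, n_docs):
    """Support, confidence and lift for the pair (a, b) from a 2x2 table."""
    both = stripes[a][b]
    support = both / n_docs
    confidence = both / term_counts[a] if term_counts[a] else 0.0
    expected = term_counts[a] * term_counts[b] / n_docs
    lift = both / expected if expected else 0.0
    return support, confidence, lift

support, confidence, lift = measures("aspirin", "myocardial infarction", len(documents))
print(f"support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")
```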
--- paper_title: Event extraction with complex event classification using rich features. paper_content: Biomedical Natural Language Processing (BioNLP) attempts to capture biomedical phenomena from texts by extracting relations between biomedical entities (i.e. proteins and genes). Traditionally, only binary relations have been extracted from large numbers of published papers. Recently, more complex relations (biomolecular events) have also been extracted. Such events may include several entities or other relations. To evaluate the performance of the text mining systems, several shared task challenges have been arranged for the BioNLP community. With a common and consistent task setting, the BioNLP'09 shared task evaluated complex biomolecular events such as binding and regulation. Finding these events automatically is important in order to improve biomedical event extraction systems. In the present paper, we propose an automatic event extraction system, which contains a model for complex events, by solving a classification problem with rich features. The main contributions of the present paper are: (1) the proposal of an effective bio-event detection method using machine learning, (2) provision of a high-performance event extraction system, and (3) the execution of a quantitative error analysis. The proposed complex (binding and regulation) event detector outperforms the best system from the BioNLP'09 shared task challenge.
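The "classification with rich features" formulation above can be illustrated with a toy trigger classifier: each candidate token is described by lexical and contextual features and mapped to an event type (or to "none"). The features, training examples and classifier choice below are assumptions chosen for brevity; the actual system uses much richer syntactic features and separate models for complex (binding and regulation) events.

```python
# Toy sketch of event-trigger classification with feature dictionaries:
# candidate tokens are mapped to an event type or "none". The features,
# training examples and classifier choice are illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def trigger_features(tokens, i):
    tok = tokens[i]
    return {
        "word": tok.lower(),
        "suffix3": tok[-3:].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        "is_nominalization": tok.lower().endswith(("tion", "sion", "ment")),
    }

# Tiny hand-labelled training set: (sentence tokens, trigger index, event type).
training = [
    ("IL-2 binds the receptor".split(), 1, "Binding"),
    ("p53 regulates apoptosis genes".split(), 1, "Regulation"),
    ("The protein binds DNA strongly".split(), 2, "Binding"),
    ("Expression of TNF increased sharply".split(), 0, "Gene_expression"),
    ("The cells were washed twice".split(), 2, "none"),
]

X_dicts = [trigger_features(toks, i) for toks, i, _ in training]
y = [label for _, _, label in training]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(X_dicts)
model = LogisticRegression(max_iter=1000).fit(X, y)

test_tokens = "STAT3 binds the promoter".split()
x_test = vectorizer.transform([trigger_features(test_tokens, 1)])
print("predicted event type for 'binds':", model.predict(x_test)[0])
```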
--- paper_title: MedicoPort: A medical search engine for all paper_content: We present a new next-generation domain search engine called MedicoPort. MedicoPort is a medical search engine designed for users with no medical expertise. It is enhanced with domain knowledge obtained from the Unified Medical Language System (UMLS) to increase the effectiveness of searches. The power of the system is based on its ability to understand the semantics of web pages and user queries. MedicoPort transforms a keyword search into a conceptual search. Through our system we present a topical web crawling technique and indexing techniques empowered by semantic information. MedicoPort aims to generate maximum output with semantic value using minimum input from the user. Since MedicoPort is designed to help people seeking information about health on the web, our target users are not medical specialists who can effectively use the special jargon of medicine and access medical databases. Medical experts have the advantage of shrinking the answer set by expressing several terms using medical terminology. MedicoPort provides the same advantage to its users through the automated use of medical domain knowledge in the background. The results of our experiments indicate that expanding queries with domain knowledge, such as synonyms and partially or contextually relevant terms from UMLS, dramatically increases the relevance of the answer set produced by MedicoPort and the number of retrieved web pages that are relevant to the user request. --- paper_title: Event extraction from heterogeneous news sources paper_content: With the proliferation of news articles from thousands of different sources now available on the Web, summarization of such information is becoming increasingly important. Our research focuses on merging descriptions of news events from multiple sources, to provide a concise description that combines the information from each source. Specifically, we describe and evaluate methods for grouping sentences in news articles that refer to the same event. The key idea is to cluster the sentences, using two novel distance metrics. The first distance metric exploits regularities in the sequential structure of events within a document. The second metric uses a TFIDF-like weighting scheme, enhanced to capture word frequencies within events even though the events themselves are not known a priori. Typical news articles contain sentences that do not describe specific events. We use machine learning methods to differentiate between sentences that describe one or more events, and those that do not. We then remove non-event sentences before initiating the clustering process. We demonstrate that this approach achieves significant improvements in overall clustering performance. --- paper_title: Mining Information Extraction Models for HmtDB annotation paper_content: Advances in genome sequencing techniques have given rise to an overwhelming increase in the literature on discovered genes, proteins and their roles in biological processes. However, the biomedical literature remains a greatly unexploited source of biological information. Information extraction (IE) techniques are necessary to map this information into structured representations that allow facts relating domain-relevant entities to be automatically recognized. In this paper, we present a framework that supports biologists in the task of automatic extraction of information from texts. The framework integrates a data mining module that discovers extraction rules from a set of manually labelled texts. Extraction models are subsequently applied in an automatic mode on unseen texts.
We report an application to a real-world dataset composed of publications selected to support biologists in the annotation of the HmtDB database. --- paper_title: Towards Extracting Supporting Information About Predicted Protein-Protein Interactions paper_content: One of the goals of relation extraction is to identify protein-protein interactions (PPIs) in biomedical literature. Current systems capture binary relations as well as the direction and type of an interaction. Besides assisting in the curation of PPIs into databases, there has been little real-world application of these algorithms. We describe UPSITE, a text mining tool for extracting evidence in support of a hypothesized interaction. Given a predicted PPI, UPSITE uses a binary relation detector to check whether a PPI is found in abstracts in PubMed. If it is not found, UPSITE retrieves documents relevant to each of the two proteins separately, extracts contextual information about biological events surrounding each protein, and calculates the semantic similarity of the two proteins to provide evidential support for the predicted PPI. In evaluations, relation extraction achieved an F-score of 0.88 on the HPRD50 corpus, and semantic similarity measured with angular distance was found to be statistically significant. With the development of PPI prediction algorithms, the burden of interpreting the validity and relevance of novel PPIs is on biologists. We suggest that presenting annotations of the two proteins in a PPI side-by-side, together with a score that quantifies their similarity, lessens this burden to some extent. --- paper_title: Intelligent information mining from veterinary clinical records and open source repository paper_content: This paper reports an implementation of an intelligent mining approach from veterinary clinical records and an external source of information. The system retrieves information from a local veterinary clinical database and then complements this information with related records from an external source, OAIster. It utilizes text mining, web service technologies and domain knowledge in order to extract keywords, to retrieve related records from an external source, and to filter the extracted keyword list. This study addresses a practical challenge encountered at the School of Veterinary and Biomedical Sciences at Murdoch University. The results indicate that the system can be used to increase the limited knowledge within a local source by complementing it with related records from an external source. Moreover, the system also reduces information overload by only retrieving a set of related information from an external source. Finally, domain knowledge can be used to filter the extracted keywords, in this case, selected medical keywords from the extracted keyword list. --- paper_title: NEW FRONTIERS IN BIOMEDICAL TEXT MINING paper_content: To paraphrase Gildea and Jurafsky [7], the past few years have been exhilarating ones for biomedical language processing. In less than a decade, we have seen an amazing increase in activity in text mining in the genomic domain [20]. The first textbook on biomedical text mining with a strong genomics focus appeared in 2005 [3]. The following year saw the establishment of a national center for text mining under the leadership of committed members of the BioNLP world [2], and two shared tasks [10,9] have led to the creation of new datasets and a very large community. These years have included considerable progress in some areas.
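UPSITE's use of angular distance, mentioned above, to quantify how similar the textual contexts of two proteins are can be written down directly. The sketch below assumes each protein is already represented by a numeric context vector (for example, TF-IDF weights over the biological events mentioned around it); the vectorization step itself is not shown, and the example vectors are invented.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length context vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def angular_distance(u, v):
    """Normalized angular distance in [0, 1]; 0 means identical direction."""
    c = max(-1.0, min(1.0, cosine_similarity(u, v)))  # clamp against floating-point drift
    return math.acos(c) / math.pi

protein_a = [0.4, 0.0, 0.7, 0.1]   # hypothetical context vector for protein A
protein_b = [0.3, 0.1, 0.6, 0.0]   # hypothetical context vector for protein B
print(round(angular_distance(protein_a, protein_b), 3))
```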
The TREC Genomics track has brought an unprecedented amount of attention to the domain of biomedical information retrieval [8] and related tasks such as document classification [5] and question-answering, and the BioCreative shared task did the same for genomic named entity recognition, entity normalization, and information extraction [10]. Recent meetings have pushed the focus of biomedical NLP into new areas. A session at the Pacific Symposium on Biocomputing (PSB) 2006 [6] focussed on systems that linked multiple biological data sources, and the BioNLP’06 meeting [20] focussed on deeper semantic relations. However, there remain many application areas and approaches in which there is still an enormous amount of work to be done. In an attempt to facilitate movement of the field in those directions, the Call for Papers for this year’s PSB natural language processing session was written to address some of the potential “New Frontiers” in biomedical text mining. We solicited work in these specific areas: --- paper_title: Knowledge based word-concept model estimation and refinement for biomedical text mining paper_content: We describe a method to generate word-concept statistical models from a knowledge base. This method integrates knowledge base descriptions and corpora information. Word sense disambiguation with this method is better than state-of-the-art approaches. Ranking of citations with the model improves the performance of baseline approaches. Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, so the performance of KB-based methods is usually lower than that of supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. --- paper_title: Mining Gene-centric Relationships from Literature to Support Drug Discovery paper_content: Identifying drug target candidates is an important task for early development throughout the drug discovery process. This process is supported by the development of new high-throughput technologies that enable better understanding of disease mechanisms.
With the push for personalized medicine, more experimental data are produced to identify how the genetics differ among individuals with respect to disease mechanism and drug response. It becomes critical to facilitate effective analysis of the large amount of biological data. In this paper, we describe our solution in employing text mining as a technique for finding scientific information for target and biomarker discovery from the biomedical literature. Additionally, we discuss how the extracted knowledge can be an effective resource for the analysis of biological data such as next-generation sequencing data. --- paper_title: Text-mining and information-retrieval services for molecular biology paper_content: Text-mining in molecular biology - defined as the automatic extraction of information about genes, proteins and their functional relationships from text documents - has emerged as a hybrid discipline on the edges of the fields of information science, bioinformatics and computational linguistics. A range of text-mining applications have been developed recently that will improve access to knowledge for biologists and database annotators. --- paper_title: Quantifying the Impact and Extent of Undocumented Biomedical Synonymy paper_content: Synonymous relationships among biomedical terms are extensively annotated within specialized terminologies, implying that synonymy is important for practical computational applications within this field. It remains unclear, however, whether text mining actually benefits from documented synonymy and whether existing biomedical thesauri provide adequate coverage of these linguistic relationships. In this study, we examine the impact and extent of undocumented synonymy within a very large compendium of biomedical thesauri. First, we demonstrate that missing synonymy has a significant negative impact on named entity normalization, an important problem within the field of biomedical text mining. To estimate the amount synonymy currently missing from thesauri, we develop a probabilistic model for the construction of synonym terminologies that is capable of handling a wide range of potential biases, and we evaluate its performance using the broader domain of near-synonymy among general English words. Our model predicts that over 90% of these relationships are currently undocumented, a result that we support experimentally through "crowd-sourcing." Finally, we apply our model to biomedical terminologies and predict that they are missing the vast majority (>90%) of the synonymous relationships they intend to document. Overall, our results expose the dramatic incompleteness of current biomedical thesauri and suggest the need for "next-generation," high-coverage lexical terminologies. --- paper_title: A concept-driven biomedical knowledge extraction and visualization framework for conceptualization of text corpora paper_content: A number of techniques such as information extraction, document classification, document clustering and information visualization have been developed to ease extraction and understanding of information embedded within text documents. However, knowledge that is embedded in natural language texts is difficult to extract using simple pattern matching techniques and most of these methods do not help users directly understand key concepts and their semantic relationships in document corpora, which are critical for capturing their conceptual structures. 
The problem arises because most of the information is embedded within unstructured or semi-structured texts that computers cannot interpret very easily. In this paper, we have presented a novel Biomedical Knowledge Extraction and Visualization framework, BioKEVis, to identify key information components from biomedical text documents. The information components are centered on key concepts. BioKEVis applies linguistic analysis and Latent Semantic Analysis (LSA) to identify key concepts. The information component extraction principle is based on natural language processing techniques and semantic-based analysis. The system is also integrated with a biomedical named entity recognizer, ABNER, to tag genes, proteins and other entity names in the text. We have also presented a method for collating information extracted from multiple sources to generate a semantic network. The network provides distinct user perspectives and allows navigation over documents with similar information components and is also used to provide a comprehensive view of the collection. The system stores the extracted information components in a structured repository which is integrated with a query-processing module to handle biomedical queries over text documents. We have also proposed a document ranking mechanism to present retrieved documents in order of their relevance to the user query. --- paper_title: A Survey of event extraction methods from text for decision support systems paper_content: Event extraction, a specialized stream of information extraction rooted in the 1980s, has greatly gained in popularity due to the advent of big data and the developments in the related fields of text mining and natural language processing. However, to date, an overview of this particular field remains elusive. Therefore, we give a summary of event extraction techniques for textual data, distinguishing between data-driven, knowledge-driven, and hybrid methods, and present a qualitative evaluation of these. Moreover, we discuss common decision support applications of event extraction from text corpora. Last, we elaborate on the evaluation of event extraction systems and identify current research issues. We identify data-driven, knowledge-driven, and hybrid event extraction approaches. A wide variety of decision support applications can benefit from event extraction. Pressing research issues to be addressed are scalability and domain dependencies. Evaluation with annotated data from standard benchmarks or crowdsourcing is advised. --- paper_title: Complex event extraction at PubMed scale paper_content: Motivation: There has recently been a notable shift in biomedical information extraction (IE) from relation models toward the more expressive event model, facilitated by the maturation of basic tools for biomedical text analysis and the availability of manually annotated resources. The event model allows detailed representation of complex natural language statements and can support a number of advanced text mining applications ranging from semantic search to pathway extraction. A recent collaborative evaluation demonstrated the potential of event extraction systems, yet there have so far been no studies of the generalization ability of the systems nor the feasibility of large-scale extraction. Results: This study considers event-based IE at PubMed scale.
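BioKEVis's use of Latent Semantic Analysis to surface key concepts can be approximated with standard library calls. The following sketch is one reasonable realization assumed for illustration, not the authors' code: it builds a TF-IDF term-document matrix over a handful of made-up sentences, applies truncated SVD, and reports the terms that load most heavily on each latent concept.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "p53 mutations are frequent in human tumors",
    "the tumor suppressor p53 regulates apoptosis",
    "influenza vaccination reduces infection rates",
    "antiviral drugs shorten influenza symptoms",
]

# Term-document matrix weighted by TF-IDF, then rank-2 LSA.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)
lsa = TruncatedSVD(n_components=2, random_state=0)
lsa.fit(X)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(lsa.components_):
    top = component.argsort()[::-1][:3]          # highest-loading terms per concept
    print(f"concept {i}:", [terms[j] for j in top])
```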
We introduce a system combining publicly available, state-of-the-art methods for domain parsing, named entity recognition and event extraction, and test the system on a representative 1% sample of all PubMed citations. We present the first evaluation of the generalization performance of event extraction systems to this scale and show that despite its computational complexity, event extraction from the entire PubMed is feasible. We further illustrate the value of the extraction approach through a number of analyses of the extracted information. Availability: The event detection system and extracted data are open source licensed and available at http://bionlp.utu.fi/. Contact: [email protected] --- paper_title: Extraction of drug-disease relations from MEDLINE abstracts paper_content: Biological research literature, as in many other domains of human activity, is a rich source of knowledge. MEDLINE is a huge database of biomedical information and life sciences; it provides information in the form of abstracts and documents. However, extracting this information raises several problems, such as recognizing all terms related to the domain of the texts and the concepts associated with them, as well as identifying the types of relationships. In this context, we suggest in this paper an approach to extract disease-drug relations: in a first step, we employ Natural Language Processing techniques for the abstracts' preprocessing. In a second step we extract a set of features from the preprocessed abstracts. Finally, we extract disease-drug relations using a machine learning classifier. --- paper_title: A Survey on Chemical Text Mining Techniques for Identifying Relationship Network between Drug Disease Genes and Molecules paper_content: Text mining plays an essential role in the field of Chemoinformatics in revealing unknown information. An enormous amount of biomedical information is available on the internet in the form of published articles, files, patents, etc. As this rich source of data grows, it contributes widely to scientific research. Text mining is among the most widely used techniques in the field of Natural Language Processing. Text pre-processing and data analysis techniques applied to biomedical literature allow us to identify and investigate new theories. Finding associations between chemical entities such as drugs, diseases, genes and molecules is a new area of focus for researchers. This paper presents a study of several approaches and techniques proposed for chemical text mining to identify relationship networks for drug-disease and disease-gene associations. We focus on a comparative analysis of various text mining techniques used for chemical literature, together with their evaluation results and observations. --- paper_title: Collaborative semi-automatic annotation of the biomedical literature paper_content: The increasing availability of whole human genomes due to improvements in high-throughput sequencing technologies makes the interpretation and annotation of data at a whole-genome scale more and more feasible. However, these tasks critically depend on the availability of knowledge already stored in databases or published in the scientific literature. This scenario requires new, reliable and integrative information extraction systems to be available to the biomedical community. In this work we present a hybrid approach for mining the wealth of knowledge stored in the scientific literature.
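The two-step pipeline sketched above for drug-disease relation extraction (preprocess the abstracts, derive features, then classify) maps naturally onto a small supervised baseline. The example below is a hedged sketch with invented toy sentences and labels, using bag-of-words TF-IDF features and logistic regression as stand-ins rather than the feature set of the cited work.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training sentences: 1 = expresses a drug-disease treatment relation, 0 = no relation.
sentences = [
    "Metformin is widely used to treat type 2 diabetes.",
    "Aspirin reduces the risk of myocardial infarction.",
    "The patient history mentioned diabetes and a prior surgery.",
    "Aspirin was stored at room temperature during the trial.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

test = ["Statins are prescribed to treat hypercholesterolemia."]
print(model.predict(test))   # likely [1] on this toy data, i.e. a treatment relation
```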
This approach is based on the use of efficient text-mining tools in combination with highly accurate collaborative human curation. BioNotate-2.0 is an open source, modular friendly tool which implements a collaborative semi-automatic annotation platform in which human and automated annotations are efficiently combined. BioNotate-2.0 also builds upon the Semantic Web, facilitating the dissemination of annotated facts into other resources and pipelines. BioNotate-2.0 allows any interested user to run his own annotation efforts or to contribute to an existing annotation project. To access and contribute to the BioNotate-2.0 annotation platform, please check: http://genome2.ugr.es/bionotate2/. BioNotate source code is available at: http://sourceforge.net/projects/bionotate/. --- paper_title: Text mining describes the use of statistical and epidemiological methods in published medical research paper_content: Abstract Objective To describe trends in the use of statistical and epidemiological methods in the medical literature over the past 2 decades. Study Design and Setting We obtained all 1,028,786 articles from the PubMed Central Open-Access archive (retrieved May 9, 2015). We focused on 113,450 medical research articles. A Delphi panel identified 177 statistical/epidemiological methods pertinent to clinical researchers. We used a text-mining approach to determine if a specific statistical/epidemiological method was encountered in a given article. We report the proportion of articles using a specific method for the entire cross-sectional sample and also stratified into three blocks of time (1995–2005; 2006–2010; 2011–2015). Results Numeric descriptive statistics were commonplace (96.4% articles). Other frequently encountered methods groups included statistical inferential concepts (52.9% articles), epidemiological measures of association (53.5% articles) methods for diagnostic/classification accuracy (40.1% articles), hypothesis testing (28.8% articles), ANOVA (23.2% articles), and regression (22.6% articles). We observed relative percent increases in the use of: regression (103.0%), missing data methods (217.9%), survival analysis (147.6%), and correlated data analysis (192.2%). Conclusions This study identified commonly encountered and emergent methods used to investigate medical research problems. Clinical researchers must be aware of the methodological landscape in their field, as statistical/epidemiological methods underpin research claims. ---
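The text-mining study of statistical methods above boils down to matching a curated list of method names against each article and tabulating how many articles mention each one. A minimal sketch of that counting step is given below; the method list and the three-article corpus are made up for illustration and are far smaller than the Delphi-derived list of 177 terms used in the study.

```python
import re
from collections import Counter

# A tiny stand-in for the curated list of statistical/epidemiological method terms.
METHODS = ["logistic regression", "survival analysis", "chi-square", "ANOVA"]

articles = [
    "We fit a logistic regression model and report odds ratios.",
    "Survival analysis with Cox models; ANOVA for subgroup effects.",
    "Descriptive statistics only.",
]

counts = Counter()
for text in articles:
    for method in METHODS:
        # Whole-phrase, case-insensitive match; at most one hit per article per method.
        if re.search(r"\b" + re.escape(method) + r"\b", text, flags=re.IGNORECASE):
            counts[method] += 1

for method, n in counts.items():
    print(f"{method}: {n}/{len(articles)} articles ({100 * n / len(articles):.1f}%)")
```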
Title: A Review of Towered Big-Data Service Model for Biomedical Text-Mining Databases Section 1: INTRODUCTION Description 1: Describe the current landscape of biomedical research, highlighting the exponential growth of unstructured biomedical data and the challenges associated with extracting useful information from this data. Section 2: PURPOSE OF THE STUDY Description 2: Discuss the rationale behind the study, including the challenges of information extraction in biomedical databases, and outline the major challenges and implications addressed by the research. Section 3: Overview of Text Mining Description 3: Provide a broad overview of text mining, including its definition, typical tasks, and importance in managing large-scale, unstructured textual data, especially in the biomedical field. Section 4: Text Mining Description 4: Explain the process of automated knowledge extraction from text, detailing the phases and interdisciplinary fields involved. Section 5: Models and Methods Used in Text Mining Description 5: Discuss the different models and methods previously used for text mining, focusing on information retrieval and integration methods. Section 6: Biomedical Literature Mining Description 6: Trace the history and evolution of text mining applications in biomedical literature, highlighting key tasks like document retrieval, information extraction, and the importance of summarization. Section 7: Biomedical Text Mining Tasks Description 7: Detail the specific tasks associated with biomedical text mining, such as document retrieval, prioritization, information extraction, knowledge discovery, and hypothesis generation. Section 8: RELATED WORKS Description 8: Summarize the previous studies conducted in biomedical text mining, covering various frameworks, methodologies, and applications discussed in the literature. Section 9: Text Mining Methods Description 9: Outline the different text mining methods proposed by previous researchers, with a focus on their advantages, limitations, and areas for improvement. Section 10: Knowledge Extraction Methods Description 10: Describe the various methods for knowledge extraction, including innovative frameworks and algorithms tailored for biomedical text mining. Section 11: Biomedical’s Data Mapping Techniques Description 11: Discuss techniques employed for mapping biomedical data, including hybrid approaches and the application of ontologies. Section 12: MATERIALS AND METHODS Description 12: Explain the research methodology, including the comprehensive literature search strategy and the criteria for article inclusion and exclusion. Section 13: RESULTS AND DISCUSSION Description 13: Present the findings of the review, discuss the strengths and limitations of various approaches, and provide potential directions for future research. Section 14: CONCLUSION Description 14: Summarize the study, highlighting major conclusions, future recommendations, and the broader implications for the field of biomedical text mining. Section 15: Text Summarization Description 15: Highlight the need for further research in text summarization, including subjective aspects, visualization methods, and impact assessments. Section 16: Summarization Tool Description 16: Discuss the importance of reference standards and corpora for advancing summarization tools across different applications. 
Section 17: Databases Description 17: Emphasize the growing interest in effective data retrieval and extraction, and the role of text mining in molecular biology and future knowledge discovery tools. Section 18: Higher Performance Description 18: Focus on improving classification and data mining techniques to achieve higher performance in biomedical text mining applications.
Reconfigurable computing: a survey of systems and software
31
--- paper_title: A representation for dynamic graphs in reconfigurable hardware and its application to fundamental graph algorithms paper_content: This paper gives a representation for graph data structures as electronic circuits in reconfigurable hardware. Graph properties, such as vertex reachability, are computed quickly by exploiting a graph's edge parallelism—signals propagate along many graph edges concurrently. This new representation admits arbitrary graphs in which vertices/edges may be inserted and deleted dynamically at low cost—graph modification does not entail any re-fitting of the graph's circuit. Dynamic modification is achieved by rewriting cells in a reconfigurable hardware array. Dynamic graph algorithms are given for vertex reachability, transitive closure, shortest unit path, cycle detection, and connected-component identification. On the task of computing a graph's transitive closure, for example, simulation of such a dynamic graph processor indicates possible speedups greater than three orders of magnitude compared to an efficient software algorithm running on a contemporaneously fast uniprocessor. Implementation of a prototype in an FPGA verifies the accuracy of the simulation and demonstrates that a practical and efficient (compact) mapping of the graph construction is possible in existing FPGA architectures. In addition to speeding conventional graph computations with dynamic graph processors, we note their potential as parallel graph reducers implementing general (Turing equivalent) computation. --- paper_title: Hardware-software codesign and parallel implementation of a Golomb ruler derivation engine paper_content: A new architecture for Golomb ruler derivation has been developed so that rulers up to 24 marks can be proven on it. In this architecture, 8-mark stubs that are derived on a personal computer are subsequently processed by the FCCM, called GE2, allowing for parallel processing of as many stubs as are the available FPGAs. Actual runs of the new design have been performed on the TOP parallel FPGA machine at Virginia Tech. This paper presents the design improvements over the original architecture, which include single FPGA implementation, hardware/software codesign, FIFO based I/O, design for parallel execution, and performance results from actual runs. --- paper_title: Architecture and design of GE1, an FCCM for Golomb ruler derivation paper_content: A new architecture for Golomb ruler derivation has been developed, and an FPGA-based custom compute engine of the new architecture has been fully designed. The new FCCM, called GE1, is presented in terms of its datapath, and control path. Portions of the GE1 have been implemented to verify functional correctness and accuracy of the simulation results. The new machine requires twenty Xilinx 5000 series FPGA's for derivation of the 20 mark Golomb ruler, and its performance is roughly 30 times that of a high-end workstation, making its cost-performance ratio exceptionally good for derivation of new rulers. --- paper_title: Seeking solutions in configurable computing paper_content: Configurable computing offers the potential of producing powerful new computing systems. Will current research overcome the dearth of commercial applicability to make such systems a reality? Unfortunately, no system to date has yet proven attractive or competitive enough to establish a commercial presence. We believe that ample opportunity exists for work in a broad range of areas. 
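The Golomb-ruler engines described above search for rulers whose pairwise mark differences are all distinct, with the FPGAs checking candidate stubs in parallel. As a point of reference for what is being accelerated, the short software sketch below validates a candidate ruler and lists the distances it measures; it illustrates the problem itself, not the FPGA datapath.

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True if every pairwise difference between marks is distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# A known optimal 4-mark ruler of length 6.
ruler = [0, 1, 4, 6]
print(is_golomb_ruler(ruler))                            # True
print(sorted(b - a for a, b in combinations(ruler, 2)))  # the measured distances

# Not a Golomb ruler: the difference 2 occurs twice (2-0 and 4-2).
print(is_golomb_ruler([0, 2, 4, 7]))                     # False
```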
In particular, the configurable computing community should focus on refining the emerging architectures, producing more effective software/hardware APIs, better tools for application development that incorporate the models of hardware reconfiguration, and effective benchmarking strategies. --- paper_title: FPGA implementation of a microcoded elliptic curve cryptographic processor paper_content: Elliptic curve cryptography (ECC) has been the focus of much recent attention since it offers the highest security per bit of any known public key cryptosystem. This benefit of smaller key sizes makes ECC particularly attractive for embedded applications since its implementation requires less memory and processing power. In this paper a microcoded Xilinx Virtex based elliptic curve processor is described. In contrast to previous implementations, it implements curve operations as well as optimal normal basis field operations in F(2/sup n/); the design is parameterized for arbitrary n; and it is microcoded to allow for rapid development of the control part of the processor. The design was successfully tested on a Xilinx Virtex XCV300-4 and, for n=113 bits, utilized 1290 slices at a maximum frequency of 45 MHz and achieved a thirty-fold speedup over an optimized software implementation. --- paper_title: Factoring large numbers with programmable hardware paper_content: Most advanced forms of security for electronic transactions rely on the public-key cryptosystems developed by Rivest, Shamir and Adleman. Unfortunately, these systems are only secure while it remains difficult to factor large integers. The fastest published algorithms for factoring large numbers have a common sieving step. These sieves collect numbers that are completely factored by a set of prime numbers that are known in advance. Furthermore, the time required to execute these sieves currently dominates the runtime of the factoring algorithms. We show how the sieving process can be mapped to the Mojave configurable computing architecture. The mapping exploits unique properties of the sieving algorithms to fully utilize the bandwidth of a multiple bank interleaved memory system. The sieve has been mapped to a single programmable hardware unit on the Mojave computer, and achieves a clock frequency of 16 MHz. The full system implementation sieves over 28 times faster than an UltraSPARC Workstation. A simple upgrade to 8ns SRAMs will result in a speedup factor of 160. --- paper_title: An FPGA implementation and performance evaluation of the Serpent block cipher paper_content: With the expiration of the Data Encryption Standard (DES) in 1998, the Advanced Eneryption Standard (AES) development process is well underway. It is hoped that the result of the AES process will be the specification of a new non-classified encryption algorithm that will have the global acceptance achieved by DES as well as the capability of long-term protection of sensitive information. The technical analysis used in determining which of the potential AES candidates will be selected as the Advanced Encryption Algorithm includes efficiency testing of both hardware and software implementations of candidate algorithms. Reprogrammable devices such as Field Programmable Gate Arrays (FPGAs) are highly attractive options for hardware implementations of encryption algorithms as they provide cryptographic algorithm agility, physical security, and potentially much higher performance than software solutions. 
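The sieving step that dominates the factoring runtime described above marks, for each prime in a fixed factor base, the positions in an interval where that prime divides the polynomial value, and keeps the positions whose values factor completely (are smooth) over the base. The snippet below is a deliberately simplified, quadratic-sieve-style software rendition of that step; the number and factor base are small illustrative values, and the hardware design exploits the memory-bank parallelism this loop exposes.

```python
import math

def sieve_smooth_positions(n, interval, factor_base):
    """Return x in [0, interval) where f(x) = (x + m)^2 - n factors fully over factor_base."""
    m = math.isqrt(n) + 1
    values = [(x + m) ** 2 - n for x in range(interval)]
    residues = list(values)                      # copies that get divided down while sieving
    for p in factor_base:
        # Roots of f(x) = 0 (mod p), found by brute force for this small illustration.
        roots = [r for r in range(p) if ((r + m) ** 2 - n) % p == 0]
        for r in roots:
            for x in range(r, interval, p):      # the actual sieving stride
                while residues[x] % p == 0:
                    residues[x] //= p
    return [x for x in range(interval) if residues[x] == 1]

n = 15347
base = [2, 17, 23, 29]
print(sieve_smooth_positions(n, interval=100, factor_base=base))  # positions with smooth f(x)
```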
This contribution investigates the significance of an FPGA implementation of Serpent, one of the Advanced Encryption Standard candidate algorithms. Multiple architecture options of the Serpent algorithm will be explored with a strong focus being placed on a high speed implementation within an FPGA in order to support security for current and future high bandwidth applications. One of the main findings is that Serpent can be implemented with encryption rates beyond 4 Gbit/s on current FPGAs. --- paper_title: Genetic algorithms in software and in hardware-a performance analysis of workstation and custom computing machine implementations paper_content: The paper analyzes the performance differences found between the hardware and software versions of a genetic algorithm used to solve the travelling salesman problem. The hardware implementation requires 4 FPGA's on a Splash 2 board and runs at 11 MHz. The software implementation was written in C++ and executed on a 125 MHz HP PA-RISC workstation. The software run time was more than four times that of the hardware (up to 50 times as many cycles). The paper analyses the contribution made to this performance difference by the following hardware features: hard-wired control, custom address generation logic, memory hierarchy efficiency, and both fine- and course-grained parallelism. The results indicate that the major contributor to the hardware performance advantage is fine-grained parallelism-RTL-level parallelism due to operator pipelining. This alone accounts for as much as a 38X cycle-count reduction over the software in one section of the algorithm. The next major contributors include hard-wired control and custom address generation which account for as much as a 3X speedup in other sections of the algorithm. Finally, memory hierarchy inefficiencies in the software (cache misses and paging) and coarse-grained parallelism in the hardware are each shown to have lesser effect on the performance difference between the implementations. --- paper_title: Automated target recognition on SPLASH 2 paper_content: Automated target recognition is an application area that requires special-purpose hardware to achieve reasonable performance. FPGA-based platforms can provide a high level of performance for ATR systems if the implementation can be adapted to the limited FPGA and routing resources of these architectures. The paper discusses a mapping experiment where a linear-systolic implementation of an ATR algorithm is mapped to the SPLASH 2 platform. Simple column oriented processors were used throughout the design to achieve high performance with limited nearest neighbor communication. The distributed SPLASH 2 memories are also exploited to achieve a high degree of parallelism. The resulting design is scalable and can be spread across multiple SPLASH 2 boards with a linear increase in performance. --- paper_title: Software technologies for reconfigurable systems paper_content: FPGA-based systems are a significant area of computing, providing a high-performance implementation substrate for many different applications. However, the key to harnessing their power for most domains is developing mapping tools for automatically transforming a circuit or algorithm into a configuration for the system. In this paper we review the current state-of-the-art in mapping tools for FPGA-based systems, including single-chip and multi-chip mapping algorithms for FPGAs, software support for reconfigurable computing, and tools for run-time reconfigurability. 
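The genetic-algorithm study above contrasts a hardware GA with a C++ one for the travelling salesman problem. For orientation, a bare-bones software loop of the kind being accelerated is sketched below; it uses only tournament selection and swap mutation on a handful of invented city coordinates, whereas the cited implementations also include crossover and, in hardware, operator pipelining.

```python
import random

random.seed(1)
CITIES = [(0, 0), (2, 1), (5, 2), (6, 6), (1, 5), (3, 3)]

def tour_length(tour):
    """Total cycle length of a tour visiting every city once."""
    return sum(
        ((CITIES[a][0] - CITIES[b][0]) ** 2 + (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
        for a, b in zip(tour, tour[1:] + tour[:1])
    )

def mutate(tour):
    """Swap two positions to produce a neighbouring tour."""
    i, j = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[i], child[j] = child[j], child[i]
    return child

population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(30)]
for generation in range(200):
    # Tournament selection: keep the better of two random individuals, then mutate it.
    parents = [min(random.sample(population, 2), key=tour_length) for _ in range(len(population))]
    population = [mutate(p) for p in parents]

best = min(population, key=tour_length)
print(best, round(tour_length(best), 2))
```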
We also discuss the challenges for the future, pointing out where development is still needed to let reconfigurable systems achieve all of their promise. --- paper_title: The Roles of FPGAs in Reprogrammable Systems paper_content: Reprogrammable systems based on field programmable gate arrays are revolutionizing some forms of computation and digital logic. As a logic emulation system, they provide orders of magnitude faster computation than software simulation. As a custom-computing machine, they achieve the highest performance implementation for many types of applications. As a multimode system, they yield significant hardware savings and provide truly generic hardware. In this paper, we discuss the promise and problems of reprogrammable systems. This includes an overview of the chip and system architectures of reprogrammable systems as well as the applications of these systems. We also discuss the challenges and opportunities of future reprogrammable systems. --- paper_title: Garp: a MIPS processor with a reconfigurable coprocessor paper_content: Typical reconfigurable machines exhibit shortcomings that make them less than ideal for general-purpose computing. The Garp Architecture combines reconfigurable hardware with a standard MIPS processor on the same die to retain the better features of both. Novel aspects of the architecture are presented, as well as a prototype software environment and preliminary performance results. Compared to an UltraSPARC, a Garp of similar technology could achieve speedups ranging from a factor of 2 to as high as a factor of 24 for some useful applications. --- paper_title: Parallel Processing in a Restructurable Computer System paper_content: Pragmatic problem studies predict gains in computation speeds in a variety of computational tasks when executed on appropriate problem-oriented configurations of the variable structure computer. The economic feasibility of the system is based on utilization of essentially the same hardware in a variety of special purpose structures. This capability is achieved by programmed or physical restructuring of a part of the hardware. Existence of important classes of problems which the variable structure computer system promises to render practically computable, as well as use of the system for experiments in computer organization and for evaluation of new circuits and devices warrant construction of a variable structure computer. This paper describes the organization, programming, and hardware of a variable structure computer system presently under construction at UCLA. --- paper_title: The programmable logic data book paper_content: Improvement to video-telephone systems allowing a called subscriber to have at his disposal on his television receiver screen, before he takes off his handset, information about the person who has initiated the call. Each subscriber station has a handset, a television camera, a television receiver and a character generator and is connected to a switching network by a telephone line, an incoming video line and an outgoing video line. Means are provided in the calling station for connecting to the outgoing video line the character generator when the calling subscriber takes down the handset and the television camera when video signals are detected on the incoming video line and in the called station for supplying with current the television receiver when ringing tone signals are detected on the telephone line. 
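A recurring question for these processor-plus-reconfigurable-array designs, from the early restructurable systems to Garp, is how much overall speedup the accelerated kernels can actually buy. A quick Amdahl's-law estimate, sketched below with made-up numbers, is often the first sanity check; it illustrates the reasoning only and is not a model taken from any of the cited papers.

```python
def overall_speedup(accelerated_fraction, kernel_speedup):
    """Amdahl's law: whole-program speedup when only a fraction of runtime is accelerated."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / kernel_speedup)

# Hypothetical example: kernels sped up 24x cover varying fractions of total runtime.
for f in (0.5, 0.8, 0.95):
    print(f"fraction={f:.2f} -> overall speedup = {overall_speedup(f, 24):.2f}x")
```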
--- paper_title: The NAPA adaptive processing architecture paper_content: The National Adaptive Processing Architecture (NAPA) is a major effort to integrate the resources needed to develop teraops class computing systems based on the principles of adaptive computing. The primary goals for this effort include: (1) the development of an example NAPA component which achieves an order of magnitude cost/performance improvement compared to traditional FPGA based systems, (2) the creation of a rich but effective application development environment for NAPA systems based on the ideas of compile time functional partitioning and (3) significantly improve the base infrastructure for effective research in reconfigurable computing. This paper emphasizes the technical aspects of the architecture to achieve the first goal while illustrating key architectural concepts motivated by the second and third goals. --- paper_title: RaPiD - Reconfigurable Pipelined Datapath paper_content: Configurable computing has captured the imagination of many architects who want the performance of application-specific hardware combined with the reprogrammability of general-purpose computers. Unfortunately, onfigurable computing has had rather limited success largely because the FPGAs on which they are built are more suited to implementing »ndom logic than computing tasks. This paper presents RaPiD, a new coarse-grained FPGA architecture that is optimized for highly repetitive, computation-intensive tasks. Very deep application-specific computation pipelines can be configured in RaPiD. These pipelines make much more efficient use of silicon than traditional FPGAs and also yield much higher performance for a wide range of applications. --- paper_title: A reconfigurable arithmetic array for multimedia applications paper_content: In this paper we describe a reconfigurable architecture optimised for media processing, and based on 4-bit ALUs and interconnect. --- paper_title: A quantitative analysis of reconfigurable coprocessors for multimedia applications paper_content: Recently, computer architectures that combine a reconfigurable (or retargetable) coprocessor with a general-purpose microprocessor have been proposed. These architectures are designed to exploit large amounts of fine grain parallelism in applications. In this paper, we study the performance of the reconfigurable coprocessors on multimedia applications. We compare a Field Programmable Gate Array (FPGA) based reconfigurable coprocessor with the array processor called REMARC (Reconfigurable Multimedia Array Coprocessor). REMARC uses a 16-bit simple processor that is much larger than a Configurable Logic Block (CLB) of an FPGA. We have developed a simulator, a programming environment, and multimedia application programs to evaluate the performance of the two coprocessor architectures. The simulation results show that REMARC achieves speedups ranging from a factor of 2.3 to 7.3 on these applications. The FPGA coprocessor achieves similar performance improvements. However, the FPGA coprocessor needs more hardware area to achieve the same performance improvement as REMARC. --- paper_title: The NAPA adaptive processing architecture paper_content: The National Adaptive Processing Architecture (NAPA) is a major effort to integrate the resources needed to develop teraops class computing systems based on the principles of adaptive computing. 
The primary goals for this effort include: (1) the development of an example NAPA component which achieves an order of magnitude cost/performance improvement compared to traditional FPGA based systems, (2) the creation of a rich but effective application development environment for NAPA systems based on the ideas of compile time functional partitioning and (3) significantly improve the base infrastructure for effective research in reconfigurable computing. This paper emphasizes the technical aspects of the architecture to achieve the first goal while illustrating key architectural concepts motivated by the second and third goals. --- paper_title: A quantitative analysis of reconfigurable coprocessors for multimedia applications paper_content: Recently, computer architectures that combine a reconfigurable (or retargetable) coprocessor with a general-purpose microprocessor have been proposed. These architectures are designed to exploit large amounts of fine grain parallelism in applications. In this paper, we study the performance of the reconfigurable coprocessors on multimedia applications. We compare a Field Programmable Gate Array (FPGA) based reconfigurable coprocessor with the array processor called REMARC (Reconfigurable Multimedia Array Coprocessor). REMARC uses a 16-bit simple processor that is much larger than a Configurable Logic Block (CLB) of an FPGA. We have developed a simulator, a programming environment, and multimedia application programs to evaluate the performance of the two coprocessor architectures. The simulation results show that REMARC achieves speedups ranging from a factor of 2.3 to 7.3 on these applications. The FPGA coprocessor achieves similar performance improvements. However, the FPGA coprocessor needs more hardware area to achieve the same performance improvement as REMARC. --- paper_title: Garp: a MIPS processor with a reconfigurable coprocessor paper_content: Typical reconfigurable machines exhibit shortcomings that make them less than ideal for general-purpose computing. The Garp Architecture combines reconfigurable hardware with a standard MIPS processor on the same die to retain the better features of both. Novel aspects of the architecture are presented, as well as a prototype software environment and preliminary performance results. Compared to an UltraSPARC, a Garp of similar technology could achieve speedups ranging from a factor of 2 to as high as a factor of 24 for some useful applications. --- paper_title: A detailed router for field-programmable gate arrays paper_content: A detailed routing algorithm, called the coarse graph expander (CGE), that has been designed specifically for field-programmable gate arrays (FPGAs) is described. The algorithm approaches this problem in a general way, allowing it to be used over a wide range of different FPGA routing architectures. It addresses the issue of scarce routing resources by considering the side effects that the routing of one connection has on another, and also has the ability to optimize the routing delays of time-critical connections. CGE has been used to obtain excellent routing results for several industrial circuits implemented in FPGAs with various routing architectures. The results show that CGE can route relatively large FPGAs in very close to the minimum number of tracks as determined by global routing, and it can successfully optimize the routing delays of time-critical connections. CGE has a linear run time over circuit size. 
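Detailed routers such as CGE assign each connection to scarce tracks while accounting for the side effects on other connections. The primitive underneath most such routers is a shortest-path search over a routing-resource graph; the sketch below shows a plain breadth-first maze route on a small grid with blocked cells, offered as a generic illustration rather than as CGE's algorithm.

```python
from collections import deque

def maze_route(grid, source, sink):
    """Breadth-first search over free grid cells; returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {source: None}
    queue = deque([source])
    while queue:
        cell = queue.popleft()
        if cell == sink:
            path = []
            while cell is not None:            # walk predecessors back to the source
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# 0 = free routing cell, 1 = blocked (already-used track or obstacle).
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(maze_route(grid, (0, 0), (2, 0)))
```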
> --- paper_title: Architecture of Field-Programmable Gate Arrays paper_content: A survey of field-programmable gate array (FPGA) architectures and the programming technologies used to customize them is presented. Programming technologies are compared on the basis of their volatility, size parasitic capacitance, resistance, and process technology complexity. FPGA architectures are divided into two constituents: logic block architectures and routing architectures. A classification of logic blocks based on their granularity is proposed, and several logic blocks used in commercially available FPGAs are described. A brief review of recent results on the effect of logic block granularity on logic density and performance of an FPGA is then presented. Several commercial routing architectures are described in the context of a general routing architecture model. Finally, recent results on the tradeoff between the flexibility of an FPGA routing architecture, its routability, and its density are reviewed. > --- paper_title: Exploring optimal cost-performance designs for Raw microprocessors paper_content: The semiconductor industry roadmap projects that advance in VLSI technology will permit more than one billion transistors on a chip by the year 2010. The MIT Raw microprocessor is a proposed architecture that strives to exploit these chip-level resources by implementing thousands of tiles, each comprising a processing element and a small amount of memory, coupled by a static two-dimensional interconnect. A compiler partitions fine-grain instruction-level parallelism across the tiles and statically schedules inter-tile communication over the interconnect. Because Raw microprocessors fully expose their internal hardware structure to the software, they can be viewed as a gigantic FPGA with coarse-grained tiles, in which software orchestrates communication over static interconnections. One open challenge in Raw architectures is to determine their optimal grain size and balance. The grain size is the area of each tile, and the balance is the proportion of area in each tile devoted to memory, processing, communication, and I/O. If the total chip area is fixed, more area devoted to processing will result in a higher processing power per node, but will lead to a fewer number of tiles. This paper presents an analytical framework using which designers can reason about the design space of Raw microprocessors. Based on an architectural model and a VLSI cost analysis, the framework computes the performance of applications, and uses an optimization process to identify designs that will execute these applications most cost-effectively. --- paper_title: RaPiD - Reconfigurable Pipelined Datapath paper_content: Configurable computing has captured the imagination of many architects who want the performance of application-specific hardware combined with the reprogrammability of general-purpose computers. Unfortunately, onfigurable computing has had rather limited success largely because the FPGAs on which they are built are more suited to implementing »ndom logic than computing tasks. This paper presents RaPiD, a new coarse-grained FPGA architecture that is optimized for highly repetitive, computation-intensive tasks. Very deep application-specific computation pipelines can be configured in RaPiD. These pipelines make much more efficient use of silicon than traditional FPGAs and also yield much higher performance for a wide range of applications. 
--- paper_title: A reconfigurable multiplier array for video image processing tasks, suitable for embedding in an FPGA structure paper_content: This paper presents a design for a reconfigurable multiplier array. The multiplier is constructed using an array of 4 bit Flexible Array Blocks (FABs), which could be embedded within a conventional FPGA structure. The array can be configured to perform a number of 4n/spl times/4m bit signed/unsigned binary multiplications. We have estimated that the FABs are about 25 times more efficient in area than the equivalent multiplier implemented using a conventional FPGA structure alone. --- paper_title: The programmable logic data book paper_content: Improvement to video-telephone systems allowing a called subscriber to have at his disposal on his television receiver screen, before he takes off his handset, information about the person who has initiated the call. Each subscriber station has a handset, a television camera, a television receiver and a character generator and is connected to a switching network by a telephone line, an incoming video line and an outgoing video line. Means are provided in the calling station for connecting to the outgoing video line the character generator when the calling subscriber takes down the handset and the television camera when video signals are detected on the incoming video line and in the called station for supplying with current the television receiver when ringing tone signals are detected on the telephone line. --- paper_title: A reconfigurable arithmetic array for multimedia applications paper_content: In this paper we describe a reconfigurable architecture optimised for media processing, and based on 4-bit ALUs and interconnect. --- paper_title: A quantitative analysis of reconfigurable coprocessors for multimedia applications paper_content: Recently, computer architectures that combine a reconfigurable (or retargetable) coprocessor with a general-purpose microprocessor have been proposed. These architectures are designed to exploit large amounts of fine grain parallelism in applications. In this paper, we study the performance of the reconfigurable coprocessors on multimedia applications. We compare a Field Programmable Gate Array (FPGA) based reconfigurable coprocessor with the array processor called REMARC (Reconfigurable Multimedia Array Coprocessor). REMARC uses a 16-bit simple processor that is much larger than a Configurable Logic Block (CLB) of an FPGA. We have developed a simulator, a programming environment, and multimedia application programs to evaluate the performance of the two coprocessor architectures. The simulation results show that REMARC achieves speedups ranging from a factor of 2.3 to 7.3 on these applications. The FPGA coprocessor achieves similar performance improvements. However, the FPGA coprocessor needs more hardware area to achieve the same performance improvement as REMARC. --- paper_title: Garp: a MIPS processor with a reconfigurable coprocessor paper_content: Typical reconfigurable machines exhibit shortcomings that make them less than ideal for general-purpose computing. The Garp Architecture combines reconfigurable hardware with a standard MIPS processor on the same die to retain the better features of both. Novel aspects of the architecture are presented, as well as a prototype software environment and preliminary performance results. 
Compared to an UltraSPARC, a Garp of similar technology could achieve speedups ranging from a factor of 2 to as high as a factor of 24 for some useful applications. --- paper_title: Hybrid product term and LUT based architectures using embedded memory blocks paper_content: The Embedded System Block (ESB) of the APEX20K programmable logic device family from Altera Corporation includes the capability of implementing product term macrocells in addition to flexibly configurable ROM and dual port RAM. In product term mode, each ESB has 16 macrocells built out of 32 product terms with 32 literal inputs. The ability to reconfigure memory blocks in this way represents a new and innovative use of resources in a programmable logic device, requiring creative solutions in both the hardware and software domains. The architecture and features of this Embedded System Block are described. --- paper_title: SMAP: heterogeneous technology mapping for area reduction in FPGAs with embedded memory arrays paper_content: It has become clear that large embedded configurable memory arrays will be essential in future FPGAs. Embedded arrays provide high-density high-speed implementations of the storage parts of circuits. Unfortunately, they require the FPGA vendor to partition the device into memory and logic resources at manufacture-time. This leads to a waste of chip area for customers that do not use all of the storage provided. This chip area need not be wasted, and can in fact be used very efficiently, if the arrays are configured as large multi-output ROMs, and used to implement logic. In order to efficiently use the embedded arrays in this way, a technology mapping algorithm that identifies parts of circuits that can be efficiently mapped to an embedded array is required. In this paper, we describe such an algorithm. The new tool, called SMAP, packs as much circuit information as possible into the available memory arrays, and maps the rest of the circuit into four-input lookup-tables. On a set of 29 sequential and combinational benchmarks, the tool is able to map, on average, 60 4-LUTs into a single 2-Kbit memory array. If there are 16 arrays available, it can map, on average, 358 4-LUTs to the 16 arrays. --- paper_title: A reconfigurable multiplier array for video image processing tasks, suitable for embedding in an FPGA structure paper_content: This paper presents a design for a reconfigurable multiplier array. The multiplier is constructed using an array of 4 bit Flexible Array Blocks (FABs), which could be embedded within a conventional FPGA structure. The array can be configured to perform a number of 4n/spl times/4m bit signed/unsigned binary multiplications. We have estimated that the FABs are about 25 times more efficient in area than the equivalent multiplier implemented using a conventional FPGA structure alone. --- paper_title: The programmable logic data book paper_content: Improvement to video-telephone systems allowing a called subscriber to have at his disposal on his television receiver screen, before he takes off his handset, information about the person who has initiated the call. Each subscriber station has a handset, a television camera, a television receiver and a character generator and is connected to a switching network by a telephone line, an incoming video line and an outgoing video line. 
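SMAP's premise, described earlier in this passage, is that an unused embedded memory array can implement logic by storing the truth table of a multi-input function. That idea can be demonstrated with a tiny ROM-filling routine: the hypothetical sketch below packs an arbitrary Python predicate of up to 11 inputs into the 2048 one-bit words of an assumed 2-Kbit block, while the real tool of course also chooses which logic cone to absorb.

```python
def pack_into_rom(logic_fn, num_inputs):
    """Fill a 2^num_inputs x 1 ROM with the truth table of logic_fn."""
    assert num_inputs <= 11, "a 2-Kbit block holds at most 2^11 one-bit words"
    rom = []
    for address in range(2 ** num_inputs):
        bits = [(address >> i) & 1 for i in range(num_inputs)]   # input vector for this address
        rom.append(1 if logic_fn(bits) else 0)
    return rom

def rom_lookup(rom, bits):
    """Evaluate the packed logic by addressing the ROM with the input bits."""
    address = sum(b << i for i, b in enumerate(bits))
    return rom[address]

# Example cone: a 5-input majority function absorbed into the memory block.
def majority5(bits):
    return sum(bits) >= 3

rom = pack_into_rom(majority5, 5)
print(rom_lookup(rom, [1, 1, 0, 1, 0]))   # 1, since three of the five inputs are high
```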
Means are provided in the calling station for connecting to the outgoing video line the character generator when the calling subscriber takes down the handset and the television camera when video signals are detected on the incoming video line and in the called station for supplying with current the television receiver when ringing tone signals are detected on the telephone line. --- paper_title: Technology mapping for FPGAs with embedded memory blocks paper_content: Modern field programmable gate arrays (FPGAs) provide embedded memory blocks (EMBs) to be used as on-chip memories. In this paper, we explore the possibility of using EMBs to implement logic functions when they are not used as on-chip memory. We propose a general technology mapping problem for FPGAs with EMBs for area and delay minimization and develop an efficient algorithm based on the concepts of Maximum Fanout Free Cone (MFFC) [3] and Maximum Fanout Free Subgraph (MFFS) [7], named EMB_Pack, which minimizes the area after or before technology mapping by using EMBs while maintaining the circuit delay. We have tested EMB_Pack on MCNC benchmarks on Altera's FLEX10K device family [1]. The experimental results show that compared with the original mapped circuits generated from CutMap [5] without using EMBs, EMB_Pack as postprocessing can further reduce up to 10% of the area on the mapped circuits while maintaining the layout delay by making efficient use of available EMB resources. Compared with CutMap-e without using EMBs, EMB_Pack as pre-mapping processing followed by CutMap-e can reduce 6% of the area while maintaining the circuit optimal delay. --- paper_title: HSRA: high-speed, hierarchical synchronous reconfigurable array paper_content: There is no inherent characteristic forcing Field Programmable Gate Array (FPGA) or Reconfigurable Computing (RC) Array cycle times to be greater than processors in the same process. Modern FPGAs seldom achieve application clock rates close to their processor cousins because (1) resources in the FPGAs are not balanced appropriately for high-speed operation, (2) FPGA CAD does not automatically provide the requisite transforms to support this operation, and (3) interconnect delays can be large and vary almost continuously, complicating high frequency mapping. We introduce a novel reconfigurable computing array, the High-Speed, Hierarchical Synchronous Reconfigurable Array (HSRA), and its supporting tools. This packagedemonstrates that computing arrays can achieve efficient, high-speedoperation. We have designedand implemented a prototype component in a 0.4 m logic design on a DRAM process which will support 250MHz operation for CAD mapped designs. --- paper_title: FPGA routing architecture: segmentation and buffering to optimize speed and density paper_content: In this work we investigate the routing architecture of FPGAs, focusing primarily on determining the best distribution of routing segment lengths and the best mix of pass transistor and tri-state buffer routing switches. While most commercial FPGAs contain many length 1 wires (wires that span only one logic block) we find that wires this short lead to FPGAs that are inferior in terms of both delay and routing area. Our results show instead that it is best for FPGA routing segments to have lengths of 4 to 8 logic blocks. We also show that 50% to 80% of the routing switches in an FPGA should be pass transistors, with the remainder being tri-state buffers. 
Architectures that employ the best segmentation distributions and the best mixes of pass transistor and tri-state buffer switches found in this paper are not only 11% to 18% faster than a routing architecture very similar to that of the Xilinx XC4000X but also considerably simpler. These results are obtained using an architecture investigation infrastructure that contains a fully timing-driven router and detailed area and delay models. --- paper_title: More wires and fewer LUTs: a design methodology for FPGAs paper_content: In designing FPGAs, it is important to achieve a good balance between the number of logic blocks, such as Look-Up Tables (LUTs), and wiring resources. It is difficult to find an optimal solution. In this paper, we present an FPGA design methodology to efficiently find well-balanced FPGA architectures. The method covers all aspects of FPGA development from the architecture-decision process to physical implementation. It has been used to develop a new FPGA that can implement circuits that are twice as large as those implementable with the previous version but with half the number of logic blocks. This indicates that the methodology is effective in developing well-balanced FPGAs. --- paper_title: RaPiD - Reconfigurable Pipelined Datapath paper_content: Configurable computing has captured the imagination of many architects who want the performance of application-specific hardware combined with the reprogrammability of general-purpose computers. Unfortunately, configurable computing has had rather limited success largely because the FPGAs on which they are built are more suited to implementing random logic than computing tasks. This paper presents RaPiD, a new coarse-grained FPGA architecture that is optimized for highly repetitive, computation-intensive tasks. Very deep application-specific computation pipelines can be configured in RaPiD. These pipelines make much more efficient use of silicon than traditional FPGAs and also yield much higher performance for a wide range of applications. --- paper_title: Hierarchical interconnection structures for field programmable gate arrays paper_content: Field programmable gate arrays (FPGA's) suffer from lower density and lower performance than conventional gate arrays. Hierarchical interconnection structures for field programmable gate arrays are proposed. They help overcome these problems. Logic blocks in a field programmable gate array are grouped into clusters. Clusters are then recursively grouped together. To obtain the optimal hierarchical structure with high performance and high density, various hierarchical structures with the same routability are discussed. The field programmable gate arrays with new architecture can be efficiently configured with existing computer aided design algorithms. The k-way min-cut algorithm is applicable to the placement step in the implementation. Global routing paths in a field programmable gate array can be obtained easily. The placement and global routing steps can be performed simultaneously. Experiments on benchmark circuits show that density and performance are significantly improved. --- paper_title: Routing architectures for hierarchical field programmable gate arrays paper_content: This paper evaluates an architecture that implements a hierarchical routing structure for FPGAs, called a hierarchical FPGA (HFPGA). A set of new tools has been used to place and route several circuits on this architecture, with the goal of comparing the cost of HFPGAs to conventional symmetrical FPGAs.
The results show that HFPGAs can implement circuits with fewer routing switches, and fewer switches in total, compared to symmetrical FGPAs, although they have the potential disadvantage that they may require more logic blocks due to coarser granularity. > --- paper_title: Balancing interconnect and computation in a reconfigurable computing array (or, why you don't really want 100% LUT utilization) paper_content: FPGA users often view the ability of an FPGA to route designs with high LUT (gate) utilization as a feature, leading them to demand high gate utilization from vendors. We present initial evidence from a hierarchical array design showing that high LUT utilization is not directly correlated with efficient silicon usage. Rather, since interconnect resources consume most of the area on these devices (often 80-90%), we can achieve more area efficient designs by allowing some LUTs to go unused—allowing us to use the dominant resource, interconnect, more efficiently. This extends the "Sea-ofgates" philosophy, familiar to mask programmable gate arrays, to FPGAs. Also introduced in this work is an algorithm for "depopulating" the gates in a hierarchical network to match the limited wiring resources. --- paper_title: The NAPA adaptive processing architecture paper_content: The National Adaptive Processing Architecture (NAPA) is a major effort to integrate the resources needed to develop teraops class computing systems based on the principles of adaptive computing. The primary goals for this effort include: (1) the development of an example NAPA component which achieves an order of magnitude cost/performance improvement compared to traditional FPGA based systems, (2) the creation of a rich but effective application development environment for NAPA systems based on the ideas of compile time functional partitioning and (3) significantly improve the base infrastructure for effective research in reconfigurable computing. This paper emphasizes the technical aspects of the architecture to achieve the first goal while illustrating key architectural concepts motivated by the second and third goals. --- paper_title: RaPiD - Reconfigurable Pipelined Datapath paper_content: Configurable computing has captured the imagination of many architects who want the performance of application-specific hardware combined with the reprogrammability of general-purpose computers. Unfortunately, onfigurable computing has had rather limited success largely because the FPGAs on which they are built are more suited to implementing »ndom logic than computing tasks. This paper presents RaPiD, a new coarse-grained FPGA architecture that is optimized for highly repetitive, computation-intensive tasks. Very deep application-specific computation pipelines can be configured in RaPiD. These pipelines make much more efficient use of silicon than traditional FPGAs and also yield much higher performance for a wide range of applications. --- paper_title: PipeRench: A Reconfigurable Architecture and Compiler paper_content: With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide the alternative. 
PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time. --- paper_title: Garp: a MIPS processor with a reconfigurable coprocessor paper_content: Typical reconfigurable machines exhibit shortcomings that make them less than ideal for general-purpose computing. The Garp Architecture combines reconfigurable hardware with a standard MIPS processor on the same die to retain the better features of both. Novel aspects of the architecture are presented, as well as a prototype software environment and preliminary performance results. Compared to an UltraSPARC, a Garp of similar technology could achieve speedups ranging from a factor of 2 to as high as a factor of 24 for some useful applications. --- paper_title: Mesh routing topologies for multi-FPGA systems paper_content: There is currently great interest in using fixed arrays of FPGAs for logic emulators, custom computing devices, and software accelerators. An important part of designing such a system is determining the proper routing topology to use to interconnect the FPGAs. This topology can have a great effect on the area and delay of the resulting system. Tree, bipartite graph, and mesh inter-connection schemes have all been proposed for use in FPGA-based systems. In this paper we examine mesh interconnection schemes, and propose several constructs for more efficient topologies. These reduce inter-chip delays by more than 60% over the basic 4-way Mesh. > --- paper_title: The Transmogrifier-2: a 1 million gate rapid prototyping system paper_content: This paper describes the Transmogrifier-2, a second generation multi-FPGA system. The largest version of the system will comprise 16 boards that each contain two Altera 10K50 FPGAs, four I-cube interconnect chips, and up to 8 Mbytes of memory. The inter-FPGA routing architecture of the TM-2 uses a novel interconnect structure, a non-uniform partial crossbar, that provides a constant delay between any two FPGAs in the system. The TM-2 architecture is modular and scalable, meaning that various sized systems can be constructed from the same board, while maintaining routability and the constant delay feature. Other features include a system-level programmable clock that allows single-cycle access to off-chip memory, and programmable clock waveforms with resolution to 10ns. The first Transmogrifier-2 boards have been manufactured and are functional. They have recently been used successfully in some simple graphics acceleration applications. --- paper_title: An efficient logic emulation system paper_content: The Realizer, is a logic emulation system that automatically configures a network of field-programmable gate arrays (FPGAs) to implement large digital logic designs, is presented. Logic and interconnect are separated to achieve optimum FPGA utilization. 
Its interconnection architecture, called the partial crossbar, greatly reduces system-level placement and routing complexity, achieves bounded interconnect delay, scales linearly with pin count, and allows hierarchical expansion to systems with hundreds of thousands of FPGA devices in a fast and uniform way. An actual multiboard system has been built, using 42 Xilinx XC3090 FPGAs for logic. Several designs, including a 32-b CPU datapath, have been automatically realized and operated at speed. They demonstrate very good FPGA utilization. The Realizer has applications in logic verification and prototyping, simulation, architecture development, and special-purpose execution. > --- paper_title: A hybrid complete-graph partial-crossbar routing architecture for multi-FPGA systems paper_content: Multi-FPGA systems (MFSs) are used as custom computing machines, logic emulators and rapid prototyping vehicles. A key aspect of these systems is their programmable routing architecture; the manner in which wires, FPGAs and Field-Programmable Interconnect Devices (FPIDs) are connected. Several routing architectures for MFSs have been proposed [Arno92] [Butt92] [Hauc94] [Apti96] [Vuil96] and previous research has shown that the partial crossbar is one of the best existing architectures [Kim96] [Khal97]. In this paper we propose a new routing architecture, called the Hybrid Complete-Graph and Partial-Crossbar (HCGP) which has superior speed and cost compared to a partial crossbar. The new architecture uses both hard-wired and programmable connections between the FPGAs. We compare the performance and cost of the HCGP and partial crossbar architectures experimentally, by mapping a set of 15 large benchmark circuits into each architecture. A customized set of partitioning and inter-chip routing tools were developed, with particular attention paid to architecture-appropriate inter-chip routing algorithms. We show that the cost of the partial crossbar (as measured by the number of pins on all FPGAs and FPIDs required to fit a design), is on average 20% more than the new HCGP architecture and as much as 35% more. Furthermore, the critical path delay for designs implemented on the partial crossbar increased, and were on average 9% more than the HCGP architecture and up to 26% more. Using our experimental approach, we also explore a key architecture parameter associated with the HCGP architecture: the proportion of hard-wired connections versus programmable connections, to determine its best value. --- paper_title: The Roles of FPGAs in Reprogrammable Systems paper_content: Reprogrammable systems based on field programmable gate arrays are revolutionizing some forms of computation and digital logic. As a logic emulation system, they provide orders of magnitude faster computation than software simulation. As a custom-computing machine, they achieve the highest performance implementation for many types of applications. As a multimode system, they yield significant hardware savings and provide truly generic hardware. In this paper, we discuss the promise and problems of reprogrammable systems. This includes an overview of the chip and system architectures of reprogrammable systems as well as the applications of these systems. We also discuss the challenges and opportunities of future reprogrammable systems. 
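The partial-crossbar organization described in the emulation-system entries above can be sketched in a few lines: each FPGA's pins are split into subsets, subset i of every FPGA is wired to crossbar chip (FPID) i, and an inter-FPGA net only needs one crossbar chip that still has a free pin on every FPGA the net touches. The routine below is a minimal illustration of that routing rule; the net list, chip counts and the greedy first-fit choice are illustrative assumptions, not the routing algorithm of any of the cited systems.

    # Minimal sketch of routing nets through a partial-crossbar interconnect,
    # assuming the organization described above: the pins of every FPGA are
    # divided into equal subsets, and crossbar chip (FPID) i connects only
    # to subset i of each FPGA.  Names and sizes are illustrative.

    def route_partial_crossbar(nets, n_fpgas, n_fpids, pins_per_subset):
        # free[f][i] = unused pins of FPGA f in the subset wired to FPID i
        free = [[pins_per_subset] * n_fpids for _ in range(n_fpgas)]
        assignment = {}
        for net, fpgas in nets.items():
            # a net is routable through FPID i iff every FPGA it touches
            # still has a free pin in subset i
            for i in range(n_fpids):
                if all(free[f][i] > 0 for f in fpgas):
                    for f in fpgas:
                        free[f][i] -= 1
                    assignment[net] = i
                    break
            else:
                raise RuntimeError(f"net {net} unroutable: all subsets exhausted")
        return assignment

    if __name__ == "__main__":
        nets = {"a": [0, 1], "b": [0, 2], "c": [1, 2, 3], "d": [0, 1]}
        print(route_partial_crossbar(nets, n_fpgas=4, n_fpids=2, pins_per_subset=2))

Because every crossbar chip sees the same pin subset on every FPGA, any assignment found this way keeps the bounded, uniform inter-chip delay noted above.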
--- paper_title: Synthesis and floorplanning for large hierarchical FPGAs paper_content: Because the VLSI circuits complexity growth, the trend in design is towards divide-and-conquer schemes, in which circuits are composed of blocks, standard macros or custom macros. From the other side, to allow an implementation of large digital circuits, increased capacity target FPGAs are organized hierarchically. In this paper, we present a hierarchical FPGA floorplanning method which takes into account both the hierarchy of the design and the hierarchy of the target. The method aims at minimization the timing and balancing cost of the floorplan and is based on automatic detection of macro blocks and assigning them to the target FPGA hierarchical zones. --- paper_title: FPGA routing and routability estimation via Boolean satisfiability paper_content: Guaranteeing or even estimating the routability of a portion of a placed field programmable gate array (FPGA) remains difficult or impossible in most practical applications. In this paper, we develop a novel formulation of both routing and routability estimation that relies on a rendering of the routing constraints as a single large Boolean equation. Any satisfying assignment to this equation specifies a complete detailed routing. By representing the equation as a binary decision diagram (BDD), we represent all possible routes for all nets simultaneously. Routability estimation is transformed to Boolean satisfiability, which is trivial for BDD's. We use the technique in the context of a perfect routability estimator for a global router. Experimental results from a standard FPGA benchmark suite suggest the technique is feasible for realistic circuits, but refinements are needed for very large designs. --- paper_title: A new retiming-based technology mapping algorithm for LUT-based FPGAs paper_content: In this paper, w e presen t a new retiming-based technology mapping algorithm for look-up table-based field programmable gate arrays. The algorithm is based on a novel iterative procedure for computing all k -cuts of all nodes in a sequen tialcircuit, in the presence of retiming. The algorithm completely avoids flow computation whic his the bottleneck of previous algorithms. Due to the fact that k is very small in practice, the procedure for computing all k -cuts is v ery fast. Experimental results indicate the overall algorithm is very efficient in practice. --- paper_title: A detailed router for field-programmable gate arrays paper_content: A detailed routing algorithm, called the coarse graph expander (CGE), that has been designed specifically for field-programmable gate arrays (FPGAs) is described. The algorithm approaches this problem in a general way, allowing it to be used over a wide range of different FPGA routing architectures. It addresses the issue of scarce routing resources by considering the side effects that the routing of one connection has on another, and also has the ability to optimize the routing delays of time-critical connections. CGE has been used to obtain excellent routing results for several industrial circuits implemented in FPGAs with various routing architectures. The results show that CGE can route relatively large FPGAs in very close to the minimum number of tracks as determined by global routing, and it can successfully optimize the routing delays of time-critical connections. CGE has a linear run time over circuit size. 
--- paper_title: Fast module mapping and placement for datapaths in FPGAs paper_content: By tailoring a compiler tree-parsing tool for datapath module mapping, we produce good quality results for datapath synthesis in very fast run time. Rather than flattening the design to gates, we preserve the datapath structure; this allows exploitation of specialized datapath features in FPGAs, retains regularity, and also results in a smaller problem size. To further achieve high mapping speed, we formulate the problem as tree covering and solve it efficiently with a linear-time dynamic programming algorithm. In a novel extension to the tree-covering algorithm, we perform module placement simultaneously with the mapping, still in linear time. Integrating placement has the potential to increase the quality of the result since we can optimize total delay including routing delays. To our knowledge this is the first effort to leverage a grammar-based tree covering tool for datapath module mapping. Further, it is the first work to integrate simultaneous placement with module mapping in a way that preserves linear time complexity. --- paper_title: Satisfiability-based layout revisited: detailed routing of complex FPGAs via search-based Boolean SAT paper_content: Boolean-based routing transforms the geometric FPGA routing task into a single, large Boolean equation with the property that any assignment of input variables that “satisfies” the equation (that renders the equation identically “1”) specifies a valid routing. The formulation has the virtue that it considers all nets simultaneously, and the absence of a satisfying assignment implies that the layout is unroutable. Initial Boolean-based approaches to routing used Binary Decision Diagrams (BDDs) to represent and solve the layout problem. BDDs, however, limit the size and complexity of the FPGAs that can be routed, leading these approaches to concentrate only on individual FPGA channels. In this paper, we present a new search-based Satisfiability (SAT) formulation that can handle entire FPGAs, routing all nets concurrently. The approach relies on a recently developed SAT engine (GRASP) that uses systematic search with conflict-directed non-chronological backtracking, capable of handling very large SAT instances. We present the first comparisons of search-based SAT routing results to other routers, and offer the first evidence that SAT methods can actually demonstrate the unroutability of a layout. Preliminary experimental results suggest that this approach to FPGA routing is more viable than earlier BDD-based methods. --- paper_title: A methodology for fast FPGA floorplanning paper_content: Floorplanning is an important problem in FPGA circuit mapping. As FPGA capacity grows, new innovative approaches will be required for efficiently mapping circuits to FPGAs. In this paper we present a macro based floorplanning methodology suitable for mapping large circuits to large, high density FPGAs. Our method uses clustering techniques to combine macros into clusters, and then uses a tabu search based approach to place clusters while enhancing both circuit routability and performance. Our method is capable of handling both hard (fixed size and shape) macros and soft (fixed size and variable shape) macros. We demonstrate our methodology on several macro based circuit designs and compare the execution speed and quality of results with commercially available CAE tools.
Our approach shows a dramatic speedup in execution time without any negative impact on quality. --- paper_title: A New FPGA Technology Mapping Approach by Cluster Merging paper_content: In this paper, a new technology mapping method based on the cluster merging is proposed. It provides better global view on optimal technology mapping and formalization. In addition, it supports both MUX-based and LUT-based FPGA allowing various cost functions. Experimental results on MCNC benchmarks show that our approach produces the best results for MUX-based FPGA case in terms of cell count. --- paper_title: Algorithms for an fpga switch module routing problem with application to global routing paper_content: We consider a switch module routing problem for symmetrical-array field-programmable gate arrays (FPGAs). This problem was first introduced by Zhu et al. (1993). They used it to evaluate the routability properties of switch modules which they proposed. Only an approximation algorithm for the problem was proposed by them. We give an optimal algorithm for the problem based on integer linear programming (ILP). Experiments show that this formulation leads to fast and efficient solutions to practical-sized problems. We then propose a precomputation that eliminates the need to use ILP on-line. We also identify special cases of this problem that reduce to problems for whom efficient algorithms are known. Thus, the switch module routing problem can be solved in polynomial time for these special cases. Using our solution to the switch module routing problem, we propose a new metric to estimate the congestion in each switch module in the FPGA. We demonstrate the use of this metric in a global router. A comparison with a global router guided by the density of the routing channels shows that our metric leads to far superior global and detailed routing solutions. --- paper_title: A fast routability-driven router for FPGAs paper_content: Three factors are driving the demand for rapid FPGA compilation. First, as FPGAs have grown in logic capacity, the compile computation has grown more quickly than the compute power of the available computers. Second, there exists a subset of users who are willing to pay for very high speed compile with a decrease in quality of result, and accordingly being required to use a larger FPGA or use more real-estate on a given FPGA than is otherwise necessary. Third, very high speed compile has been a long-standing desire of those using FPGA-based custom computing machines, as they want compile times at least closer to those of regular computers. This paper focuses on the routing phase of the compile process, and in particular on routability-driven routing (as opposed to timing-driven routing). We present a routing algorithm and routing tool that has three unique capabilities relating to very high-speed compile: For a “low stress” routing problem (which we define as the case where the track supply is at least 10% greater than the minimun number of tracks per channel actually needed to route a circuit) the routing time is very fast. For example, the routing phase (after the netlist is parsed and the routing graph is constructed) for a 20,000 LUT/FF pair circuit with 30% extra tracks is only 23 seconds on a 300 MHz Sparcstation. For low-stress routing problems the routing time is near-linear in the size of the circuit, and the linearity constant is very small: 1.1 ms per LUT/FF pair, or roughly 55,000 LUT/FF pairs per minute. 
For more difficult routing problems (where the track supply is close to the minimum needed) we provide a method that quickly identifies and subdivides this class into two sub-classes: (i) those circuits which are difficult (but possible) to route and will take significantly more time than low-stress problems, and (ii) those circuits which are impossible to route. In the first case the user can choose to continue or reduce the amount of logic; in the second case the user is forced to reduce the amount of logic or obtain a larger FPGA. --- paper_title: SMAP: heterogeneous technology mapping for area reduction in FPGAs with embedded memory arrays paper_content: It has become clear that large embedded configurable memory arrays will be essential in future FPGAs. Embedded arrays provide high-density high-speed implementations of the storage parts of circuits. Unfortunately, they require the FPGA vendor to partition the device into memory and logic resources at manufacture-time. This leads to a waste of chip area for customers that do not use all of the storage provided. This chip area need not be wasted, and can in fact be used very efficiently, if the arrays are configured as large multi-output ROMs, and used to implement logic. In order to efficiently use the embedded arrays in this way, a technology mapping algorithm that identifies parts of circuits that can be efficiently mapped to an embedded array is required. In this paper, we describe such an algorithm. The new tool, called SMAP, packs as much circuit information as possible into the available memory arrays, and maps the rest of the circuit into four-input lookup-tables. On a set of 29 sequential and combinational benchmarks, the tool is able to map, on average, 60 4-LUTs into a single 2-Kbit memory array. If there are 16 arrays available, it can map, on average, 358 4-LUTs to the 16 arrays. --- paper_title: The programmable logic data book paper_content: Improvement to video-telephone systems allowing a called subscriber to have at his disposal on his television receiver screen, before he takes off his handset, information about the person who has initiated the call. Each subscriber station has a handset, a television camera, a television receiver and a character generator and is connected to a switching network by a telephone line, an incoming video line and an outgoing video line. Means are provided in the calling station for connecting to the outgoing video line the character generator when the calling subscriber takes down the handset and the television camera when video signals are detected on the incoming video line and in the called station for supplying with current the television receiver when ringing tone signals are detected on the telephone line. --- paper_title: Cut ranking and pruning: enabling a general and efficient FPGA mapping solution paper_content: Cut enumeration is a common approach used in a number of FPGA synthesis and mapping algorithms for consideration of various possible LUT implementations at each node in a circuit. Such an approach is very general and flexible, but often suffers high computational complexity and poor scalability. In this paper, we develop several efficient and effective techniques on cut enumeration, ranking and pruning. These techniques lead to much better runtime and scalability of the cut-enumeration based algorithms; they can also be used to compute a tight lower-bound on the size of an area-minimum mapping solution. 
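Cut enumeration of the kind described above underlies many LUT-mapping algorithms: every K-feasible cut of a node is a candidate K-LUT rooted at that node, and cuts are built bottom-up by merging one cut from each of the node's fanins. The sketch below shows only the core enumeration; the max_cuts truncation stands in for the ranking and pruning heuristics of the paper, and the example circuit is invented.

    # Illustrative K-feasible cut enumeration for LUT mapping, in the spirit of
    # the cut-enumeration framework described above.  The circuit is a DAG given
    # as {node: [fanin, ...]}; primary inputs have an empty fanin list.

    from itertools import product

    def enumerate_cuts(circuit, K=4, max_cuts=8):
        cuts = {}                                   # node -> list of frozensets
        for node in topo_order(circuit):
            fanins = circuit.get(node, [])
            node_cuts = {frozenset([node])}         # the trivial cut
            if fanins:
                # merge one cut from every fanin; keep only K-feasible unions
                for combo in product(*(cuts[f] for f in fanins)):
                    merged = frozenset().union(*combo)
                    if len(merged) <= K:
                        node_cuts.add(merged)
            # crude stand-in for cut ranking/pruning: keep the smallest cuts
            cuts[node] = sorted(node_cuts, key=len)[:max_cuts]
        return cuts

    def topo_order(circuit):
        order, seen = [], set()
        def visit(n):
            if n in seen:
                return
            seen.add(n)
            for f in circuit.get(n, []):
                visit(f)
            order.append(n)
        for n in circuit:
            visit(n)
        return order

    if __name__ == "__main__":
        ckt = {"a": [], "b": [], "c": [], "g1": ["a", "b"],
               "g2": ["g1", "c"], "g3": ["g1", "g2"]}
        for n, cs in enumerate_cuts(ckt, K=4).items():
            print(n, [sorted(c) for c in cs])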
For area-oriented FPGA mapping, experimental results show that the new techniques lead to over 160X speed-up over the original optimal duplication-free mapping algorithm, achieve mapping solutions with 5-21% smaller area for heterogeneous FPGAs compared to those by Chortle-crf, MIS-pga-new, and TOS-TUM, yet with over 100X speed-up over MIS-pganew and TOS-TUM. --- paper_title: Synthesis Methods for Field Programmable Gate Arrays paper_content: Field programmable gate arrays (FPGA ’s) reduce the turnaround time of application-spec@c integrated circuits from weeks to minutes. However, the high complexity of their architectures makes manual mapping of designs time consuming and error prone thereby offsetting any turnaround advantage. Consequently, effective design automation tools are needed to reduce design time. Among the most important is logic synthesis. While standard synthesis techniques could be used for FPGA’s, the quality of the synthesized designs is often unacceptable. As a result, much recent work has been devoted to developing logic synthesis tools targeted to different FPGA architectures. The paper surveys this work. The three most popular types of FPGA architectures are considered, namely those using logic blocks based on lookuptables, multiplexers and wide AND/OR arrays. The emphasis is on tools which attempt to minimize the area of the combinational logic part of a design since little work has been done on optimizing performance or routability, or on synthesis of the sequential part of a design. The different tools surveyed are compared using a suite of benchmark designs. --- paper_title: VPR: A new packing, placement and routing tool for FPGA research paper_content: We describe the capabilities of and algorithms used in a ne w FPGA CAD tool, Versatile Place and Route (VPR). In terms of minimizing routing area, VPR outperforms all published FPGA place and route tools to which we can compare. Although the algorithms used are based on pre viously known approaches, we present several enhancements that improve run-time and quality. We present placement and routing results on a new set of lar ge circuits to allo w future benchmark comparisons of FPGA place and route tools on circuit sizes more typical of today’s industrial designs. VPR is capable of targeting a broad range of FPGA architectures, and the source code is publicly available. It and the associated netlist translation / clustering tool VPACK ha ve already been used in a number of research projects w orldwide, and should be useful in many areas of FPGA architecture research. --- paper_title: Technology mapping for FPGAs with embedded memory blocks paper_content: Modern field programmable gate arrays (FPGAs) provide embedded memory blocks (EMBs) to be used as on-chip memories. In this paper, we explore the possibility of using EMBs to implement logic functions when they are not used as on-chip memory. We propose a general technology mapping problem for FPGAs with EMBs for area and delay minimization and develop an efficient algorithm based on the concepts of Maximum Fanout Free Cone (MFFC) [3] and Maximum Fanout Free Subgraph (MFFS) [7], named EMB_Pack, which minimizes the area after or before technology mapping by using EMBs while maintaining the circuit delay. We have tested EMB_Pack on MCNC benchmarks on Altera's FLEX10K device family [1]. 
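The memory-as-logic idea behind SMAP and EMB_Pack rests on a simple capacity argument: a cone of logic with d inputs and m outputs is just a 2^d x m truth table, so it fits an embedded array configured as a ROM whenever that table fits the array. The sketch below shows the feasibility check and the ROM-initialization step for such a cone; the 2-Kbit array size echoes the arrays mentioned above, while max_width, the helper names and the example cone are illustrative assumptions rather than the cited algorithms.

    # Sketch of the basic capacity check behind mapping logic into an embedded
    # memory block configured as a ROM, in the style of the memory-mapping
    # work described above.  Only the 2-Kbit size comes from the abstracts;
    # everything else here is illustrative.

    def fits_in_emb(n_inputs, n_outputs, emb_bits=2048, max_width=8):
        """A cone with n_inputs and n_outputs fits one EMB used as a ROM iff
        its 2**n_inputs x n_outputs truth table fits the array geometry."""
        return n_outputs <= max_width and (2 ** n_inputs) * n_outputs <= emb_bits

    def rom_contents(cone_outputs, n_inputs):
        """Tabulate the cone: cone_outputs maps a tuple of 0/1 input values to a
        tuple of output bits; the result is the ROM initialization, one word per
        input combination (address = input bits read as a binary number)."""
        words = []
        for addr in range(2 ** n_inputs):
            bits = tuple((addr >> i) & 1 for i in reversed(range(n_inputs)))
            outs = cone_outputs(bits)
            words.append(sum(b << i for i, b in enumerate(reversed(outs))))
        return words

    if __name__ == "__main__":
        # 3-input, 2-output example cone: out0 = majority, out1 = parity
        cone = lambda b: (int(sum(b) >= 2), sum(b) & 1)
        assert fits_in_emb(3, 2)
        print(rom_contents(cone, 3))   # 8 words, 2 bits each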
The experimental results show that compared with the original mapped circuits generated from CutMap [5] without using EMBs, EMB_Pack as postprocessing can further reduce up to 10% of the area on the mapped circuits while maintaining the layout delay by making efficient use of available EMB resources. Compared with CutMap-e without using EMBs, EMB_Pack as pre-mapping processing followed by CutMap-e can reduce 6% of the area while maintaining the circuit optimal delay. --- paper_title: A Performance and Routability Driven Router for FPGAs Considering Path Delays paper_content: This paper presents a new performance and routability driven router for symmetrical array based Field Programmable Gate Arrays (FPGAs). The objectives of our proposed routing algorithm are twofold: (1) improve the routability of the design (i.e., minimize the maximumrequired routing channel density) and (2) improve the overall performance of the design (i.e., minimize the overall path delay). Initially, nets are routed sequentially according to their criticalities and routabilities. The nets/paths violating the routing-resource and timing constraints are then resolved iteratively by a rip-up-and-rerouter, which is guided by a simulated evolution based optimization technique. The proposed algorithm considers the path delays and routability throughout the entire routing process. Experimental results show that our router can significantly improve routability and reduce delay over many existing routing algorithms. --- paper_title: VLSI cell placement techniques paper_content: VLSI cell placement problem is known to be NP complete. A wide repertoire of heuristic algorithms exists in the literature for efficiently arranging the logic cells on a VLSI chip. The objective of this paper is to present a comprehensive survey of the various cell placement techniques, with emphasis on standard cell and macro placement. Five major algorithms for placement are discussed: simulated annealing, force-directed placement, min-cut placement, placement by numerical optimization, and evolution-based placement. The first two classes of algorithms owe their origin to physical laws, the third and fourth are analytical techniques, and the fifth class of algorithms is derived from biological phenomena. In each category, the basic algorithm is explained with appropriate examples. Also discussed are the different implementations done by researchers. --- paper_title: Performance driven floorplanning for FPGA based designs paper_content: Increasing design densities on large FPGAs and greater demand for performance, has calledfor special purpose tools like floorplanner, performance driven router, and more. In this paper we present a floorplanning based design mapping solution that is capable of mapping macro cell based designs as well as hierarchicaldesigns on FPGAs. The mapping solution has been tested extensively on a large collection of designs. We not only outperform state of the art CAE tools from industry in terms of execution time but also achieve much better performance in terms of timing. These methods are especially suitable for mapping designs on very large FPGAs. --- paper_title: Software technologies for reconfigurable systems paper_content: FPGA-based systems are a significant area of computing, providing a high-performance implementation substrate for many different applications. 
However, the key to harnessing their power for most domains is developing mapping tools for automatically transforming a circuit or algorithm into a configuration for the system. In this paper we review the current state-of-the-art in mapping tools for FPGA-based systems, including single-chip and multi-chip mapping algorithms for FPGAs, software support for reconfigurable computing, and tools for run-time reconfigurability. We also discuss the challenges for the future, pointing out where development is still needed to let reconfigurable systems achieve all of their promise. --- paper_title: New performance-driven FPGA routing algorithms paper_content: Motivated by the goal of increasing the performance of FPGA-based designs, we propose new Steiner and arborescence FPGA routing algorithms. Our Steiner tree constructions significantly outperform the best known ones and have provably good performance bounds. Our arborescence heuristics produce routing solutions with optimal source-sink pathlengths, and with wirelength on par with the best existing Steiner tree heuristics. We have incorporated these algorithms into an actual FPGA router, which routed a number of industrial circuits using channel width considerably smaller than is achievable by previous routers. Our routing results for both the 3000 and 4000-series Xilinx parts are currently the best known in the Literature. --- paper_title: Trading quality for compile time: ultra-fast placement for FPGAs paper_content: The demand for high-speed FPGA compilation tools has occured for three reasons: first, as FPGA device capacity has grown, the computation time devoted to placement and routing has grown more dramatically than the compute power of the available computers. Second, there exists a subset of users who are willing to accept a reduction in the quality of result in exchange for a high-speed compilation. Third, high-speed compile has been a long-standing desire of users of FPGA-based custom computing machines, since their compile time requirements are ideally closer to those of regular computers. This paper focuses on the placement phase of the compile process, and presents an ultra-fast placement algorithm targeted to FPGAs. The algorithm is based on a combination of multiple-level, bottom-up clustering and hierarchical simulated annealing. It provides superior area results over a known high-quality placement tool on a set of large benchmark circuits, when both are restricted to a short run time. For example, it can generate a placement for a 100,000-gate circuit in 10 seconds on a 300 MHz Sun UltraSPARC workstation that is only 33% worse than a high-quality placement that takes 524 seconds using a pure simulated annealing implementation. In addition, operating in its fastest mode, this tool can provide an accurate estimate of the wirelength achievable with good quality placement. This can be used, in conjunction with a routing predictor, to very quickly determine the routability of a given circuit on a given FPGA device. --- paper_title: General modeling and technology-mapping technique for LUT-based FPGAs paper_content: We present a general approach to the FPGA technology mapping problem that applies to any logic block composed of lookup tables (LUTs) and can yield optimal solutions. The connections between LUTs of a logic block are modeled by virtual switches, which define a set of multiple-LUT blocks (MLBs) called an MLB-basis. We identify the MLB-bases for various commercial logic blocks. 
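Placement flows like the ultra-fast placer above couple bottom-up clustering with simulated annealing; the annealing core itself is compact. The sketch below is a minimal wirelength-driven annealer over a square grid: it proposes random block swaps, accepts uphill moves with probability exp(-delta/T), and cools geometrically. The netlist, grid size and schedule constants are illustrative, and the clustering, adaptive schedule and range limiting that make the real tool fast are omitted.

    # A compact simulated-annealing placer in the spirit of the
    # clustering-plus-annealing placement flow described above.

    import math, random

    def hpwl(nets, pos):
        """Half-perimeter wirelength of all nets for a placement pos[block]=(x,y)."""
        total = 0
        for blocks in nets:
            xs = [pos[b][0] for b in blocks]
            ys = [pos[b][1] for b in blocks]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    def anneal(blocks, nets, grid, t=10.0, alpha=0.95, moves_per_t=200, t_min=0.01):
        # assumes grid*grid >= len(blocks)
        cells = [(x, y) for x in range(grid) for y in range(grid)]
        random.shuffle(cells)
        pos = dict(zip(blocks, cells))              # initial random placement
        cost = hpwl(nets, pos)
        while t > t_min:
            for _ in range(moves_per_t):
                a, b = random.sample(blocks, 2)     # propose swapping two blocks
                pos[a], pos[b] = pos[b], pos[a]
                new_cost = hpwl(nets, pos)
                delta = new_cost - cost
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    cost = new_cost                 # accept the move
                else:
                    pos[a], pos[b] = pos[b], pos[a] # reject: undo the swap
            t *= alpha                              # geometric cooling
        return pos, cost

    if __name__ == "__main__":
        blocks = [f"b{i}" for i in range(9)]
        nets = [["b0", "b1", "b2"], ["b2", "b3"], ["b4", "b5", "b6"], ["b7", "b8"]]
        placement, wl = anneal(blocks, nets, grid=3)
        print("final wirelength:", wl)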
Given a n MLB-basis, we formulate FPGA mapping as a mixed integer linear programming (MILP) problem to achieve both the generality and the optimality objectives. We solve the MILP models using a general-purpose MILP solver, and present the results of mapping some ISCAS.85 benchmark circuits with a variety of commercial FPGAs. Circuits of a few hundred gates can be mapped in reasonable time using the MILP approach directly. Larger circuits can be handled by partitioning them prior to technology mapping. We show that optimal or provably near-optimal solutions can be obtained for the large ISCAS.85 benchmark circuits using partitions defined by their high-level functions. --- paper_title: PathFinder: a negotiation-based performance-driven router for FPGAs paper_content: Routing FPGAs is a challenging problem because of the relative scarcity of routing resources, both wires and connection points. This can lead either to slow implementations caused by long wiring paths that avoid congestion or a failure to route all signals. This paper presents PathFinder, a router that balances the goals of performance and routability. PathFinder uses an iterative algorithm that converges to a solution in which all signals are routed while achieving close to the optimal performance allowed by the placement. Routability is achieved by forcing signals to negotiate for a resource and thereby determine which signal needs the resource most. Delay is minimized by allowing the more critical signals a greater say in this negotiation. Because PathFinder requires only a directed graph to describe the architecture of routing resources, it adapts readily to a wide variety of FPGA architectures such as Triptych, Xilinx 3000 and mesh-connected arrays of FPGAs. The results of routing ISCAS benchmarks on the Triptych FPGA architecture show an average increase of only 4.5% in critical path delay over the optimum delay for a placement. Routes of ISCAS benchmarks on the Xilinx 3000 architecture show a greater completion rate than commercial tools, as well as 11% faster implementations. --- paper_title: Boolean matching for complex PLBs in LUT-based FPGAs with application to architecture evaluation paper_content: In this paper, we developed Boolean matching techniques for complex programmable logic blocks (PLBs) in LUT-based FPGAs. A complex PLB can not only be used as a K -input LUT, but also can implement some wide functions of more than K variables. We apply previous and develop new functional decomposition methods to match wide functions to PLBs. We can determine exactly whether a given wide function can be implemented with a XC4000 CLB or other three PLB architectures (including the XC5200 CLB). We evaluate functional capabilities of the four PLB architectures on implementing wide functions in MCNC benchmarks. Experiments show that the XC4000 CLB can be used to implement up to 98% of 6-cuts and 88% of 7-cuts in MCNC benchmarks, while two of the other three PLB architectures have a smaller cost in terms of logic capability per silicon area. Our results are useful for designing future logic unit architectures in LUT based FPGAs. --- paper_title: Acceleration of an FPGA router paper_content: The authors describe their experience and progress in accelerating an FPGA router. Placement and routing is undoubtedly the most time-consuming process in automatic chip design or configuring programmable logic devices as reconfigurable computing elements. 
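The negotiation idea in PathFinder can be captured in a short loop: every net is repeatedly re-routed by a shortest-path search over the routing-resource graph, where a node's cost grows with its present congestion and with a history term that remembers how often it has been fought over, until no node is over capacity. The sketch below follows that scheme on a tiny node-capacitated graph; the cost schedule (pres_fac doubling, unit history increments), the whole-netlist re-route per iteration and the example graph are illustrative simplifications rather than the published router's exact formulation, and timing criticality is ignored.

    # Sketch of negotiated-congestion routing on a tiny routing-resource graph.
    # Nets are (source, sink) pairs; every graph node has unit capacity.

    import heapq

    def dijkstra(graph, src, dst, node_cost):
        dist = {src: node_cost(src)}
        prev, frontier = {}, [(dist[src], src)]
        while frontier:
            d, n = heapq.heappop(frontier)
            if n == dst:
                break
            if d > dist.get(n, float("inf")):
                continue
            for m in graph[n]:
                nd = d + node_cost(m)
                if nd < dist.get(m, float("inf")):
                    dist[m], prev[m] = nd, n
                    heapq.heappush(frontier, (nd, m))
        path, n = [dst], dst
        while n != src:
            n = prev[n]
            path.append(n)
        return path[::-1]

    def pathfinder(graph, nets, capacity=1, max_iters=30):
        history = {n: 0.0 for n in graph}
        routes, pres_fac = {}, 0.5
        for _ in range(max_iters):
            occupancy = {n: 0 for n in graph}
            def cost(n):                      # congestion-negotiated node cost
                over = max(0, occupancy[n] + 1 - capacity)
                return (1.0 + history[n]) * (1.0 + pres_fac * over)
            for net, (src, dst) in nets.items():
                path = dijkstra(graph, src, dst, cost)
                routes[net] = path
                for n in path:
                    occupancy[n] += 1
            overused = [n for n, o in occupancy.items() if o > capacity]
            if not overused:
                return routes                 # congestion-free routing found
            for n in overused:
                history[n] += 1.0             # make contested nodes look worse
            pres_fac *= 2                     # press harder each iteration
        raise RuntimeError("did not converge")

    if __name__ == "__main__":
        # two routing tracks (t1, t2) shared by two source/sink pairs
        g = {"s1": ["t1", "t2"], "s2": ["t1", "t2"],
             "t1": ["d1", "d2"], "t2": ["d1", "d2"],
             "d1": [], "d2": []}
        nets = {"n1": ("s1", "d1"), "n2": ("s2", "d2")}
        for net, path in pathfinder(g, nets).items():
            print(net, "->", path)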
Their goal is to accelerate routing of FPGAs by 10 fold with a combination of processor clusters and hardware acceleration. Coarse-grain parallelism is exploited by having several processors route separate groups of nets in parallel. A hardware accelerator is presented which exploits the fine-grain parallelism in routing individual nets. --- paper_title: Technology mapping of heterogeneous LUT-based FPGAs paper_content: New techniques have been developed for the technology mapping of FPGAs containing more than one size of look-up table. The Xil inx 4000 series is one such family of devices. These have a very large share of the FPGA market, and yet the associated technology mapping problem has hardly been addressed in the literature. Our method extends the standard techniques of functional decomposition and network covering. For the decomposition, we have extended the conventional binpacking (cube-packing) algorithms so that it produces two sizes of bins. We have also enhanced it to explore several packing possibilities, and include cube division and cascading of nodes. The covering step is based on the concept of flow networks and cut-computation. We devised a theory that reduces the flow network sizes so that a dynamic programming approach can be used to compute the feasible cuts in the network. An iterative selection algorithm can then be used to compute the set cover of the network. Experimental results show good performances for the Xilinx 4K devices (about 25% improvement over MOFL and 10% over comparable algorithms in SIS in terms of CLBs). --- paper_title: An Efficient Algorithm for Performance-Optimal FPGA Technology Mapping with Retiming paper_content: It is known that most field programmable gate array (FPGA) mapping algorithms consider only combinational circuits. Pan and Liu [1996] recently proposed a novel algorithm, named SeqMapII, of technology mapping with retiming for clock period minimization. Their algorithm, however, requires O(K/sup 3/n/sup 5/log(Kn/sup 2/)logn) run time and O(K/sup 2/n/sup 2/) space for sequential circuits with n gates. In practice, these requirements are too high for targeting K-lookup-table-based FPGA's implementing medium or large designs. In this paper, we present three strategies to improve the performance of the SeqMapII algorithm significantly. Our algorithm works in O(K/sup 2/nln|P/sub v/|logn) run time and O(K|P/sub v/|) space, where n/sub l/ is the number of labeling iterations and |P/sub v/| is the size of the partial flow network. In practice, both n/sub l/ and |P/sub v/| are less than n. Area minimization is also considered in our algorithm based on efficient low-cost K-cut computation. --- paper_title: Technology mapping for TLU FPGAs based on decomposition of binary decision diagrams paper_content: This paper proposes an efficient algorithm for technology mapping targeting table look-up (TLU) blocks. It is capable of minimizing either the number of TLUs used or the depth of the produced circuit. Our approach consists of two steps. First a network of super nodes, is created. Next a Boolean function of each super node with an appropriate don't care set is decomposed into a network of TLUs. To minimize the circuit's depth, several rules are applied on the critical portion of the mapped circuit. 
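The two-bin-size packing step mentioned above can be illustrated with a first-fit-decreasing sketch in which each cube is represented only by its support (the set of variables it uses), and a LUT can absorb a cube as long as the union of supports still fits its input count. This ignores everything else the real cube-packing heuristics consider (cube division, cascading, sharing of packing alternatives), and the 4- and 5-input bin sizes are an illustrative assumption.

    # Simplified cube-packing sketch for a device with two LUT sizes, in the
    # spirit of the heterogeneous bin-packing decomposition described above.

    def pack_cubes(cubes, lut_sizes=(5, 4)):
        bins = []                                   # list of (capacity, support set)
        for cube in sorted(cubes, key=len, reverse=True):   # first-fit decreasing
            for i, (cap, support) in enumerate(bins):
                if len(support | cube) <= cap:      # fits an existing LUT
                    bins[i] = (cap, support | cube)
                    break
            else:
                # open the smallest LUT that can hold the cube on its own
                cap = min(s for s in lut_sizes if s >= len(cube))
                bins.append((cap, set(cube)))
        return bins

    if __name__ == "__main__":
        cubes = [{"a", "b", "c"}, {"a", "d"}, {"e", "f", "g", "h"},
                 {"e", "f"}, {"b", "c"}, {"i"}]
        for cap, support in pack_cubes(cubes):
            print(f"{cap}-LUT:", sorted(support))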
--- paper_title: Timing driven floorplanning on programmable hierarchical targets paper_content: The goal of this paper is to perform a timing optimization of a circuit described by a network of cells on a target structure whose connection delays have discrete values following its hierarchy. The circuit is modelled by a set of timed cones whose delay histograms allow their classification into critical, potential critical and neutral cones according to predicted delays. The floorplanning is then guided by this cone structuring and has two innovative features: first, it is shown that the placement of the elements of the neutral cones has no impact on timing results, thus a significant reduction is obtained; second, despite a greedy approach, a near optimal floorplan is achieved in a large number of examples. --- paper_title: NAPA C: compiling for a hybrid RISC/FPGA architecture paper_content: Hybrid architectures combining conventional processors with configurable logic resources enable efficient coordination of control with datapath computation. With integration of the two components on a single device, loop control and data-dependent branching can be handled by the conventional processor, while regular datapath computation occurs on the configurable hardware. This paper describes a novel pragma-based approach to programming such hybrid devices. The NAPA C language provides pragma directives so that the programmer (or an automatic partitioner) can specify where data is to reside and where computation is to occur with statement-level granularity. The NAPA C compiler, targeting National Semiconductor's NAPA1000 chip, performs semantic analysis of the pragma-annotated program and co-synthesizes a conventional program executable combined with a configuration bit stream for the adaptive logic. Compiler optimizations include synthesis of hardware pipelines from pipelineable loops. --- paper_title: An operating system for custom computing machines based on the Xputer paradigm paper_content: The paper presents an operating system (OS) for custom computing machines (CCMs) based on the Xputer paradigm. Custom computing tries to combine traditional computing with programmable hardware, attempting to gain from the benefits of both adaptive software and optimized hardware. The OS running as an extension to the actual host OS allows a greater flexibility in deciding what parts of the application should run on the configurable hardware with structural code and what on the host-hardware with conventional software. This decision can be taken late - at run-time - and dynamically, in contrast to early partitioning and deciding at compile-time as used currently on CCMs. Thus the CCM can be used concurrently by multiple users or applications without knowledge of each other. This raises programming and using CCMs to levels close to modern OSes for sequential von Neumann processors. --- paper_title: The Garp architecture and C compiler paper_content: Various projects and products have been built using off-the-shelf field-programmable gate arrays (FPGAs) as computation accelerators for specific tasks. Such systems typically connect one or more FPGAs to the host computer via an I/O bus. Some have shown remarkable speedups, albeit limited to specific application domains. Many factors limit the general usefulness of such systems. Long reconfiguration times prevent the acceleration of applications that spread their time over many different tasks.
Low-bandwidth paths for data transfer limit the usefulness of such systems to tasks that have a high computation-to-memory-bandwidth ratio. In addition, standard FPGA tools require hardware design expertise which is beyond the knowledge of most programmers. To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it. They are also investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications. They present their results in this article. --- paper_title: A hardware/software partitioning algorithm for custom computing machines paper_content: In this paper an Hardware/Software partitioning algorithm is presented. Appropriate cost and performance estimation functions were developed, as well, as techniques for their automated calculation. The partitioning algorithm that explores the parallelism in acyclic code regions is part of a larger tool kit specific for custom computing machines. The tool kit includes a parallelising compiler, an hardware/software partitioning program, as well as, a set of programs for performance estimation and system implementation. It speeds up the computationally intensive tasks using a FPGA based processing platform to augment the functionality of the processor with new operations and parallel capacities. An example was used to demonstrate the proposed partitioning techniques. --- paper_title: A CAD Suite for High-Performance FPGA Design paper_content: This paper describes the current status of a suite of CAD tools designed specifically for use by designers who are developing high-performance configurable-computing applications. The basis of this tool suite is JHDL, a design tool originally conceived as a way to experiment with Run-Time Reconfigured (RTR) designs. However, what began as a limited experiment to model RTR designs with Java has evolved into a comprehensive suite of design tools and verification aids, with these tools being used successfully to implement high-performance applications in Automated Target Recognition (ATR), sonar beamforming, and general image processing on configurable-computing systems. --- paper_title: Fast integrated tools for circuit design with FPGAs paper_content: To implement high-density and high-speed FPGA circuits, designers need tight control over the circuit implementation process. However, current design tools are unsuited for this purpose as they lack fast turnaround times, interactiveness, and integration. We present a system for the Xilinx XC6200 FPGA, which addresses these issues. It consists of a suite of tightly integrated tools for the XC6200 architecture centered around an architecture-independent tool framework. The system lets the designer easily intervene at various stages of the design process and features design cycle times (from an HDL specification to a complete layout) in the order of seconds. --- paper_title: NAPA C: compiling for a hybrid RISC/FPGA architecture paper_content: Hybrid architectures combining conventional processors with configurable logic resources enable efficient coordination of control with datapath computation. With integration of the two components on a single device, loop control and data-dependent branching can be handled by the conventional processor. While regular datapath computation occurs on the configurable hardware. This paper describes a novel pragma-based approach to programming such hybrid devices. 
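A minimal version of the cost/performance-driven partitioning described above is a budgeted greedy selection: estimate software time, hardware time and FPGA area for each candidate region, then move regions to hardware in order of time saved per unit of area until the area budget is exhausted. The sketch below illustrates only that selection rule; the estimation values, the region names in the example and the greedy ranking itself are assumptions, not the cited algorithm.

    # Greedy hardware/software partitioning sketch in the spirit of the
    # cost/performance-driven partitioner described above.  Each candidate
    # region carries an estimated software time, hardware time and FPGA area;
    # the numbers and the benefit-per-area rule are illustrative.

    def partition(regions, area_budget):
        """regions: {name: (sw_time, hw_time, area)}.  Returns the set of regions
        moved to hardware and the resulting estimated total execution time."""
        # rank by time saved per unit of FPGA area consumed
        ranked = sorted(regions.items(),
                        key=lambda kv: (kv[1][0] - kv[1][1]) / kv[1][2],
                        reverse=True)
        in_hw, used = set(), 0
        for name, (sw, hw, area) in ranked:
            if sw > hw and used + area <= area_budget:
                in_hw.add(name)
                used += area
        total = sum(hw if name in in_hw else sw
                    for name, (sw, hw, area) in regions.items())
        return in_hw, total

    if __name__ == "__main__":
        regions = {"fir": (100, 10, 40), "dct": (80, 20, 50),
                   "huff": (30, 25, 30), "ctrl": (20, 40, 10)}
        hw, t = partition(regions, area_budget=80)
        print("to hardware:", sorted(hw), "estimated time:", t)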
The NAPA C language provides pragma directives so that the programmer (or an automatic partitioner) can specify where data is to reside and where computation is to occur with statement-level granularity. The NAPA C compiler, targeting National Semiconductor's NAPA1000 chip, performs semantic analysis of the pragma-annotated program and co-synthesizes a conventional program executable combined with a configuration bit stream for the adaptive logic. Compiler optimizations include synthesis of hardware pipelines from pipelineable loops. --- paper_title: JHDL-an HDL for reconfigurable systems paper_content: JHDL is a design tool for reconfigurable systems that allows designers to express circuit organizations that dynamically change over time in a natural way, using only standard programming abstractions found in object-oriented languages. JHDL manages FPGA resources in a manner that is similar to the way object-oriented languages manage memory: circuits are treated as distinct objects and a circuit is configured onto a configurable computing machine (CCM) by invoking its constructor effectively "constructing " an instance of the circuit onto the reconfigurable platform just as object instances are allocated in memory with conventional object-oriented languages. This approach of using object constructors/destructors to control the circuit lifetime on a CCM is a powerful technique that naturally leads to a dual simulation/execution environment where a designer can easily switch between either software simulation or hardware execution on a CCM with a single application description. Moreover JHDL supports dual hardware/software execution; parts of the application described using JHDL circuit constructs can be executed on the CCM while the remainder of the application the-GUI for example-can run on the CCM host. Based on an existing programming language (Java), JHDL requires no language extensions and can be used with any standard Java 1.1 distribution. --- paper_title: The Garp architecture and C compiler paper_content: Various projects and products have been built using off-the-shelf field-programmable gate arrays (FPGAs) as computation accelerators for specific tasks. Such systems typically connect one or more FPGAs to the host computer via an I/O bus. Some have shown remarkable speedups, albeit limited to specific application domains. Many factors limit the general usefulness of such systems. Long reconfiguration times prevent the acceleration of applications that spread their time over many different tasks. Low-bandwidth paths for data transfer limit the usefulness of such systems to tasks that have a high computation-to-memory-bandwidth ratio. In addition, standard FPGA tools require hardware design expertise which is beyond the knowledge of most programmers. To help investigate the viability of connected FPGA systems, the authors designed their own architecture called Garp and experimented with running applications on it. They are also investigating whether Garp's design enables automatic, fast, effective compilation across a broad range of applications. They present their results in this article. --- paper_title: The Transmogrifier C hardware description language and compiler for FPGAs paper_content: The Transmogrifier C hardware description language is almost identical to the C programming language, making it attractive to the large community of C-language programmers. 
This paper describes the semantics of the language and presents a Transmogrifier C compiler that targets the Xilinx 4000 FPGA. The compiler is operational and has produced several working circuits, including a graphics display driver. --- paper_title: Exploiting reconfigurability through domain-specific systems paper_content: Domain-specific systems represent an important opportunity for FPGA-based systems. They provide a way to exploit the low-level flexibility of the device through the use of domain-specific libraries of specialized circuit elements. Moreover, reconfigurability of the FPGAs is also exploited by reusing these circuit elements to implement a wide variety of applications within the specific domain. Application areas such as image processing, signal processing, graphics, DNA analysis, and other areas are important areas that can use this approach to achieve high levels of performance at reasonable cost. Unfortunately, current CAD tools are a relatively poor match for the design needs of such systems. However, some of the additions listed in this paper should not be too difficult to implement and they would greatly ease the design of such systems. --- paper_title: An Universal CLA Adder Generator for SRAM-Based FPGAs paper_content: In this paper we present an universal module generator for hierarchical carry lookahead adders of any word length which is suitable for most SRAM-based FPGA architectures. We introduce a generic model of SRAM-based FPGAs taking different configurations of the logic blocks into account. Considering the logical structure of CLA adders we efficiently perform technology mapping including an adaptive structure generation process as well as signal flow driven placement and partitioning which is necessary if the macro exceeds the limitations given by the FPGA's pin or CLB count. --- paper_title: A Case Study of Partially Evaluated Hardware Circuits: Key-Specific DES paper_content: FPGA based data encryption provides greater flexibility than ASICs and higher performance than software. Because FPGAs can be reprogrammed, they allow a single integrated circuit to efficiently implement multiple encryption algorithms. Furthermore, the ability to program FPGAs at runtime can be used to improve the performance through dynamic optimization. This paper describes the application of partial evaluation to an implementation of the Data Encryption Standard (DES). Each end user of a DES session shares a secret key, and this knowledge can be used to improve circuit performance. Key-specific encryption circuits require fewer resources and have shorter critical paths than the completely general design. By applying partial evaluation to DES on a Xilinx XC4000 series device we have reduced the CLB usage by 45% and improved the encryption bandwidth by 35%. --- paper_title: Automated field-programmable compute accelerator design using partial evaluation paper_content: This paper describes a compiler that generates both hardware and controlling software for field-programmable compute accelerators. By analyzing a source program together with part of its input, the compiler generates VHDL descriptions of functional units that are mapped on a set of FPGA chips and an optimized sequence of control constructions that run on the customized machine. The primary technique employed in the compiler is partial evaluation, which is used to transform an application program together with part of its input into an optimized program. 
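Partial evaluation, as used in the key-specific DES circuits and the accelerator compiler above, specializes a computation with respect to the inputs that are known early so that only the data-dependent residue remains. The sketch below shows the idea on an invented toy cipher round (it is not DES and not the cited compiler): the key schedule is folded at specialization time, and the returned closure plays the role of the specialized circuit.

    # Minimal illustration of the partial-evaluation idea described above:
    # specialize a computation with respect to the part of its input that is
    # known early (here, a key), leaving only the data-dependent work.
    # The toy "cipher" is invented for illustration; it is not DES.

    def toy_round(block, key, rounds=4):
        """General version: recomputes the key-derived constants on every call."""
        for r in range(rounds):
            k = (key * (r + 1)) & 0xFF          # key-dependent, data-independent
            block = ((block ^ k) * 3 + r) & 0xFF
        return block

    def specialize(key, rounds=4):
        """'Compile time': fold everything that depends only on the key."""
        schedule = [(key * (r + 1)) & 0xFF for r in range(rounds)]
        def specialized(block):                 # residual, data-dependent program
            for r, k in enumerate(schedule):
                block = ((block ^ k) * 3 + r) & 0xFF
            return block
        return specialized

    if __name__ == "__main__":
        enc = specialize(key=0x5A)
        assert all(enc(b) == toy_round(b, 0x5A) for b in range(256))
        print("specialized and general versions agree")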
Further phases in the compiler identify pieces of the program that can be realized in hardware and schedule computations to execute on the resulting hardware. Finally, a set of specialized functional units generated by the compiler for a timing simulation program is used to demonstrate the approach. --- paper_title: Run-time parameterised circuits for the Xilinx XC6200 paper_content: Current design tools support parameterisation of circuits, but the parameters are fixed at compile-time. In contrast, the circuits discussed in this paper fix their parameters at run-time. Run-time parameterised circuits can potentially out-perform custom VLSI hardware by optimising the FPGA circuit for a specific instance of a problem rather than for a general class of problem. This paper discusses the design of run-time parameterised circuits, and presents a study of run-time parameterised circuits for finite field operations on the Xilinx XC6200. The paper includes a comparison with implementation on a self-timed version of the XC6200 architecture, which illustrates the potential benefits of self-timing for dynamically reconfigurable systems. --- paper_title: Parallelizing Applications into Silicon paper_content: The next decade of computing will be dominated by embedded systems, information appliances and application-specific computers. In order to build these systems, designers will need high-level compilation and CAD tools that generate architectures that effectively meet the needs of each application. In this paper we present a novel compilation system that allows sequential programs, written in C or FORTRAN, to be compiled directly into custom silicon or reconfigurable architectures. This capability is also interesting because trends in computer architecture are moving towards more reconfigurable hardware-like substrates, such as FPGA based systems. Our system works by successfully combining two resource-efficient computing disciplines: Small Memories and Virtual Wires. For a given application, the compiler first analyzes the memory access patterns of pointers and arrays in the program and constructs a partitioned memory system made up of many small memories. The computation is implemented by active computing elements that are spatially distributed within the memory array. A space-time scheduler assigns instructions to the computing elements in a way that maximizes locality and minimizes physical communication distance. It also generates an efficient static schedule for the interconnect. Finally, specialized hardware for the resulting schedule of memory accesses, wires, and computation is generated as a multi-process state machine in synthesizable Verilog. With this system, implemented as a set of SUIF compiler passes, we have successfully compiled programs into hardware and achieve specialization performance enhancements by up to an order of magnitude versus a single general purpose processor. We also achieve additional parallelization speedups similar to those obtainable using a tightly-interconnected multiprocessor. --- paper_title: Fast compilation for pipelined reconfigurable fabrics paper_content: In this paper we describe a compiler which quickly synthesizes high quality pipelined datapaths for pipelined reconfigurable devices. The compiler uses the same internal representation to perform synthesis, module generation, optimization, and place and route. The core of the compiler is a linear time place and route algorithm more than two orders of magnitude faster than traditional CAD tools. 
The key behind our approach is that we never backtrack, rip-up, or re-route. Instead, the graph representing the computation is preprocessed to guarantee routability by inserting lazy noops. The preprocessing steps provides enough information to make a greedy strategy feasible. The compilation speed is approximately 3000 bit-operations/second (on a PII/400Mhz) for a wide range of applications. The hardware utilization averages 60% on the target device, PipeRench. --- paper_title: Specifying and Compiling Applications for RAPID paper_content: Efficient, deeply pipelined implementations exist for a wide variety of important computation-intensive applications, and many special-purpose hardware machines have been built that take advantage of these pipelined computation structures. While these implementations achieve high performance, this comes at the expense of flexibility. On the other hand, flexible architectures proposed thus far have not been very efficient. RaPiD is a reconfigurable pipelined datapath architecture designed to provide a combination of performance and flexibility for a variety of applications. It uses a combination of static and dynamic control to efficiently implement pipelined computations. This control, however, is very complicated; specifying a computation's control circuitry directly would be prohibitively difficult. This paper describes how specifications of a pipelined computation in a suitably high-level language are compiled into the control required to implement that computation in the RaPiD architecture. The compiler extracts a statically configured datapath from this description, identifies the dynamic control signals required to execute the computation, and then produces the control program and decoding structure that generates these dynamic control signals. --- paper_title: NAPA C: compiling for a hybrid RISC/FPGA architecture paper_content: Hybrid architectures combining conventional processors with configurable logic resources enable efficient coordination of control with datapath computation. With integration of the two components on a single device, loop control and data-dependent branching can be handled by the conventional processor. While regular datapath computation occurs on the configurable hardware. This paper describes a novel pragma-based approach to programming such hybrid devices. The NAPA C language provides pragma directives so that the programmer (or an automatic partitioner) can specify where data is to reside and where computation is to occur with statement-level granularity. The NAPA C compiler, targeting National Semiconductor's NAPA1000 chip, performs semantic analysis of the pragma-annotated program and co-synthesizes a conventional program executable combined with a configuration bit stream for the adaptive logic. Compiler optimizations include synthesis of hardware pipelines from pipelineable loops. --- paper_title: Automated field-programmable compute accelerator design using partial evaluation paper_content: This paper describes a compiler that generates both hardware and controlling software for field-programmable compute accelerators. By analyzing a source program together with part of its input, the compiler generates VHDL descriptions of functional units that are mapped on a set of FPGA chips and an optimized sequence of control constructions that run on the customized machine. 
The primary technique employed in the compiler is partial evaluation, which is used to transform an application program together with part of its input into an optimized program. Further phases in the compiler identify pieces of the program that can be realized in hardware and schedule computations to execute on the resulting hardware. Finally, a set of specialized functional units generated by the compiler for a timing simulation program is used to demonstrate the approach. --- paper_title: Multi-FPGA systems paper_content: Multi-FPGA systems are a growing area of research. They offer the potential to deliver high performance solutions to general computing tasks, especially for the prototyping of digital logic. However, to realize this potential requires a flexible, powerful hardware substrate and a complete, high quality and high performance automatic mapping system. ::: The primary goal of this thesis is to offer a disciplined look at the issues and requirements of multi-FPGA systems. This includes an in-depth study of some of the hardware and software issues of multi-FPGA systems, especially logic partitioning and mesh routing topologies, as well as investigations into problems that have largely been ignored, including pin assignment and architectural support for logic emulator interfaces. We also present Springbok, a novel rapid-prototyping system for board-level designs. --- paper_title: I/O and performance tradeoffs with the FunctionBus during multi-FPGA partitioning paper_content: We improve upon a new approach for automatically partitioning a system among several FPGAs. The new approach partitions a system's functional specification, now commonly available, rather than its structural implementation. The improvement uses a bus, the FunctionBus, for implementing function calls among FPGA's, The bus can be used with any number of and its protocol uses only a small amount of existing FPGA hardware, requiring no special hardware. While functional rather than structural partitioning can substantially reduce the number of input/output pins using (I/O) the FunctionBus takes such reduction even further. In particular, performance and I/0can be traded-off by varying the bus size, as demonstrated using several examples. --- paper_title: Pin Assignment for Multi-FPGA Systems paper_content: Multi-FPGA systems have tremendous potential, providing a high-performance computing substrate for many different applications. One of the keys to achieving this potential is a complete, automatic mapping solution that creates high-quality mappings in the shortest possible time. In this paper, we consider one step in this process, the assignment of inter-FPGA signals to specific I/O pins on the FPGAs in a multi-FPGA system. We show that this problem can neither be handled by pin assignment methods developed for other applications nor standard routing algorithms. Although current mapping systems ignore this issue, we show that an intelligent pin assignment method can achieve both quality and mapping speed improvements over random approaches. Intelligent pin assignment methods already exist for multi-FPGA systems, but are restricted to topologies where logic-bearing FPGAs cannot be directly connected. In this paper, we provide three new algorithms for the pin assignment of multi-FPGA systems with arbitrary topologies. 
We compare these approaches on several mappings to current multi-FPGA systems, and show that the force-directed approach produces better mappings, in significantly shorter time, than any of the other approaches. --- paper_title: Using cone structures for circuit partitioning into FPGA packages paper_content: Circuit designers and high-level synthesis tools have traditionally used circuit hierarchy to partition circuits into packages. However hierarchical partitioning can not be easily performed if hierarchical blocks have too large a size or too many I-Os. This problem becomes more frequent with field-programmable gate arrays (FPGAs) which commonly have small size limits and up to ten times smaller I-O pin limits. An I-O bottleneck often occurs which during circuit partitioning means more required packages and more ordinary signal wires crossing between the packages. More critical timing paths between packages are cut and circuit operational frequencies are drastically reduced. In this paper, two new partitioning algorithms are presented that use cone structures to partition large hierarchical blocks into FPGA's. Cone structures are minimum cut partitioning structures for netlists with low fanout, and clustering structures for partitioning netlists with high fanout. Cone structures also allow for full containment of critical paths. When used with good merging and cutting strategies, results show the cone partitioning algorithms given here produces fewer FPGG partitions than min-cut with good performance. --- paper_title: Automatic mapping of algorithms onto multiple FPGA-SRAM modules paper_content: This paper describes the processes that have been developed to allow the automatic mapping of technology independent algorithms onto a network of reconfigurable hardware modules. The VHDL language is used at the behavioural level of abstraction to describe the algorithm. Each reconfigurable hardware module comprises of a single FPGA and SRAM. A mesh configuration of these modules provides the resources for data manipulation and data storage required by the algorithm. By pre-processing the algorithm prior to synthesis, and then performing network partitioning on the synthesised netlist, a hardware implementation is realised on multiple modules. This approach means that an algorithm may be mapped onto a versatile hardware system which is not constrained by the limitations of the target technology. --- paper_title: Virtual wires: overcoming pin limitations in FPGA-based logic emulators paper_content: Existing FPGA-based logic emulators only use a fraction of potential communication bandwidth because they dedicate each FPGA pin (physical wire) to a single emulated signal (logical wire). Virtual wires overcome pin limitations by intelligently multiplexing each physical wire among multiple logical wires and pipelining these connections at the maximum clocking frequency of the FPGA. A virtual wire represents a connection from a logical output on one FPGA to a logical input on another FPGA. Virtual wires not only increase usable bandwidth, but also relax the absolute limits imposed on gate utilization. The resulting improvement in bandwidth reduces the need for global interconnect, allowing effective use of low dimension inter-chip connections (such as nearest-neighbor). Nearest-neighbor topologies, coupled with the ability of virtual wires to overlap communication with computation, can even improve emulation speeds. 
The authors present the concept of virtual wires and describe their first implementation, a 'softwire' compiler which utilizes static routing and relies on minimal hardware support. Results from compiling netlists for the 18 K gate Sparcle microprocessor and the 86 K gate Alewife Communications and Cache Controller indicate that virtual wires can increase FPGA gate utilization beyond 80 percent without a significant slowdown in emulation speed. > --- paper_title: Software technologies for reconfigurable systems paper_content: FPGA-based systems are a significant area of computing, providing a high-performance implementation substrate for many different applications. However, the key to harnessing their power for most domains is developing mapping tools for automatically transforming a circuit or algorithm into a configuration for the system. In this paper we review the current state-of-the-art in mapping tools for FPGA-based systems, including single-chip and multi-chip mapping algorithms for FPGAs, software support for reconfigurable computing, and tools for run-time reconfigurability. We also discuss the challenges for the future, pointing out where development is still needed to let reconfigurable systems achieve all of their promise. --- paper_title: TIERS: Topology independent pipelined routing and scheduling for VirtualWire compilation paper_content: TIERS is a new pipelined routing and scheduling algorithm implemented in a complete VirtualWire TM compilation and synthesis system. TIERS is described and compared to prior work both analytically and quantitatively. TIERS improves system speed by as much as a factor of 2.5 over prior work. TIERS routing results for both Altera and Xilinx based FPGA systems are provided. --- paper_title: Multiterminal net routing for partial crossbar-based multi-FPGA systems paper_content: Multi-FPGA (field-programmable gate arrays) systems are used as custom computing machines to solve compute-intensive problems and also in the verification and prototyping of large circuits. In this paper, we address the problem of routing multiterminal nets in a multi-FPGA system that uses partial crossbars as interconnect structures. First, we model the multiterminal routing problem as a partitioned bin-packing problem and formulate it as an integer linear programming problem where the number of variables is exponential. A fast heuristic is applied to compute an upper bound on the routing solution. Then, a column generation technique is used to solve the linear relaxation of the initial master problem in order to obtain a lower bound on the routing solution. This is followed by an iterative branch-and-price procedure that attempts to find a routing solution somewhere between the two established bounds. In this regard, the proposed algorithm guarantees an exact-routing solution by searching a branch-and-price tree. Due to the tightness of the bounds, the branch-and-price tree is small resulting in shorter execution times. Experimental results are provided for different netlists and board configurations in order to demonstrate the algorithms performance. The obtained results show that the algorithm finds an exact routing solution in a very short time. 
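To make the virtual-wires idea above concrete, the following is a minimal C sketch of statically scheduling a fixed set of logical inter-FPGA signals onto a smaller number of physical pins over successive pipeline phases. It is only an illustration of the multiplexing principle, not the softwire compiler or TIERS router from the cited works; the constants and array contents are invented.

```c
/* Illustrative model of virtual-wire pin multiplexing: N logical signals
 * are time-multiplexed over P physical pins in ceil(N/P) pipeline phases. */
#include <stdio.h>

#define NUM_LOGICAL  8   /* logical inter-FPGA wires required by the design */
#define NUM_PHYSICAL 3   /* physical pins actually available */

int main(void) {
    int logical[NUM_LOGICAL] = {1, 0, 1, 1, 0, 1, 0, 0}; /* values to ship */
    int phases = (NUM_LOGICAL + NUM_PHYSICAL - 1) / NUM_PHYSICAL;

    /* Static schedule: in phase p, pin k carries logical signal p*P + k. */
    for (int p = 0; p < phases; p++) {
        printf("phase %d:", p);
        for (int k = 0; k < NUM_PHYSICAL; k++) {
            int sig = p * NUM_PHYSICAL + k;
            if (sig < NUM_LOGICAL)
                printf("  pin%d <- L%d=%d", k, sig, logical[sig]);
        }
        printf("\n");
    }
    printf("%d logical wires over %d pins in %d phases\n",
           NUM_LOGICAL, NUM_PHYSICAL, phases);
    return 0;
}
```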
--- paper_title: Improving simulation accuracy in design methodologies for dynamically reconfigurable logic systems paper_content: This paper presents a new approach to the simulation of Dynamically Reconfigurable Logic (DRL) systems, which offers better accuracy of modelling dynamic reconfiguration than previously reported simulation techniques. Our method, named Clock Morphing, is based on modelling dynamic reconfiguration via a reconfigured module clock signal while using a dedicated signal value to indicate dynamic reconfiguration. We also discuss problems associated with the other DRL simulation techniques, describe the main principles of the proposed simulation method and evaluate its feasibility by implementing of a Clock Morphing based DRL simulation in VHDL. --- paper_title: A CAD Suite for High-Performance FPGA Design paper_content: This paper describes the current status of a suite of CAD tools designed specifically for use by designers who are developing high-performance configurable-computing applications. The basis of this tool suite is JHDL, a design tool originally conceived as a way to experiment with Run-Time Reconfigured (RTR) designs. However, what began as a limited experiment to model RTR designs with Java has evolved into a comprehensive suite of design tools and verification aids, with these tools being used successfully to implement high-performance applications in Automated Target Recognition (ATR), sonar beamforming, and general image processing on configurable-computing systems. --- paper_title: A Simulation Tool for Dynamically Reconfigurable Field Programmable Gate Arrays paper_content: The emergence of static memory-based field programmable gate arrays (FPGAs) that are capable of being dynamically reconfigured, i.e., partially reconfigured while active, has initiated research into new methods of digital systems synthesis. At present, however, there are virtually no specific CAD tools to support the design and investigation of digital systems using dynamic reconfiguration. This paper reports on an investigation of new CAD tools and the development of a new simulation technique, called dynamic circuit switching (DCS), for dynamically reconfigurable systems. The principles of DCS are presented and examples of its application are described. --- paper_title: Debugging techniques for dynamically reconfigurable hardware paper_content: Testing dynamically reconfigurable systems imposes new challenges which require special treatment. We present tools and techniques we developed for debugging a dynamically reconfigurable system that performs run-time constant propagation optimisations. An application for monitoring the effect of run-time specialisation is presented and we show how we adapted standard testability techniques to evaluate the performance of specialised circuits. We also outline how NDLs that capture reconfiguration at a high level can assist with debugging. --- paper_title: JHDL-an HDL for reconfigurable systems paper_content: JHDL is a design tool for reconfigurable systems that allows designers to express circuit organizations that dynamically change over time in a natural way, using only standard programming abstractions found in object-oriented languages. 
JHDL manages FPGA resources in a manner that is similar to the way object-oriented languages manage memory: circuits are treated as distinct objects and a circuit is configured onto a configurable computing machine (CCM) by invoking its constructor effectively "constructing " an instance of the circuit onto the reconfigurable platform just as object instances are allocated in memory with conventional object-oriented languages. This approach of using object constructors/destructors to control the circuit lifetime on a CCM is a powerful technique that naturally leads to a dual simulation/execution environment where a designer can easily switch between either software simulation or hardware execution on a CCM with a single application description. Moreover JHDL supports dual hardware/software execution; parts of the application described using JHDL circuit constructs can be executed on the CCM while the remainder of the application the-GUI for example-can run on the CCM host. Based on an existing programming language (Java), JHDL requires no language extensions and can be used with any standard Java 1.1 distribution. --- paper_title: The design and implementation of a context switching FPGA paper_content: Dynamic reconfiguration of field programmable gate arrays (FPGAs) has recently emerged as the next step in reconfigurable computing. Sanders, A Lockheed Martin Company, is developing the enabling technology to exploit dynamic reconfiguration. The device being developed is capable of storing four configurations on-chip and switching between them on a clock cycle basis. Configurations can be loaded while other contexts are active. A powerful cross-context data sharing mechanism has been implemented. The current status of this work and future work are described. --- paper_title: The NAPA adaptive processing architecture paper_content: The National Adaptive Processing Architecture (NAPA) is a major effort to integrate the resources needed to develop teraops class computing systems based on the principles of adaptive computing. The primary goals for this effort include: (1) the development of an example NAPA component which achieves an order of magnitude cost/performance improvement compared to traditional FPGA based systems, (2) the creation of a rich but effective application development environment for NAPA systems based on the ideas of compile time functional partitioning and (3) significantly improve the base infrastructure for effective research in reconfigurable computing. This paper emphasizes the technical aspects of the architecture to achieve the first goal while illustrating key architectural concepts motivated by the second and third goals. --- paper_title: RaPiD - Reconfigurable Pipelined Datapath paper_content: Configurable computing has captured the imagination of many architects who want the performance of application-specific hardware combined with the reprogrammability of general-purpose computers. Unfortunately, onfigurable computing has had rather limited success largely because the FPGAs on which they are built are more suited to implementing »ndom logic than computing tasks. This paper presents RaPiD, a new coarse-grained FPGA architecture that is optimized for highly repetitive, computation-intensive tasks. Very deep application-specific computation pipelines can be configured in RaPiD. These pipelines make much more efficient use of silicon than traditional FPGAs and also yield much higher performance for a wide range of applications. 
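The context-switching and multi-context devices described above keep several configuration planes on chip and swap the active plane in a single cycle while another plane can be loaded in the background. The C fragment below is a toy software model of that behavior; the structure and function names are invented for illustration and do not describe the Sanders or NAPA hardware.

```c
/* Toy model of a multi-context device: several configuration planes are held
 * on-chip and the active plane can be swapped without reloading from off-chip. */
#include <stdio.h>
#include <string.h>

#define NUM_CONTEXTS 4
#define CONFIG_WORDS 8

typedef struct {
    unsigned config[NUM_CONTEXTS][CONFIG_WORDS]; /* on-chip context planes */
    int active;                                  /* context currently executing */
} MultiContextFPGA;

/* Loading a plane models the (slow) off-chip configuration path. */
static void load_context(MultiContextFPGA *f, int ctx, const unsigned *bits) {
    memcpy(f->config[ctx], bits, sizeof(f->config[ctx]));
}

/* Switching planes models the single-cycle context switch. */
static void switch_context(MultiContextFPGA *f, int ctx) { f->active = ctx; }

int main(void) {
    MultiContextFPGA fpga = { .active = 0 };
    unsigned cfg_a[CONFIG_WORDS] = {0xA};
    unsigned cfg_b[CONFIG_WORDS] = {0xB};

    load_context(&fpga, 0, cfg_a);      /* configure context 0 */
    load_context(&fpga, 1, cfg_b);      /* preload context 1 while 0 could run */
    switch_context(&fpga, 1);           /* "one-cycle" swap to context 1 */
    printf("active context %d, first word 0x%X\n",
           fpga.active, fpga.config[fpga.active][0]);
    return 0;
}
```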
--- paper_title: The design and implementation of a context switching FPGA paper_content: Dynamic reconfiguration of field programmable gate arrays (FPGAs) has recently emerged as the next step in reconfigurable computing. Sanders, A Lockheed Martin Company, is developing the enabling technology to exploit dynamic reconfiguration. The device being developed is capable of storing four configurations on-chip and switching between them on a clock cycle basis. Configurations can be loaded while other contexts are active. A powerful cross-context data sharing mechanism has been implemented. The current status of this work and future work are described. --- paper_title: DPGA utilization and application paper_content: Dynamically Programmable Gate Arrays (DPGAs) are programmable arrays which allow the strategic reuse of limited resources. In so doing, DPGAs promise greater capacity, and in some cases higher performance, than conventional programmable device architectures where all array resources are dedicated to a single function for an entire operational epoch. This paper examines several usage patterns for DPGAs including temporal pipelining, utility functions, multiple function accommodation, and state-dependent logic. In the process, it offers insight into the application and technology space where DPGA-style reuse techniques are most beneficial. --- paper_title: The NAPA adaptive processing architecture paper_content: The National Adaptive Processing Architecture (NAPA) is a major effort to integrate the resources needed to develop teraops class computing systems based on the principles of adaptive computing. The primary goals for this effort include: (1) the development of an example NAPA component which achieves an order of magnitude cost/performance improvement compared to traditional FPGA based systems, (2) the creation of a rich but effective application development environment for NAPA systems based on the ideas of compile time functional partitioning and (3) significantly improve the base infrastructure for effective research in reconfigurable computing. This paper emphasizes the technical aspects of the architecture to achieve the first goal while illustrating key architectural concepts motivated by the second and third goals. --- paper_title: The programmable logic data book paper_content: Improvement to video-telephone systems allowing a called subscriber to have at his disposal on his television receiver screen, before he takes off his handset, information about the person who has initiated the call. Each subscriber station has a handset, a television camera, a television receiver and a character generator and is connected to a switching network by a telephone line, an incoming video line and an outgoing video line. Means are provided in the calling station for connecting to the outgoing video line the character generator when the calling subscriber takes down the handset and the television camera when video signals are detected on the incoming video line and in the called station for supplying with current the television receiver when ringing tone signals are detected on the telephone line. --- paper_title: A time-multiplexed FPGA paper_content: This paper describes the architecture of a time-multiplexed FPGA. Eight configurations of the FPGA are stored in on-chip memory. This inactive on-chip memory is distributed around the chip, and accessible so that the entire configuration of the FPGA can be changed in a single cycle of the memory. 
The entire configuration of the FPGA can be loaded from this on-chip memory in 30 ns. Inactive memory is accessible as block RAM for applications. The FPGA is based on the Xilinx XC4000E FPGA, and includes extensions for dealing with state saving and forwarding and for increased routing demand due to time-multiplexing the hardware. --- paper_title: PipeRench: A Reconfigurable Architecture and Compiler paper_content: With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide the alternative. PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time. --- paper_title: Configuration caching vs data caching for striped FPGAs paper_content: Striped FPGA [1], or pipeline-reconfigurable FPGA, provides hardware virtualization by supporting fast run-time reconfiguration. In this paper we show that the performance of striped FPGA depends on the reconfiguration pattern, the run time scheduling of configurations through the FPGA. We study two main configuration scheduling approaches: Configuration Caching and Data Caching. We present the quantitative analysis of these scheduling techniques to compute their total execution cycles taking into account the overhead caused by the IO with the external memory. Based on the analysis we can determine which scheduling technique works better for the given application and for the given hardware parameters. --- paper_title: PipeRench: A Reconfigurable Architecture and Compiler paper_content: With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide the alternative. PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time. --- paper_title: Compilation tools for run-time reconfigurable designs paper_content: This paper describes a framework and tools for automating the production of designs which can be partially reconfigured at run time.
The tools include: a partial evaluator, which produces configuration files for a given design, where the number of configurations can be minimised by a process, known as compile-time sequencing; an incremental configuration calculator, which takes the output of the partial evaluator and generates an initial configuration file and incremental configuration files that partially update preceding configurations; and a tool which further optimises designs for FPGAs supporting simultaneous configuration of multiple cells. While many of our techniques are independent of the design language and device used, our tools currently target Xilinx 6200 devices. Simultaneous configuration, for example, can be used to reduce the time for reconfiguring an adder to a subtractor from time linear with respect to its size to constant time at best and logarithmic time at worst. --- paper_title: A dynamic reconfiguration run-time system paper_content: The feasibility of run-time reconfiguration of FPGAs has been established by a large number of case studies. However, these systems have typically involved an ad hoc combination of hardware and software. The software that manages the dynamic reconfiguration is typically specialised to one application and one hardware configuration. We present three different applications of dynamic reconfiguration, based on research activities at Glasgow University, and extract a set of common requirements. We present the design of an extensible run-time system for managing the dynamic reconfiguration of FPGAs, motivated by these requirements. The system is called RAGE, and incorporates operating-system style services that permit sophisticated and high level operations on circuits. --- paper_title: Debugging techniques for dynamically reconfigurable hardware paper_content: Testing dynamically reconfigurable systems imposes new challenges which require special treatment. We present tools and techniques we developed for debugging a dynamically reconfigurable system that performs run-time constant propagation optimisations. An application for monitoring the effect of run-time specialisation is presented and we show how we adapted standard testability techniques to evaluate the performance of specialised circuits. We also outline how NDLs that capture reconfiguration at a high level can assist with debugging. --- paper_title: A Case Study of Partially Evaluated Hardware Circuits: Key-Specific DES paper_content: FPGA based data encryption provides greater flexibility than ASICs and higher performance than software. Because FPGAs can be reprogrammed, they allow a single integrated circuit to efficiently implement multiple encryption algorithms. Furthermore, the ability to program FPGAs at runtime can be used to improve the performance through dynamic optimization. This paper describes the application of partial evaluation to an implementation of the Data Encryption Standard (DES). Each end user of a DES session shares a secret key, and this knowledge can be used to improve circuit performance. Key-specific encryption circuits require fewer resources and have shorter critical paths than the completely general design. By applying partial evaluation to DES on a Xilinx XC4000 series device we have reduced the CLB usage by 45% and improved the encryption bandwidth by 35%. 
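The key-specific DES study above is an instance of partial evaluation: once one input (the key) is fixed, the logic that depends only on it folds away, which is why the specialized circuit needs fewer CLBs and has a shorter critical path. The C sketch below illustrates the principle on a made-up mixing function; the constants and names are illustrative only and are not DES, but they show the kind of code a run-time specializer could emit once the key is known.

```c
/* Sketch of the effect of constant propagation / partial evaluation:
 * when one input (the "key") is known, operations that depend only on it
 * can be folded at specialization time, leaving a smaller, faster kernel.
 * The circuit analogue is a key-specific datapath with fewer LUTs/CLBs. */
#include <stdint.h>
#include <stdio.h>

/* Fully general: the key is a run-time input, so all logic must be present. */
static uint32_t mix_generic(uint32_t data, uint32_t key) {
    uint32_t t = data ^ key;
    unsigned r = key & 31;            /* key-dependent rotate amount */
    return (t << r) | (t >> ((32 - r) & 31));
}

/* Specialized for key = 0x0F0F0F0F: the rotate amount (15) is now a
 * constant, so the barrel-shifter "hardware" collapses to fixed wiring. */
static uint32_t mix_key_0f0f0f0f(uint32_t data) {
    uint32_t t = data ^ 0x0F0F0F0Fu;
    return (t << 15) | (t >> 17);
}

int main(void) {
    uint32_t d = 0x12345678u;
    printf("generic:     %08X\n", mix_generic(d, 0x0F0F0F0Fu));
    printf("specialized: %08X\n", mix_key_0f0f0f0f(d));
    return 0;
}
```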
--- paper_title: Improving functional density through run-time constant propagation paper_content: Circuit specialization techniques such as constant propagation are commonly used to reduce both the hardware resources and cycle time of digital circuits. When reconfigurable FPGAs are used, these advantages can be extended by dynamically specializing circuits using run-time reconfiguration (RTR). For systems exploiting constant propagation, hardware resources can be reduced by folding constants within the circuit and dynamically changing the constants using circuit reconfiguration. To measure the benefits of circuit specialization, a functional density metric is presented. This metric allows the analysis of both static and run-time reconfigured circuits by including the cost of circuit reconfiguration. This metric will be used to justify runtime constant propagation as well as analyze the effects of reconfiguration time on run-time reconfigured systems. --- paper_title: Run-time parameterised circuits for the Xilinx XC6200 paper_content: Current design tools support parameterisation of circuits, but the parameters are fixed at compile-time. In contrast, the circuits discussed in this paper fix their parameters at run-time. Run-time parameterised circuits can potentially out-perform custom VLSI hardware by optimising the FPGA circuit for a specific instance of a problem rather than for a general class of problem. This paper discusses the design of run-time parameterised circuits, and presents a study of run-time parameterised circuits for finite field operations on the Xilinx XC6200. The paper includes a comparison with implementation on a self-timed version of the XC6200 architecture, which illustrates the potential benefits of self-timing for dynamically reconfigurable systems. --- paper_title: Circuit partitioning for dynamically reconfigurable FPGAs paper_content: Dynamically recon gurable FPGAs have the potential to dramatically improve logic density by time-sharing a physical FPGA device. This paper presents a networkow based partitioning algorithm for dynamically recon gurable FPGAs based on the architecture in [2]. Experiments show that our approach outperforms the enhanced force-directed scheduling method in [2] in terms of communication cost. --- paper_title: A CAD Suite for High-Performance FPGA Design paper_content: This paper describes the current status of a suite of CAD tools designed specifically for use by designers who are developing high-performance configurable-computing applications. The basis of this tool suite is JHDL, a design tool originally conceived as a way to experiment with Run-Time Reconfigured (RTR) designs. However, what began as a limited experiment to model RTR designs with Java has evolved into a comprehensive suite of design tools and verification aids, with these tools being used successfully to implement high-performance applications in Automated Target Recognition (ATR), sonar beamforming, and general image processing on configurable-computing systems. --- paper_title: JHDL-an HDL for reconfigurable systems paper_content: JHDL is a design tool for reconfigurable systems that allows designers to express circuit organizations that dynamically change over time in a natural way, using only standard programming abstractions found in object-oriented languages. 
JHDL manages FPGA resources in a manner that is similar to the way object-oriented languages manage memory: circuits are treated as distinct objects and a circuit is configured onto a configurable computing machine (CCM) by invoking its constructor, effectively "constructing" an instance of the circuit onto the reconfigurable platform just as object instances are allocated in memory with conventional object-oriented languages. This approach of using object constructors/destructors to control the circuit lifetime on a CCM is a powerful technique that naturally leads to a dual simulation/execution environment where a designer can easily switch between either software simulation or hardware execution on a CCM with a single application description. Moreover, JHDL supports dual hardware/software execution; parts of the application described using JHDL circuit constructs can be executed on the CCM while the remainder of the application (the GUI, for example) can run on the CCM host. Based on an existing programming language (Java), JHDL requires no language extensions and can be used with any standard Java 1.1 distribution. --- paper_title: Scheduling designs into a time-multiplexed FPGA paper_content: An algorithm is presented for partitioning a design in time. The algorithm divides a large, technology-mapped design into multiple configurations of a time-multiplexed FPGA. These configurations are rapidly executed in the FPGA to emulate the large design. The tool includes facilities for optimizing the partitioning to improve routability, for fitting the design into more configurations than the depth of the critical path and for compressing the critical path of the design into fewer configurations, both to fit the design into the device and to improve performance. Scheduling results are shown for mapping designs into an 8-configuration time-multiplexed FPGA and for architecture investigation for a time-multiplexed FPGA. --- paper_title: Temporal partitioning and scheduling data flow graphs for reconfigurable computers paper_content: FPGA-based configurable computing machines are evolving rapidly. They offer the ability to deliver very high performance at a fraction of the cost when compared to supercomputers. The first generation of configurable computers (those with multiple FPGAs connected using a specific interconnect) used statically reconfigurable FPGAs. On these configurable computers, computations are performed by partitioning an entire task into spatially interconnected subtasks. Such configurable computers are used in logic emulation systems and for functional verification of hardware. In general, configurable computers provide the ability to reconfigure rapidly to any desired custom form. Hence, the available resources can be reused effectively to cut down the hardware costs and also improve the performance. In this paper, we introduce the concept of temporal partitioning to partition a task into temporally interconnected subtasks. Specifically, we present algorithms for temporal partitioning and scheduling data flow graphs for configurable computers. We are given a configurable computing unit (RPU) with a logic capacity of S_RPU and a computational task represented by an acyclic data flow graph G=(V, E). Computations with logic area requirements that exceed S_RPU cannot be completely mapped on a configurable computer (using traditional spatial mapping techniques).
However, a temporal partitioning of the data flow graph followed by proper scheduling can facilitate the configurable computer based execution. Temporal partitioning of the data flow graph is a k-way partitioning of G=(V, E) such that each partitioned segment will not exceed S_RPU in its logic requirement. Scheduling assigns an execution order to the partitioned segments so as to ensure proper execution. Thus, for each segment in {s_1, s_2, ..., s_k}, scheduling assigns a unique ordering S_i -> j, 1 <= i <= k, 1 <= j <= k, such that the computation would execute in proper sequential order as defined by the flow graph G=(V, E). --- paper_title: Configuration prefetch for single context reconfigurable coprocessors paper_content: Current reconfigurable systems suffer from a significant overhead due to the time it takes to reconfigure their hardware. In order to deal with this overhead, and increase the power of reconfigurable systems, it is important to develop hardware and software systems to reduce or eliminate this delay. In this paper we propose one technique for significantly reducing the reconfiguration latency: the prefetching of configurations. By loading a configuration into the reconfigurable logic in advance of when it is needed, we can overlap the reconfiguration with useful computation. We demonstrate the power of this technique, and propose an algorithm for automatically adding prefetch operations into reconfigurable applications. This results in a significant decrease in the reconfiguration overhead for these applications. --- paper_title: A dynamic instruction set computer paper_content: A dynamic instruction set computer (DISC) has been developed that supports demand-driven modification of its instruction set. Implemented with partially reconfigurable FPGAs, DISC treats instructions as removable modules paged in and out through partial reconfiguration as demanded by the executing program. Instructions occupy FPGA resources only when needed and FPGA resources can be reused to implement an arbitrary number of performance-enhancing application-specific instructions. DISC further enhances the functional density of FPGAs by physically relocating instruction modules to available FPGA space. --- paper_title: Sequencing run-time reconfigured hardware with software paper_content: Run-Time Reconfigured systems offer additional hardware resources to systems based on reconfigurable FPGAs. These systems, however, are often difficult to build and must tolerate substantial reconfiguration times. A processor based architecture has been built to simplify the development of these systems by providing programmable control of hardware sequencing while retaining the performance of hardware. Configuration overhead of this system is reduced by "caching" hardware on the reconfigurable resource. An image processing application was developed on this system to demonstrate both the performance improvements of custom hardware and the ease of software development. --- paper_title: Compilation tools for run-time reconfigurable designs paper_content: This paper describes a framework and tools for automating the production of designs which can be partially reconfigured at run time.
The tools include: a partial evaluator, which produces configuration files for a given design, where the number of configurations can be minimised by a process, known as compile-time sequencing; an incremental configuration calculator, which takes the output of the partial evaluator and generates an initial configuration file and incremental configuration files that partially update preceding configurations; and a tool which further optimises designs for FPGAs supporting simultaneous configuration of multiple cells. While many of our techniques are independent of the design language and device used, our tools currently target Xilinx 6200 devices. Simultaneous configuration, for example, can be used to reduce the time for reconfiguring an adder to a subtractor from time linear with respect to its size to constant time at best and logarithmic time at worst. --- paper_title: Runlength compression techniques for FPGA configurations paper_content: The time it takes to reconfigure FPGAs can be a significant overhead for reconfigurable computing. In this paper we develop new compression algorithms for FPGA configurations that can significantly reduce this overhead. By using runlength and other compression techniques, files can be compressed by a factor of 3.6 times. Bus transfer and decompression hardware are also discussed. This results in a single compression methodology which achieves higher compression ratios than existing algorithms in an off-line version, as well as a somewhat lower quality compression approach which is suitable for on-line use in dynamic circuit generation and other mapping-time critical situations. --- paper_title: Automating production of run-time reconfigurable designs paper_content: This paper describes a method that automates a key step in producing run-time reconfigurable designs: the identification and mapping of reconfigurable regions. In this method, two successive circuit configurations are matched to locate the components common to them, so that reconfiguration time can be minimized. The circuit configurations are represented as a weighted bipartite graph, to which an efficient matching algorithm is applied. Our method, which supports hierarchical and library-based design, is device-independent and has been tested using Xilinx 6200 FPGAs. A number of examples in arithmetic, pattern matching and image processing are selected to illustrate our approach. --- paper_title: Don't Care discovery for FPGA configuration compression paper_content: One of the major overheads in reconfigurable computing is the time it takes to reconfigure the devices in the system. The configuration compression algorithm presented in our previous paper [Hauck98c] is one efficient technique for reducing this overhead. In this paper, we develop an algorithm for finding Don't Care bits in configurations to improve the compatibility of the configuration data. With the help of the Don't Cares, higher configuration compression ratios can be achieved by using our modified configuration compression algorithm. This improves compression ratios of a factor of 7, where our original algorithm only achieved a factor of 4. 1. Configuration Compression FPGAs are often used as powerful hardware for applications that require high speed computation. One major benefit provided by FPGAs is the ability to reconfigure during execution. For systems in which reconfiguration was done infrequently, the time to reconfigure the FPGA was of little concern.
However, as more and more applications involve run-time reconfiguration, fast reconfiguration of FPGAs becomes an important issue [Hauck98a]. In most systems an FPGA must sit idle while it is being reconfigured, wasting cycles that could otherwise be used to perform useful work. For example, applications on the DISC and DISC II system spend 25% [Wirthlin96] to 71% [Wirthlin95] of their execution time performing reconfiguration. Thus, a reduction in the amount of cycles wasted to reconfiguration can significantly improve performance. Previously, we have presented methods for overlapping reconfiguration with computation via configuration prefetching [Hauck98b]. We have also presented a technique for reducing the overhead by compressing the configuration datastreams [Hauck98c]. In this paper, we will present a technique for finding possible Don't Cares in configuration data such that higher compression ratios can be achieved. --- paper_title: Configuration compression for FPGA-based embedded systems paper_content: Field programmable gate arrays (FPGAs) are a promising technology for developing high-performance embedded systems. The density and performance of FPGAs have drastically improved over the past few years. Consequently, the size of the configuration bit-streams has also increased considerably. As a result, the cost-effectiveness of FPGA-based embedded systems is significantly affected by the memory required for storing various FPGA configurations. This paper proposes a novel compression technique that reduces the memory required for storing FPGA configurations and results in high decompression efficiency. Decompression efficiency corresponds to the decompression hardware cost as well as the decompression rate. The proposed technique is applicable to any SRAM-based FPGA device since configuration bit-streams are processed as raw data. The required decompression hardware is simple and the decompression rate scales with the speed of the memory used for storing the configuration bit-streams. Moreover, the time to configure the device is not affected by our compression technique. Using our technique, we demonstrate up to 41% savings in memory for configuration bit-streams of several real-world applications. --- paper_title: Configuration relocation and defragmentation for run-time reconfigurable computing paper_content: Due to its potential to greatly accelerate a wide variety of applications, reconfigurable computing has become a subject of a great deal of research. By mapping the compute-intensive sections of an application to reconfigurable hardware, custom computing systems exhibit significant speedups over traditional microprocessors. However, this potential acceleration is limited by the requirement that the speedups provided must outweigh the considerable cost of reconfiguration. The ability to relocate and defragment configurations on field programmable gate arrays (FPGAs) can dramatically decrease the overall reconfiguration overhead incurred by the use of the reconfigurable hardware. We therefore present hardware solutions to provide relocation and defragmentation support with a negligible area increase over a generic partially reconfigurable FPGA, as well as software algorithms for controlling this hardware. This results in factors of 8 to 12 improvement in the configuration overheads displayed by traditional serially programmed FPGAs.
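The runlength compression entry above exploits the long repeated runs that typically occur in configuration bitstreams (and that Don't Care discovery can lengthen). The following is a generic run-length encoder in C, included only as a sketch of why such bitstreams compress well; it is not the encoding format or algorithm from the cited papers.

```c
/* Minimal run-length encoder over a configuration byte stream, illustrating
 * why bitstreams with long repeated runs (and Don't-Care-filled regions)
 * compress well.  Generic RLE, not the cited algorithms. */
#include <stdio.h>
#include <stddef.h>

/* Encode as (count, value) pairs; returns the number of output bytes. */
static size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255) run++;
        out[o++] = (unsigned char)run;  /* run length */
        out[o++] = in[i];               /* byte value */
        i += run;
    }
    return o;
}

int main(void) {
    unsigned char cfg[] = {0, 0, 0, 0, 0, 0xFF, 0xFF, 0x3C, 0, 0, 0, 0};
    unsigned char enc[2 * sizeof cfg];   /* worst case: 2 bytes per input byte */
    size_t m = rle_encode(cfg, sizeof cfg, enc);
    printf("raw %zu bytes -> encoded %zu bytes\n", sizeof cfg, m);
    return 0;
}
```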
--- paper_title: Configuration caching vs data caching for striped FPGAs paper_content: Striped FPGA [1], or pipeline-reconfigurable FPGA, provides hardware virtualization by supporting fast run-time reconfiguration. In this paper we show that the performance of striped FPGA depends on the reconfiguration pattern, the run time scheduling of configurations through the FPGA. We study two main configuration scheduling approaches: Configuration Caching and Data Caching. We present the quantitative analysis of these scheduling techniques to compute their total execution cycles taking into account the overhead caused by the IO with the external memory. Based on the analysis we can determine which scheduling technique works better for the given application and for the given hardware parameters. --- paper_title: Run-time compaction of FPGA designs paper_content: Controllers for dynamically reconfigurable FPGAs that are capable of supporting multiple independent tasks simultaneously need to be able to place designs at run-time when the sequence or geometry of designs is not known in advance. As tasks arrive and depart the available cells become fragmented, thereby reducing the controller's ability to place new tasks. The response times of tasks and the utilization of the FPGA consequently suffer. In this paper, we describe and assess a task compaction heuristic that alleviates the problems of external fragmentation by exploiting partial reconfiguration. We identify a region of the chip that can satisfy the next request after the designs occupying the region have been moved. The approach is simple and platform independent. We show by simulation that for a wide range of task sizes and configuration delays, the response of overloaded systems can be improved significantly. --- paper_title: PipeRench: A Reconfigurable Architecture and Compiler paper_content: With the proliferation of highly specialized embedded computer systems has come a diversification of workloads for computing devices. General-purpose processors are struggling to efficiently meet these applications' disparate needs, and custom hardware is rarely feasible. According to the authors, reconfigurable computing, which combines the flexibility of general-purpose processors with the efficiency of custom hardware, can provide the alternative. PipeRench and its associated compiler comprise the authors' new architecture for reconfigurable computing. Combined with a traditional digital signal processor, microcontroller or general-purpose processor, PipeRench can support a system's various computing needs without requiring custom hardware. The authors describe the PipeRench architecture and how it solves some of the pre-existing problems with FPGA architectures, such as logic granularity, configuration time, forward compatibility, hard constraints and compilation time. --- paper_title: Garp: a MIPS processor with a reconfigurable coprocessor paper_content: Typical reconfigurable machines exhibit shortcomings that make them less than ideal for general-purpose computing. The Garp Architecture combines reconfigurable hardware with a standard MIPS processor on the same die to retain the better features of both. Novel aspects of the architecture are presented, as well as a prototype software environment and preliminary performance results. Compared to an UltraSPARC, a Garp of similar technology could achieve speedups ranging from a factor of 2 to as high as a factor of 24 for some useful applications.
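Configuration caching, as studied above for striped FPGAs, amounts to managing a small set of on-chip configuration slots so that repeated requests avoid off-chip reloads. The C sketch below models this with a tiny LRU-managed slot array over an invented request trace; it is illustrative only and does not reproduce the scheduling analysis of the cited paper or PipeRench's actual controller.

```c
/* Toy configuration cache: a small set of on-chip configuration slots managed
 * LRU, counting how many off-chip reloads a given execution trace would need. */
#include <stdio.h>

#define SLOTS 2

int main(void) {
    int slot[SLOTS], stamp[SLOTS], now = 0, misses = 0;
    for (int i = 0; i < SLOTS; i++) { slot[i] = -1; stamp[i] = 0; }

    int trace[] = {1, 2, 1, 3, 2, 1, 3, 3};   /* configuration IDs requested */
    int n = (int)(sizeof trace / sizeof trace[0]);

    for (int t = 0; t < n; t++, now++) {
        int cfg = trace[t], hit = -1;
        for (int i = 0; i < SLOTS; i++)
            if (slot[i] == cfg) hit = i;
        if (hit >= 0) { stamp[hit] = now; continue; }   /* already resident */

        /* Miss: an off-chip reconfiguration is needed.  Fill an empty slot
         * if one exists, otherwise evict the least recently used context. */
        int victim = 0;
        for (int i = 0; i < SLOTS; i++) {
            if (slot[i] == -1) { victim = i; break; }
            if (stamp[i] < stamp[victim]) victim = i;
        }
        misses++;
        slot[victim] = cfg;
        stamp[victim] = now;
    }
    printf("%d requests, %d reconfigurations\n", n, misses);
    return 0;
}
```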
--- paper_title: Dynamic reconfiguration to support concurrent applications paper_content: This paper describes the development of a dynamically reconfigurable system that can support multiple applications running concurrently. A dynamically reconfigurable system allows hardware reconfiguration while part of the reconfigurable hardware is busy computing. An FPGA resource manager (RM) is developed to allocate and de-allocate FPGA resources and to preload FPGA configuration files. For each individual application, different tasks that require FPGA resources are represented as a flow graph which is made available to the RM so as to enable efficient resource management and preloading. The performance of using the RM to support several applications is summarized. The impact of supporting concurrency and preloading in reducing application execution time is demonstrated. --- paper_title: Memory interfacing and instruction specification for reconfigurable processors paper_content: As custom computing machines evolve, it is clear that a major bottleneck is the slow interconnection architecture between the logic and memory. This paper describes the architecture of a custom computing machine that overcomes the interconnection bottleneck by closely integrating a fixed-logic processor, a reconfigurable logic array, and memory into a single chip, called OneChip-98. The OneChip-98 system has a seamless programming model that enables the programmer to easily specify instructions without additional complex instruction decoding hardware. As well, there is a simple scheme for mapping instructions to the corresponding programming bits. To allow the processor and the reconfigurable array to execute concurrently, the programming model utilizes a novel memory-consistency scheme implemented in the hardware. To evaluate the feasibility of the OneChip-98 architecture, a 32-bit MIPS-like processor and several performance enhancement applications were mapped to the Transmogrifier-2 field programmable system. For two typical applications, the 2-dimensional discrete cosine transform and the 64-tap FIR filter, we were capable of achieving a performance speedup of over 30 times that of a stand-alone state-of-the-art processor. --- paper_title: Safe and protected execution for the Morph/AMRM reconfigurable processor paper_content: Technology scaling of CMOS processes brings relatively faster transistors (gates) and slower interconnects (wires), making viable the addition of reconfigurability to increase performance. In the Morph/AMRM system we are exploring the addition of reconfigurable logic, deeply integrated with the processor core, employing the reconfigurability to manage the cache, datapath, and pipeline resources more effectively. However, integration of reconfigurable logic introduces significant protection and safety challenges for microprocess execution. We analyze the protection structures in a state of the art microprocessor core (R10000), identifying the few critical logic blocks and demonstrating that the majority of the logic in the processor core can be safely reconfigured. Subsequently, we propose a protection architecture for the Morph/AMRM reconfigurable processor which enable nearly the full range of power of reconfigurability in the processor core while requiring only a small number of fixed logic features which to ensure safe, protected multiprocess execution. ---
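Several of the systems above (the FPGA resource manager, Garp's software environment, OneChip-98's programming model) expose reconfigurable logic to application code through a run-time service layer that allocates regions, preloads configurations, and releases them. The C fragment below sketches what such a host-side flow can look like; every function name is hypothetical and stands in for services described in the text, not for any actual API of the cited systems.

```c
/* Hypothetical host-side flow for a run-time resource manager (RM) that
 * allocates reconfigurable logic, preloads a configuration, runs the task,
 * and releases the region.  All names below are invented for illustration. */
#include <stdio.h>
#include <stdbool.h>

typedef int rm_handle;   /* identifies a region of reconfigurable logic */

/* Stubs standing in for the RM services described in the text. */
static rm_handle rm_allocate(int cells)                 { (void)cells; return 1; }
static bool rm_preload(rm_handle h, const char *bits)   { (void)h; (void)bits; return true; }
static void rm_execute(rm_handle h)                     { printf("task on region %d\n", h); }
static void rm_release(rm_handle h)                     { (void)h; }

int main(void) {
    rm_handle region = rm_allocate(256);            /* request logic cells        */
    if (rm_preload(region, "fir_filter.bit")) {     /* overlap load with CPU work */
        rm_execute(region);                         /* run the hardware task      */
    }
    rm_release(region);                             /* free cells for other apps  */
    return 0;
}
```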
Title: Reconfigurable Computing: A Survey of Systems and Software
Section 1: Introduction
Description 1: Introduce the concept of reconfigurable computing, its significance, and the survey's purpose.
Section 2: Technology
Description 2: Discuss the foundational technology behind reconfigurable computing and its evolution.
Section 3: Hardware
Description 3: Explore the different hardware architectures used in reconfigurable computing systems.
Section 4: Coupling
Description 4: Examine various methods of coupling reconfigurable hardware with traditional microprocessors.
Section 5: Traditional FPGAs
Description 5: Provide an overview of the structure and function of traditional FPGAs used in reconfigurable computing.
Section 6: Logic Block Granularity
Description 6: Analyze the impact of the granularity of logic blocks on performance and design complexity.
Section 7: Heterogeneous Arrays
Description 7: Discuss the use of heterogeneous structures within reconfigurable systems to improve performance.
Section 8: Routing Resources
Description 8: Detail the routing resources in reconfigurable architectures and their optimization.
Section 9: One-Dimensional Structures
Description 9: Describe the concept and execution of one-dimensional reconfigurable architectures.
Section 10: Multi-FPGA Systems
Description 10: Cover the additional hardware considerations in systems composed of multiple FPGAs.
Section 11: Hardware Summary
Description 11: Summarize the various hardware design choices and their implications in reconfigurable computing.
Section 12: Software
Description 12: Discuss the software environment required to design and compile configurations for reconfigurable hardware.
Section 13: Hardware-Software Partitioning
Description 13: Explain how systems partition tasks between reconfigurable hardware and traditional software execution.
Section 14: Circuit Specification
Description 14: Describe the methods available for specifying circuits to be implemented on reconfigurable hardware.
Section 15: Circuit Libraries
Description 15: Discuss the use of pre-designed circuit or macro libraries to expedite design.
Section 16: Circuit Generators
Description 16: Explain how circuit generators can customize modules to meet application-specific requirements.
Section 17: Partial Evaluation
Description 17: Introduce the concept of partial evaluation to optimize circuit configurations.
Section 18: Memory Allocation
Description 18: Detail strategies for effective memory allocation in reconfigurable systems.
Section 19: Parallelization
Description 19: Discuss techniques for incorporating parallelism into reconfigurable computing applications.
Section 20: Multi-FPGA System Software
Description 20: Examine the software challenges and solutions for systems using multiple FPGAs.
Section 21: Design Testing
Description 21: Describe methods for testing and verifying the design of circuits for reconfigurable systems.
Section 22: Software Summary
Description 22: Summarize the role of software tools in the design and use of reconfigurable computing systems.
Section 23: Run-Time Reconfiguration
Description 23: Discuss the principles and benefits of run-time reconfiguration in reconfigurable computing.
Section 24: Reconfigurable Models
Description 24: Define the different types of reconfigurable models, including single context, multicontext, and partially reconfigurable systems.
Section 25: Pipeline Reconfigurable
Description 25: Introduce the concept of pipeline reconfigurable systems and their execution models.
Section 26: Run-Time Partial Evaluation
Description 26: Explain the advantages of run-time partial evaluation for optimizing configurations based on runtime data.
Section 27: Compilation and Configuration Scheduling
Description 27: Discuss strategies for compiling and scheduling configurations in run-time reconfigurable systems.
Section 28: Fast Configuration
Description 28: Present methods for reducing configuration overhead in reconfigurable systems.
Section 29: Potential Problems with RTR
Description 29: Identify potential issues with run-time reconfiguration and solutions to mitigate them.
Section 30: Run-Time Reconfiguration Summary
Description 30: Summarize the key points and benefits of run-time reconfiguration in reconfigurable systems.
Section 31: Conclusion
Description 31: Conclude by summarizing the importance, challenges, and future directions of reconfigurable computing.
Universal Reinforcement Learning Algorithms: Survey and Experiments
8
--- paper_title: Generalised Discount Functions applied to a Monte-Carlo AImu Implementation paper_content: In recent years, work has been done to develop the theory of General Reinforcement Learning (GRL). However, there are no examples demonstrating the known results regarding generalised discounting. We have added to the GRL simulation platform (AIXIjs) the functionality to assign an agent arbitrary discount functions, and an environment which can be used to determine the effect of discounting on an agent's policy. Using this, we investigate how geometric, hyperbolic and power discounting affect an informed agent in a simple MDP. We experimentally reproduce a number of theoretical results, and discuss some related subtleties. It was found that the agent's behaviour followed what is expected theoretically, assuming appropriate parameters were chosen for the Monte-Carlo Tree Search (MCTS) planning algorithm. --- paper_title: Thompson Sampling is Asymptotically Optimal in General Environments paper_content: We discuss a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, non-ergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean and (2) given a recoverability assumption regret is sublinear. --- paper_title: Count-Based Exploration in Feature Space for Reinforcement Learning paper_content: We introduce a new count-based optimistic exploration algorithm for Reinforcement Learning (RL) that is feasible in environments with high-dimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our \phi-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The \phi-Exploration-Bonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on high-dimensional RL benchmarks.
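As a rough illustration of the feature-space visit counts described in the count-based exploration abstract above, the sketch below keeps one counter per feature and turns the rarest active feature into an exploration bonus. This is a simplified stand-in rather than the paper's exact φ-pseudocount construction; beta and the min-aggregation are arbitrary choices made for the sketch.

```python
import numpy as np

# Simplified stand-in for a feature-based exploration bonus: one counter per
# feature, and a state's pseudo-count is the count of its rarest active feature.
# This is NOT the exact phi-pseudocount construction from the paper above;
# beta and the min-aggregation are arbitrary choices made for the sketch.

class FeatureCountBonus:
    def __init__(self, n_features, beta=0.05):
        self.counts = np.zeros(n_features)
        self.beta = beta

    def update(self, phi):
        """phi: non-negative feature vector of the visited state."""
        self.counts += np.asarray(phi, dtype=float)

    def bonus(self, phi):
        active = np.asarray(phi, dtype=float) > 0
        if not active.any():
            return self.beta
        pseudo_count = self.counts[active].min()     # rarest active feature
        return self.beta / np.sqrt(pseudo_count + 1.0)

fb = FeatureCountBonus(n_features=4)
fb.update([1, 0, 1, 0])
print(fb.bonus([1, 0, 1, 0]), fb.bonus([0, 1, 0, 1]))  # familiar state gets a smaller bonus than a novel one
```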
---
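The Thompson sampling results surveyed above concern general history-based environments; the toy below shrinks the setting to a finite class of Bernoulli-bandit candidates so the mechanics fit in a few lines: resample one candidate from the posterior every fixed horizon, act optimally for it, and update the posterior from observed rewards. The candidate class, horizon and step counts are arbitrary assumptions for the sketch, not the general algorithm analysed in the paper.

```python
import numpy as np

# Toy Thompson sampling over a finite class of candidate environments, here
# two-armed Bernoulli bandits. A deliberately small stand-in for the general
# history-based setting: resample one candidate from the posterior every
# `horizon` steps, act optimally for it, then update the posterior.

rng = np.random.default_rng(0)
candidates = [np.array([0.2, 0.8]), np.array([0.8, 0.2]), np.array([0.5, 0.5])]
true_env = candidates[0]                         # unknown to the agent
log_post = np.zeros(len(candidates))             # uniform prior, log-space
horizon, steps, total_reward = 20, 400, 0.0

for t in range(steps):
    if t % horizon == 0:                         # resample a candidate model
        post = np.exp(log_post - log_post.max())
        sampled = candidates[rng.choice(len(candidates), p=post / post.sum())]
    action = int(np.argmax(sampled))             # optimal arm for the sample
    reward = float(rng.random() < true_env[action])
    total_reward += reward
    for i, env in enumerate(candidates):         # exact Bayesian update
        likelihood = env[action] if reward == 1.0 else 1.0 - env[action]
        log_post[i] += np.log(likelihood + 1e-12)

post = np.exp(log_post - log_post.max())
print("average reward:", round(total_reward / steps, 3))
print("posterior over candidates:", np.round(post / post.sum(), 3))
```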
Title: Universal Reinforcement Learning Algorithms: Survey and Experiments
Section 1: Introduction
Description 1: Introduce the topic of universal reinforcement learning (URL) and its significance, outlining the need for minimal assumptions about the environment and the contributions of the paper.
Section 2: Literature Survey
Description 2: Provide a survey of history-based Bayesian algorithms in the context of URL, discussing central agents like AIXI, their variants, and relevant theoretical results.
Section 3: Notation
Description 3: Clarify the notation used in the paper, especially focusing on the distinctions between hidden states, percepts, environments, policies, and important mathematical symbols.
Section 4: The General Reinforcement Learning Problem
Description 4: Formulate the agent-environment interaction as a partially observable Markov Decision Process (POMDP) and discuss the objectives and value functions involved.
Section 5: Algorithms
Description 5: Explore the class of Bayesian URL agents, detailing various algorithms like AIξ, knowledge-seeking agents (KSA), BayesExp, Thompson sampling, and MDL.
Section 6: Implementation
Description 6: Describe the environments used for experiments, introduce Bayesian environment models, and discuss necessary approximations for the algorithms to function effectively.
Section 7: Experiments
Description 7: Present and analyze experimental results comparing the learning performance of different agents, discussing the impact of model class, exploration strategies, and computational considerations.
Section 8: Conclusion
Description 8: Summarize the findings of the survey and experiments, discussing the importance of model class construction, exploration strategies, and practical considerations for Bayesian agents in URL.
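Several of the surveyed results depend on how future rewards are discounted when a value function is computed (see the AIXIjs discounting reference above). The short sketch below compares how geometric, hyperbolic and power discount functions weight a delayed reward stream; the reward stream and parameter values are arbitrary illustrations, not the experiments from that paper.

```python
import numpy as np

# How different discount functions weight a delayed reward stream when computing
# a value, loosely following the geometric / hyperbolic / power discounting
# compared in the AIXIjs discounting reference. The reward stream and the
# parameter values are arbitrary choices made for this illustration.

def discounted_value(rewards, discount):
    weights = np.array([discount(k) for k in range(len(rewards))])
    return float(np.sum(weights * rewards))

geometric = lambda k, gamma=0.95: gamma ** k
hyperbolic = lambda k, kappa=1.0: 1.0 / (1.0 + kappa * k)
power = lambda k, beta=2.0: 1.0 / float(k + 1) ** beta

rewards = np.array([0.0] * 10 + [1.0] * 10)      # payoff only after a delay
for name, d in [("geometric", geometric), ("hyperbolic", hyperbolic), ("power", power)]:
    print(f"{name:10s} value = {discounted_value(rewards, d):.3f}")
```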
A review of feature selection techniques in bioinformatics
11
--- paper_title: Toward integrating feature selection algorithms for classification and clustering paper_content: This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development. --- paper_title: An Introduction to Variable and Feature Selection paper_content: Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods. --- paper_title: Feature Selection for Knowledge Discovery and Data Mining paper_content: From the Publisher: ::: With advanced computer technologies and their omnipresent usage, data accumulates in a speed unmatchable by the human's capacity to process data. To meet this growing challenge, the research community of knowledge discovery from databases emerged. The key issue studied by this community is, in layman's terms, to make advantageous use of large stores of data. In order to make raw data useful, it is necessary to represent, process, and extract knowledge for various applications. Feature Selection for Knowledge Discovery and Data Mining offers an overview of the methods developed since the 1970's and provides a general framework in order to examine these methods and categorize them. This book employs simple examples to show the essence of representative feature selection methods and compares them using data sets with combinations of intrinsic properties according to the objective of feature selection. In addition, the book suggests guidelines for how to use different methods under various circumstances and points out new challenges in this exciting area of research. Feature Selection for Knowledge Discovery and Data Mining is intended to be used by researchers in machine learning, data mining, knowledge discovery, and databases as a toolbox of relevant tools that help in solving large real-world problems. 
This book is also intended to serve as a reference book or secondary text for courses on machine learning, data mining, and databases. --- paper_title: Gene Selection for Cancer Classification using Support Vector Machines paper_content: DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues. ::: ::: In this paper, we address the problem of selection of a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays. Using available training examples from cancer and normal patients, we build a classifier suitable for genetic diagnosis, as well as drug discovery. Previous attempts to address this problem select genes with correlation techniques. We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE). We demonstrate experimentally that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer. ::: ::: In contrast with the baseline method, our method eliminates gene redundancy automatically and yields better and more compact gene subsets. In patients with leukemia our method discovered 2 genes that yield zero leave-one-out error, while 64 genes are necessary for the baseline method to get the best result (one leave-one-out error). In the colon cancer database, using only 4 genes our method is 98% accurate, while the baseline method is only 86% accurate. --- paper_title: miTarget: microRNA target gene prediction using a support vector machine paper_content: BackgroundMicroRNAs (miRNAs) are small noncoding RNAs, which play significant roles as posttranscriptional regulators. The functions of animal miRNAs are generally based on complementarity for their 5' components.
Although several computational miRNA target-gene prediction methods have been proposed, they still have limitations in revealing actual target genes. Results: We implemented miTarget, a support vector machine (SVM) classifier for miRNA target gene prediction. It uses a radial basis function kernel as a similarity measure for SVM features, categorized by structural, thermodynamic, and position-based features. The latter features are introduced in this study for the first time and reflect the mechanism of miRNA binding. The SVM classifier produces high performance with a biologically relevant data set obtained from the literature, compared with previous tools. We predicted significant functions for human miR-1, miR-124a, and miR-373 using Gene Ontology (GO) analysis and revealed the importance of pairing at positions 4, 5, and 6 in the 5' region of a miRNA from a feature selection experiment. We also provide a web interface for the program. Conclusion: miTarget is a reliable miRNA target gene prediction tool and is a successful application of an SVM classifier. Compared with previous tools, its predictions are meaningful by GO analysis and its performance can be improved given more training examples. --- paper_title: Feature selection for genetic sequence classification paper_content: Motivation: Most of the existing methods for genetic sequence classification are based on a computer search for homologies in nucleotide or amino acid sequences. The standard sequence alignment programs scale very poorly as the number of sequences increases or the degree of sequence identity is <30%. Some new computationally inexpensive methods based on nucleotide or amino acid compositional analysis have been proposed, but prediction results are still unsatisfactory and depend on the features chosen to represent the sequences. Results: In this paper a feature selection method based on the Gamma (or near-neighbour) test is proposed. If there is a continuous or smooth map from feature space to the classification target values, the Gamma test gives an estimate for the mean-squared error of the classification, despite the fact that one has no a priori knowledge of the smooth mapping. We can search a large space of possible feature combinations for a combination which gives a smallest estimated mean-squared error using a genetic algorithm. The method was used for feature selection and classification of the large subunits of rRNA according to RDP (Ribosomal Database Project) phylogenetic classes. The sequences were represented by dinucleotide frequency distribution. The nearest-neighbour criterion has been used to estimate the predictive accuracy of the classification based on the selected features. For examples discussed, we found that the classification according to the first nearest neighbour is correct for 80% of the test samples. If we consider the set of the 10 nearest neighbours, then 94% of the test samples are classified correctly. Availability: The principal novel component of this method is the Gamma test and this can be downloaded compiled for Unix Sun 4, Windows 95 and MS-DOS from http://www.cs.cfac.uk/ec/ Contact: s.margetts@cs.$ac.uk --- paper_title: Feature Selection and the Class Imbalance Problem in Predicting Protein Function from Sequence paper_content: When the standard approach to predict protein function by sequence homology fails, other alternative methods can be used that require only the amino acid sequence for predicting function.
One such approach uses machine learning to predict protein function directly from amino acid sequence features. However, there are two issues to consider before successful functional prediction can take place: identifying discriminatory features, and overcoming the challenge of a large imbalance in the training data. We show that by applying feature subset selection followed by undersampling of the majority class, significantly better support vector machine (SVM) classifiers are generated compared with standard machine learning approaches. As well as revealing that the features selected could have the potential to advance our understanding of the relationship between sequence and function, we also show that undersampling to produce fully balanced data significantly improves performance. The best discriminating ability is achieved using SVMs together with feature selection and full undersampling; this approach strongly outperforms other competitive learning algorithms. We conclude that this combined approach can generate powerful machine learning classifiers for predicting protein function directly from sequence. --- paper_title: Microbial gene identification using interpolated Markov models paper_content: This paper describes a new system, GLIMMER, for finding genes in microbial genomes. In a series of tests on Haemophilus influenzae , Helicobacter pylori and other complete microbial genomes, this system has proven to be very accurate at locating virtually all the genes in these sequences, outperforming previous methods. A conservative estimate based on experiments on H.pylori and H. influenzae is that the system finds >97% of all genes. GLIMMER uses interpolated Markov models (IMMs) as a framework for capturing dependencies between nearby nucleotides in a DNA sequence. An IMM-based method makes predictions based on a variable context; i.e., a variable-length oligomer in a DNA sequence. The context used by GLIMMER changes depending on the local composition of the sequence. As a result, GLIMMER is more flexible and more powerful than fixed-order Markov methods, which have previously been the primary content-based technique for finding genes in microbial DNA. --- paper_title: Improved microbial gene identification with GLIMMER paper_content: The GLIMMER system for microbial gene identification finds approximately 97-98% of all genes in a genome when compared with published annotation. This paper reports on two new results: (i) significant technical improvements to GLIMMER that improve its accuracy still further, and (ii) a comprehensive evaluation that demonstrates that the accuracy of the system is likely to be higher than previously recognized. A significant proportion of the genes missed by the system appear to be hypothetical proteins whose existence is only supported by the predictions of other programs. When the analysis is restricted to genes that have significant homology to genes in other organisms, GLIMMER misses <1% of known genes. --- paper_title: Using Amino Acid Patterns to Accurately Predict Translation Initiation Sites paper_content: The translation initiation site (TIS) prediction problem is about how to correctly identify TIS in mRNA, cDNA, or other types of genomic sequences. High prediction accuracy can be helpful in a better understanding of protein coding from nucleotide sequences. This is an important step in genomic analysis to determine protein coding from nucleotide sequences. 
In this paper, we present an in silico method to predict translation initiation sites in vertebrate cDNA or mRNA sequences. This method consists of three sequential steps as follows. In the first step, candidate features are generated using k-gram amino acid patterns. In the second step, a small number of top-ranked features are selected by an entropy-based algorithm. In the third step, a classification model is built to recognize true TISs by applying support vector machines or ensembles of decision trees to the selected features. We have tested our method on several independent data sets, including two public ones and our own extracted sequences. The experimental results achieved are better than those reported previously using the same data sets. Our high accuracy not only demonstrates the feasibility of our method, but also indicates that there might be "amino acid" patterns around TIS in cDNA and mRNA sequences. --- paper_title: P.: Identification of DNA Regulatory Motifs Using Bayesian Variable Selection Suite paper_content: Motivation: Understanding the mechanisms that determine gene expression regulation is an important and challenging problem. A common approach consists of identifying DNA-binding sites from a collection of co-regulated genes and their nearby non-coding DNA sequences. Here, we consider a regression model that linearly relates gene expression levels to a sequence matching score of nucleotide patterns. We use Bayesian models and stochastic search techniques to select transcription factor binding site candidates, as an alternative to stepwise regression procedures used by other investigators. ::: ::: Results: We demonstrate through simulated data the improved performance of the Bayesian variable selection method compared to the stepwise procedure. We then analyze and discuss the results from experiments involving well-studied pathways of Saccharomyces cerevisiae and Schizosaccharomyces pombe. We identify regulatory motifs known to be related to the experimental conditions considered. Some of our selected motifs are also in agreement with recent findings by other researchers. In addition, our results include novel motifs that constitute promising sets for further assessment. ::: ::: Availability: The Matlab code for implementing the Bayesian variable selection method may be obtained from the corresponding author. --- paper_title: Feature selection for splice site prediction: A new method using EDA-based feature ranking paper_content: BackgroundThe identification of relevant biological features in large and complex datasets is an important step towards gaining insight in the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data.ResultsIn this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. 
We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion: We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do) this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features. --- paper_title: Tissue classification with gene expression profiles paper_content: Constantly improving gene expression profiling technologies are expected to provide understanding and insight into cancer related cellular processes. Gene expression data is also expected to significantly aid in the development of efficient cancer diagnosis and classification platforms. In this work we examine two sets of gene expression data measured across sets of tumor and normal clinical samples. One set consists of 2,000 genes, measured in 62 epithelial colon samples [1]. The second consists of about 100,000 clones, measured in 32 ovarian samples (unpublished, extension of data set described in [26]). We examine the use of scoring methods, measuring separation of tumors from normals using individual gene expression levels. These are then coupled with high dimensional classification methods to assess the classification power of complete expression profiles. We present results of performing leave-one-out cross validation (LOOCV) experiments on the two data sets, employing SVM [8], AdaBoost [13] and a novel clustering based classification technique. As tumor samples can differ from normal samples in their cell-type composition we also perform LOOCV experiments using appropriately modified sets of genes, attempting to eliminate the resulting bias. We demonstrate a success rate of at least 90% in tumor vs normal classification, using sets of selected genes, with as well as without cellular contamination related members. These results are insensitive to the exact selection mechanism, over a certain range. --- paper_title: Identification of regulatory elements using a feature selection method paper_content: Motivation: Many methods have been described to identify regulatory motifs in the transcription control regions of genes that exhibit similar patterns of gene expression across a variety of experimental conditions. Here we focus on a single experimental condition, and utilize gene expression data to identify sequence motifs associated with genes that are activated under this experimental condition. We use a linear model with two-way interactions to model gene expression as a function of sequence features (words) present in presumptive transcription control regions. The most relevant features are selected by a feature selection method called stepwise selection with monte carlo cross validation. We apply this method to a publicly available dataset of the yeast Saccharomyces cerevisiae, focussing on the 800 basepairs immediately upstream of each gene's translation start site (the upstream control region (UCR)). Result: We successfully identify regulatory motifs that are known to be active under the experimental conditions analyzed, and find additional significant sequences that may represent novel regulatory motifs.
We also discuss a complementary method that utilizes gene expression data from a single microarray experiment and allows averaging over variety of experimental conditions as an alternative to motif finding methods that act on clusters of co-expressed genes. Availability: The software is available upon request from the first author or may be downloaded from http://www.stat. --- paper_title: Gene selection and classification of microarray data using random forest paper_content: BackgroundSelection of relevant genes for sample classification is a common task in most gene expression studies, where researchers try to identify the smallest possible set of genes that can still achieve good predictive performance (for instance, for future use with diagnostic purposes in clinical practice). Many gene selection approaches use univariate (gene-by-gene) rankings of gene relevance and arbitrary thresholds to select the number of genes, can only be applied to two-class problems, and use gene selection ranking criteria unrelated to the classification algorithm. In contrast, random forest is a classification algorithm well suited for microarray data: it shows excellent performance even when most predictive variables are noise, can be used when the number of variables is much larger than the number of observations and in problems involving more than two classes, and returns measures of variable importance. Thus, it is important to understand the performance of random forest with microarray data and its possible use for gene selection.ResultsWe investigate the use of random forest for classification of microarray data (including multi-class problems) and propose a new method of gene selection in classification problems based on random forest. Using simulated and nine microarray data sets we show that random forest has comparable performance to other classification methods, including DLDA, KNN, and SVM, and that the new gene selection procedure yields very small sets of genes (often smaller than alternative methods) while preserving predictive accuracy.ConclusionBecause of its performance and features, random forest and gene selection using random forest should probably become part of the "standard tool-box" of methods for class prediction and gene selection with microarray data. --- paper_title: A two-sample Bayesian t-test for microarray data paper_content: BackgroundDetermining whether a gene is differentially expressed in two different samples remains an important statistical problem. Prior work in this area has featured the use of t-tests with pooled estimates of the sample variance based on similarly expressed genes. These methods do not display consistent behavior across the entire range of pooling and can be biased when the prior hyperparameters are specified heuristically.ResultsA two-sample Bayesian t-test is proposed for use in determining whether a gene is differentially expressed in two different samples. The test method is an extension of earlier work that made use of point estimates for the variance. The method proposed here explicitly calculates in analytic form the marginal distribution for the difference in the mean expression of two samples, obviating the need for point estimates of the variance without recourse to posterior simulation. 
The prior distribution involves a single hyperparameter that can be calculated in a statistically rigorous manner, making clear the connection between the prior degrees of freedom and prior variance.ConclusionThe test is easy to understand and implement and application to both real and simulated data shows that the method has equal or greater power compared to the previous method and demonstrates consistent Type I error rates. The test is generally applicable outside the microarray field to any situation where prior information about the variance is available and is not limited to cases where estimates of the variance are based on many similar observations. --- paper_title: Tests for finding complex patterns of differential expression in cancers: towards individualized medicine paper_content: BackgroundMicroarray studies in cancer compare expression levels between two or more sample groups on thousands of genes. Data analysis follows a population-level approach (e.g., comparison of sample means) to identify differentially expressed genes. This leads to the discovery of 'population-level' markers, i.e., genes with the expression patterns A > B and B > A. We introduce the PPST test that identifies genes where a significantly large subset of cases exhibit expression values beyond upper and lower thresholds observed in the control samples.ResultsInterestingly, the test identifies A > B and B < A pattern genes that are missed by population-level approaches, such as the t-test, and many genes that exhibit both significant overexpression and significant underexpression in statistically significantly large subsets of cancer patients (ABA pattern genes). These patterns tend to show distributions that are unique to individual genes, and are aptly visualized in a 'gene expression pattern grid'. The low degree of among-gene correlations in these genes suggests unique underlying genomic pathologies and high degree of unique tumor-specific differential expression. We compare the PPST and the ABA test to the parametric and non-parametric t-test by analyzing two independently published data sets from studies of progression in astrocytoma.ConclusionsThe PPST test resulted findings similar to the nonparametric t-test with higher self-consistency. These tests and the gene expression pattern grid may be useful for the identification of therapeutic targets and diagnostic or prognostic markers that are present only in subsets of cancer patients, and provide a more complete portrait of differential expression in cancer. --- paper_title: Individualized markers optimize class prediction of microarray data paper_content: Background ::: Identification of molecular markers for the classification of microarray data is a challenging task. Despite the evident dissimilarity in various characteristics of biological samples belonging to the same category, most of the marker – selection and classification methods do not consider this variability. In general, feature selection methods aim at identifying a common set of genes whose combined expression profiles can accurately predict the category of all samples. Here, we argue that this simplified approach is often unable to capture the complexity of a disease phenotype and we propose an alternative method that takes into account the individuality of each patient-sample. 
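Several of the abstracts above replace the plain two-sample t-test with a variance-regularized or Bayesian variant so that genes with very small observed variance do not dominate the ranking. The sketch below shows the general shape of such a statistic, with a background variance term s0 added to the pooled variance; the value of s0 and the synthetic data are assumptions for illustration, not the estimators derived in those papers.

```python
import numpy as np

# Variance-regularized two-sample t-statistic for ranking genes, in the spirit
# of the Bayesian/regularized t-tests referenced above. s0 stands in for a
# background variance; its value and the synthetic data are assumptions for
# illustration, not estimators derived in those papers.

def regularized_t(x, y, s0=0.5):
    """x, y: expression values of one gene in the two sample groups."""
    n1, n2 = len(x), len(y)
    pooled_var = (np.var(x, ddof=1) * (n1 - 1) + np.var(y, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    se = np.sqrt((pooled_var + s0 ** 2) * (1.0 / n1 + 1.0 / n2))
    return (np.mean(x) - np.mean(y)) / se

rng = np.random.default_rng(1)
tumour = rng.normal(loc=[2.0, 0.0, 0.0], scale=1.0, size=(10, 3))   # 3 genes, gene 0 shifted
normal = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(8, 3))
scores = [regularized_t(tumour[:, g], normal[:, g]) for g in range(3)]
print("scores:", np.round(scores, 2))
print("ranking (most differential first):", np.argsort(-np.abs(scores)))
```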
--- paper_title: Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data paper_content: A reliable and precise classification of tumors is essential for successful diagnosis and treatment of cancer. cDNA microarrays and high-density oligonucleotide chips are novel biotechnologies increasingly used in cancer research. By allowing the monitoring of expression levels in cells for thousands of genes simultaneously, microarray experiments may lead to a more complete understanding of the molecular variations among tumors and hence to a finer and more informative classification. The ability to successfully distinguish between tumor classes (already known or yet to be discovered) using gene expression data is an important aspect of this novel approach to cancer classification. This article compares the performance of different discrimination methods for the classification of tumors based on gene expression data. The methods include nearest-neighbor classifiers, linear discriminant analysis, and classification trees. Recent machine learning approaches, such as bagging and boosting, are also considered. --- paper_title: Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays paper_content: Oligonucleotide arrays can provide a broad picture of the state of the cell, by monitoring the expression level of thousands of genes at the same time. It is of interest to develop techniques for extracting useful information from the resulting data sets. Here we report the application of a two-way clustering method for analyzing a data set consisting of the expression patterns of different cell types. Gene expression in 40 tumor and 22 normal colon tissue samples was analyzed with an Affymetrix oligonucleotide array complementary to more than 6,500 human genes. An efficient two-way clustering algorithm was applied to both the genes and the tissues, revealing broad coherent patterns that suggest a high degree of organization underlying gene expression in these tissues. Coregulated families of genes clustered together, as demonstrated for the ribosomal proteins. Clustering also separated cancerous from noncancerous tissue and cell lines from in vivo tissues on the basis of subtle distributed patterns of genes even when expression of individual genes varied only slightly between the tissues. Two-way clustering thus may be of use both in classifying genes into functional groups and in classifying tissues based on gene expression. --- paper_title: Classification and Selection of Biomarkers in Genomic Data Using LASSO paper_content: High-throughput gene expression technologies such as microarrays have been utilized in a variety of scientific applications. Most of the work has been done on assessing univariate associations between gene expression profiles and clinical outcome (variable selection) or on developing classification procedures with gene expression data (supervised learning). We consider a hybrid variable selection/classification approach that is based on linear combinations of the gene expression profiles that maximize an accuracy measure summarized using the receiver operating characteristic curve. Under a specific probability model, this leads to the consideration of linear discriminant functions. We incorporate an automated variable selection approach using LASSO. An equivalence between LASSO estimation and support vector machines allows for model fitting using standard software.
We apply the proposed method to simulated data as well as data from a recently published prostate cancer study. --- paper_title: A Bayesian Framework for the Analysis of Microarray Expression Data: Regularized t-Test and Statistical Inferences of Gene Changes paper_content: Motivation: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data. Results: We develop a Bayesian probabilistic framework for microarray data analysis. At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t-test, provide a systematic inference approach that compares favorably with simple t-test or fold methods, and partly compensate for the lack of replication. --- paper_title: Regularized ROC method for disease classification and biomarker selection with microarray data paper_content: Motivation: An important application of microarrays is to discover genomic biomarkers, among tens of thousands of genes assayed, for disease classification.
Thus there is a need for developing statistical methods that can efficiently use such high-throughput genomic data, select biomarkers with discriminant power and construct classification rules. The ROC (receiver operator characteristic) technique has been widely used in disease classification with low-dimensional biomarkers because (1) it does not assume a parametric form of the class probability as required for example in the logistic regression method; (2) it accommodates case--control designs and (3) it allows treating false positives and false negatives differently. However, due to computational difficulties, the ROC-based classification has not been used with microarray data. Moreover, the standard ROC technique does not incorporate built-in biomarker selection. ::: ::: Results: We propose a novel method for biomarker selection and classification using the ROC technique for microarray data. The proposed method uses a sigmoid approximation to the area under the ROC curve as the objective function for classification and the threshold gradient descent regularization method for estimation and biomarker selection. Tuning parameter selection based on the V-fold cross validation and predictive performance evaluation are also investigated. The proposed approach is demonstrated with a simulation study, the Colon data and the Estrogen data. The proposed approach yields parsimonious models with excellent classification performance. ::: ::: Availability: R code is available upon request. ::: ::: Contact: [email protected] --- paper_title: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring paper_content: Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case. A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge. --- paper_title: Multidimensional local false discovery rate for microarray studies paper_content: Motivation: The false discovery rate (fdr) is a key tool for statistical assessment of differential expression (DE) in microarray studies. Overall control of the fdr alone, however, is not sufficient to address the problem of genes with small variance, which generally suffer from a disproportionally high rate of false positives. It is desirable to have an fdr-controlling procedure that automatically accounts for gene variability. ::: ::: Methods: We generalize the local fdr as a function of multiple statistics, combining a common test statistic for assessing DE with its standard error information. We use a non-parametric mixture model for DE and non-DE genes to describe the observed multi-dimensional statistics, and estimate the distribution for non-DE genes via the permutation method. 
We demonstrate this fdr2d approach for simulated and real microarray data. ::: ::: Results: The fdr2d allows objective assessment of DE as a function of gene variability. We also show that the fdr2d performs better than commonly used modified test statistics. ::: ::: Availability: An R-package OCplus containing functions for computing fdr2d() and other operating characteristics of microarray data is available at http://www.meb.ki.se/~yudpaw ::: ::: Contact: [email protected] --- paper_title: An extensive comparison of recent classification tools applied to microarray data paper_content: Since most classification articles have applied a single technique to a single gene expression dataset, it is crucial to assess the performance of each method through a comprehensive comparative study. We evaluate by extensive comparison study extending Dudoit et al. (J. Amer. Statist. Assoc. 97 (2002) 77) the performance of recently developed classification methods in microarray experiment, and provide the guidelines for finding the most appropriate classification tools in various situations. We extend their comparison in three directions: more classification methods (21 methods), more datasets (7 datasets) and more gene selection techniques (3 methods). Our comparison study shows several interesting facts and provides the biologists and the biostatisticians some insights into the classification tools in microarray data analysis. This study also shows that the more sophisticated classifiers give better performances than classical methods such as kNN, DLDA, DQDA and the choice of gene selection method has much effect on the performance of the classification methods, and thus the classification methods should be considered together with the gene selection criteria. © 2004 Elsevier B.V. All rights reserved. --- paper_title: Improving false discovery rate estimation paper_content: Motivation: Recent attempts to account for multiple testing in the analysis of microarray data have focused on controlling the false discovery rate (FDR). However, rigorous control of the FDR at a preselected level is often impractical. Consequently, it has been suggested to use the q-value as an estimate of the proportion of false discoveries among a set of significant findings. However, such an interpretation of the q-value may be unwarranted considering that the q-value is based on an unstable estimator of the positive FDR (pFDR). Another method proposes estimating the FDR by modeling p-values as arising from a beta-uniform mixture (BUM) distribution. Unfortunately, the BUM approach is reliable only in settings where the assumed model accurately represents the actual distribution of p-values. ::: ::: Methods: A method called the spacings LOESS histogram (SPLOSH) is proposed for estimating the conditional FDR (cFDR), the expected proportion of false positives conditioned on having k 'significant' findings. SPLOSH is designed to be more stable than the q-value and applicable in a wider variety of settings than BUM. ::: ::: Results: In a simulation study and data analysis example, SPLOSH exhibits the desired characteristics relative to the q-value and BUM. ::: ::: Availability: The Web site www.stjuderesearch.org/statistics/splosh.html has links to freely available S-plus code to implement the proposed procedure.
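As background for the FDR estimators discussed above (fdr2d, SPLOSH, the q-value), the sketch below implements the standard Benjamini-Hochberg step-up procedure, which those methods refine or replace; the example p-values are made up for illustration.

```python
import numpy as np

# Standard Benjamini-Hochberg step-up procedure, included only as background
# for the FDR discussion above; it is not the fdr2d or SPLOSH estimator.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])        # largest i with p_(i) <= i*alpha/m
        rejected[order[: k + 1]] = True
    return rejected

pvals = [0.0001, 0.0004, 0.019, 0.03, 0.2, 0.7, 0.9]
print(benjamini_hochberg(pvals, alpha=0.05))
```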
--- paper_title: Biomarker Identification by Feature Wrappers paper_content: Gene expression studies bridge the gap between DNA information and trait information by dissecting biochemical pathways into intermediate components between genotype and phenotype. These studies open new avenues for identifying complex disease genes and biomarkers for disease diagnosis and for assessing drug efficacy and toxicity. However, the majority of analytical methods applied to gene expression data are not efficient for biomarker identification and disease diagnosis. In this paper, we propose a general framework to incorporate feature (gene) selection into pattern recognition in the process to identify biomarkers. Using this framework, we develop three feature wrappers that search through the space of feature subsets using the classification error as measure of goodness for a particular feature subset being "wrapped around": linear discriminant analysis, logistic regression, and support vector machines. To effectively carry out this computationally intensive search process, we employ sequential forward search and sequential forward floating search algorithms. To evaluate the performance of feature selection for biomarker identification we have applied the proposed methods to three data sets. The preliminary results demonstrate that very high classification accuracy can be attained by identified composite classifiers with several biomarkers. --- paper_title: Systematic variation in gene expression patterns in human cancer cell lines paper_content: We used cDNA microarrays to explore the variation in expression of approximately 8,000 unique genes among the 60 cell lines used in the National Cancer Institute's screen for anti-cancer drugs. Classification of the cell lines based solely on the observed patterns of gene expression revealed a correspondence to the ostensible origins of the tumours from which the cell lines were derived. The consistent relationship between the gene expression patterns and the tissue of origin allowed us to recognize outliers whose previous classification appeared incorrect. Specific features of the gene expression patterns appeared to be related to physiological properties of the cell lines, such as their doubling time in culture, drug metabolism or the interferon response. Comparison of gene expression patterns in the cell lines to those observed in normal breast tissue or in breast tumour specimens revealed features of the expression patterns in the tumours that had recognizable counterparts in specific cell lines, reflecting the tumour, stromal and inflammatory components of the tumour tissue. These results provided a novel molecular characterization of this important group of human cell lines and their relationships to tumours in vivo. --- paper_title: Filter versus wrapper gene selection approaches in DNA microarray domains paper_content: DNA microarray experiments generating thousands of gene expression measurements, are used to collect information from tissue and cell samples regarding gene expression differences that could be useful for diagnosis disease, distinction of the specific tumor type, etc. One important application of gene expression microarray data is the classification of samples into known categories. As DNA microarray technology measures the gene expression en masse, this has resulted in data with the number of features (genes) far exceeding the number of samples. 
As the predictive accuracy of supervised classifiers that try to discriminate between the classes of the problem decays with the existence of irrelevant and redundant features, the necessity of a dimensionality reduction process is essential. We propose the application of a gene selection process, which also enables the biology researcher to focus on promising gene candidates that actively contribute to classification in these large scale microarrays. Two basic approaches for feature selection appear in machine learning and pattern recognition literature: the filter and wrapper techniques. Filter procedures are used in most of the works in the area of DNA microarrays. In this work, a comparison between a group of different filter metrics and a wrapper sequential search procedure is carried out. The comparison is performed in two well-known DNA microarray datasets by the use of four classic supervised classifiers. The study is carried out over the original-continuous and three-intervals discretized gene expression data. While two well-known filter metrics are proposed for continuous data, four classic filter measures are used over discretized data. The same wrapper approach is used for both continuous and discretized data. The application of filter and wrapper gene selection procedures leads to considerably better accuracy results in comparison to the non-gene selection approach, coupled with interesting and notable dimensionality reductions. Although the wrapper approach mainly shows a more accurate behavior than filter metrics, this improvement is coupled with considerable computer-load necessities. We note that most of the genes selected by proposed filter and wrapper procedures in discrete and continuous microarray data appear in the lists of relevant-informative genes detected by previous studies over these datasets. The aim of this work is to make contributions in the field of the gene selection task in DNA microarray datasets. By an extensive comparison with more popular filter techniques, we would like to make contributions in the expansion and study of the wrapper approach in this type of domains. --- paper_title: A direct approach to false discovery rates paper_content: Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate FDR traditionally involves intricate sequential "p"-value rejection methods based on the observed data. Whereas a sequential "p"-value method fixes the error rate and "estimates" its corresponding rejection region, we propose the opposite approach-we "fix" the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate pFDR and FDR, and provide evidence for its benefits. It is shown that pFDR is probably the quantity of interest over FDR. Also discussed is the calculation of the "q"-value, the pFDR analogue of the "p"-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini-Hochberg FDR method. Copyright 2002 Royal Statistical Society. 
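The "fix the rejection region, then estimate its error rate" idea from the preceding entry can be sketched in a few lines following the standard Storey-type estimator: estimate the null proportion pi0 from the p-values above a tuning value lambda, then plug it into FDR(t) ~ pi0 * m * t / #{p <= t}. This is a simplified illustration on synthetic p-values, not the authors' implementation; the choice lambda = 0.5 and the absence of any smoothing over lambda are assumptions made here for brevity.

```python
import numpy as np

def fdr_fixed_region(pvals, t, lam=0.5):
    """Crude FDR estimate for the fixed rejection region p <= t:
    pi0 is estimated from the p-values above lam (assumed to be mostly null)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    pi0 = min(np.mean(p > lam) / (1.0 - lam), 1.0)   # estimated null proportion
    rejected = max(int((p <= t).sum()), 1)           # avoid division by zero
    return pi0 * m * t / rejected

# Synthetic p-values only, for illustration.
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.beta(0.2, 5.0, 100),     # enriched near zero
                        rng.uniform(0.0, 1.0, 900)]) # null genes
print("estimated FDR at t = 0.01:", round(fdr_fixed_region(pvals, t=0.01), 3))
```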
--- paper_title: A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression paper_content: Summary: This paper studies the problem of building multiclass classifiers for tissue classification based on gene expression. The recent development of microarray technologies has enabled biologists to quantify gene expression of tens of thousands of genes in a single experiment. Biologists have begun collecting gene expression for a large number of samples. One of the urgent issues in the use of microarray data is to develop methods for characterizing samples based on their gene expression. The most basic step in this research direction is binary sample classification, which has been studied extensively over the past few years. This paper investigates the next step: multiclass classification of samples based on gene expression. The characteristics of expression data (e.g. large number of genes with small sample size) make the classification problem more challenging. The process of building multiclass classifiers is divided into two components: (i) selection of the features (i.e. genes) to be used for training and testing and (ii) selection of the classification method. This paper compares various feature selection methods as well as various state-of-the-art classification methods on various multiclass gene expression datasets. Our study indicates that the multiclass classification problem is much more difficult than the binary one for the gene expression datasets. The difficulty lies in the fact that the data are of high dimensionality and that the sample size is small. The classification accuracy appears to degrade very rapidly as the number of classes increases. In particular, the accuracy was very low regardless of the choice of methods for large-class datasets (e.g. NCI60 and GCM). While increasing the number of samples is a plausible solution to the problem of accuracy degradation, it is important to develop algorithms that are able to analyze effectively multiple-class expression data for these special datasets. --- paper_title: Empirical Bayes Analysis of a Microarray Experiment paper_content: Microarrays are a novel technology that facilitates the simultaneous measurement of thousands of gene expression levels. A typical microarray experiment can produce millions of data points, raising serious problems of data reduction and simultaneous inference. We consider one such experiment in which oligonucleotide arrays were employed to assess the genetic effects of ionizing radiation on seven thousand human genes. A simple nonparametric empirical Bayes model is introduced, which is used to guide the efficient reduction of the data to a single summary statistic per gene, and also to make simultaneous inferences concerning which genes were affected by the radiation. Although our focus is on one specific experiment, the proposed methods can be applied quite generally. The empirical Bayes inferences are closely related to the frequentist false discovery rate (FDR) criterion. --- paper_title: An assessment of recently published gene expression data analyses: reporting experimental design and statistical factors paper_content: Background: The analysis of large-scale gene expression data is a fundamental approach to functional genomics and the identification of potential drug targets. Results derived from such studies cannot be trusted unless they are adequately designed and reported.
The purpose of this study is to assess current practices on the reporting of experimental design and statistical analyses in gene expression-based studies. Methods: We reviewed hundreds of MEDLINE-indexed papers involving gene expression data analysis, which were published between 2003 and 2005. These papers were examined on the basis of their reporting of several factors, such as sample size, statistical power and software availability. Results: Among the examined papers, we concentrated on 293 papers consisting of applications and new methodologies. These papers did not report approaches to sample size and statistical power estimation. Explicit statements on data transformation and descriptions of the normalisation techniques applied prior to data analyses (e.g. classification) were not reported in 57 (37.5%) and 104 (68.4%) of the methodology papers respectively. With regard to papers presenting biomedically relevant applications, 41 (29.1%) of these papers did not report on data normalisation and 83 (58.9%) did not describe the normalisation technique applied. Clustering-based analysis, the t-test and ANOVA represent the most widely applied techniques in microarray data analysis. But remarkably, only 5 (3.5%) of the application papers included statements or references to assumptions about variance homogeneity for the application of the t-test and ANOVA. There is still a need to promote the reporting of software packages applied or their availability. Conclusion: Recently published gene expression data analysis studies may lack key information required for properly assessing their design quality and potential impact. There is a need for more rigorous reporting of important experimental factors such as statistical power and sample size, as well as the correct description and justification of statistical methods applied. This paper highlights the importance of defining a minimum set of information required for reporting on statistical design and analysis of expression data. By improving practices of statistical analysis reporting, the scientific community can facilitate quality assurance and peer-review processes, as well as the reproducibility of results. --- paper_title: Tissue classification with gene expression profiles paper_content: Constantly improving gene expression profiling technologies are expected to provide understanding and insight into cancer-related cellular processes. Gene expression data is also expected to significantly aid in the development of efficient cancer diagnosis and classification platforms. In this work we examine two sets of gene expression data measured across sets of tumor and normal clinical samples. One set consists of 2,000 genes, measured in 62 epithelial colon samples [1]. The second consists of 100,000 clones, measured in 32 ovarian samples (unpublished, extension of data set described in [26]). We examine the use of scoring methods, measuring separation of tumors from normals using individual gene expression levels. These are then coupled with high dimensional classification methods to assess the classification power of complete expression profiles. We present results of performing leave-one-out cross validation (LOOCV) experiments on the two data sets, employing SVM [8], AdaBoost [13] and a novel clustering based classification technique. As tumor samples can differ from normal samples in their cell-type composition, we also perform LOOCV experiments using appropriately modified sets of genes, attempting to eliminate the resulting bias.
We demonstrate a success rate of at least 90% in tumor vs. normal classification, using sets of selected genes, both with and without cellular contamination related members. These results are insensitive to the exact selection mechanism, over a certain range. --- paper_title: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data paper_content: We consider the problem of inferring fold changes in gene expression from cDNA microarray data. Standard procedures focus on the ratio of measured fluorescent intensities at each spot on the microarray, but to do so is to ignore the fact that the variation of such ratios is not constant. Estimates of gene expression changes are derived within a simple hierarchical model that accounts for measurement error and fluctuations in absolute gene expression levels. Significant gene expression changes are identified by deriving the posterior odds of change within a similar model. The methods are tested via simulation and are applied to a panel of Escherichia coli microarrays. --- paper_title: Class prediction and discovery using gene microarray and proteomics mass spectroscopy data: curses, caveats, cautions paper_content: Motivation: Two practical realities constrain the analysis of microarray data, mass spectra from proteomics, and biomedical infrared or magnetic resonance spectra. One is the ‘curse of dimensionality’: the number of features characterizing these data is in the thousands or tens of thousands. The other is the ‘curse of dataset sparsity’: the number of samples is limited. The consequences of these two curses are far-reaching when such data are used to classify the presence or absence of disease. Results: Using very simple classifiers, we show for several publicly available microarray and proteomics datasets how these curses influence classification outcomes. In particular, even if the sample per feature ratio is increased to the recommended 5–10 by feature extraction/reduction methods, dataset sparsity can render any classification result statistically suspect. In addition, several ‘optimal’ feature sets are typically identifiable for sparse datasets, all producing perfect classification results, both for the training and independent validation sets. This non-uniqueness leads to interpretational difficulties and casts doubt on the biological relevance of any of these ‘optimal’ feature sets. We suggest an approach to assess the relative quality of apparently equally good classifiers. --- paper_title: A comprehensive evaluation of multicategory classification methods for microarray gene expression cancer diagnosis paper_content: Motivation: Cancer diagnosis is one of the most important emerging clinical applications of gene expression microarray technology. We are seeking to develop a computer system for powerful and reliable cancer diagnostic model creation based on microarray data. To keep a realistic perspective on clinical applications we focus on multicategory diagnosis. To equip the system with the optimum combination of classifier, gene selection and cross-validation methods, we performed a systematic and comprehensive evaluation of several major algorithms for multicategory classification, several gene selection methods, multiple ensemble classifier methods and two cross-validation designs using 11 datasets spanning 74 diagnostic categories and 41 cancer types and 12 normal tissue types.
Results: Multicategory support vector machines (MC-SVMs) are the most effective classifiers in performing accurate cancer diagnosis from gene expression data. The MC-SVM techniques by Crammer and Singer, Weston and Watkins and one-versus-rest were found to be the best methods in this domain. MC-SVMs outperform other popular machine learning algorithms, such as k-nearest neighbors, backpropagation and probabilistic neural networks, often to a remarkable degree. Gene selection techniques can significantly improve the classification performance of both MC-SVMs and other non-SVM learning algorithms. Ensemble classifiers do not generally improve performance of the best non-ensemble models. These results guided the construction of a software system GEMS (Gene Expression Model Selector) that automates high-quality model construction and enforces sound optimization and performance estimation procedures. This is the first such system to be informed by a rigorous comparative analysis of the available algorithms and datasets. Availability: The software system GEMS is available for download from http://www.gems-system.org for non-commercial use. Contact: [email protected] --- paper_title: Incremental wrapper-based gene selection from microarray data for cancer classification paper_content: Gene expression microarray is a rapidly maturing technology that provides the opportunity to assay the expression levels of thousands or tens of thousands of genes in a single experiment. We present a new heuristic to select relevant gene subsets in order to further use them for the classification task. Our method is based on the statistical significance of adding a gene from a ranked list to the final subset. The efficiency and effectiveness of our technique are demonstrated through extensive comparisons with other representative heuristics. Our approach shows excellent performance, not only at identifying relevant genes, but also with respect to the computational cost. --- paper_title: Joint analysis of two microarray gene-expression data sets to select lung adenocarcinoma marker genes paper_content: Background: Due to the high cost and low reproducibility of many microarray experiments, it is not surprising to find a limited number of patient samples in each study, and very few common identified marker genes among different studies involving patients with the same disease. Therefore, it is of great interest and challenge to merge data sets from multiple studies to increase the sample size, which may in turn increase the power of statistical inferences. In this study, we combined two lung cancer studies using microarray GeneChip®, employed two gene shaving methods and a two-step survival test to identify genes with expression patterns that can distinguish diseased from normal samples, and to indicate patient survival, respectively. --- paper_title: Rank products: a simple, yet powerful, new method to detect differentially regulated genes in replicated microarray experiments paper_content: One of the main objectives in the analysis of microarray experiments is the identification of genes that are differentially expressed under two experimental conditions. This task is complicated by the noisiness of the data and the large number of genes that are examined simultaneously. Here, we present a novel technique for identifying differentially expressed genes that does not originate from a sophisticated statistical model but rather from an analysis of biological reasoning.
The new technique, which is based on calculating rank products (RP) from replicate experiments, is fast and simple. At the same time, it provides a straightforward and statistically stringent way to determine the significance level for each gene and allows for the flexible control of the false-detection rate and familywise error rate in the multiple testing situation of a microarray experiment. We use the RP technique on three biological data sets and show that in each case it performs more reliably and consistently than the non-parametric t-test variant implemented in Tusher et al.'s significance analysis of microarrays (SAM). We also show that the RP results are reliable in highly noisy data. An analysis of the physiological function of the identified genes indicates that the RP approach is powerful for identifying biologically relevant expression changes. In addition, using RP can lead to a sharp reduction in the number of replicate experiments needed to obtain reproducible results. --- paper_title: Nonparametric methods for identifying differentially expressed genes in microarray data paper_content: Motivation: Gene expression experiments provide a fast and systematic way to identify disease markers relevant to clinical care. In this study, we address the problem of robust identification of differentially expressed genes from microarray data. Differentially expressed genes, or discriminator genes, are genes with significantly different expression in two user-defined groups of microarray experiments. We compare three model-free approaches: (1) nonparametric t-test, (2) Wilcoxon (or Mann‐Whitney) rank sum test, and (3) a heuristic method based on high Pearson correlation to a perfectly differentiating gene (‘ideal discriminator method’). We systematically assess the performance of each method based on simulated and biological data under varying noise levels and p-value cutoffs. Results: All methods exhibit very low false positive rates and identify a large fraction of the differentially expressed genes in simulated data sets with noise level similar to that of actual data. Overall, the rank sum test appears most conservative, which may be advantageous when the computationally identified genes need to be tested biologically. However, if a more inclusive list of markers is desired, a higher p-value cutoff or the nonparametric t-test may be appropriate. When applied to data from lung tumor and lymphoma data sets, the methods identify biologically relevant differentially expressed genes that allow clear separation of groups in question. Thus the methods described and evaluated here provide a convenient and robust way to identify differentially expressed genes for further biological and clinical analysis. Availability: By request from the authors. --- paper_title: Significance analysis of microarrays applied to the ionizing radiation response paper_content: Microarrays can measure the expression of thousands of genes to identify changes in expression between different biological states. Methods are needed to determine the significance of these changes while accounting for the enormous number of genes. We describe a method, Significance Analysis of Microarrays (SAM), that assigns a score to each gene on the basis of change in gene expression relative to the standard deviation of repeated measurements. 
For genes with scores greater than an adjustable threshold, SAM uses permutations of the repeated measurements to estimate the percentage of genes identified by chance, the false discovery rate (FDR). When the transcriptional response of human cells to ionizing radiation was measured by microarrays, SAM identified 34 genes that changed at least 1.5-fold with an estimated FDR of 12%, compared with FDRs of 60 and 84% by using conventional methods of analysis. Of the 34 genes, 19 were involved in cell cycle regulation and 3 in apoptosis. Surprisingly, four nucleotide excision repair genes were induced, suggesting that this repair pathway for UV-damaged DNA might play a previously unrecognized role in repairing DNA damaged by ionizing radiation. --- paper_title: A robust meta-classification strategy for cancer detection from MS data paper_content: We propose a novel method for phenotype identification involving a stringent noise analysis and filtering procedure followed by combining the results of several machine learning tools to produce a robust predictor. We illustrate our method on SELDI-TOF MS prostate cancer data (http://home.ccr.cancer.gov/ncifdaproteomics/ppatterns.asp). Our method identified 11 proteomic biomarkers and gave significantly improved predictions over previous analyses with these data. We were able to distinguish cancer from non-cancer cases with a sensitivity of 90.31% and a specificity of 98.81%. The proposed method can be generalized to multi-phenotype prediction and other types of data (e.g., microarray data). --- paper_title: A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns paper_content: Feature selection plays an important role in classification. We present a comparative study on six feature selection heuristics by applying them to two sets of data. The first set of data are gene expression profiles from Acute Lymphoblastic Leukemia (ALL) patients. The second set of data are proteomic patterns from ovarian cancer patients. Based on features chosen by these methods, error rates of several classification algorithms were obtained for analysis. Our results demonstrate the importance of feature selection in accurately classifying new samples. --- paper_title: An integrated approach utilizing artificial neural networks and SELDI mass spectrometry for the classification of human tumours and rapid identification of potential biomarkers paper_content: Motivation: MALDI mass spectrometry is able to elicit macromolecular expression data from cellular material and when used in conjunction with Ciphergen protein chip technology (also referred to as SELDI-Surface Enhanced Laser Desorption/lonization), it permits a semi-high throughput approach to be taken with respect to sample processing and data acquisition. Due to the large array of data that is generated from a single analysis (8-10 000 variables using a mass range of 2-15 kDa-this paper) it is essential to implement the use of algorithms that can detect expression patterns from such large volumes of data correlating to a given biological/pathological phenotype from multiple samples. If successful, the methodology could be extrapolated to larger data sets to enable the identification of validated biomarkers correlating strongly to disease progression. 
This would not only serve to enable tumours to be classified according to their molecular expression profile but could also focus attention upon a relatively small number of molecules that might warrant further biochemical/molecular characterization to assess their suitability as potential therapeutic targets. Results: Using a multi-layer perceptron Artificial Neural Network (ANN) (Neuroshell 2) with a back propagation algorithm we have developed a prototype approach that uses a model system (comprising five low and seven high-grade human astrocytomas) to identify mass spectral peaks whose relative intensity values correlate strongly to tumour grade. Analyzing data derived from MALDI mass spectrometry in conjunction with Ciphergen protein chip technology we have used relative importance values, determined from the weights of trained ANNs (Balls et al., Water, Air Soil Pollut., 85, 1467-1472, 1996), to identify masses that accurately predict tumour grade. Implementing a three-stage procedure, we have screened a population of approximately 100000-120000 variables and identified two ions (m/z values of 13454 and 13457) whose relative intensity pattern was significantly reduced in high-grade astrocytoma. The data from this initial study suggests that application of ANN-based approaches can identify molecular ion patterns which strongly associate with disease grade and that its application to larger cohorts of patient material could potentially facilitate the rapid identification of validated biomarkers having significant clinical (i.e. diagnostic/prognostic) potential for the field of cancer biology. --- paper_title: Feature selection in proteomic pattern data with support vector machines paper_content: This work introduces novel methods for feature selection (FS) based on support vector machines (SVM). The methods combine feature subsets produced by a variant of SVM-RFE, a popular feature ranking/selection algorithm based on SVM. Two combination strategies are proposed: union of features occurring frequently, and ensemble of classifiers built on single feature subsets. The resulting methods are applied to pattern proteomic data for tumor diagnostics. Results of experiments on three proteomic pattern datasets indicate that combining feature subsets affects positively the prediction accuracy of both SVM and SVM-RFE. A discussion about the biological interpretation of selected features is provided. --- paper_title: Analysis of mass spectral serum profiles for biomarker selection paper_content: Motivation: Mass spectrometric profiles of peptides and proteins obtained by current technologies are characterized by complex spectra, high dimensionality and substantial noise. These characteristics generate challenges in the discovery of proteins and protein-profiles that distinguish disease states, e.g. cancer patients from healthy individuals. We present low-level methods for the processing of mass spectral data and a machine learning method that combines support vector machines, with particle swarm optimization for biomarker selection. ::: ::: Results: The proposed method identified mass points that achieved high prediction accuracy in distinguishing liver cancer patients from healthy individuals in SELDI-QqTOF profiles of serum. 
::: ::: Availability: MATLAB scripts to implement the methods described in this paper are available from the HWR's lab website http://lombardi.georgetown.edu/labpage ::: ::: Contact: [email protected] --- paper_title: Gene Selection for Cancer Classification using Support Vector Machines paper_content: DNA micro-arrays now permit scientists to screen thousands of genes simultaneously and determine whether those genes are active, hyperactive or silent in normal or cancerous tissue. Because these new micro-array devices generate bewildering amounts of raw data, new analytical methods must be developed to sort out whether cancer tissues have distinctive signatures of gene expression over normal tissues or other types of cancer tissues. ::: ::: In this paper, we address the problem of selection of a small subset of genes from broad patterns of gene expression data, recorded on DNA micro-arrays. Using available training examples from cancer and normal patients, we build a classifier suitable for genetic diagnosis, as well as drug discovery. Previous attempts to address this problem select genes with correlation techniques. We propose a new method of gene selection utilizing Support Vector Machine methods based on Recursive Feature Elimination (RFE). We demonstrate experimentally that the genes selected by our techniques yield better classification performance and are biologically relevant to cancer. ::: ::: In contrast with the baseline method, our method eliminates gene redundancy automatically and yields better and more compact gene subsets. In patients with leukemia our method discovered 2 genes that yield zero leave-one-out error, while 64 genes are necessary for the baseline method to get the best result (one leave-one-out error). In the colon cancer database, using only 4 genes our method is 98% accurate, while the baseline method is only 86% accurate. --- paper_title: Mining mass spectra for diagnosis and biomarker discovery of cerebral accidents paper_content: In this paper we try to identify potential biomarkers for early stroke diagnosis using surface-enhanced laser desorption/ionization mass spectrometry coupled with analysis tools from machine learning and data mining. Data consist of 42 specimen samples, i.e., mass spectra divided in two big categories, stroke and control specimens. Among the stroke specimens two further categories exist that correspond to ischemic and hemorrhagic stroke; in this paper we limit our data analysis to discriminating between control and stroke specimens. We performed two suites of experiments. In the first one we simply applied a number of different machine learning algorithms; in the second one we have chosen the best performing algorithm as it was determined from the first phase and coupled it with a number of different feature selection methods. The reason for this was 2-fold, first to establish whether feature selection can indeed improve performance, which in our case it did not seem to confirm, but more importantly to acquire a small list of potentially interesting biomarkers. Of the different methods explored the most promising one was support vector machines which gave us high levels of sensitivity and specificity. Finally, by analyzing the models constructed by support vector machines we produced a small set of 13 features that could be used as potential biomarkers, and which exhibited good performance both in terms of sensitivity, specificity and model stability. 
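Several of the preceding entries, notably the SVM-based gene selection work, rely on recursive feature elimination wrapped around a linear SVM. A minimal sketch of that wrapper strategy is given below, using scikit-learn rather than the authors' own code and a purely synthetic expression matrix; the numbers of samples, genes and selected features are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic "expression matrix": 60 samples x 500 genes, two classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 500))
y = np.repeat([0, 1], 30)
X[y == 1, :10] += 1.0               # make the first 10 features informative

# Recursive feature elimination around a linear SVM: features are ranked by
# the magnitude of the SVM weights and the weakest 10% are dropped per round.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10, step=0.1)
model = make_pipeline(selector, SVC(kernel="linear", C=1.0))

# Keeping the selection step inside the pipeline re-runs it in every fold,
# so the accuracy estimate is not inflated by selection on the full data.
scores = cross_val_score(model, X, y, cv=5)
print("selected genes:", np.where(selector.fit(X, y).support_)[0])
print("cross-validated accuracy: %.2f" % scores.mean())
```

Embedding the eliminator in the pipeline is also what guards against the selection bias discussed in several of the evaluation-focused entries later in this list.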
--- paper_title: Use of Proteomic Patterns in Serum to Identify Ovarian Cancer paper_content: In this study, a new high-order bioinformatics tool used to identify differences in proteomic patterns in serum was evaluated for its ability to detect the presence of cancer in the ovary. The proteomic pattern is generated using matrix-assisted laser desorption and ionization time-of-flight and surface-enhanced laser desorption and ionization time-of-flight mass spectroscopy from thousands of low-molecular-weight serum proteins. Proteomic spectra patterns were generated from 50 women with and 50 women without ovarian cancer and analyzed on the Protein Biology System 2 SELDI-TOF mass spectrometer (Ciphergen Biosystems, Freemont, CA) to find a pattern unique to ovarian cancer. In the graph of the analysis, each proteomic spectrum is comprised of 15,200 mass/charge (m/z) values located along the x axis with corresponding amplitude values along the y axis. By comparing the proteomic spectra derived from the serum of patients with known ovarian cancer to that of disease-free patients, a profile of ovarian cancer was identified in the peak amplitude values along the horizontal axis. The comparison was conducted using repetitive analysis of ever smaller subsets until discriminatory values from five protein peaks were isolated. The validity of this pattern was tested using an additional 116 masked serum samples from 50 women known to have ovarian cancer and 66 nonaffected women. All of the subjects with cancer and most of the women with no cancer were from the National Ovarian Cancer Early Detection Program at Northwestern University. The nonaffected women had been diagnosed with a variety of benign gynecologic conditions after evaluation for possible ovarian cancer and were considered to be a high-risk population. Serum samples were collected before examination, diagnosis, or treatment and frozen in liquid nitrogen. The samples were thawed and added to a C16 hydrophobic interaction protein chip for analysis. In the validation set, 63 of the 66 women with benign ovarian conditions were correctly identified in the spectra analysis. All 50 patients with a diagnosis of ovarian cancer were correctly identified in the analysis, including 18 women with stage I disease. Thus, the ability of proteomic patterns to detect the presence of ovarian cancer had a sensitively of 100%, a specificity of 95%, and a positive predictive value of 94%. In comparison, the positive predictive value for serum cancer antigen 125 in the set of patients was 35%. Additionally, no matching patterns were seen in serum samples from 266 men with benign and malignant prostate disease. --- paper_title: Proteomic mass spectra classification using decision tree based ensemble methods paper_content: Motivation: Modern mass spectrometry allows the determination of proteomic fingerprints of body fluids like serum, saliva or urine. These measurements can be used in many medical applications in order to diagnose the current state or predict the evolution of a disease. Recent developments in machine learning allow one to exploit such datasets, characterized by small numbers of very high-dimensional samples. ::: ::: Results: We propose a systematic approach based on decision tree ensemble methods, which is used to automatically determine proteomic biomarkers and predictive models. 
The approach is validated on two datasets of surface-enhanced laser desorption/ionization time of flight measurements, for the diagnosis of rheumatoid arthritis and inflammatory bowel diseases. The results suggest that the methodology can handle a broad class of similar problems. Supplementary information: Additional tables, appendices and datasets may be found at http://www.montefiore.ulg.ac.be/~geurts/Papers/Proteomic-suppl.html Contact: [email protected] --- paper_title: Sample classification from protein mass spectrometry, by “peak probability contrasts” paper_content: Motivation: Early cancer detection has always been a major research focus in solid tumor oncology. Early tumor detection can theoretically result in lower stage tumors, more treatable diseases and ultimately higher cure rates with less treatment-related morbidities. Protein mass spectrometry is a potentially powerful tool for early cancer detection. We propose a novel method for sample classification from protein mass spectrometry data. When applied to spectra from both diseased and healthy patients, the 'peak probability contrast' technique provides a list of all common peaks among the spectra, their statistical significance and their relative importance in discriminating between the two groups. We illustrate the method on matrix-assisted laser desorption and ionization mass spectrometry data from a study of ovarian cancers. Results: Compared to other statistical approaches for class prediction, the peak probability contrast method performs as well or better than several methods that require the full spectra, rather than just labelled peaks. It is also much more interpretable biologically. The peak probability contrast method is a potentially useful tool for sample classification from protein mass spectrometry data. Supplementary Information: http://www.stat.stanford.edu/~tibs/ppc --- paper_title: Processing and classification of protein mass spectra.
paper_content: Among the many applications of mass spectrometry, biomarker pattern discovery from protein mass spectra has aroused considerable interest in the past few years. While research efforts have raised hopes of early and less invasive diagnosis, they have also brought to light the many issues to be tackled before mass-spectra-based proteomic patterns become routine clinical tools. Known issues cover the entire pipeline leading from sample collection through mass spectrometry analytics to biomarker pattern extraction, validation, and interpretation. This study focuses on the data-analytical phase, which takes as input mass spectra of biological specimens and discovers patterns of peak masses and intensities that discriminate between different pathological states. We survey current work and investigate computational issues concerning the different stages of the knowledge discovery process: exploratory analysis, quality control, and diverse transforms of mass spectra, followed by further dimensionality reduction, classification, and model evaluation. We conclude after a brief discussion of the critical biomedical task of analyzing discovered discriminatory patterns to identify their component proteins as well as interpret and validate their biological implications. --- paper_title: Ovarian cancer identification based on dimensionality reduction for high-throughput mass spectrometry data paper_content: Motivation: High-throughput and high-resolution mass spectrometry instruments are increasingly used for disease classification and therapeutic guidance. However, the analysis of immense amount of data poses considerable challenges. We have therefore developed a novel method for dimensionality reduction and tested on a published ovarian high-resolution SELDI-TOF dataset. ::: ::: Results: We have developed a four-step strategy for data preprocessing based on: (1) binning, (2) Kolmogorov--Smirnov test, (3) restriction of coefficient of variation and (4) wavelet analysis. Subsequently, support vector machines were used for classification. The developed method achieves an average sensitivity of 97.38% (sd = 0.0125) and an average specificity of 93.30% (sd = 0.0174) in 1000 independent k-fold cross-validations, where k = 2, ..., 10. ::: ::: Availability: The software is available for academic and non-commercial institutions. ::: ::: Contact: [email protected] --- paper_title: A machine learning perspective on the development of clinical decision support systems utilizing mass spectra of blood samples paper_content: Currently, the best way to reduce the mortality of cancer is to detect and treat it in the earliest stages. Technological advances in genomics and proteomics have opened a new realm of methods for early detection that show potential to overcome the drawbacks of current strategies. In particular, pattern analysis of mass spectra of blood samples has attracted attention as an approach to early detection of cancer. Mass spectrometry provides rapid and precise measurements of the sizes and relative abundances of the proteins present in a complex biological/chemical mixture. This article presents a review of the development of clinical decision support systems using mass spectrometry from a machine learning perspective. The literature is reviewed in an explicit machine learning framework, the components of which are preprocessing, feature extraction, feature selection, classifier training, and evaluation. 
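The review summarized immediately above decomposes mass-spectra-based diagnosis into preprocessing, feature extraction and selection, classifier training, and evaluation. A compact way to express that pipeline is shown below, purely as an illustration on placeholder data standing in for binned, baseline-corrected spectra; the bin count, the choice of k = 20 retained features and the SVM settings are arbitrary assumptions, not values from any of the cited studies.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder matrix standing in for preprocessed, binned spectra:
# 80 serum samples x 2000 m/z bins, with a handful of discriminative peaks.
rng = np.random.default_rng(7)
X = rng.normal(size=(80, 2000))
y = np.repeat([0, 1], 40)
X[y == 1, :5] += 1.5

pipeline = Pipeline([
    ("scale", StandardScaler()),               # per-bin standardization
    ("select", SelectKBest(f_classif, k=20)),  # univariate filter on F-scores
    ("clf", SVC(kernel="linear", C=1.0)),      # classifier on the kept bins
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv)
print("accuracy per fold:", np.round(scores, 2))
```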
--- paper_title: Comparison of statistical methods for classification of ovarian cancer using mass spectrometry data paper_content: Motivation: Novel methods, both molecular and statistical, are urgently needed to take advantage of recent advances in biotechnology and the human genome project for disease diagnosis and prognosis. Mass spectrometry (MS) holds great promise for biomarker identification and genome-wide protein profiling. It has been demonstrated in the literature that biomarkers can be identified to distinguish normal individuals from cancer patients using MS data. Such progress is especially exciting for the detection of early-stage ovarian cancer patients. Although various statistical methods have been utilized to identify biomarkers from MS data, there has been no systematic comparison among these approaches in their relative ability to analyze MS data. Results: We compare the performance of several classes of statistical methods for the classification of cancer based on MS spectra. These methods include: linear discriminant analysis, quadratic discriminant analysis, k -nearest neighbor classifier, bagging and boosting classification trees, support vector machine, and random forest (RF). The methods are applied to ovarian cancer and control serum samples from the National Ovarian Cancer Early Detection Program clinic at Northwestern University Hospital. We found that RF outperforms other methods in the analysis of MS data. --- paper_title: Is Cross-Validation Valid for Small-Sample Microarray Classification? paper_content: Motivation: Microarray classification typically possesses two striking attributes: (1) classifier design and error estimation are based on remarkably small samples and (2) cross-validation error estimation is employed in the majority of the papers. Thus, it is necessary to have a quantifiable understanding of the behavior of cross-validation in the context of very small samples. ::: ::: Results: An extensive simulation study has been performed comparing cross-validation, resubstitution and bootstrap estimation for three popular classification rules---linear discriminant analysis, 3-nearest-neighbor and decision trees (CART)---using both synthetic and real breast-cancer patient data. Comparison is via the distribution of differences between the estimated and true errors. Various statistics for the deviation distribution have been computed: mean (for estimator bias), variance (for estimator precision), root-mean square error (for composition of bias and variance) and quartile ranges, including outlier behavior. In general, while cross-validation error estimation is much less biased than resubstitution, it displays excessive variance, which makes individual estimates unreliable for small samples. Bootstrap methods provide improved performance relative to variance, but at a high computational cost and often with increased bias (albeit, much less than with resubstitution). ::: ::: Availability and Supplementary information: A companion web site can be accessed at the URL http://ee.tamu.edu/~edward/cv_paper. The companion web site contains: (1) the complete set of tables and plots regarding the simulation study; (2) additional figures; (3) a compilation of references for microarray classification studies and (4) the source code used, with full documentation and examples. --- paper_title: Prediction error estimation: a comparison of resampling methods paper_content: MOTIVATION ::: In genomic studies, thousands of features are collected on relatively few samples. 
One of the goals of these studies is to build classifiers to predict the outcome of future observations. There are three inherent steps to this process: feature selection, model selection and prediction assessment. With a focus on prediction assessment, we compare several methods for estimating the 'true' prediction error of a prediction model in the presence of feature selection. Results: For small studies where features are selected from thousands of candidates, the resubstitution and simple split-sample estimates are seriously biased. In these small samples, leave-one-out cross-validation (LOOCV), 10-fold cross-validation (CV) and the .632+ bootstrap have the smallest bias for diagonal discriminant analysis, nearest neighbor and classification trees. LOOCV and 10-fold CV have the smallest bias for linear discriminant analysis. Additionally, LOOCV, 5- and 10-fold CV, and the .632+ bootstrap have the lowest mean square error. The .632+ bootstrap is quite biased in small sample sizes with strong signal-to-noise ratios. Differences in performance among resampling methods are reduced as the number of specimens available increases. Supplementary information: A complete compilation of results and R code for simulations and analyses are available in Molinaro et al. (2005) (http://linus.nci.nih.gov/brb/TechReport.htm). --- paper_title: Selection bias in gene extraction on the basis of microarray gene-expression data paper_content: In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing the expression data on very many (possibly thousands) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes. --- paper_title: Gene selection and classification of microarray data using random forest paper_content: Background: Selection of relevant genes for sample classification is a common task in most gene expression studies, where researchers try to identify the smallest possible set of genes that can still achieve good predictive performance (for instance, for future use with diagnostic purposes in clinical practice).
Many gene selection approaches use univariate (gene-by-gene) rankings of gene relevance and arbitrary thresholds to select the number of genes, can only be applied to two-class problems, and use gene selection ranking criteria unrelated to the classification algorithm. In contrast, random forest is a classification algorithm well suited for microarray data: it shows excellent performance even when most predictive variables are noise, can be used when the number of variables is much larger than the number of observations and in problems involving more than two classes, and returns measures of variable importance. Thus, it is important to understand the performance of random forest with microarray data and its possible use for gene selection.ResultsWe investigate the use of random forest for classification of microarray data (including multi-class problems) and propose a new method of gene selection in classification problems based on random forest. Using simulated and nine microarray data sets we show that random forest has comparable performance to other classification methods, including DLDA, KNN, and SVM, and that the new gene selection procedure yields very small sets of genes (often smaller than alternative methods) while preserving predictive accuracy.ConclusionBecause of its performance and features, random forest and gene selection using random forest should probably become part of the "standard tool-box" of methods for class prediction and gene selection with microarray data. --- paper_title: Gene selection for sample classification based on gene expression data: study of sensitivity to choice of parameters of the GA/KNN method paper_content: Motivation: We recently introduced a multivariate approach that selects a subset of predictive genes jointly for sample classification based on expression data. We tested the algorithm on colon and leukemia data sets. As an extension to our earlier work, we systematically examine the sensitivity, reproducibility and stability of gene selection/sample classification to the choice of parameters of the algorithm. Methods: Our approach combines a Genetic Algorithm (GA) and the k-Nearest Neighbor (KNN) method to identify genes that can jointly discriminate between different classes of samples (e.g. normal versus tumor). The GA/KNN method is a stochastic supervised pattern recognition method. The genes identified are subsequently used to classify independent test set samples. Results: The GA/KNN method is capable of selecting a subset of predictive genes from a large noisy data set for sample classification. It is a multivariate approach that can capture the correlated structure in the data. We find that for a given data set gene selection is highly repeatable in independent runs using the GA/KNN method. In general, however, gene selection may be less robust than classification. Availability: The method is available at http://dir.niehs.nih. gov/microarray/datamining --- paper_title: Bayesian Model Averaging: Development of an Improved Multi-Class, Gene Selection and Classification Tool for Microarray Data paper_content: Motivation: Selecting a small number of relevant genes for accurate classification of samples is essential for the development of diagnostic tests. We present the Bayesian model averaging (BMA) method for gene selection and classification of microarray data. Typical gene selection and classification procedures ignore model uncertainty and use a single set of relevant genes (model) to predict the class. 
BMA accounts for the uncertainty about the best set to choose by averaging over multiple models (sets of potentially overlapping relevant genes). Results: We have shown that BMA selects smaller numbers of relevant genes (compared with other methods) and achieves a high prediction accuracy on three microarray datasets. Our BMA algorithm is applicable to microarray datasets with any number of classes, and outputs posterior probabilities for the selected genes and models. Our selected models typically consist of only a few genes. The combination of high accuracy, small numbers of genes and posterior probabilities for the predictions should make BMA a powerful tool for developing diagnostics from expression data. Availability: The source codes and datasets used are available from our Supplementary website. Contact: [email protected] Supplementary information: http://www.expression.washington.edu/publications/kayee/bma --- paper_title: Gene selection: a Bayesian variable selection approach paper_content: Selection of significant genes via expression patterns is an important problem in microarray experiments. Owing to small sample size and the large number of variables (genes), the selection process can be unstable. This paper proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables to specialize the model to a regression setting and use a Bayesian mixture prior to perform the variable selection. We control the size of the model by assigning a prior distribution over the dimension (number of significant genes) of the model. The posterior distributions of the parameters are not in explicit form and we need to use a combination of truncated sampling and Markov Chain Monte Carlo (MCMC) based computation techniques to simulate the parameters from the posteriors. The Bayesian model is flexible enough to identify significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays, where the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify a set of significant genes. The method is also applied successfully to the leukemia data. --- paper_title: Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data paper_content: A reliable and precise classification of tumors is essential for successful diagnosis and treatment of cancer. cDNA microarrays and high-density oligonucleotide chips are novel biotechnologies increasingly used in cancer research. By allowing the monitoring of expression levels in cells for thousands of genes simultaneously, microarray experiments may lead to a more complete understanding of the molecular variations among tumors and hence to a finer and more informative classification. The ability to successfully distinguish between tumor classes (already known or yet to be discovered) using gene expression data is an important aspect of this novel approach to cancer classification. This article compares the performance of different discrimination methods for the classification of tumors based on gene expression data. The methods include nearest-neighbor classifiers, linear discriminant analysis, and classification trees. Recent machine learning approaches, such as bagging and boosting, are also considered.
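The comparison study summarized above benchmarks nearest-neighbour classifiers, discriminant analysis, classification trees, bagging and boosting under a common resampling protocol. The fragment below reproduces the spirit of such a benchmark on synthetic data only; it is not the study's code, the naive Bayes model is only a rough stand-in for diagonal discriminant rules, and all sample sizes and hyperparameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 200))          # synthetic expression data
y = np.repeat([0, 1], 50)
X[y == 1, :15] += 0.8

classifiers = {
    "kNN (k=3)": KNeighborsClassifier(n_neighbors=3),
    "LDA": LinearDiscriminantAnalysis(),
    "naive Bayes (diagonal-covariance stand-in)": GaussianNB(),
    "classification tree": DecisionTreeClassifier(random_state=0),
    "bagged trees": BaggingClassifier(n_estimators=50, random_state=0),   # default base learner is a tree
    "boosted stumps": AdaBoostClassifier(n_estimators=50, random_state=0),  # default base learner is a stump
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv).mean()
    print(f"{name:45s} {acc:.2f}")
```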
--- paper_title: Proteomic mass spectra classification using decision tree based ensemble methods paper_content: Motivation: Modern mass spectrometry allows the determination of proteomic fingerprints of body fluids like serum, saliva or urine. These measurements can be used in many medical applications in order to diagnose the current state or predict the evolution of a disease. Recent developments in machine learning allow one to exploit such datasets, characterized by small numbers of very high-dimensional samples. Results: We propose a systematic approach based on decision tree ensemble methods, which is used to automatically determine proteomic biomarkers and predictive models. The approach is validated on two datasets of surface-enhanced laser desorption/ionization time of flight measurements, for the diagnosis of rheumatoid arthritis and inflammatory bowel diseases. The results suggest that the methodology can handle a broad class of similar problems. Supplementary information: Additional tables, appendices and datasets may be found at http://www.montefiore.ulg.ac.be/~geurts/Papers/Proteomic-suppl.html Contact: [email protected] --- paper_title: Tissue classification with gene expression profiles paper_content: Constantly improving gene expression profiling technologies are expected to provide understanding and insight into cancer related cellular processes. Gene expression data is also expected to significantly aid in the development of efficient cancer diagnosis and classification platforms. In this work we examine two sets of gene expression data measured across sets of tumor and normal clinical samples. One set consists of 2,000 genes, measured in 62 epithelial colon samples [1]. The second consists of 100,000 clones, measured in 32 ovarian samples (unpublished, extension of data set described in [26]). We examine the use of scoring methods, measuring separation of tumors from normals using individual gene expression levels. These are then coupled with high dimensional classification methods to assess the classification power of complete expression profiles. We present results of performing leave-one-out cross validation (LOOCV) experiments on the two data sets, employing SVM [8], AdaBoost [13] and a novel clustering based classification technique. As tumor samples can differ from normal samples in their cell-type composition we also perform LOOCV experiments using appropriately modified sets of genes, attempting to eliminate the resulting bias. We demonstrate a success rate of at least 90% in tumor vs normal classification, using sets of selected genes, with as well as without cellular contamination related members. These results are insensitive to the exact selection mechanism, over a certain range. --- paper_title: How Many Genes Are Needed for a Discriminant Microarray Data Analysis? paper_content: The analysis of the leukemia data from Whitehead/MIT group is a discriminant analysis (also called a supervised learning). Among thousands of genes whose expression levels are measured, not all are needed for discriminant analysis: a gene may either not contribute to the separation of two types of tissues/cancers, or it may be redundant because it is highly correlated with other genes. There are two theoretical frameworks in which variable selection (or gene selection in our case) can be addressed. The first is model selection, and the second is model averaging.
We have carried out model selection using Akaike information criterion and Bayesian information criterion with logistic regression (discrimination, prediction, or classification) to determine the number of genes that provide the best model. These model selection criteria set upper limits of 22-25 and 12-13 genes for this data set with 38 samples, and the best model consists of only one (no.4847, zyxin) or two genes. We have also carried out model averaging over the best single-gene logistic predictors using three different weights: maximized likelihood, prediction rate on training set, and equal weight. We have observed that the performance of most of these weighted predictors on the testing set is gradually reduced as more genes are included, but a clear cutoff that separates good and bad prediction performance is not found. --- paper_title: Feature selection and nearest centroid classification for protein mass spectrometry paper_content: Background: The use of mass spectrometry as a proteomics tool is poised to revolutionize early disease diagnosis and biomarker identification. Unfortunately, before standard supervised classification algorithms can be employed, the "curse of dimensionality" needs to be solved. Due to the sheer amount of information contained within the mass spectra, most standard machine learning techniques cannot be directly applied. Instead, feature selection techniques are used to first reduce the dimensionality of the input space and thus enable the subsequent use of classification algorithms. This paper examines feature selection techniques for proteomic mass spectrometry. Results: This study examines the performance of the nearest centroid classifier coupled with the following feature selection algorithms. Student-t test, Kolmogorov-Smirnov test, and the P-test are univariate statistics used for filter-based feature ranking. From the wrapper approaches we tested sequential forward selection and a modified version of sequential backward selection. Embedded approaches included shrunken nearest centroid and a novel version of boosting based feature selection we developed. In addition, we tested several dimensionality reduction approaches, namely principal component analysis and principal component analysis coupled with linear discriminant analysis. To fairly assess each algorithm, evaluation was done using stratified cross validation with an internal leave-one-out cross-validation loop for automated feature selection. Comprehensive experiments, conducted on five popular cancer data sets, revealed that the less advocated sequential forward selection and boosted feature selection algorithms produce the most consistent results across all data sets. In contrast, the state-of-the-art performance reported on isolated data sets for several of the studied algorithms does not hold across all data sets. Conclusion: This study tested a number of popular feature selection methods using the nearest centroid classifier and found that several reportedly state-of-the-art algorithms in fact perform rather poorly when tested via stratified cross-validation. The revealed inconsistencies provide clear evidence that algorithm evaluation should be performed on several data sets using a consistent (i.e., non-randomized, stratified) cross-validation procedure in order for the conclusions to be statistically sound.
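The nearest-centroid study summarized above combines univariate filter rankings with stratified cross-validation in which feature selection is repeated inside every fold. The Python sketch below illustrates that evaluation pattern only; it is not the authors' code, and the Welch t-test filter, the k = 50 cutoff, and the synthetic data are assumptions made for illustration.

```python
# Hypothetical sketch: univariate t-test filtering + nearest centroid classification,
# with feature selection redone inside each CV fold to avoid selection bias.
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import NearestCentroid

X, y = make_classification(n_samples=60, n_features=2000, n_informative=20, random_state=0)

def top_k_by_ttest(X, y, k=50):
    t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
    return np.argsort(-np.abs(t))[:k]            # indices of the k most separating features

accs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    feats = top_k_by_ttest(X[tr], y[tr], k=50)   # rank features on training data only
    clf = NearestCentroid().fit(X[tr][:, feats], y[tr])
    accs.append(clf.score(X[te][:, feats], y[te]))
print("mean CV accuracy: %.3f" % np.mean(accs))
```

Re-ranking the features on the training part of each fold is what keeps the accuracy estimate honest; ranking once on the full data set would leak test information into the selection step.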
--- paper_title: Joint analysis of two microarray gene-expression data sets to select lung adenocarcinoma marker genes paper_content: Background: Due to the high cost and low reproducibility of many microarray experiments, it is not surprising to find a limited number of patient samples in each study, and very few common identified marker genes among different studies involving patients with the same disease. Therefore, it is of great interest and challenge to merge data sets from multiple studies to increase the sample size, which may in turn increase the power of statistical inferences. In this study, we combined two lung cancer studies using microarray GeneChip®, employed two gene shaving methods and a two-step survival test to identify genes with expression patterns that can distinguish diseased from normal samples, and to indicate patient survival, respectively. --- paper_title: Comparison of statistical methods for classification of ovarian cancer using mass spectrometry data paper_content: Motivation: Novel methods, both molecular and statistical, are urgently needed to take advantage of recent advances in biotechnology and the human genome project for disease diagnosis and prognosis. Mass spectrometry (MS) holds great promise for biomarker identification and genome-wide protein profiling. It has been demonstrated in the literature that biomarkers can be identified to distinguish normal individuals from cancer patients using MS data. Such progress is especially exciting for the detection of early-stage ovarian cancer patients. Although various statistical methods have been utilized to identify biomarkers from MS data, there has been no systematic comparison among these approaches in their relative ability to analyze MS data. Results: We compare the performance of several classes of statistical methods for the classification of cancer based on MS spectra. These methods include: linear discriminant analysis, quadratic discriminant analysis, k-nearest neighbor classifier, bagging and boosting classification trees, support vector machine, and random forest (RF). The methods are applied to ovarian cancer and control serum samples from the National Ovarian Cancer Early Detection Program clinic at Northwestern University Hospital. We found that RF outperforms other methods in the analysis of MS data. --- paper_title: Identifying differentially expressed genes from microarray experiments via statistic synthesis paper_content: Motivation: A common objective of microarray experiments is the detection of differential gene expression between samples obtained under different conditions. The task of identifying differentially expressed genes consists of two aspects: ranking and selection. Numerous statistics have been proposed to rank genes in order of evidence for differential expression. However, no one statistic is universally optimal and there is seldom any basis or guidance that can direct toward a particular statistic of choice. Results: Our new approach, which addresses both ranking and selection of differentially expressed genes, integrates differing statistics via a distance synthesis scheme. Using a set of (Affymetrix) spike-in datasets, in which differentially expressed genes are known, we demonstrate that our method compares favorably with the best individual statistics, while achieving robustness properties lacked by the individual statistics. We further evaluate performance on one other microarray study.
Availability: The approach is implemented in an R package called DEDS, which is available for download from the Bioconductor website (http://www.bioconductor.org/). Contact: [email protected] --- paper_title: Bayesian neural network approaches to ovarian cancer identification from high-resolution mass spectrometry data paper_content: Motivation: The classification of high-dimensional data is always a challenge to statistical machine learning. We propose a novel method named shallow feature selection that assigns each feature a probability of being selected based on the structure of training data itself. Independent of particular classifiers, the high dimension of biodata can be fleetly reduced to an applicable case for consequential processing. Moreover, to improve both efficiency and performance of classification, these prior probabilities are further used to specify the distributions of top-level hyperparameters in hierarchical models of Bayesian neural network (BNN), as well as the parameters in Gaussian process models. Results: Three BNN approaches were derived and then applied to identify ovarian cancer from NCI's high-resolution mass spectrometry data, which yielded an excellent performance in 1000 independent k-fold cross validations (k = 2,...,10). For instance, indices of average sensitivity and specificity of 98.56 and 98.42%, respectively, were achieved in the 2-fold cross validations. Furthermore, only one control and one cancer were misclassified in the leave-one-out cross validation. Some other popular classifiers were also tested for comparison. Availability: The programs implemented in MatLab, R and Neal's fbm.2004-11-10. Contact: [email protected] --- paper_title: Application of a genetic algorithm — support vector machine hybrid for prediction of clinical phenotypes based on genome-wide SNP profiles of sib pairs paper_content: Large-scale genome-wide genetic profiling using markers of single nucleotide polymorphisms (SNPs) has offered the opportunities to investigate the possibility of using those biomarkers for predicting genetic risks. Because of the special data structure characterized with a high dimension, signal-to-noise ratio and correlations between genes, but with a relatively small sample size, the data analysis needs special strategies. We propose a robust data reduction technique based on a hybrid between genetic algorithm and support vector machine. The major goal of this hybridization is to fully exploit their respective merits (e.g., robustness to the size of solution space and capability of handling a very large dimension of features) for identification of key SNP features for risk prediction. We have applied the approach to the Genetic Analysis Workshop 14 COGA data to predict affection status of a sib pair based on genome-wide SNP identical-by-descent (IBD) informatics. This application has demonstrated its potential to extract useful information from the massive SNP data. --- paper_title: BNTagger: improved tagging SNP selection using Bayesian networks paper_content: Genetic variation analysis holds much promise as a basis for disease-gene association. However, due to the tremendous number of candidate single nucleotide polymorphisms (SNPs), there is a clear need to expedite genotyping by selecting and considering only a subset of all SNPs. This process is known as tagging SNP selection. Several methods for tagging SNP selection have been proposed, and have shown promising results.
However, most of them rely on strong assumptions such as prior block-partitioning, bi-allelic SNPs, or a fixed number or location of tagging SNPs. We introduce BNTagger, a new method for tagging SNP selection, based on conditional independence among SNPs. Using the formalism of Bayesian networks (BNs), our system aims to select a subset of independent and highly predictive SNPs. Similar to previous prediction-based methods, we aim to maximize the prediction accuracy of tagging SNPs, but unlike them, we neither fix the number nor the location of predictive tagging SNPs, nor require SNPs to be bi-allelic. In addition, for newly-genotyped samples, BNTagger directly uses genotype data as input, while producing as output haplotype data of all SNPs. Using three public datasets, we compare the prediction performance of our method to that of three state-of-the-art tagging SNP selection methods. The results demonstrate that our method consistently improves upon previous methods in terms of prediction accuracy. Moreover, our method retains its good performance even when a very small number of tagging SNPs are used. Contact: [email protected], [email protected] --- paper_title: Combining Functional and Linkage Disequilibrium Information in the Selection of Tag SNPs paper_content: Summary: We have developed an online program, WCLUSTAG, for tag SNP selection that allows the user to specify variable tagging thresholds for different SNPs. Tag SNPs are selected such that a SNP with user-specified tagging threshold C will have a minimum R2 of C with at least one tag SNP. This flexible feature is useful for researchers who wish to prioritize genomic regions or SNPs in an association study. Availability: The online WCLUSTAG program is available at http://bioinfo.hku.hk/wclustag/ Contact: [email protected] --- paper_title: The structure of haplotype blocks in the human genome. paper_content: Haplotype-based methods offer a powerful approach to disease gene mapping, based on the association between causal mutations and the ancestral haplotypes on which they arose. As part of The SNP Consortium Allele Frequency Projects, we characterized haplotype patterns across 51 autosomal regions (spanning 13 megabases of the human genome) in samples from Africa, Europe, and Asia. We show that the human genome can be parsed objectively into haplotype blocks: sizable regions over which there is little evidence for historical recombination and within which only a few common haplotypes are observed. The boundaries of blocks and specific haplotypes they contain are highly correlated across populations. We demonstrate that such haplotype frameworks provide substantial statistical power in association studies of common genetic variation across each region. Our results provide a foundation for the construction of a haplotype map of the human genome, facilitating comprehensive genetic association studies of human disease. --- paper_title: Tumor Classification based on DNA Copy Number Aberrations Determined using SNP Arrays paper_content: High-density single nucleotide polymorphism (SNP) array is a recently introduced technology that genotypes more than 10,000 human SNPs on a single array. It has been shown that SNP arrays can be used to determine not only SNP genotype calls, but also DNA copy number (DCN) aberrations, which are common in solid tumors. In the past, effective cancer classification has been demonstrated using microarray gene expression data, or DCN data derived from comparative genomic hybridization (CGH) arrays.
However, the feasibility of cancer classification based on DCN aberrations determined by SNP arrays has not been previously investigated. In this study, we address this issue by applying state-of-the-art classification algorithms and feature selection algorithms to the DCN aberration data derived from a public SNP array dataset. Performance was measured via leave-one-out cross-validation (LOOCV) classification accuracy. Experimental results showed that the maximum accuracy was 73.33%, which is comparable to the maximum accuracy of 76.5% based on CGH-derived DCN data reported previously in the literature. These results suggest that DCN aberration data derived from SNP arrays is useful for etiology-based tumor classification. --- paper_title: Selecting a Maximally Informative Set of Single-Nucleotide Polymorphisms for Association Analyses Using Linkage Disequilibrium paper_content: Common genetic polymorphisms may explain a portion of the heritable risk for common diseases. Within candidate genes, the number of common polymorphisms is finite, but direct assay of all existing common polymorphism is inefficient, because genotypes at many of these sites are strongly correlated. Thus, it is not necessary to assay all common variants if the patterns of allelic association between common variants can be described. We have developed an algorithm to select the maximally informative set of common single-nucleotide polymorphisms (tagSNPs) to assay in candidate-gene association studies, such that all known common polymorphisms either are directly assayed or exceed a threshold level of association with a tagSNP. The algorithm is based on the r(2) linkage disequilibrium (LD) statistic, because r(2) is directly related to statistical power to detect disease associations with unassayed sites. We show that, at a relatively stringent r(2) threshold (r2>0.8), the LD-selected tagSNPs resolve >80% of all haplotypes across a set of 100 candidate genes, regardless of recombination, and tag specific haplotypes and clades of related haplotypes in nonrecombinant regions. Thus, if the patterns of common variation are described for a candidate gene, analysis of the tagSNP set can comprehensively interrogate for main effects from common functional variation. We demonstrate that, although common variation tends to be shared between populations, tagSNPs should be selected separately for populations with different ancestries. --- paper_title: Finding Haplotype Tagging SNPs by Use of Principal Components Analysis paper_content: The immense volume and rapid growth of human genomic data, especially single nucleotide polymorphisms (SNPs), present special challenges for both biomedical researchers and automatic algorithms. One such challenge is to select an optimal subset of SNPs, commonly referred as “haplotype tagging SNPs” (htSNPs), to capture most of the haplotype diversity of each haplotype block or gene-specific region. This information-reduction process facilitates cost-effective genotyping and, subsequently, genotype-phenotype association studies. It also has implications for assessing the risk of identifying research subjects on the basis of SNP information deposited in public domain databases. We have investigated methods for selecting htSNPs by use of principal components analysis (PCA). These methods first identify eigenSNPs and then map them to actual SNPs. 
We evaluated two mapping strategies, greedy discard and varimax rotation, by assessing the ability of the selected htSNPs to reconstruct genotypes of non-htSNPs. We also compared these methods with two other htSNP finders, one of which is PCA based. We applied these methods to three experimental data sets and found that the PCA-based methods tend to select the smallest set of htSNPs to achieve a 90% reconstruction precision. --- paper_title: Data mining and genetic algorithm based gene / SNP selection paper_content: Objective: Genomic studies provide large volumes of data with the number of single nucleotide polymorphisms (SNPs) ranging into thousands. The analysis of SNPs permits determining relationships between genotypic and phenotypic information as well as the identification of SNPs related to a disease. The growing wealth of information and advances in biology call for the development of approaches for discovery of new knowledge. One such area is the identification of gene/SNP patterns impacting cure/drug development for various diseases. Methods: A new approach for predicting drug effectiveness is presented. The approach is based on data mining and genetic algorithms. A global search mechanism, weighted decision tree, decision-tree-based wrapper, a correlation-based heuristic, and the identification of intersecting feature sets are employed for selecting significant genes. Results: The feature selection approach has resulted in 85% reduction of number of features. The relative increase in cross-validation accuracy and specificity for the significant gene/SNP set was 10% and 3.2%, respectively. Conclusion: The feature selection approach was successfully applied to data sets for drug and placebo subjects. The number of features has been significantly reduced while the quality of knowledge was enhanced. The feature set intersection approach provided the most significant genes/SNPs. The results reported in the paper discuss associations among SNPs resulting in patient-specific treatment protocols. --- paper_title: CHOISS for selection of single nucleotide polymorphism markers on interval regularity paper_content: Summary: We developed algorithms that find a set of single nucleotide polymorphism (SNP) markers based on interval regularity, given either the number of SNPs to choose (m) or the desired interval (I), subject to minimum variance or minimum sum of squared deviations from I. In both cases, the number of all possible sets increases exponentially with respect to the number of input SNPs (n), but our algorithms find the minima only with O(n2) calculations and comparisons by elimination of redundancy. ::: ::: Availability: A Windows executable program CHOISS is freely available at http://biochem.kaist.ac.kr/choiss.htm ::: ::: Supplementary information: http://biochem.kaist.ac.kr/choiss.htm --- paper_title: High-resolution haplotype structure in the human genome paper_content: Linkage disequilibrium (LD) analysis is traditionally based on individual genetic markers and often yields an erratic, non-monotonic picture, because the power to detect allelic associations depends on specific properties of each marker, such as frequency and population history. Ideally, LD analysis should be based directly on the underlying haplotype structure of the human genome, but this structure has remained poorly understood. 
Here we report a high-resolution analysis of the haplotype structure across 500 kilobases on chromosome 5q31 using 103 single-nucleotide polymorphisms (SNPs) in a European-derived population. The results show a picture of discrete haplotype blocks (of tens to hundreds of kilobases), each with limited diversity punctuated by apparent sites of recombination. In addition, we develop an analytical model for LD mapping based on such haplotype blocks. If our observed structure is general (and published data suggest that it may be), it offers a coherent framework for creating a haplotype map of the human genome. --- paper_title: MLR-Tagging: Informative SNP Selection for Unphased Genotypes Based on Multiple Linear Regression paper_content: The search for the association between complex diseases and single nucleotide polymorphisms (SNPs) or haplotypes has recently received great attention. For these studies, it is essential to use a small subset of informative SNPs accurately representing the rest of the SNPs. Informative SNP selection can achieve (1) considerable budget savings by genotyping only a limited number of SNPs and computationally inferring all other SNPs or (2) necessary reduction of the huge SNP sets (obtained, e.g. from Affymetrix) for further fine haplotype analysis. A novel informative SNP selection method for unphased genotype data based on multiple linear regression (MLR) is implemented in the software package MLR-tagging. This software can be used for informative SNP (tag) selection and genotype prediction. The stepwise tag selection algorithm (STSA) selects positions of the given number of informative SNPs based on a genotype sample population. The MLR SNP prediction algorithm predicts a complete genotype based on the values of its informative SNPs, their positions among all SNPs, and a sample of complete genotypes. An extensive experimental study on various datasets including 10 regions from HapMap shows that the MLR prediction combined with stepwise tag selection uses fewer tags than the state-of-the-art method of Halperin et al. (2005). Availability: MLR-Tagging software package is publicly available at http://alla.cs.gsu.edu/~software/tagging/tagging.html --- paper_title: Large-scale ensemble decision analysis of sib-pair IBD profiles for identification of the relevant molecular signatures for alcoholism paper_content: The large-scale genome-wide SNP data being acquired from biomedical domains have offered resources to evaluate modern data mining techniques in applications to genetic studies. The purpose of this study is to extend our recently developed gene mining approach to extracting the relevant SNPs for alcoholism using sib-pair IBD profiles of pedigrees. Application to a publicly available large dataset of 100 simulated replicates for three American populations demonstrates that the proposed ensemble decision approach has successfully identified most of the simulated true loci, thus indicating that the IBD statistic could be used as one of the informatics for mining the genetic underpinnings of complex human diseases. --- paper_title: Literature mining for the biologist: from information retrieval to biological discovery paper_content: For the average biologist, hands-on literature mining currently means a keyword search in PubMed.
However, methods for extracting biomedical facts from the scientific literature have improved considerably, and the associated tools will probably soon be used in many laboratories to automatically annotate and analyse the growing number of system-wide experimental data sets. Owing to the increasing body of text and the open-access policies of many journals, literature mining is also becoming useful for both hypothesis generation and biological discovery. However, the latter will require the integration of literature and high-throughput data, which should encourage close collaborations between biologists and computational linguists. --- paper_title: An extensive empirical study of feature selection metrics for text classification paper_content: Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g. Information Gain) evaluated on a benchmark of 229 text classification problem instances that were gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal perspectives-accuracy, F-measure, precision, and recall-since each is appropriate in different situations. The results reveal that a new feature selection metric we call 'Bi-Normal Separation' (BNS), outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text classification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focuses on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspective, BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a member of the pair---e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin. --- paper_title: Combining NLP and probabilistic categorisation for document and term selection for Swiss-Prot medical annotation paper_content: Motivation: Searching relevant publications for manual database annotation is a tedious task. In this paper, we apply a combination of Natural Language Processing (NLP) and probabilistic classification to re-rank documents returned by PubMed according to their relevance to SwissProt annotation, and to identify significant terms in the documents. ---
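The text-classification study cited above ranks terms with Bi-Normal Separation (BNS), the absolute gap between the inverse normal CDF of a term's true-positive rate and of its false-positive rate. The sketch below is a minimal illustration of that scoring rule; the toy document-term matrix and the 0.0005 clipping floor are assumptions, not details taken from the paper.

```python
# Illustrative sketch of Bi-Normal Separation (BNS) ranking for binary term features.
import numpy as np
from scipy.stats import norm

def bns_scores(X, y):
    """X: (docs x terms) binary presence matrix, y: 0/1 class labels."""
    pos, neg = (y == 1), (y == 0)
    tpr = X[pos].mean(axis=0)                 # P(term present | positive docs)
    fpr = X[neg].mean(axis=0)                 # P(term present | negative docs)
    tpr = np.clip(tpr, 0.0005, 1 - 0.0005)    # keep the inverse normal CDF finite
    fpr = np.clip(fpr, 0.0005, 1 - 0.0005)
    return np.abs(norm.ppf(tpr) - norm.ppf(fpr))

rng = np.random.default_rng(0)
X = (rng.random((100, 500)) < 0.1).astype(int)    # toy document-term matrix
y = rng.integers(0, 2, 100)                       # toy class labels
top_terms = np.argsort(-bns_scores(X, y))[:20]    # indices of the highest-BNS terms
print(top_terms)
```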
Title: A Review of Feature Selection Techniques in Bioinformatics
Section 1: Introduction
Description 1: Provide an overview of the importance and growing necessity of feature selection techniques in bioinformatics, emphasizing their advantages and the focus on supervised learning.
Section 2: Feature Selection Techniques
Description 2: Discuss the main categories of feature selection techniques, namely filter methods, wrapper methods, and embedded methods, including their advantages, disadvantages, and examples (a brief illustrative sketch of these three families follows this outline).
Section 3: Feature Selection for Sequence Analysis
Description 3: Detail the application of feature selection techniques in sequence analysis, distinguishing between content analysis and signal analysis, with examples of various methodologies used.
Section 4: Content Analysis
Description 4: Describe feature selection techniques used in predicting protein-coding subsequences and protein function from sequences, including specific methods and their applications.
Section 5: Signal Analysis
Description 5: Explain feature selection in the context of identifying important motifs and structural elements within sequences, with examples of methodologies and their application.
Section 6: Feature Selection for Microarray Analysis
Description 6: Highlight the challenges of microarray data analysis and the role of feature selection techniques, detailing various methods used and their suitability to the data characteristics.
Section 7: Mass Spectra Analysis
Description 7: Discuss the emerging role of feature selection techniques in mass spectrometry data analysis, including popular methods, challenges, and specific examples of techniques used.
Section 8: Dealing with Small Sample Domains
Description 8: Address the challenges posed by small sample sizes in bioinformatics and discuss initiatives for robust evaluation and feature selection approaches in such scenarios.
Section 9: Single Nucleotide Polymorphism Analysis
Description 9: Outline feature selection techniques for SNP analysis, focusing on methods for selecting informative subsets for disease-gene association studies.
Section 10: Text and Literature Mining
Description 10: Explore the application of feature selection techniques in text and literature mining within the biomedical domain, providing examples and methodologies used.
Section 11: Feature Selection Software Packages
Description 11: Provide an overview of existing software packages available for feature selection in various bioinformatics applications, including their main references and implementation details.
Section 12: Conclusions and Future Perspectives
Description 12: Summarize the review, highlighting common challenges and opportunities for future research in feature selection techniques in bioinformatics.
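As a companion to Section 2 of the outline, the following toy sketch contrasts one representative of each selector family (filter, wrapper, embedded) using scikit-learn. The data set, the feature counts, and the particular estimators chosen are illustrative assumptions, not selections endorsed by the survey.

```python
# Toy contrast of the three selector families named in Section 2 (filter, wrapper, embedded).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, n_features=100, n_informative=10, random_state=1)

filt = SelectKBest(f_classif, k=10).fit(X, y)                    # filter: univariate ANOVA F-score
wrap = SequentialFeatureSelector(KNeighborsClassifier(),         # wrapper: greedy forward search
                                 n_features_to_select=10, cv=3).fit(X, y)
embed = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
                        max_features=10).fit(X, y)               # embedded: L1-regularized model

for name, sel in [("filter", filt), ("wrapper", wrap), ("embedded", embed)]:
    print(name, "selected feature indices:", sorted(sel.get_support(indices=True)))
```

The filter scores each feature independently of any classifier, the wrapper repeatedly retrains a classifier on candidate subsets, and the embedded method obtains the subset as a by-product of fitting a sparse model, mirroring the trade-offs discussed in that section.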
Multicast Routing Protocols in Mobile Ad Hoc Networks: A Comparative Survey and Taxonomy
9
--- paper_title: Host extensions for IP multicasting paper_content: This memo specifies the extensions required of a host implementation ::: of the Internet Protocol (IP) to support multicasting. Recommended ::: procedure for IP multicasting in the Internet. This RFC obsoletes RFCs ::: 998 and 1054. [STANDARDS-TRACK] --- paper_title: Ad hoc mobile wireless networks : protocols and systems paper_content: About The Author. Preface. Acknowledgments. Quotes & Words of Wisdom. 1. Introduction to Wireless Networks. Evolution of Mobile Cellular Networks. Global System for Mobile Communications (GSM). General Packet Radio Service (GPRS). Personal Communications Services (PCSs). Wireless LANs (WLANS). Universal Mobile Telecommunications System (UMTS). IMT2000. IS-95, cdmaOne and cdma2000 Evolution. Organization of this Book. 2. Origins Of Ad Hoc: Packet Radio Networks. Introduction. Technical Challenges. Architecture of PRNETs. Components of Packet Radios. Routing in PRNETs. Route Calculation. Pacing Techniques. Media Access in PRNETs. Flow Acknowledgments in PRNETs. Conclusions. 3. Ad Hoc Wireless Networks. What Is an Ad Hoc Network? Heterogeneity in Mobile Devices. Wireless Sensor Networks. Traffic Profiles. Types of Ad Hoc Mobile Communications. Types of Mobile Host Movements. Challenges Facing Ad Hoc Mobile Networks. Conclusions. 4. Ad Hoc Wireless Media Access Protocols. Introduction. Problems in Ad Hoc Channel Access. Receiver-Initiated MAC Protocols. Sender-Initiated MAC Protocols. Existing Ad Hoc MAC Protocols. MARCH: Media Access with Reduced Handshake. Conclusions. 5. Overview of Ad Hoc Routing Protocols. Table-Driven Approaches. Destination Sequenced Distance Vector (DSDV). Wireless Routing Protocol (WRP). Cluster Switch Gateway Routing (CSGR). Source-Initiated On-Demand Approaches. Ad Hoc On-Demand Distance Vector Routing (AODV). Dynamic Source Routing (DSR). Temporally Ordered Routing Algorithm (TORA). Signal Stability Routing (SSR). Location-Aided Routing (LAR). Power-Aware Routing (PAR). Zone Routing Protocol (ZRP). Source Tree Adaptive Routing (STAR). Relative Distance Microdiversity Routing (RDMAR). Conclusions. 6. Associativity-Based Long-Lived Routing. A New Routing Paradigm. Associativity-Based Long-Lived Routing. ABR Protocol Description. Conclusions. 7. Implementation Of Ad Hoc Mobile Networks. Introduction. ABR Protocol Implementation in Linux. Experimentation and Protocol Performance. Important Deductions. Conclusions. 8. Communication Performance of Ad Hoc Networks. Introduction. Performance Parameters of Interest. Route Discovery (RD) Time. End-to-End Delay (EED) Performance. Communication Throughput Performance. Packet Loss Performance. Route Reconfiguration/Repair Time. TCP/IP-Based Applications. Conclusions. 9. Energy Conservation: Power Life Issues. Introduction. Power Management. Advances in Device Power Management. Advances in Protocol Power Management. Power Conservation by Mobile Applications. Periodic Beaconing On Battery Life. Standalone Beaconing. HF Beaconing with Neighboring Nodes. Comparison of HF Beaconing with and without Neighbors. LF Beaconing with Neighboring Nodes. Comparison of LF Beaconing with and without Neighbors. Deductions. Conclusions. 10. Ad Hoc Wireless Multicast Routing. Multicasting in Wired Networks. Multicast Routing in Mobile Ad Hoc Networks. Existing Ad Hoc Multicast Routing Protocols. ABAM: Associativity-Based Ad Hoc Multicast. Comparisons of Multicast Routing Protocols. Conclusions. 11. TCP Over Ad Hoc. Introduction to TCP. 
Versions of TCP. Problems Facing TCP in Wireless Last-Hop. Problems Facing TCP in Wireless Ad Hoc. Approaches to TCP over Ad Hoc. Conclusion. 12. Internet & Ad Hoc Service Discovery. Resource Discovery in the Internet. Service Location Protocol (SLP) Architecture. SLPv2 Packet Format. Jini. Salutation Protocol. Simple Service Discovery Protocol (SSDP). Service Discovery for Ad Hoc. Ad Hoc Service Location Architectures. Conclusions. 13. BLUETOOTH TECHNOLOGY. Bluetooth Specifications. Bluetooth Architectures. Bluetooth Protocols. Bluetooth Service Discovery. Bluetooth MAC. Bluetooth Packet Structure. Bluetooth Audio. Bluetooth Addressing. Bluetooth Limitations. Bluetooth Implementation. Conclusions. 14. WIRELESS APPLICATION PROTOCOL (WAP). The WAP Forum. The WAP Service Model. The WAP Protocol Architecture. The WWW Programming Model. The WAP Programming Model. Conclusions. 15. Ad Hoc Nomadic Mobile Applications. In the Office. While Traveling. Arriving Home. In the Car. Shopping Malls. The Modern Battlefield. Car-to-Car Mobile Communications. Mobile Collaborative Applications. Location/Context Based Mobile Services. Conclusions. 16. Conclusions and The Future. Pervasive Computing. Motorola PIANO Project. UC Berkeley Sensor Networks: Smart Dust. EPFL Terminodes/Large-Scale Networks. 802.15 PANs and 802.16 Wireless MANs. Ad Hoc Everywhere? Glossary of Terms. References. Index. --- paper_title: Shared tree wireless network multicast paper_content: In this paper we propose a multicast protocol for a multihop, mobile wireless network with cluster based routing and token access protocol within each cluster. The multicast protocol uses a shared tree which is dynamically updated to adjust to changes in topology and membership (i.e. dynamic joins and quits). Two options for tree maintenance have been simulated and evaluated: "hard state" (i.e. each connection must be explicitly cleared) and "soft state" (each connection is automatically timed out and must be refreshed). For the soft state policy, the performance of different choices of timeout and refresh timers is first analyzed for a range of node mobility values. Next, soft state and hard state policies are compared based on throughput, join delay, and control overhead criteria. --- paper_title: Routing and multicast in multihop, mobile wireless networks paper_content: In this paper we present a multicast protocol which builds upon a cluster based wireless network infrastructure. First, we introduce the network infrastructure which includes several innovative features such as: minimum change cluster formation; dynamic priority token access protocol, and distributed hierarchical routing. Then, for this infrastructure we propose a multicast protocol which is inspired by the core based tree approach developed for the Internet. We show that the multicast protocol is robust to mobility, has low bandwidth overhead and latency, scales well with membership group size, and can be generalized to other wireless infrastructures. --- paper_title: Tree Multicast Strategies in Mobile, Multihop Wireless Networks paper_content: Tree multicast is a well established concept in wired networks. Two versions, per-source tree multicast (e.g., DVMRP) and shared tree multicast (e.g., Core Based Tree), account for the majority of the wireline implementations. In this paper, we extend the tree multicast concept to wireless, mobile, multihop networks for applications ranging from ad hoc networking to disaster recovery and battlefield. 
The main challenge in wireless, mobile networks is the rapidly changing environment. We address this issue in our design by: (a) using “soft state”; (b) assigning different roles to nodes depending on their mobility (2-level mobility model); (c) proposing an adaptive scheme which combines shared tree and per-source tree benefits, and (d) dynamically relocating the shared tree Rendezvous Point (RP). A detailed wireless simulation model is used to evaluate various multicast schemes. The results show that per-source trees perform better in heavy loads because of the more efficient traffic distribution; while shared trees are more robust to mobility and are more scalable to large network sizes. The adaptive tree multicast scheme, a hybrid between shared tree and per-source tree, combines the advantages of both and performs consistently well across all load and mobility scenarios. The main contributions of this study are: the use of a 2-level mobility model to improve the stability of the shared tree, the development of a hybrid, adaptive per-source and shared tree scheme, and the dynamic relocation of the RP in the shared tree. --- paper_title: Wireless Network Multicasting paper_content: Wireless networks provide mobile users with ubiquitous communicating capability and information access regardless of location. Conventional ground radio networks are the "last hop" extension of a wireline network, thus supporting only single hop communications within a "cell". In this dissertation we address a novel type of wireless networks called "multihop" networks. As a difference from "single hop" (i.e., cellular) networks which require fixed base stations inter-connected by a wired backbone, multihop networks have no fixed base stations nor a wired backbone. The main application for mobile wireless multihopping is rapid deployment and dynamic reconfiguration. When the wireline network is not available, as in battlefield communications and search and rescue operations, multihop wireless networks provide the only feasible means for ground communications and information access. Multihopping poses several new challenges in the design of wireless network protocols. We focus on multicasting in this thesis. The multicast service is critical in applications characterized by the close collaboration of teams (e.g., rescue patrol, battalion, scientists, etc.) with audio/video conferencing requirements and sharing of text and images. Multicasting in a multihop wireless network is much more complex than in cellular wireless networks where all mobiles in a cell can be reached in a single hop. In fact, one or more multicast structures (e.g., trees) are maintained in the multihop network to efficiently deliver packets from sources to destinations in the multicast group. Multicast solutions similar to those used in mesh wireline networks such as the Internet might be considered. Yet, these solutions are not directly applicable to wireless networks because of the mobility of the users and the dynamically changing topology. In this dissertation we evaluate various popular multicast protocols via simulations and propose new protocols which are well suited for multihop networks. This dissertation mainly covers five areas: (1) Cluster-Token infrastructure and cluster routing; (2) Shared tree wireless multicast routing protocols; (3) Wireless multicast routing without Rendezvous Points; (4) On-demand wireless multicast; (5) Reliable wireless multicast.
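Several of the shared-tree schemes described above keep multicast forwarding state as "soft state" that is installed by periodic join refreshes and silently expires when refreshes stop. The sketch below is a minimal, generic illustration of that mechanism; the class, timer values, and node names are invented for the example and do not come from any of the cited protocols.

```python
# Minimal illustration of soft-state multicast forwarding entries: state is installed
# by periodic JOIN refreshes and quietly times out if a member (or link) disappears.
import time

REFRESH_PERIOD = 3.0      # seconds between JOIN refreshes sent by group members (assumed)
STATE_LIFETIME = 10.0     # forwarding state expires if not refreshed within this window (assumed)

class SoftStateTable:
    def __init__(self):
        self.entries = {}                      # (group, downstream_node) -> expiry time

    def refresh(self, group, downstream):      # called whenever a JOIN/refresh arrives
        self.entries[(group, downstream)] = time.time() + STATE_LIFETIME

    def purge(self):                           # drop entries whose timer has run out
        now = time.time()
        self.entries = {k: t for k, t in self.entries.items() if t > now}

    def forward_list(self, group):             # downstream nodes still considered members
        self.purge()
        return [node for (g, node), _ in self.entries.items() if g == group]

table = SoftStateTable()
table.refresh("224.1.1.1", "node7")
print(table.forward_list("224.1.1.1"))         # ['node7'] until the state times out
```

No explicit teardown message is needed: when a mobile member moves away or dies, its entry simply ages out, which is why the abstracts above describe soft state as robust to rapid topology change.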
--- paper_title: Adaptive shared tree multicast in mobile wireless networks paper_content: Shared tree multicast is a well established concept used in several multicast protocols for wireline networks (e.g. core base tree, PIM sparse mode etc). In this paper, we extend the shared tree concept to wireless, mobile, multihop networks for applications ranging from ad hoc networking to disaster recovery and battlefield. The main challenge in wireless, mobile networks is the rapidly changing environment. We address this issue in our design by: (a) using "soft state"; (b) assigning different roles to nodes depending on their mobility (two level mobility model); (c) proposing an adaptive scheme which combines shared tree and source tree benefits. A detailed wireless simulation model is used to evaluate the proposed schemes and compare them with source based tree (as opposed to shared tree) multicast. The results show that shared tree protocols have low overhead and are very robust to mobility. --- paper_title: Handbook of Wireless Networks and Mobile Computing paper_content: If you want to get Handbook of Internet Computing pdf eBook copy write by good Handbook of Wireless Networks and Mobile Computing Google Books. Mobile Computing General. Handbook of Algorithms for Wireless Networking and Mobile Computing by Azzedine Boukerche (Editor). Call Number: TK 5103.2. CITS4419 Mobile and Wireless Computing software projects related to wireless networks, (2) write technical reports and documentation for complex computer. --- paper_title: Ad Hoc Wireless Networks: Architectures and Protocols paper_content: Practical design and performance solutions for every ad hoc wireless network. Ad Hoc Wireless Networks comprise mobile devices that use wireless transmission for communication. They can be set up anywhere and any time because they eliminate the complexities of infrastructure setup and central administration-and they have enormous commercial and military potential. Now, there's a book that addresses every major issue related to their design and performance. Ad Hoc Wireless Networks: Architectures and Protocols presents state-of-the-art techniques and solutions, and supports them with easy-to-understand examples. The book starts off with the fundamentals of wireless networking (wireless PANs, LANs, MANs, WANs, and wireless Internet) and goes on to address such current topics as Wi-Fi networks, optical wireless networks, and hybrid wireless architectures. Coverage includes: medium access control, routing, multicasting, and transport protocols; QoS provisioning, energy management, security, multihop pricing, and much more; in-depth discussion of wireless sensor networks and ultra wideband technology; more than 200 examples and end-of-chapter problems. Ad Hoc Wireless Networks is an invaluable resource for every network engineer, technical manager, and researcher designing or building ad hoc wireless networks. --- paper_title: Multicasting techniques in mobile ad hoc networks paper_content: This chapter gives a general survey of multicast protocols in mobile ad hoc networks (MANETs). After giving a brief summary of two multicast protocols in wired networks -- shortest path multicast tree protocol and core-based tree multicast protocol -- we point out limitations of these protocols when they are applied in the highly dynamic environment of MANETs. Four multicast protocols -- On-Demand Multicast Routing Protocol (ODMRP), Multicast Ad Hoc On-Demand Distance Vector Routing Protocol (Multicast AODV), Forwarding Group Multicast Protocol (FGMP), and Core-Assisted Mesh Protocol -- are discussed in detail with a focus on how the limitations of multicast protocols in wired networks are overcome. A brief overview of other multicast protocols in MANETs is provided. The chapter ends with two important related issues: QoS multicast and reliable multicast in MANETs.
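Mesh-based schemes such as ODMRP, summarized in the chapter abstract above, build a forwarding group by flooding a join query from the source and letting receivers answer along the reverse paths; nodes on those reverse paths set a forwarding flag and relay data for the group. The sketch below illustrates only that reverse-path idea on a static toy topology; it is not an ODMRP implementation, and the graph and node names are made up.

```python
# Rough sketch of the forwarding-group idea behind mesh protocols such as ODMRP.
from collections import deque

adjacency = {                       # toy ad hoc topology (bidirectional links, assumed)
    "S": ["A", "B"], "A": ["S", "C"], "B": ["S", "D"],
    "C": ["A", "R1"], "D": ["B", "R2"], "R1": ["C"], "R2": ["D"],
}
receivers = {"R1", "R2"}

def flood_join_query(source):
    """BFS flood recording each node's upstream neighbour (reverse path to the source)."""
    upstream, queue = {source: None}, deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adjacency[node]:
            if nbr not in upstream:
                upstream[nbr] = node
                queue.append(nbr)
    return upstream

def build_forwarding_group(source):
    upstream = flood_join_query(source)
    forwarding = set()
    for r in receivers:             # JOIN REPLY travels hop by hop back to the source
        node = upstream.get(r)
        while node is not None and node != source:
            forwarding.add(node)    # intermediate node sets its forwarding-group flag
            node = upstream[node]
    return forwarding

print(build_forwarding_group("S"))  # these nodes relay data packets for the group
```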
--- paper_title: Multicast in Mobile ad hoc Networks paper_content: A method for measuring the velocity of earth surface motion utilizing coherent light energy which comprises steps of generating and directing a coherent light beam toward a selected earth surface position, reflecting said beam from a stationary reflector located at the earth surface position, and simultaneously reflecting said beam from a reflector at said earth surface position which moves with earth surface motion; and, detecting the reflected beam, including both the reflection path components, to derive an instantaneous difference frequency that is proportional to the velocity of the earth surface motion. --- paper_title: Supporting multicasting in mobile ad‐hoc wireless networks: issues, challenges, and current protocols paper_content: The basic philosophy of personal communication services is to provide user-to-user, location independent communication services. The emerging group communication wireless applications, such as multipoint data dissemination and multiparty conferencing tools have made the design and development of efficient multicast techniques in mobile ad-hoc networking environments a necessity and not just a desire. Multicast protocols in mobile ad-hoc networks have been an area of active research for the past couple of years. This paper summarizes the activities and recent advances in this work-in-progress area by identifying the main issues and challenges that multicast protocols are facing in mobile ad-hoc networking environments, and by surveying several existing multicasting protocols. This article presents a classification of the current multicast protocols, discusses the functionality of the individual existing protocols, and provides a qualitative comparison of their characteristics according to several distinct features and performance parameters. Furthermore, since many of the additional issues and constraints associated with the mobile ad-hoc networks are due, to a large extent, to the attribute of user mobility, we also present an overview of research and development efforts in the area of group mobility modeling in mobile ad-hoc networks. Copyright © 2001 John Wiley & Sons, Ltd. --- paper_title: Effective location-guided tree construction algorithms for small group multicast in MANET paper_content: Group communication has become increasingly important in mobile ad hoc networks (MANET).
Current multicast routing protocols in MANET have a large overhead due to the dynamic network topology. To overcome this problem, there is a recent shift towards stateless multicast in small groups. We introduce a small group multicast scheme, based on packet encapsulation, which uses novel packet distribution tree construction algorithms for efficient data delivery. The tree is constructed with the goal of minimizing the overall bandwidth cost of the tree. Two construction algorithms, for a location-guided k-ary (LGK) tree and a location-guided Steiner (LGS) tree, utilize the geometric locations of the destination nodes as heuristics to compute the trees. They are accompanied by a hybrid location update mechanism to disseminate location information among a group of nodes. Our simulation results show that the LGS tree has lower bandwidth cost than the LGK tree when the location information of the nodes is up-to-date, and its cost is similar to that of an optimal Steiner multicast tree. When location information of the nodes is out-dated, the LGK tree outperforms the LGS tree due to its lower computational complexity. --- paper_title: DESIRE: Density Aware Heterogenous Overlay Multicast Forwarding Scheme in Mobile Ad Hoc Networks paper_content: Proposes a density-aware overlay multicast forwarding scheme for heterogeneous mobile ad hoc networks. --- paper_title: Secure group communications using key graphs paper_content: Many emerging applications (e.g., teleconference, real-time information services, pay per view, distributed interactive simulation, and collaborative work) are based upon a group communications model, i.e., they require packet delivery from one or more authorized senders to a very large number of authorized receivers. As a result, securing group communications (i.e., providing confidentiality, integrity, and authenticity of messages delivered between group members) will become a critical networking issue. In this paper, we present a novel solution to the scalability problem of group/multicast key management. We formalize the notion of a secure group as a triple (U,K,R) where U denotes a set of users, K a set of keys held by the users, and R a user-key relation. We then introduce key graphs to specify secure groups. For a special class of key graphs, we present three strategies for securely distributing rekey messages after a join/leave, and specify protocols for joining and leaving a secure group. The rekeying strategies and join/leave protocols are implemented in a prototype group key server we have built. We present measurement results from experiments and discuss performance comparisons. We show that our group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves.
In particular, the average measured processing time per join/leave increases linearly with the logarithm of group size. --- paper_title: On-demand overlay multicast in mobile ad hoc networks paper_content: This paper presents the on-demand overlay multicast protocol (ODOMP), a novel approach for multicast data distribution in mobile ad hoc networks. ODOMP is a reactive protocol which creates an overlay among the group members on-demand. The created overlay is a source rooted tree which connects the group members via IP-in-IP tunnels. Routing is done by an arbitrary underlying unicast routing protocol. A novel creation mechanism allows ODOMP to create an efficient overlay with low communication overhead. The performance of ODOMP is compared with the performance of the well known multicast routing protocol ODMRP by means of simulations. --- paper_title: Efficient overlay multicast for mobile ad hoc networks paper_content: Overlay multicast protocol builds a virtual mesh spanning all member nodes of a multicast group. It employs standard unicast routing and forwarding to fulfill multicast functionality. The advantages of this approach are robustness and low overhead. However, efficiency is an issue since the generated multicast trees are normally not optimized in terms of total link cost and data delivery delay. In this paper, we propose an efficient overlay multicast protocol to tackle this problem in MANET environment. The virtual topology gradually adapts to the changes in underlying network topology in a fully distributed manner. A novel source-based Steiner tree algorithm is proposed for constructing the multicast tree. The multicast tree is progressively adjusted according to the latest local topology information. Simulations are conducted to evaluate the tree quality. The results show that our approach solves the efficiency problem effectively. --- paper_title: Application versus network layer multicasting in ad hoc networks: The ALMA routing protocol. Ad Hoc Networks Journal paper_content: Application layer multicasting has emerged as an appealing alternative to network layer multicasting in wireline networks. Here, we examine the suitability of application layer multicast in ad hoc networks. To this effect, we propose a flexible receiver-driven overlay multicast protocol which we call Application Layer Multicast Algorithm (ALMA). ALMA constructs an overlay multicast tree in a dynamic, decentralized and incremental way. First, ALMA is receiver-driven: the member nodes find their connections according to their needs. Second, it is flexible, and thus, it can satisfy the performance goals and the needs of a wide range of applications. Third, it is highly adaptive: it reconfigures the tree in response to mobility or congestion. In addition, our protocol has the advantages of an application layer protocol: (a) simplicity of deployment, (b) independence from lower layer protocols, and (c) capability of exploiting features such as reliability and security that may be provided by the lower layers. Through extensive simulations, we show that ALMA performs favorably against the currently best application layer and network layer protocols. In more detail, we find that ALMA performs significantly better than ODMRP, a network layer, for small group sizes. We conclude that the application layer approach and ALMA seem very promising for ad hoc multicasting. 
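The key-graph entry above reports that the measured rekeying cost per join/leave grows with the logarithm of the group size. Below is a back-of-the-envelope sketch of why that holds for a balanced key tree; the degree-4 tree and the simple message counting are illustrative assumptions, not the paper's exact accounting.

```python
def rekey_cost_on_leave(group_size: int, degree: int = 4):
    """Approximate rekeying work when one member leaves an LKH-style
    balanced key tree: every key on the path from the departed leaf to
    the root is replaced, and each replacement is re-encrypted for the
    children below it."""
    height, capacity = 0, 1
    while capacity < group_size:      # smallest balanced tree covering the group
        capacity *= degree
        height += 1
    keys_replaced = height
    encryptions = height * degree     # rough upper bound on rekey messages
    return keys_replaced, encryptions

# Doubling the group size many times over only adds a constant amount of
# work per leave, which matches the logarithmic growth the entry describes.
for n in (64, 1024, 16384, 262144):
    keys, msgs = rekey_cost_on_leave(n)
    print(f"group size {n:>7}: ~{keys} keys replaced, ~{msgs} encryptions")
```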
--- paper_title: MAC Layer Multicast in Wireless Multihop Networks paper_content: Many applications in wireless ad-hoc networks require multicast communication. In order to provide efficient multicast, various multicast routing protocols have been designed in recent years to facilitate formation of routes between multicast senders and receivers. There has also been some work to develop a suitable MAC protocol to improve efficiency of multicast communication. In this work we explore some approaches for reliable multicast at the MAC layer. We develop a multicast extension of the IEEE 802.11 protocol and evaluate its performance. We have implemented our protocol in the popular ns-2 simulator and have performed experiments with a multicast routing protocol. Our approach demonstrates superior performance in terms of packet delivery fraction as well as delay compared to the IEEE 802.11 protocol. --- paper_title: Differential destination multicast-a MANET multicast routing protocol for small groups paper_content: In this paper we propose a multicast routing protocol for mobile ad hoc networks (MANETs). The protocol-termed differential destination multicast (DDM)-differs from common approaches proposed for MANET multicast routing in two ways.
--- paper_title: Scalable multicasting in mobile ad hoc networks paper_content: Many potential applications of mobile ad hoc networks (MANETs) involve group communications among the nodes. Multicasting is an useful operation that facilitates group communications. Efficient and scalable multicast routing in MANETs is a difficult issue. In addition to the conventional multicast routing algorithms, recent protocols have adopted the following new approaches: overlays, backbone-based, and stateless. In this paper, we study these approaches from the protocol state management point of view, and compare their scalability behaviors. To enhance performance and enable scalability, we have proposed a framework for hierarchical multicasting in MANET environments. Two classes of hierarchical multicasting approaches, termed as domain-based and overlay-based, are proposed. We have considered a variety of approaches that are suitable for different mobility patterns and multicast group sizes. Results obtained through simulations demonstrate enhanced performance and scalability of the proposed techniques --- paper_title: On-Demand Multicast Routing Protocol in Multihop Wireless Mobile Networks paper_content: An ad hoc network is a dynamically reconfigurable wireless network with no fixed infrastructure or central administration. Each host is mobile and must act as a router. Routing and multicasting protocols in ad hoc networks are faced with the challenge of delivering data to destinations through multihop routes in the presence of node movements and topology changes. This paper presents the On-Demand Multicast Routing Protocol (ODMRP) for wireless mobile and hoc networks. ODMRP is a mesh-based, rather than a conventional tree-based, multicast scheme and uses a forwarding group concept; only a subset of nodes forwards the multicast packets via scoped flooding. It applies on-demand procedures to dynamically build routes and maintain multicast group membership. ODMRP is well suited for ad hoc wireless networks with mobile hosts where bandwidth is limited, topology changes frequently, and power is constrained. We evaluate ODMRP performance with other multicast protocols proposed for ad hoc networks via extensive and detailed simulation. --- paper_title: ABAM: on-demand associativity-based multicast routing for ad hoc mobile networks paper_content: Multicast has emerged as a very desirable feature in communication networks. With multicast, data can be distributed to multiple recipients in an efficient and economical manner. We present a performance evaluation of a novel multicast routing strategy for a mobile ad hoc network environment, which is characterized by a highly-dynamic topology with constrained bandwidth and limited power. The associativity-based ad hoc multicast (ABAM) protocol establishes multicast session on demand and utilizes an association stability concept, which is introduced in the ABR protocol for mobile ad hoc unicast routing. The performance of ABAM is simulated and compared with a flooding-based multicast routing protocol. The results show impressive throughput and very low communication overhead. --- paper_title: Differential destination multicast-a MANET multicast routing protocol for small groups paper_content: In this paper we propose a multicast routing protocol for mobile ad hoc networks (MANETs). The protocol-termed differential destination mmulticast (DDM)-differs from common approaches proposed for MANET multicast routing in two ways. 
--- paper_title: Bandwidth-efficient multicast routing for multihop, ad-hoc wireless networks paper_content: In this paper, we propose and investigate a bandwidth-efficient multicast routing protocol for ad-hoc networks. The proposed protocol achieves low communication overhead, namely, it requires a small number of control packet transmissions for route setup and maintenance. The proposed protocol also achieves high multicast efficiency, namely, it delivers multicast packets to receivers with a small number of transmissions. In order to achieve low communication overhead and high multicast efficiency, the proposed protocol employs the following mechanisms: (1) on-demand invocation of the route setup and route recovery processes to avoid periodic transmissions of control packets, (2) a new route setup process that allows a newly joining node to find the nearest forwarding node to minimize the number of forwarding nodes, and (3) a route optimization process that detects and removes unnecessary forwarding nodes to eliminate redundant and inefficient routes. Our simulation results show that the proposed protocol achieves high multicast efficiency with low communication overhead compared with other existing multicast routing protocols, especially in the case where the number of receivers in a multicast group is large. --- paper_title: Weight based multicast routing protocol for ad hoc wireless networks paper_content: Ad hoc wireless networks are self-organizing, dynamic topology networks formed by a collection of mobile nodes through radio links. Minimal configuration, absence of infrastructure and quick deployment make them convenient for emergency situations apart from military applications. Multicasting plays a crucial role in the application of ad hoc networks. Since bandwidth is limited in ad hoc wireless networks, a multicast routing protocol must be efficient. It is well known that tree based multicast routing protocols are efficient. But, the main drawback of these protocols is that they are not robust enough to achieve high packet delivery ratios. We propose a tree based multicast protocol in which, to make the protocol robust as well as efficient, a node joins the multicast tree by taking into consideration not only the number of forwarding nodes but also the distance between the source and receiver. For tree maintenance, we have used a localized prediction scheme which results in a high packet delivery ratio.
--- paper_title: MZRP: An Extension of the Zone Routing Protocol for Multicasting in MANETs paper_content: In recent years, a variety of unicast and multicast routing protocols for Mobile Ad hoc wireless NETworks (MANETs) have been developed. The Zone Routing Protocol (ZRP) is a hybrid unicast protocol that proactively maintains routing information for a local neighborhood (routing zone), while reactively acquiring routes to destinations beyond the routing zone. In this paper, we extend ZRP for application to multicast routing and call it the Multicast Zone Routing Protocol (MZRP). MZRP is a shared tree multicast routing protocol that proactively maintains the multicast tree membership for nodes’ local routing zones at each node while establishing multicast trees on-demand. It is scalable to a large number of multicast senders and groups. An IP tunnel mechanism is used to improve the data packet delivery ratio during transmission. Detailed simulations were performed on the NS-2 simulator. Its performance was also compared with that of ODMRP. --- paper_title: MCEDAR: multicast core-extraction distributed ad hoc routing paper_content: In this paper, we present the MCEDAR (multicast core extraction distributed ad hoc routing) multicast routing algorithm for ad hoc networks. MCEDAR is an extension to the CEDAR architecture and provides the robustness of mesh based routing protocols and approximates the efficiency of tree based forwarding protocols. It decouples the control infrastructure from the actual data forwarding infrastructure. The decoupling allows for a very minimalistic and low overhead control infrastructure while still enabling very efficient data forwarding. --- paper_title: CEDAR: a core-extraction distributed ad hoc routing algorithm paper_content: CEDAR is an algorithm for QoS routing in ad hoc network environments. It has three key components: (a) the establishment and maintenance of a self-organizing routing infrastructure called the core for performing route computations, (b) the propagation of the link-state of stable high-bandwidth links in the core through increase/decrease waves, and (c) a QoS route computation algorithm that is executed at the core nodes using only locally available state. Our preliminary performance evaluation shows that CEDAR is a robust and adaptive QoS routing algorithm that reacts effectively to the dynamics of the network while still approximating link-state performance for stable networks. --- paper_title: Independent-tree ad hoc multicast routing (ITAMAR) paper_content: Multicasting is an efficient means of one-to-many communication and is typically implemented by creating a multicasting tree. Because of the severe battery power and transmission bandwidth limitations in ad hoc networks, multicast routing can significantly improve the performance of this type of network. However, due to the frequent and hard-to-predict topological changes of ad hoc networks, maintenance of a multicasting tree to ensure its availability could be a difficult task. We borrow from the concept of alternate path routing, which has been studied for providing QoS routing, effective congestion control, security, and route failure protection, to propose a scheme in which a set of multicasting trees is continuously maintained. A tree is used until it fails, at which time it is replaced by an alternative tree in the set, so that the time between failure of a tree and resumption of multicast routing is minimal.
We introduce the scheme and present a number of heuristics to compute a set of alternate trees. The heuristics are then compared in terms of transmission cost, improvement in the average time between multicast failures and the probability of usefulness. Simulations show significant gains over a wide range of network operational conditions. In particular, we show that using alternate trees has the potential of improving the mean time between interruptions by 100-600% in a 50 node network (for most multicast group sizes) with a small increase in the tree cost and the route discovery overhead. --- paper_title: A preferred link based multicast protocol for wireless mobile ad hoc networks paper_content: Existing multicast routing protocols for mobile ad hoc networks can be broadly classified into two categories, tree based protocols and mesh based protocols. Mesh based protocols have a high packet delivery ratio compared to tree based protocols, but incur more control overhead. The packet delivery ratio of tree based protocols decreases with increasing mobility. This is due to the occurrence of frequent tree breaks and the lack of proper tree maintenance mechanisms. These tree breaks result in frequent flooding of JoinQuery packets by the multicast group member nodes which try to get re-connected to the tree. These broadcast packets collide with the data packets and reduce the efficiency of the protocol. We propose an efficient protocol which we call the preferred link based multicast protocol (PLBM). PLBM uses a preferred link approach for forwarding JoinQuery packets. Subsets of neighbors of a node are selected using a preferred link based algorithm. These nodes, termed preferred nodes, are the only nodes eligible for further forwarding of JoinQuery packets. We also propose a quick link break detection mechanism that locally repairs broken links. Simulation results show that our protocol performs better than other existing multicast protocols in terms of packet delivery ratio and control overhead.
--- paper_title: A preferred link based routing protocol for wireless ad hoc networks paper_content: Routing protocols in wireless ad hoc networks experience high control overhead due to frequent path breaks that occur due to mobility of nodes, which leads to flooding of control packets throughout the network. We propose a preferred link based routing protocol that reduces flooding of control packets by selectively allowing some nodes to forward the packets using a preferred list. We propose two algorithms to compute the preferred list. The first algorithm computes the preferred list based on neighbor's degree. The second algorithm computes the preferred list based on stability information. Simulation results illustrate their performance and demonstrate their good behavior compared to other protocols. They also show that the stability based algorithm finds more stable paths while the neighbor's degree based algorithm has least control overhead. --- paper_title: PPMA, a probabilistic predictive multicast algorithm for ad hoc networks paper_content: Ad hoc networks are collections of mobile nodes communicating using wireless media without any fixed infrastructure. Existing multicast protocols fall short in a harsh ad hoc mobile environment, since node mobility causes conventional multicast trees to rapidly become outdated. The amount of bandwidth resource required for building up a multicast tree is less than that required for other delivery structures, since a tree avoids unnecessary duplication of data. However, a tree structure is more subject to disruption due to link/node failure and node mobility than more meshed structures. This paper explores these contrasting issues and proposes PPMA, a Probabilistic Predictive Multicast Algorithm for ad hoc networks, that leverages the tree delivery structure for multicasting, solving its drawbacks in terms of lack of robustness and reliability in highly mobile environments. PPMA overcomes the existing trade-off between the bandwidth efficiency to set up a multicast tree, and the tree robustness to node energy consumption and mobility, by decoupling tree efficiency from mobility robustness. By exploiting the non-deterministic nature of ad hoc networks, the proposed algorithm takes into account the estimated network state evolution in terms of node residual energy, link availability and node mobility forecast, in order to maximize the multicast tree lifetime, and consequently reduce the number of costly tree reconfigurations. The algorithm statistically tracks the relative movements among nodes to capture the dynamics in the ad hoc network. This way, PPMA estimates the node future relative positions in order to calculate a long-lasting multicast tree. To do so, it exploits the most stable links in the network, while minimizing the total network energy consumption. We propose PPMA in both its centralized and distributed version, providing performance evaluation through extensive simulation experiments. © 2005 Elsevier B.V. All rights reserved. --- paper_title: Adaptive demand-driven multicast routing in multi-hop wireless ad hoc networks paper_content: The use of on-demand techniques in routing protocols for multi-hop wireless ad hoc networks has been shown to have significant advantages in terms of reducing the routing protocol's overhead and improving its ability to react quickly to topology changes in the network.
A number of on-demand multicast routing protocols have been proposed, but each also relies on significant periodic (non-on-demand) behavior within portions of the protocol. This paper presents the design and initial evaluation of the Adaptive Demand-Driven Multicast Routing protocol (ADMR), a new on-demand ad hoc network multicast routing protocol that attempts to reduce as much as possible any non-on-demand components within the protocol. Multicast routing state is dynamically established and maintained only for active groups and only in nodes located between multicast senders and receivers. Each multicast data packet is forwarded along the shortest-delay path with multicast forwarding state, from the sender to the receivers, and receivers dynamically adapt to the sending pattern of senders in order to efficiently balance overhead and maintenance of the multicast routing state as nodes in the network move or as wireless transmission conditions in the network change. We describe the operation of the ADMR protocol and present an initial evaluation of its performance based on detailed simulation in ad hoc networks of 50 mobile nodes. We show that ADMR achieves packet delivery ratios within 1% of a flooding-based protocol, while incurring half to a quarter of the overhead. --- paper_title: Adaptive shared tree multicast in mobile wireless networks paper_content: Shared tree multicast is a well established concept used in several multicast protocols for wireline networks (e.g., core-based tree, PIM sparse mode, etc.). In this paper, we extend the shared tree concept to wireless, mobile, multihop networks for applications ranging from ad hoc networking to disaster recovery and the battlefield. The main challenge in wireless, mobile networks is the rapidly changing environment. We address this issue in our design by: (a) using "soft state"; (b) assigning different roles to nodes depending on their mobility (two level mobility model); (c) proposing an adaptive scheme which combines shared tree and source tree benefits. A detailed wireless simulation model is used to evaluate the proposed schemes and compare them with source based tree (as opposed to shared tree) multicast. The results show that shared tree protocols have low overhead and are very robust to mobility. --- paper_title: Multipath passive data acknowledgement on-demand multicast protocol paper_content: An ad hoc network is a multihop wireless network of mobile nodes without fixed infrastructure. We have proposed PDAODMRP (Passive Data Acknowledgement ODMRP) to adapt to the limited bandwidth and frequently changing topology of ad hoc networks. PDAODMRP defines the adjacent un-forwarding nodes of forwarding nodes as pool nodes to reduce its local route maintenance scope, and its local route maintenance scope is at least one hop smaller than that of PatchODMRP. However, PDAODMRP does not make full use of its multiple paths, since it does not distribute the data load across them. Therefore, in this paper, we propose MPDAODMRP (Multipath PDAODMRP) to extend PDAODMRP. MPDAODMRP distributes the data load over multiple paths based on diversity coding. The simulation results show that the data overhead and the data delivery delay of MPDAODMRP are much lower than those of PDAODMRP.
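The MPDAODMRP entry above distributes the multicast data load over multiple paths using diversity coding. The toy sketch below shows the simplest form of that idea: k data blocks travel over k paths and one XOR parity block over an extra path, so the loss of any single path can be tolerated. The exact code used by the protocol may differ, so treat this purely as an illustration of the principle.

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode_for_paths(blocks):
    """k data blocks -> k + 1 transmissions (data on k paths, parity on one more)."""
    return list(blocks) + [xor_blocks(blocks)]

def recover(received, lost_index):
    """Rebuild a single lost block by XOR-ing all surviving blocks."""
    survivors = [b for i, b in enumerate(received) if i != lost_index]
    return xor_blocks(survivors)

data = [b"aaaa", b"bbbb", b"cccc"]                 # three blocks over three paths
sent = encode_for_paths(data)                      # a fourth path carries the parity
assert recover(sent, lost_index=1) == data[1]      # path 1 was lost, block recovered
```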
--- paper_title: PatchODMRP: an ad-hoc multicast routing protocol paper_content: We propose an ad-hoc multicast routing protocol, referred to as PatchODMRP. PatchODMRP extends the ODMRP (on-demand multicast routing protocol), which is a mesh-based multicast routing protocol proposed for ad-hoc networks. In ODMRP, the nodes that are on the shortest paths between the multicast group members are selected as forwarding group (FG) nodes, and form a forwarding mesh for the multicast group. The ODMRP reconfigures the forwarding mesh periodically to adapt it to the node movements. When the number of sources in the multicast group is small, usually the forwarding mesh is formed sparsely and it can be very vulnerable to mobility. In this case, very frequent mesh reconfigurations are required in ODMRP, resulting in a large control overhead. To deal with this problem in a more efficient way, PatchODMRP deploys a local patching scheme instead of having very frequent mesh reconfigurations. In PatchODMRP, each FG node keeps checking if there is a symptom of mesh separation around itself. When an FG node finds such a symptom, it tries to patch itself to the mesh with local flooding of control messages. Through a course of simulation experiments, the performance of PatchODMRP is compared to the performance of ODMRP. The simulation results show that PatchODMRP improves the data delivery ratio, and reduces the control overheads. It has also been shown that the performance gain is larger when the degree of node mobility is bigger. --- paper_title: Preemptive Multicast Routing in Mobile Ad-hoc Networks paper_content: Preemptive route maintenance allows a routing algorithm to maintain connectivity by preemptively switching to a path of higher quality when the quality of the currently used path is deemed questionable.
Preemptive routing initiates recovery actions early by detecting that a link is likely to be broken soon and searching for a new path before the current path actually breaks. Preemptive route maintenance has been used for unicast (point-to-point) communications in wired networks and in mobile ad-hoc networks (MANETs) to minimize the number of route breaks and thus packet losses, and end-to-end delays. In addition to these advantages, we show that preemptive route maintenance can help minimize control overhead and improve the scalability of multicast routing protocols in MANETs. In this paper, we present design and implementation issues of preemptive routing for multicast in MANETs. We then describe a preemptive multicast routing protocol based on ODMRP (On-Demand Multicast Routing Protocol), which we call PMR (Preemptive Multicast Routing). PMR significantly improves the scalability of ODMRP: it offers similar or higher packet delivery ratios while incurring much less control overhead. Our simulation results have confirmed these advantages of PMR.
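PMR above builds on ODMRP, whose forwarding group concept recurs throughout this section: only nodes whose forwarding-group flag was refreshed by a recent Join Reply relay data packets, and the flag simply times out when refreshes stop. The minimal sketch below captures that soft-state behavior together with duplicate suppression; the timer value and field names are illustrative assumptions rather than the protocol's specified constants.

```python
import time

FG_TIMEOUT = 3.0  # seconds a forwarding-group flag stays valid (illustrative)

class ForwardingGroupNode:
    def __init__(self):
        self.fg_expiry = {}    # multicast group -> time the FG flag expires
        self.seen = set()      # (source, sequence number) duplicate cache

    def on_join_reply(self, group, now=None):
        """A Join Reply naming this node as next hop refreshes the soft state."""
        now = time.time() if now is None else now
        self.fg_expiry[group] = now + FG_TIMEOUT

    def should_forward(self, group, source, seq, now=None):
        """Relay a data packet only if it is not a duplicate and this node is
        currently a forwarding-group member for the group."""
        now = time.time() if now is None else now
        if (source, seq) in self.seen:
            return False
        self.seen.add((source, seq))
        return self.fg_expiry.get(group, 0.0) > now

node = ForwardingGroupNode()
node.on_join_reply("G1", now=100.0)
assert node.should_forward("G1", "S", 1, now=101.0)      # fresh flag: relay
assert not node.should_forward("G1", "S", 1, now=101.5)  # duplicate: drop
assert not node.should_forward("G1", "S", 2, now=104.0)  # flag expired: drop
```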
--- paper_title: A dynamic core based multicast routing protocol for ad hoc wireless networks paper_content: Ad hoc wireless networks are self-organizing, dynamic topology networks formed by a collection of mobile nodes through radio links. Minimal configuration, absence of infrastructure and quick deployment make them convenient for emergency situations apart from military applications. Multicasting plays a very crucial role in the application of ad hoc networks. As the number of participants increases, scalability of the multicast protocol becomes an important issue. Among the existing multicast protocols, the On-Demand Multicast Routing Protocol (ODMRP) exhibits a high packet delivery ratio even at high mobility. But ODMRP suffers from higher control overhead as the network size and the number of sources increase. In this paper we propose an efficient multicast routing protocol for ad hoc wireless networks. This protocol reduces the control overhead by dynamically classifying the sources into Active and Passive categories. The control overhead is significantly reduced by about 30% compared to ODMRP, which contributes to the scalability of the protocol. We study the effectiveness of the proposed multicast routing protocol by simulation studies and the results show that the multicast efficiency is increased by 10-15% and the packet delivery ratio is also improved at high network load. --- paper_title: Ad hoc Multicast Routing Algorithm with Swarm Intelligence paper_content: Swarm intelligence refers to complex behaviors that arise from very simple individual behaviors and interactions, which is often observed in nature, especially among social insects such as ants. Although each individual (an ant) has little intelligence and simply follows basic rules using local information obtained from the environment, such as ant's pheromone trail laying and following behavior, globally optimized behaviors, such as finding a shortest path, emerge when they work collectively as a group. In this paper, we apply this biologically inspired metaphor to the multicast routing problem in mobile ad hoc networks. Our proposed multicast protocol adapts a core-based approach which establishes multicast connectivity among members through a designated node (core). An initial multicast connection can be rapidly set up by having the core flood the network with an announcement so that nodes on the reverse paths to the core will be requested by group members to serve as forwarding nodes. In addition, each member who is not the core periodically deploys a small packet that behaves like an ant to opportunistically explore different paths to the core. This exploration mechanism enables the protocol to discover new forwarding nodes that yield lower total forwarding costs, where cost is abstract and can be used to represent any metric to suit the application. Simulations have been conducted to demonstrate the performance of the proposed approach and to compare it with certain existing multicast protocols. --- paper_title: Forwarding Group Multicast Protocol (FGMP) for multihop, mobile wireless networks paper_content: In this paper we propose a new multicast protocol for multihop mobile wireless networks. Instead of forming multicast trees, a group of nodes in charge of forwarding multicast packets is designated according to members' requests. Multicast is then carried out via “scoped” flooding over such a set of nodes. The forwarding group is periodically refreshed to handle topology/membership changes.
Multicast using forwarding group takes advantage of wireless broadcast transmissions and reduces channel and storage overhead, thus improving the performance and scalability. The key innovation with respect to wired multicast schemes like DVMRP is the use of flags rather than upstream/downstream link state, making the protocol more robust to mobility. The dynamic reconfiguration capability makes this protocol particularly suitable for mobile networks. The performance of the proposed scheme is evaluated via simulation and is compared to that of DVMRP and global flooding. --- paper_title: Core based trees (CBT) paper_content: One of the central problems in one-to-many wide-area communications is forming the delivery tree - the collection of nodes and links that a multicast packet traverses. Significant problems remain to be solved in the area of multicast tree formation, the problem of scaling being paramount among these.In this paper we show how the current IP multicast architecture scales poorly (by scale poorly, we mean consume too much memory, bandwidth, or too many processing resources), and subsequently present a multicast protocol based on a new scalable architecture that is low-cost, relatively simple, and efficient. We also show how this architecture is decoupled from (though dependent on) unicast routing, and is therefore easy to install in an internet that comprises multiple heterogeneous unicast routing algorithms. --- paper_title: Dynamic source routing in ad hoc wireless networks paper_content: An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal. --- paper_title: Neighbor supporting ad hoc multicast routing protocol paper_content: An ad hoc network is a multi-hop wireless network formed by a collection of mobile nodes without the intervention of fixed infrastructure. Limited bandwidth and a high degree of mobility require that routing protocols for ad hoc networks be robust, simple, and energy-conserving. This paper proposes a new ad hoc multicast routing protocol called Neighbor-Supporting Multicast Protocol (NSMP). NSMP adopts a mesh structure to enhance resilience against mobility. And NSMP utilizes node locality to reduce the overhead of route failure recovery and mesh maintenance. NSMP also attempts to improve route efficiency and reduce data transmissions. 
Our simulation results show that NSMP delivers packets efficiently while substantially reducing control overhead in various environments. --- paper_title: Host extensions for IP multicasting paper_content: This memo specifies the extensions required of a host implementation of the Internet Protocol (IP) to support multicasting. Recommended procedure for IP multicasting in the Internet. This RFC obsoletes RFCs 988 and 1054. [STANDARDS-TRACK] --- paper_title: OGHAM: On-demand global hosts for mobile ad-hoc multicast services paper_content: Recent advances in pervasive computing and wireless technologies have enabled novel multicast services anywhere, anytime, such as mobile auctions, advertisement, and e-coupons. Routing/multicast protocols in large-scale ad-hoc networks adopt two-tier infrastructures to accommodate the effectiveness of the flooding scheme and the efficiency of the tree-based scheme. In these protocols, hosts with a maximal number of neighbors are chosen as backbone hosts (BHs) to forward packets. Most likely, these BHs will be traffic concentrations or bottlenecks of the network and spend a significant amount of time forwarding packets. In this paper, a distinct strategy is proposed for constructing a two-tier infrastructure in a large-scale ad-hoc network. Hosts with a minimal number of hops to the other hosts rather than those with a maximal number of neighbors will be adopted as BHs in order to obtain shorter multicast routes. The problem of determining BHs can be formulated with linear programming. BHs thus found have the advantages of shorter relay and less concentration. Besides, BHs are selected on-demand and can be globally reused for different multicast groups without flooding again. Simulation results show that the proposed protocol has shorter transmission latency, fewer control/data packets and higher receiving data packet ratios than other existing multicast protocols. Besides, the two-tier infrastructure constructed by the proposed protocol is more stable. --- paper_title: A Novel Adaptive Protocol for Lightweight Efficient Multicasting in Ad Hoc Networks paper_content: In group communications, we find that current multicast protocols are far from "one size fits all": they are typically geared towards and optimized for particular scenarios.
As a result, when deployed in different scenarios, their performance and overhead often degrade significantly. A common problem is that most of these protocols incur high overheads with a high density of group members and in high mobility. Our objective is to design a protocol that adapts in response to the dynamics of the network. In particular, our objective is to provide efficient and lightweight multicast data dissemination irrespective of the density of group members and node density. Our work is motivated by two observations. First, broadcasting in some cases is more efficient than multicasting. Second, member and node layout distributions are not necessarily homogeneous. For example, many MANET applications result in a topological clustering of group members that move together. Thus, we develop Fireworks, an adaptive approach for group communications in mobile ad hoc networks. Fireworks is a hybrid 2-tier multicast/broadcast protocol that adapts to maintain performance given the dynamics of the network topology and group density. In a nutshell, our protocol creates pockets of broadcast distribution in areas with many members, while it develops a multicast backbone to interconnect these dense pockets. Fireworks offers packet delivery statistics comparable to that of a pure multicast scheme but with significantly lower overheads. --- paper_title: Multicast routing in mobile ad hoc networks by using a multiagent system paper_content: Multicast routing in mobile ad hoc networks (MANETs) poses several challenges due to inherent characteristics of the network such as node mobility, reliability, scarce resources, etc. This paper proposes an Agent Based Multicast Routing Scheme (ABMRS) in MANETs, which uses a set of static and mobile agents. Five types of agents are used in the scheme: Route manager static agent, Network initiation mobile agent, Network management static agent, Multicast initiation mobile agent and Multicast management static agent. The scheme operates in the following steps: (1) to identify reliable nodes; (2) to connect reliable nodes through intermediate nodes; (3) to construct a backbone for multicasting using reliable nodes and intermediate nodes; (4) to join multicast group members to the backbone; (5) to perform backbone and group members management in case of mobility.
The scheme has been simulated in various network scenarios to test operation effectiveness in terms of performance parameters such as packet delivery ratio, control overheads and group reliability. Also, a comparison of the proposed scheme with the MAODV (Multicast Ad hoc on-demand Distance Vector) protocol is presented. ABMRS performs better than MAODV as observed from the simulation. ABMRS offers flexible and adaptable multicast services and also supports component based software development. --- paper_title: OPHMR: An Optimized Polymorphic Hybrid Multicast Routing Protocol for MANET paper_content: We propose in this paper an optimized, polymorphic, hybrid multicast routing protocol for MANET. This new polymorphic protocol attempts to benefit from the high efficiency of proactive behavior (in terms of quicker response to transmission requests) and the limited network traffic overhead of the reactive behavior, while being power, mobility, and vicinity-density (in terms of number of neighbor nodes per specified area around a mobile node) aware. The proposed protocol is based on the principle of adaptability and multibehavioral modes of operation. It is able to change behavior in different situations in order to improve certain metrics like maximizing battery life, reducing communication delays, improving deliverability, etc. The protocol is augmented by an optimization scheme, adapted from the one proposed for the optimized link state routing protocol (OLSR), in which only selected neighbor nodes propagate control packets to reduce the amount of control overhead. Extensive simulations and comparison to peer protocols demonstrated the effectiveness of the proposed protocol in improving performance and in extending battery power longevity. --- paper_title: Multicast Optimized Link State Routing paper_content: This document describes the Multicast extension for the Optimized Link State Routing protocol (MOLSR). MOLSR is in charge of building a multicast structure in order to route multicast traffic in an ad-hoc network. MOLSR is designed for mobile multicast routers, and works in a heterogeneous network composed of simple unicast OLSR routers, MOLSR routers and hosts. In the last part of this document we also introduce a Wireless Internet Group Management Protocol (WIGMP). It offers the possibility for OLSR nodes (without multicast capabilities) to join multicast groups and receive multicast data.
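The OPHMR entry above switches between proactive and reactive modes of behavior depending on remaining power, node mobility, and vicinity density. The fragment below sketches one plausible way such a decision could be expressed; the thresholds and the particular mode split are hypothetical, since the entry does not give the protocol's actual decision rules.

```python
from dataclasses import dataclass

@dataclass
class Thresholds:                 # all values are hypothetical, for illustration only
    low_battery: float = 0.2      # fraction of battery remaining
    high_mobility: float = 5.0    # relative speed, m/s
    dense_vicinity: int = 8       # neighbors heard within the vicinity area

def select_behavior(battery_frac, rel_speed, neighbor_count, th=Thresholds()):
    """Pick an operating mode in the spirit of a polymorphic hybrid protocol:
    conserve energy when the battery is low, fall back to purely reactive
    operation when the neighborhood is sparse or highly mobile, and afford
    proactive (periodic) updates only in stable, dense surroundings."""
    if battery_frac < th.low_battery:
        return "power-save reactive"
    if rel_speed > th.high_mobility or neighbor_count < th.dense_vicinity // 2:
        return "reactive"
    if neighbor_count >= th.dense_vicinity:
        return "proactive"
    return "hybrid"

print(select_behavior(0.9, 1.0, 10))   # -> proactive
print(select_behavior(0.9, 8.0, 10))   # -> reactive
print(select_behavior(0.1, 1.0, 10))   # -> power-save reactive
```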
--- paper_title: On-Demand Multicast Routing Protocol in Multihop Wireless Mobile Networks paper_content: An ad hoc network is a dynamically reconfigurable wireless network with no fixed infrastructure or central administration. Each host is mobile and must act as a router. Routing and multicasting protocols in ad hoc networks are faced with the challenge of delivering data to destinations through multihop routes in the presence of node movements and topology changes. This paper presents the On-Demand Multicast Routing Protocol (ODMRP) for wireless mobile ad hoc networks. ODMRP is a mesh-based, rather than a conventional tree-based, multicast scheme and uses a forwarding group concept; only a subset of nodes forwards the multicast packets via scoped flooding. It applies on-demand procedures to dynamically build routes and maintain multicast group membership. ODMRP is well suited for ad hoc wireless networks with mobile hosts where bandwidth is limited, topology changes frequently, and power is constrained. We evaluate ODMRP performance with other multicast protocols proposed for ad hoc networks via extensive and detailed simulation. --- paper_title: Efficient overlay multicast for mobile ad hoc networks paper_content: An overlay multicast protocol builds a virtual mesh spanning all member nodes of a multicast group. It employs standard unicast routing and forwarding to fulfill multicast functionality. The advantages of this approach are robustness and low overhead. However, efficiency is an issue since the generated multicast trees are normally not optimized in terms of total link cost and data delivery delay. In this paper, we propose an efficient overlay multicast protocol to tackle this problem in a MANET environment. The virtual topology gradually adapts to the changes in the underlying network topology in a fully distributed manner. A novel source-based Steiner tree algorithm is proposed for constructing the multicast tree. The multicast tree is progressively adjusted according to the latest local topology information. Simulations are conducted to evaluate the tree quality. The results show that our approach solves the efficiency problem effectively. --- paper_title: Application versus network layer multicasting in ad hoc networks: The ALMA routing protocol. Ad Hoc Networks Journal paper_content: Application layer multicasting has emerged as an appealing alternative to network layer multicasting in wireline networks. Here, we examine the suitability of application layer multicast in ad hoc networks. To this effect, we propose a flexible receiver-driven overlay multicast protocol which we call Application Layer Multicast Algorithm (ALMA). ALMA constructs an overlay multicast tree in a dynamic, decentralized and incremental way. First, ALMA is receiver-driven: the member nodes find their connections according to their needs. Second, it is flexible, and thus, it can satisfy the performance goals and the needs of a wide range of applications. Third, it is highly adaptive: it reconfigures the tree in response to mobility or congestion. In addition, our protocol has the advantages of an application layer protocol: (a) simplicity of deployment, (b) independence from lower layer protocols, and (c) capability of exploiting features such as reliability and security that may be provided by the lower layers. Through extensive simulations, we show that ALMA performs favorably against the currently best application layer and network layer protocols.
In more detail, we find that ALMA performs significantly better than ODMRP, a network layer, for small group sizes. We conclude that the application layer approach and ALMA seem very promising for ad hoc multicasting. --- paper_title: On-demand overlay multicast in mobile ad hoc networks paper_content: This paper presents the on-demand overlay multicast protocol (ODOMP), a novel approach for multicast data distribution in mobile ad hoc networks. ODOMP is a reactive protocol which creates an overlay among the group members on-demand. The created overlay is a source rooted tree which connects the group members via IP-in-IP tunnels. Routing is done by an arbitrary underlying unicast routing protocol. A novel creation mechanism allows ODOMP to create an efficient overlay with low communication overhead. The performance of ODOMP is compared with the performance of the well known multicast routing protocol ODMRP by means of simulations. --- paper_title: Supporting MAC layer multicast in IEEE 802.11 based MANETs: issues and solutions paper_content: In IEEE 802.11 based mobile ad hoc networks (MANET) multicast packets are generally forwarded as one hop broadcast; mainly to reach all the multicast members in the neighborhood in a single transmission. Because of the broadcast property of the forwarding, packets suffer from increased instances of the hidden terminal problem. Mobility of nodes makes things more difficult, and unlike unicast transmissions where MAC can detect the movement of a nexthop by making several retries, it is not possible in case of multicast forwarding. To address these issues, we propose a multicast aware MAC protocol (MMP) for MANET. The basic objective of MMP is to provide a MAC layer support for multicast traffic. This is done by attaching an extended multicast header (EMH) by the multicast agent, which provides the address of the nexthop nodes that are supposed to receive the multicast packet. The MAC layer in MMP uses the EMH field to support an ACK based data delivery. After sending the data packet, the transmitter waits for the ACK from each of its destinations in a strictly sequential order. A retransmission of the multicast packet is performed only if the ACK from any of the nodes in EMH is missing. We compare MMP with IEEE 802.11 and results show that MMP substantially improves the performance of multicast packet delivery in MANET without creating much MAC overhead. In addition, MMP provides a better mechanism to detect the movement of its nexthop members. --- paper_title: Congestion control multicast in wireless ad hoc networks paper_content: In this paper, the interaction of the Medium Access Control (MAC) and routing layer is used to address the congestion control multicast routing problem in wireless ad hoc networks. We first introduce the Broadcast Medium Window (BMW) MAC protocol, which provides robust delivery to broadcast packets at the MAC layer. In doing so, we show that although BMW is able to provide high reliability under low to medium network load, reliability dramatically degrades under high network load. We then extend the wireless On-Demand Multicast Routing Protocol (ODMRP) to facilitate congestion control in ad hoc networks using BMW to combat the poor performance under highly congested network conditions. Through simulation, we show that ODMRP with congestion control adapts well to multicast sources that are aggressive in data transmissions. 
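The MMP abstract above describes attaching an extended multicast header (EMH) that lists the intended next hops and waiting for their ACKs in strict order, retransmitting only when some ACK is missing. The toy Python sketch below illustrates that retransmission logic under stated assumptions; send and collect_acks are hypothetical placeholders standing in for the actual 802.11-level transmission and ACK-collection steps defined by MMP.

```python
# Toy sketch of the MMP idea described above: the sender lists the intended next hops
# in an extended multicast header (EMH) and waits for their ACKs, retransmitting only
# if some ACK is missing. Transport details are mocked out.

def send_with_emh(send, collect_acks, payload, next_hops, max_retries=3):
    """send(payload, emh) transmits once; collect_acks(emh) returns the set of
    next hops whose ACK arrived within the ACK window. Both are placeholders."""
    pending = list(next_hops)
    for _ in range(max_retries + 1):
        send(payload, emh=pending)            # one-hop multicast carrying the EMH
        acked = collect_acks(pending)         # ACKs are expected in EMH order
        pending = [n for n in pending if n not in acked]
        if not pending:
            return True                       # every listed next hop confirmed reception
    return False                              # give up; the routing layer may repair the route


# Example with mocked transport: every ACK arrives on the first attempt.
ok = send_with_emh(lambda p, emh: None, lambda emh: set(emh), b"frame", ["n2", "n3"])
print(ok)   # True
```

Persistent ACK failure from one next hop is also what gives the sender an early hint that the node has moved away, which is the movement-detection benefit claimed in the abstract.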
--- paper_title: RMAC: a reliable multicast MAC protocol for wireless ad hoc networks paper_content: This work presents a new MAC protocol called RMAC that supports reliable multicast for wireless ad hoc networks. By utilizing the busy tone mechanism to realize multicast reliability, RMAC has the following three novelties: (1) it uses a variable-length control frame to stipulate an order for the receivers to respond, such that the problem of feedback collision is solved; (2) it extends the traditional usage of busy tone for preventing data frame collisions into the multicast scenario; and (3) it introduces a new usage of busy tone for acknowledging data frames. In addition, we also generalize RMAC into a comprehensive MAC protocol that provides both reliable and unreliable services for all the three modes of communications: unicast, multicast, and broadcast. Our evaluation shows that RMAC achieves high reliability with very limited overhead. We also compare RMAC with other reliable multicast MAC protocols, showing that RMAC not only provides higher reliability but also involves lower cost. --- paper_title: Reliable MAC layer multicast in IEEE 802.11 wireless networks paper_content: Multicast/broadcast is an important service primitive in networks. The IEEE 802.11 multicast/broadcast protocol is based on the basic access procedure of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). This protocol does not provide any media access control (MAC) layer recovery on multicast/broadcast frames. As a result, the reliability of the multicast/broadcast service is reduced due to the increased probability of lost frames resulting from interference or collisions. In this paper, we propose a reliable Batch Mode Multicast MAC protocol, BMMM, which substantially reduces the number of contention phases and thus considerably reduces the time required for a multicast/broadcast. We then propose a Location Aware Multicast MAC protocol, LAMM, that uses station location information to further improve upon BMMM. Extensive analysis and simulation results validate the reliability and efficiency of our multicast MAC protocols. --- paper_title: MAC reliable broadcast in ad hoc networks paper_content: Traditional wireless ad hoc medium access control (MAC) protocols often utilize control frames such as request-to-send (RTS), clear-to-send (CTS) and acknowledgement (ACK) to reliably deliver unicast data. However, little effort has been given to improve the reliable delivery of broadcast data. Often, broadcast data are transmitted blindly without any consideration of hidden terminals. In this paper, we propose a novel MAC protocol, broadcast medium window (BMW), that reliably delivers broadcast data. --- paper_title: MAC Layer Multicast in Wireless Multihop Networks paper_content: Many applications in wireless ad-hoc networks require multicast communication. In order to provide efficient multicast, various multicast routing protocols have been designed in recent years to facilitate formation of routes between multicast senders and receivers. There has also been some work to develop a suitable MAC protocol to improve efficiency of multicast communication. In this work we explore some approaches for reliable multicast at the MAC layer. We develop a multicast extension of the IEEE 802.11 protocol and evaluate its performance. We have implemented our protocol in the popular ns-2 simulator and have performed experiments with a multicast routing protocol.
Our approach demonstrates superior performance in terms of packet delivery fraction as well as delay compared to the IEEE 802.11 protocol. --- paper_title: Supporting MAC layer multicast in IEEE 802.11 based MANETs: issues and solutions paper_content: In IEEE 802.11 based mobile ad hoc networks (MANET) multicast packets are generally forwarded as one hop broadcast; mainly to reach all the multicast members in the neighborhood in a single transmission. Because of the broadcast property of the forwarding, packets suffer from increased instances of the hidden terminal problem. Mobility of nodes makes things more difficult, and unlike unicast transmissions where MAC can detect the movement of a nexthop by making several retries, it is not possible in case of multicast forwarding. To address these issues, we propose a multicast aware MAC protocol (MMP) for MANET. The basic objective of MMP is to provide a MAC layer support for multicast traffic. This is done by attaching an extended multicast header (EMH) by the multicast agent, which provides the address of the nexthop nodes that are supposed to receive the multicast packet. The MAC layer in MMP uses the EMH field to support an ACK based data delivery. After sending the data packet, the transmitter waits for the ACK from each of its destinations in a strictly sequential order. A retransmission of the multicast packet is performed only if the ACK from any of the nodes in EMH is missing. We compare MMP with IEEE 802.11 and results show that MMP substantially improves the performance of multicast packet delivery in MANET without creating much MAC overhead. In addition, MMP provides a better mechanism to detect the movement of its nexthop members. --- paper_title: MAC Layer Multicast in Wireless Multihop Networks paper_content: Many applications in wireless ad-hoc networks require multicast communication. In order to provide efficient multicast, various multicast routing protocols have been designed in recent years to facilitate formation of routes between multicast senders and receivers. There has also been some work to develop a suitable MAC protocol to improve efficiency of multicast communication. In this work we explore some approaches for reliable multicast at the MAC layer. We develop a multicast extension of the IEEE 802.11 protocol and evaluate its performance. We have implemented our protocol in the popular ns-2 simulator and have performed experiments with a multicast routing protocol. Our approach demonstrates superior performance in terms of packet delivery fraction as well as delay compared to the IEEE 802.11 protocol. --- paper_title: Reliable MAC layer multicast in IEEE 802.11 wireless networks paper_content: Multicast/broadcast is an important service primitive in networks. The IEEE 802.11 multicast/broadcast protocol is based on the basic access procedure of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). This protocol does not provide any media access control (MAC) layer recovery on multicast/broadcast frames. As a result, the reliability of the multicast/broadcast service is reduced due to the increased probability of lost frames resulting from interference or collisions. In this paper, we propose a reliable Batch Mode Multicast MAC protocol, BMMM, which substantially reduces the number of contention phases and thus considerably reduces the time required for a multicast/broadcast. We then propose a Location Aware Multicast MAC protocol, LAMM, that uses station location information to further improve upon BMMM.
Extensive analysis and simulation results validate the reliability and efficiency of our multicast MAC protocols. --- paper_title: Random access MAC for efficient broadcast support in ad hoc networks paper_content: Wireless communications are becoming an important part of our everyday lifestyle. One major area that will have an enormous impact on the performance of wireless ad hoc networks is the medium access control (MAC) layer. Current random access MAC protocols for ad hoc networks support reliable unicast but not reliable broadcast. We propose a random access MAC protocol, broadcast support multiple access (BSMA), which improves the broadcast reliability in ad hoc networks. --- paper_title: HIMAC: High Throughput MAC Layer Multicasting in Wireless Networks paper_content: Efficient, scalable and robust multicasting support from the MAC layer is needed for meeting the demands of multicast based applications over WiFi and mesh networks. However, the IEEE 802.11 protocol has no specific mechanism for multicasting. It implements multicasting using broadcasting at the base transmission rate. We identify two fundamental reasons for performance limitations of this approach: (a) Channel-state Indifference: irrespective of the current quality of the channel to the receivers, the transmission always uses the base transmission rate; (b) Demand Ignorance: packets are transmitted by a node even if children in the multicast tree have received those packets by virtue of overhearing. We propose a solution for MAC layer multicasting called HIMAC that uses the following two mechanisms: Unary Channel Feedback (UCF) and Unary Negative Feedback (UNF) to respectively address the shortcomings of 802.11. Our study is supported by measurements in a testbed, and simulations. We observe that the end-to-end throughput of multicast sessions using MAODV can be increased by up to 74% while reducing the end-to-end latency by up to a factor of 56. --- paper_title: Congestion control multicast in wireless ad hoc networks paper_content: In this paper, the interaction of the Medium Access Control (MAC) and routing layer is used to address the congestion control multicast routing problem in wireless ad hoc networks. We first introduce the Broadcast Medium Window (BMW) MAC protocol, which provides robust delivery to broadcast packets at the MAC layer. In doing so, we show that although BMW is able to provide high reliability under low to medium network load, reliability dramatically degrades under high network load. We then extend the wireless On-Demand Multicast Routing Protocol (ODMRP) to facilitate congestion control in ad hoc networks using BMW to combat the poor performance under highly congested network conditions. Through simulation, we show that ODMRP with congestion control adapts well to multicast sources that are aggressive in data transmissions. --- paper_title: Reliable MAC layer multicast in IEEE 802.11 wireless networks paper_content: Multicast/broadcast is an important service primitive in networks. The IEEE 802.11 multicast/broadcast protocol is based on the basic access procedure of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). This protocol does not provide any media access control (MAC) layer recovery on multicast/broadcast frames. As a result, the reliability of the multicast/broadcast service is reduced due to the increased probability of lost frames resulting from interference or collisions. 
In this paper, we propose a reliable Batch Mode Multicast MAC protocol, BMMM, which substantially reduces the number of contention phases and thus considerably reduces the time required for a multicast/broadcast. We then propose a Location Aware Multicast MAC protocol, LAMM, that uses station location information to further improve upon BMMM. Extensive analysis and simulation results validate the reliability and efficiency of our multicast MAC protocols. --- paper_title: Multicast medium access control in wireless ad hoc network paper_content: The basic carrier sense multiaccess control scheme for multicast communications in wireless ad hoc networks suffers from the well-known hidden terminal problem. The data packet collision probability is relatively high, and the packet delivery ratio is sensitive to the network topology, node distribution and traffic load. This paper generalizes the virtual carrier sense collision avoidance approach to reduce packet collisions. The sender and receivers exchange RTS and CTS packets to reserve the channel. When more than one receiver replies with CTS packets, the sender will detect an "expected" collision which may be interpreted as a valid "clear-to-send" signal provided the collision satisfies the given timing requirements. Together with a receiver-initiated local recovery mechanism, the reliability of multicast communications can be improved to almost 100%. --- paper_title: MAC reliable broadcast in ad hoc networks paper_content: Traditional wireless ad hoc medium access control (MAC) protocols often utilize control frames such as request-to-send (RTS), clear-to-send (CTS) and acknowledgement (ACK) to reliably deliver unicast data. However, little effort has been given to improve the reliable delivery of broadcast data. Often, broadcast data are transmitted blindly without any consideration of hidden terminals. In this paper, we propose a novel MAC protocol, broadcast medium window (BMW), that reliably delivers broadcast data. --- paper_title: RMAC: a reliable multicast MAC protocol for wireless ad hoc networks paper_content: This work presents a new MAC protocol called RMAC that supports reliable multicast for wireless ad hoc networks. By utilizing the busy tone mechanism to realize multicast reliability, RMAC has the following three novelties: (1) it uses a variable-length control frame to stipulate an order for the receivers to respond, such that the problem of feedback collision is solved; (2) it extends the traditional usage of busy tone for preventing data frame collisions into the multicast scenario; and (3) it introduces a new usage of busy tone for acknowledging data frames. In addition, we also generalize RMAC into a comprehensive MAC protocol that provides both reliable and unreliable services for all the three modes of communications: unicast, multicast, and broadcast. Our evaluation shows that RMAC achieves high reliability with very limited overhead. We also compare RMAC with other reliable multicast MAC protocols, showing that RMAC not only provides higher reliability but also involves lower cost. --- paper_title: Supporting MAC layer multicast in IEEE 802.11 based MANETs: issues and solutions paper_content: In IEEE 802.11 based mobile ad hoc networks (MANET) multicast packets are generally forwarded as one hop broadcast; mainly to reach all the multicast members in the neighborhood in a single transmission. Because of the broadcast property of the forwarding, packets suffer from increased instances of the hidden terminal problem.
Mobility of nodes makes things more difficult, and unlike unicast transmissions where MAC can detect the movement of a nexthop by making several retries, it is not possible in case of multicast forwarding. To address these issues, we propose a multicast aware MAC protocol (MMP) for MANET. The basic objective of MMP is to provide a MAC layer support for multicast traffic. This is done by attaching an extended multicast header (EMH) by the multicast agent, which provides the address of the nexthop nodes that are supposed to receive the multicast packet. The MAC layer in MMP uses the EMH field to support an ACK based data delivery. After sending the data packet, the transmitter waits for the ACK from each of its destinations in a strictly sequential order. A retransmission of the multicast packet is performed only if the ACK from any of the nodes in EMH is missing. We compare MMP with IEEE 802.11 and results show that MMP substantially improves the performance of multicast packet delivery in MANET without creating much MAC overhead. In addition, MMP provides a better mechanism to detect the movement of its nexthop members. --- paper_title: Rate-adaptive multicast in mobile ad-hoc networks paper_content: A current trend in wireless communications is to enable wireless devices to transmit at different rates. That multirate capability has been defined in many standards such as 802.11a, 802.11b, 802.11g, and HiperLAN2. We propose a rate-adaptive multicast (RAM) protocol that is multirate-aware. During the process of path discovery, the quality of wireless links is estimated to suggest optimal transmission rates, which are then used to calculate the total transmission time incurred by the mobile nodes on a path. Among several considered paths from a source to a destination, RAM selects the path with the lowest total transmission time. Our work is the first that proposes the use of the multirate capability in multicast. The proposed RAM protocol works with any multirate standards, and does not require any modifications to the standards. Our simulation results show that RAM outperforms single-rate multicast in terms of packet delivery ratio, packet end-to-end delay, and throughput of the multicast group. --- paper_title: Ad hoc Multicast Routing Algorithm with Swarm Intelligence paper_content: Swarm intelligence refers to complex behaviors that arise from very simple individual behaviors and interactions, which is often observed in nature, especially among social insects such as ants. Although each individual (an ant) has little intelligence and simply follows basic rules using local information obtained from the environment, such as ant's pheromone trail laying and following behavior, globally optimized behaviors, such as finding a shortest path, emerge when they work collectively as a group. In this paper, we apply this biologically inspired metaphor to the multicast routing problem in mobile ad hoc networks. Our proposed multicast protocol adapts a core-based approach which establishes multicast connectivity among members through a designated node (core). An initial multicast connection can be rapidly setup by having the core flood the network with an announcement so that nodes on the reverse paths to the core will be requested by group members to serve as forwarding nodes. In addition, each member who is not the core periodically deploys a small packet that behaves like an ant to opportunistically explore different paths to the core. 
This exploration mechanism enables the protocol to discover new forwarding nodes that yield lower total forwarding costs, where cost is abstract and can be used to represent any metric to suit the application. Simulations have been conducted to demonstrate the performance of the proposed approach and to compare it with certain existing multicast protocols. --- paper_title: A performance comparison study of ad hoc wireless multicast protocols paper_content: In this paper we investigate the performance of multicast routing protocols in wireless mobile ad hoc networks. An ad hoc network is composed of mobile nodes without the presence of a wired support infrastructure. In this environment, routing/multicasting protocols are faced with the challenge of producing multihop routes under host mobility and bandwidth constraints. In recent years, a number of new multicast protocols of different styles have been proposed for ad hoc networks. However, systematic performance evaluations and comparative analysis of these protocols in a common realistic environment has not yet been performed. In this study, we simulate a set of representative wireless ad hoc multicast protocols and evaluate them in various network scenarios. The relative strengths, weaknesses, and applicability of each multicast protocol to diverse situations are studied and discussed. --- paper_title: Application versus network layer multicasting in ad hoc networks: The ALMA routing protocol. Ad Hoc Networks Journal paper_content: Application layer multicasting has emerged as an appealing alternative to network layer multicasting in wireline networks. Here, we examine the suitability of application layer multicast in ad hoc networks. To this effect, we propose a flexible receiver-driven overlay multicast protocol which we call Application Layer Multicast Algorithm (ALMA). ALMA constructs an overlay multicast tree in a dynamic, decentralized and incremental way. First, ALMA is receiver-driven: the member nodes find their connections according to their needs. Second, it is flexible, and thus, it can satisfy the performance goals and the needs of a wide range of applications. Third, it is highly adaptive: it reconfigures the tree in response to mobility or congestion. In addition, our protocol has the advantages of an application layer protocol: (a) simplicity of deployment, (b) independence from lower layer protocols, and (c) capability of exploiting features such as reliability and security that may be provided by the lower layers. Through extensive simulations, we show that ALMA performs favorably against the currently best application layer and network layer protocols. In more detail, we find that ALMA performs significantly better than ODMRP, a network layer, for small group sizes. We conclude that the application layer approach and ALMA seem very promising for ad hoc multicasting. --- paper_title: Multipoint communication by hierarchically encoded data paper_content: A novel multipoint communication paradigm, in which each destination receives a subset of the source's signal that corresponds to that destination's terminal and access bandwidth constraints, is presented. The approach to realizing this paradigm is based on integration of layered coding of the source's signal, routing based on bandwidth demand, optimization of signal parameters, and layered error control. 
The author gives an overview of several hierarchical signal coding techniques; presents methods for finding the maximum bandwidth available to a destination and establishing maximum-bandwidth routes; and shows how to optimally assign bandwidth to the signal layers to maximize overall reception quality. Error control procedures whereby the network, source, and destinations cooperate to maintain layer-based data integrity, using erasure recovery coding and prioritized packet detection, are also presented. --- paper_title: Multiple Description Coding: Compression Meets the Network paper_content: This article focuses on the compressed representations of pictures. The representation does not affect how many bits get from the Web server to the laptop, but it determines the usefulness of the bits that arrive. Many different representations are possible, and there is more involved in their choice than merely selecting a compression ratio. The techniques presented represent a single information source with several chunks of data ("descriptions") so that the source can be approximated from any subset of the chunks. By allowing image reconstruction to continue even after a packet is lost, this type of representation can prevent a Web browser from becoming dormant. ---
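As a small illustration of the layered-multicast idea in the hierarchically encoded data abstract above, the sketch below lets a receiver subscribe to as many cumulative layers as its access bandwidth allows. The layer rates and the capacity value are hypothetical numbers chosen only for the example, not figures from the cited work.

```python
# Simple illustration of layered (hierarchical) multicast reception: each receiver
# takes cumulative layers, starting from the base layer, until its capacity is used up.

def layers_for_bandwidth(layer_rates_kbps, capacity_kbps):
    chosen, used = [], 0
    for i, rate in enumerate(layer_rates_kbps):      # layer 0 is the base layer
        if used + rate > capacity_kbps:
            break
        chosen.append(i)
        used += rate
    return chosen, used

print(layers_for_bandwidth([64, 128, 256, 512], capacity_kbps=300))  # ([0, 1], 192)
```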
Title: Multicast Routing Protocols in Mobile Ad Hoc Networks: A Comparative Survey and Taxonomy
Section 1: Introduction
Description 1: This section provides an overview of Mobile Ad Hoc Networks (MANETs) and the significance of multicast routing in these networks. It also outlines the benefits and complexities associated with multicasting in MANETs and provides a brief summary of the paper's organization.
Section 2: Multicast Routing Protocol Design: Issues and Challenges
Description 2: This section discusses the unique features of MANETs that make designing multicast routing protocols challenging. It details the major issues and challenges, including dynamic topology, limited capacity, energy consumption, QoS, security, reliability, and scalability.
Section 3: Taxonomy of Multicast Routing Protocols
Description 3: This section proposes a classification for existing multicast routing protocols into three categories based on their operational layer: network layer, application layer, and MAC layer. It also provides a comprehensive discussion of their features, advantages, and limitations.
Section 4: Network Layer Multicasting versus Application Layer Multicasting versus MAC Layer Multicasting
Description 4: This section compares multicast protocols implemented at different layers of the protocol stack, outlining specific functionalities, benefits, and drawbacks of each approach.
Section 5: Multicast Session Life Cycle
Description 5: This section describes the different stages of a multicast session, including initialization, maintenance, and termination. It discusses various events that can affect the session's performance and the strategies deployed by existing protocols to handle these events.
Section 6: Multicast Routing Protocols in MANETs
Description 6: This section presents detailed descriptions of some existing multicast routing protocols categorized by their operational layers. It includes the design, operation, and discussion of protocols such as ABAM, DDM, BEMRP, WBM, MZRP, MCEDAR, among others.
Section 7: Multicast Evaluation Criteria
Description 7: This section outlines the criteria for evaluating multicast routing protocols, including packet delivery ratio, total overhead, control overhead, average latency, delivery efficiency, reachability, average throughput, and stress.
Section 8: Comparison and Summary
Description 8: This section provides a comparative and qualitative evaluation of the described multicast routing protocols based on various characteristics like routing approach, route acquisition latency, multicast control overhead, and QoS support.
Section 9: Conclusion and Future Work
Description 9: This section summarizes the survey, stating the major issues and challenges in designing multicast routing protocols for MANETs. It also outlines potential future research areas open for exploration, such as interoperability, interaction, heterogeneity, integration, mobility, congestion control, power efficiency, and network coding.
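Section 7 of the outline above lists the usual evaluation criteria for these protocols. As a hedged example of how two of them, packet delivery ratio and normalized overhead, are typically computed from raw simulation counters, consider the sketch below; the counter names are illustrative assumptions and do not come from any particular simulator or paper.

```python
# Illustrative computation of two common multicast evaluation metrics from raw counters.

def multicast_metrics(counters):
    delivered = counters["data_packets_received_by_members"]
    expected  = counters["data_packets_originated"] * counters["group_size"]
    pdr = delivered / expected if expected else 0.0
    total_tx = counters["control_packets_transmitted"] + counters["data_packets_transmitted"]
    normalized_overhead = total_tx / delivered if delivered else float("inf")
    return {"packet_delivery_ratio": pdr, "normalized_overhead": normalized_overhead}

print(multicast_metrics({
    "data_packets_originated": 1000, "group_size": 10,
    "data_packets_received_by_members": 9100,
    "control_packets_transmitted": 4200, "data_packets_transmitted": 5200,
}))
# {'packet_delivery_ratio': 0.91, 'normalized_overhead': ~1.03}
```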
3D Human Motion Editing and Synthesis: A Survey
10
--- paper_title: The EMOTE model for effort and shape paper_content: Human movements include limb gestures and postural attitude. Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture movement qualities needed by animated characters. We advocate that the domain of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components, provides us with valuable parameters for the form and execution of qualitative aspects of movements. Inspired by some tenets shared among LMA proponents, we also point out that Effort and Shape phrasing across movements and the engagement of the whole body are essential aspects to be considered in the search for naturalness in procedurally generated gestures. Finally, we present EMOTE (Expressive MOTion Engine), a 3D character animation system that applies Effort and Shape qualities to independently defined underlying movements and thereby generates more natural synthetic gestures. --- paper_title: Efficient synthesis of physically valid human motion paper_content: Optimization is a promising way to generate new animations from a minimal amount of input data. Physically based optimization techniques, however, are difficult to scale to complex animated characters, in part because evaluating and differentiating physical quantities becomes prohibitively slow. Traditional approaches often require optimizing or constraining parameters involving joint torques; obtaining first derivatives for these parameters is generally an O(D^2) process, where D is the number of degrees of freedom of the character. In this paper, we describe a set of objective functions and constraints that lead to linear time analytical first derivatives. The surprising finding is that this set includes constraints on physical validity, such as ground contact constraints. Considering only constraints and objective functions that lead to linear time first derivatives results in fast per-iteration computation times and an optimization problem that appears to scale well to more complex characters. We show that qualities such as squash-and-stretch that are expected from physically based optimization result from our approach. Our animation system is particularly useful for synthesizing highly dynamic motions, and we show examples of swinging and leaping motions for characters having from 7 to 22 degrees of freedom. --- paper_title: Learning silhouette features for control of human motion paper_content: We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate his orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion.
We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features, Hu moments, and ground truth measurements from a motion capture system. --- paper_title: Improv: a system for scripting interactive actors in virtual worlds paper_content: Improv is a system for the creation of real-time behavior-based animated actors. There have been several recent efforts to build network distributed autonomous agents. But in general these efforts do not focus on the author’s view. To create rich interactive worlds inhabited by believable animated actors, authors need the proper tools. Improv provides tools to create actors that respond to users and to each other in real-time, with personalities and moods consistent with the author’s goals and intentions. Improv consists of two subsystems. The first subsystem is an Animation Engine that uses procedural techniques to enable authors to create layered, continuous, non-repetitive motions and smooth transitions between them. The second subsystem is a Behavior Engine that enables authors to create sophisticated rules governing how actors communicate, change, and make decisions. The combined system provides an integrated set of tools for authoring the "minds" and "bodies" of interactive actors. The system uses an English-style scripting language so that creative experts who are not primarily programmers can create powerful interactive applications. --- paper_title: Learning physics-based motion style with nonlinear inverse optimization paper_content: This paper presents a novel physics-based representation of realistic character motion. The dynamical model incorporates several factors of locomotion derived from the biomechanical literature, including relative preferences for using some muscles more than others, elastic mechanisms at joints due to the mechanical properties of tendons, ligaments, and muscles, and variable stiffness at joints depending on the task. When used in a spacetime optimization framework, the parameters of this model define a wide range of styles of natural human movement. Due to the complexity of biological motion, these style parameters are too difficult to design by hand. To address this, we introduce Nonlinear Inverse Optimization, a novel algorithm for estimating optimization parameters from motion capture data. Our method can extract the physical parameters from a single short motion sequence. Once captured, this representation of style is extremely flexible: motions can be generated in the same style but performing different tasks, and styles may be edited to change the physical properties of the body. --- paper_title: Incomplete motion feature tracking algorithm in video sequences paper_content: To effectively track incomplete motion features, a novel feature tracking algorithm for motion capture is presented. According to feature attributes and relationships among features, extracted features are classified into four types of features. Then different strategies are applied to track different kinds of features. To verify the tracks, a cross correlation test and a predicted 3D model based test are used to test and remove outliers. Experimental results demonstrate the effectiveness of our algorithm. --- paper_title: The EMOTE model for effort and shape paper_content: Human movements include limb gestures and postural attitude.
Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture movement qualities needed by animated characters. We advocate that the domain of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components, provides us with valuable parameters for the form and execution of qualitative aspects of movements. Inspired by some tenets shared among LMA proponents, we also point out that Effort and Shape phrasing across movements and the engagement of the whole body are essential aspects to be considered in the search for naturalness in procedurally generated gestures. Finally, we present EMOTE (Expressive MOTion Engine), a 3D character animation system that applies Effort and Shape qualities to independently defined underlying movements and thereby generates more natural synthetic gestures. --- paper_title: Composable controllers for physics-based character animation paper_content: An ambitious goal in the area of physics-based computer animation is the creation of virtual actors that autonomously synthesize realistic human motions and possess a broad repertoire of lifelike motor skills. To this end, the control of dynamic, anthropomorphic figures subject to gravity and contact forces remains a difficult open problem. We propose a framework for composing controllers in order to enhance the motor abilities of such figures. A key contribution of our composition framework is an explicit model of the “pre-conditions” under which motor controllers are expected to function properly. We demonstrate controller composition with pre-conditions determined not only manually, but also automatically based on Support Vector Machine (SVM) learning theory. We evaluate our composition framework using a family of controllers capable of synthesizing basic actions such as balance, protective stepping when balance is disturbed, protective arm reactions when falling, and multiple ways of standing up after a fall. We furthermore demonstrate these basic controllers working in conjunction with more dynamic motor skills within a prototype virtual stunt-person. Our composition framework promises to enable the community of physics-based animation practitioners to easily exchange motor controllers and integrate them into dynamic characters. --- paper_title: Video-based character animation paper_content: In this paper we introduce a video-based representation for free viewpoint visualization and motion control of 3D character models created from multiple view video sequences of real people. Previous approaches to video-based rendering provide no control of scene dynamics to manipulate, retarget, and create new 3D content from captured scenes. Here we contribute a new approach, combining image based reconstruction and video-based animation to allow controlled animation of people from captured multiple view video sequences. We represent a character as a motion graph of free viewpoint video motions for animation control. We introduce the use of geometry videos to represent reconstructed scenes of people for free viewpoint video rendering. We describe a novel spherical matching algorithm to derive global surface to surface correspondence in spherical geometry images for motion blending and the construction of seamless transitions between motion sequences.
Finally, we demonstrate interactive video-based character animation with real-time rendering and free viewpoint visualization. This approach synthesizes highly realistic character animations with dynamic surface shape and appearance captured from multiple view video of people. --- paper_title: Free-viewpoint video of human actors paper_content: In free-viewpoint video, the viewer can interactively choose his viewpoint in 3-D space to observe the action of a dynamic real-world scene from arbitrary perspectives. The human body and its motion play a central role in most visual media, and its structure can be exploited for robust motion estimation and efficient visualization. This paper describes a system that uses multi-view synchronized video footage of an actor's performance to estimate motion parameters and to interactively re-render the actor's appearance from any viewpoint. The actor's silhouettes are extracted from synchronized video frames via background segmentation and then used to determine a sequence of poses for a 3D human body model. By employing multi-view texturing during rendering, time-dependent changes in the body surface are reproduced in high detail. The motion capture subsystem runs offline, is non-intrusive, yields robust motion parameter estimates, and can cope with a broad range of motion. The rendering subsystem runs at real-time frame rates using ubiquitous graphics hardware, yielding a highly naturalistic impression of the actor. The actor can be placed in virtual environments to create composite dynamic scenes. Free-viewpoint video allows the creation of camera fly-throughs or viewing the action interactively from arbitrary perspectives. --- paper_title: Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces paper_content: Optimization is an appealing way to compute the motion of an animated character because it allows the user to specify the desired motion in a sparse, intuitive way. The difficulty of solving this problem for complex characters such as humans is due in part to the high dimensionality of the search space. The dimensionality is an artifact of the problem representation because most dynamic human behaviors are intrinsically low dimensional with, for example, legs and arms operating in a coordinated way. We describe a method that exploits this observation to create an optimization problem that is easier to solve. Our method utilizes an existing motion capture database to find a low-dimensional space that captures the properties of the desired behavior. We show that when the optimization problem is solved within this low-dimensional subspace, a sparse sketch can be used as an initial guess and full physics constraints can be enabled. We demonstrate the power of our approach with examples of forward, vertical, and turning jumps; with running and walking; and with several acrobatic flips. --- paper_title: Automatic Joint Parameter Estimation from Magnetic Motion Capture Data paper_content: This paper describes a technique for using magnetic motion capture data to determine the joint parameters of an articulated hierarchy. This technique makes it possible to determine limb lengths, joint locations, and sensor placement for a human subject without external measurements. Instead, the joint parameters are inferred with high accuracy from the motion data acquired during the capture session. The parameters are computed by performing a linear least squares fit of a rotary joint model to the input data.
A hierarchical structure for the articulated model can also be determined in situations where the topology of the model is not known. Once the system topology and joint parameters have been recovered, the resulting model can be used to perform forward and inverse kinematic procedures. We present the results of using the algorithm on human motion capture data, as well as validation results obtained with data from a simulation and a wooden linkage of known dimensions. --- paper_title: The Process of Motion Capture: Dealing with the Data paper_content: This paper presents a detailed description of the process of motion capture, whereby sensor information from a performer is transformed into an articulated, hierarchical rigid-body object. We describe the gathering of the data, the real-time construction of a virtual skeleton which a director can use for immediate feedback, and the offline processing which produces the articulated object. This offline process involves a robust statistical estimation of the size of the skeleton and an inverse kinematic optimization to produce the desired joint angle trajectories. Additionally, we discuss a variation on the inverse kinematic optimization which can be used when the standard approach does not yield satisfactory results for the special cases when joint angle consistency is desired between a group of motions. These procedures work well and have been used to produce motions for a number of commercial games. --- paper_title: Mapping optical motion capture data to skeletal motion using a physical model paper_content: Motion capture has become a premier technique for animation of humanlike characters. To facilitate its use, researchers have focused on the manipulation of data for retargeting, editing, combining, and reusing motion capture libraries. In many of these efforts, joint angle plus root trajectories are used as input, although this format requires an inherent mapping from the raw data recorded by many popular motion capture set-ups. In this paper, we propose a novel solution to this mapping problem from 3D marker position data recorded by optical motion capture systems to joint trajectories for a fixed limb-length skeleton using a forward dynamic model. To accomplish the mapping, we attach virtual springs to marker positions located on the appropriate landmarks of a physical simulation and apply resistive torques to the skeleton's joints using a simple controller. For the motion capture samples, joint-angle postures are resolved from the simulation's equilibrium state, based on the internal torques and external forces. Additional constraints, such as foot plants and hand holds, may also be treated as additional forces applied to the system and are a trivial and natural extension to the proposed technique. We present results for our approach as applied to several motion-captured behaviors. --- paper_title: Motion graphs paper_content: In this paper we present a novel method for creating realistic, controllable motion. Given a corpus of motion capture data, we automatically construct a directed graph called a motion graph that encapsulates connections among the database. The motion graph consists both of pieces of original motion and automatically generated transitions. Motion can be generated simply by building walks on the graph. We present a general framework for extracting particular graph walks that meet a user's specifications.
We then show how this framework can be applied to the specific problem of generating different styles of locomotion along arbitrary paths. --- paper_title: Motion Synthesis By Example paper_content: A technique is proposed for creating new animations from a set of representative example motions stored in a motion database. Animations are created by cutting-and-pasting together the example motion segments as required. Motion segments are selected based upon how well they fit into a desired motion and are then automatically tailored for a precise fit. Various fundamental problems associated with the use of motion databases are outlined. A prototype implementation is used to validate the proposed concepts and to explore possible solutions to the aforementioned problems. --- paper_title: Human Motion Synthesis with Optimization-based Graphs paper_content: Continuous constrained optimization is a powerful tool for synthesizing novel human motion segments that are short. Graph-based motion synthesis methods such as motion graphs and move trees are popular ways to synthesize long motions by playing back a sequence of existing motion segments. However, motion graphs only support transitions between similar frames, and move trees only support transitions between the end of one motion segment and the start of another. In this paper, we introduce an optimization-based graph that combines continuous constrained optimization with graph-based motion synthesis. The constrained optimization is used to create a vast number of complex realistic-looking transitions in the graph. The graph can then be used to synthesize long motions with non-trivial transitions that for example allow the character to switch its behavior abruptly while retaining motion naturalness. We also propose to build this graph semi-autonomously by requiring a user to classify generated transitions as acceptable or not and explicitly minimizing the amount of required classifications. This process guarantees the quality consistency of the optimization-based graph at the cost of limited user involvement. --- paper_title: Motion texture: a two-level statistical model for character motion synthesis paper_content: In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion captured data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS) while the texton distribution is represented by a transition matrix indicating how likely each texton is switched to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationship from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion. --- paper_title: Evaluating motion graphs for character animation paper_content: Realistic and directable humanlike characters are an ongoing goal in animation. 
Motion graph data structures hold much promise for achieving this goal; however, the quality of the results obtainable from a motion graph may not be easy to predict from its input motion clips. This article describes a method for using task-based metrics to evaluate the capability of a motion graph to create the set of animations required by a particular application. We examine this capability for typical motion graphs across a range of tasks and environments. We find that motion graph capability degrades rapidly with increases in the complexity of the target environment or required tasks, and that addressing deficiencies in a brute-force manner tends to lead to large, unwieldy motion graphs. The results of this method can be used to evaluate the extent to which a motion graph will fulfill the requirements of a particular application, lessening the risk of the data structure performing poorly at an inopportune moment. The method can also be used to characterize the deficiencies of motion graphs whose performance will not be sufficient, and to evaluate the relative effectiveness of different options for improving those motion graphs. --- paper_title: Motion retargeting and evaluation for VR-based training of free motions paper_content: Virtual reality (VR) has emerged as one of the important and effective tools for education and training. Most VR-based training systems are situation based, where the trainees are trained for discrete decision making in special situations presented by the VR environments. In contrast, this paper discusses the application of VR to a different class of training, for learning free motion, often required in sports and the arts. We propose a VR-based motion-training framework that contains an intuitive motion-guiding interface, posture-oriented motion retargeting, and an evaluation and advice scheme for corrective feedback. Applications of the proposed framework to simple fencing training and a dance imitation game are demonstrated. --- paper_title: Physically based motion transformation paper_content: We introduce a novel algorithm for transforming character animation sequences that preserves essential physical properties of the motion. By using the spacetime constraints dynamics formulation, our algorithm maintains realism of the original motion sequence without sacrificing full user control of the editing process. In contrast to most physically based animation techniques that synthesize motion from scratch, we take the approach of motion transformation as the underlying paradigm for generating computer animations. In doing so, we combine the expressive richness of an input animation sequence with the controllability of spacetime optimization to create a wide range of realistic character animations. The spacetime dynamics formulation also allows editing of intuitive, high-level motion concepts such as the time and placement of footprints, length and mass of various extremities, number of body joints and gravity. Our algorithm is well suited for the reuse of highly-detailed captured motion animations. In addition, we describe a new methodology for mapping a motion between characters with drastically different numbers of degrees of freedom. We use this method to reduce the complexity of the spacetime optimization problems. Furthermore, our approach provides a paradigm for controlling complex dynamic and kinematic systems with simpler ones.
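The motion graph abstracts above all depend on detecting frames of different clips that are similar enough to support an automatically generated transition. The sketch below shows one common simplification of that step, a joint-weighted pose distance compared against a user-chosen threshold; the exact metrics, weights, and thresholds in the cited papers differ, so treat this only as an illustration under stated assumptions.

```python
# Hedged sketch of motion-graph transition detection: candidate edges are created
# between frames of different clips whose (root-aligned) poses are close.
import numpy as np

def pose_distance(frame_a, frame_b, weights):
    # frames: (J, 3) arrays of joint positions in a root-aligned coordinate frame
    return float(np.sqrt((weights * np.sum((frame_a - frame_b) ** 2, axis=1)).sum()))

def find_transition_candidates(clip_a, clip_b, weights, threshold):
    """Return (i, j) frame pairs where a blend from clip_a[i] into clip_b[j] is plausible."""
    candidates = []
    for i, fa in enumerate(clip_a):
        for j, fb in enumerate(clip_b):
            if pose_distance(fa, fb, weights) < threshold:
                candidates.append((i, j))
    return candidates

# Toy data: two "clips" of 10 frames, 15 joints each.
rng = np.random.default_rng(0)
clip_a = rng.normal(size=(10, 15, 3))
clip_b = clip_a + rng.normal(scale=0.05, size=(10, 15, 3))   # nearly identical poses
print(len(find_transition_candidates(clip_a, clip_b, np.ones(15), threshold=1.0)))
```

In practice the accepted pairs are then turned into short blends, and graph connectivity and transition quality trade off against each other through the threshold, which is exactly the tension the evaluation and well-connected-graph papers above examine.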
--- paper_title: Achieving good connectivity in motion graphs paper_content: Motion graphs have been widely successful in the synthesis of human motions. However, the quality of the generated motions depends heavily on the connectivity of the graphs and the quality of transitions in them. Achieving both of these criteria simultaneously though is difficult. Good connectivity requires transitions between less similar poses, while good motion quality requires transitions only between very similar poses. This paper introduces a new method for building motion graphs. The method first builds a set of interpolated motion clips, which contains many more similar poses than the original data set. The method then constructs a well-connected motion graph (wcMG), by using as little of the interpolated motion clip frames as necessary to provide good connectivity and only smooth transitions. Based on experiments, wcMGs outperform standard motion graphs across different measures, generate good quality motions, allow for high responsiveness in interactive control applications, and do not even require post-processing of the synthesized motions. --- paper_title: A physically-based motion retargeting filter paper_content: This article presents a novel constraint-based motion editing technique. On the basis of animator-specified kinematic and dynamic constraints, the method converts a given captured or animated motion to a physically plausible motion. In contrast to previous methods using spacetime optimization, we cast the motion editing problem as a constrained state estimation problem, based on the per-frame Kalman filter framework. The method works as a filter that sequentially scans the input motion to produce a stream of output motion frames at a stable interactive rate. Animators can tune several filter parameters to adjust to different motions, turn the constraints on or off based on their contributions to the final result, or provide a rough sketch (kinematic hint) as an effective way of producing the desired motion. Experiments on various systems show that the technique processes the motions of a human with 54 degrees of freedom, at about 150 fps when only kinematic constraints are applied, and at about 10 fps when both kinematic and dynamic constraints are applied. Experiments on various types of motion show that the proposed method produces remarkably realistic animations. --- paper_title: Motion retargeting in the presence of topological variations paper_content: Research on motion retargeting and synthesis for character animation has been mostly focused on character scale variations. In our recent work we have addressed the motion retargeting problem for characters with slightly different topologies. In this paper we present a new method for retargeting captured motion data to an enhanced character skeleton having a topology that is different from that of the original captured motion. The new topology could include altered hierarchical structures and scaled segments. In order to solve this problem, we propose a framework based on the concept of a motion control net (MCN). This is an external structure analogous to the convex hull of a set of control points defining a parametric curve or a surface patch. The MCN encapsulates the motion characteristics of the character. Retargeting is achieved as a generalized inverse kinematics problem using an external MCN. The retargeting solution requires the dynamic modification of the MCN structure. 
This also allows us to interactively edit the MCN and modify the conditions for the motion analysis. The new method can automatically synthesize new segment information and, by combining the segment motion into the MCN domain with a suitable displacement of control points embedded in the original motion capture sensor data, it can also generate realistic new motions that resemble the motion patterns in the original data. --- paper_title: Fourier principles for emotion-based human figure animation paper_content: This paper describes the method for modeling human figure locomotions with emotions. Fourier expansions of experimental data of actual human behaviors serve as a basis from which the method can interpolate or extrapolate the human locomotions. This means, for instance, that transition from a walk to a run is smoothly and realistically performed by the method. Moreover an individual's character or mood, appearing during the human behaviors, is also extracted by the method. For example, the method gets "briskness" from the experimental data for a "normal" walk and a "brisk" walk. Then the "brisk" run is generated by the method, using another Fourier expansion of the measured data of running. The superposition of these human behaviors is shown as an efficient technique for generating rich variations of human locomotions. In addition, step-length, speed, and hip position during the locomotions are also modeled, and then interactively controlled to get a desired animation. --- paper_title: Motion signal processing paper_content: Techniques from the image and signal processing domain can be successfully applied to designing, modifying, and adapting animated motion. For this purpose, we introduce multiresolution motion filtering, multitarget motion interpolation with dynamic timewarping, waveshaping and motion displacement mapping. The techniques are well-suited for reuse and adaptation of existing motion data such as joint angles, joint coordinates or higher level motion parameters of articulated figures with many degrees of freedom. Existing motions can be modified and combined interactively and at a higher level of abstraction than conventional systems support. This general approach is thus complementary to keyframing, motion capture, and procedural animation. --- paper_title: Path editing technique based on motion graphs paper_content: This paper improved the algorithm of generating transitions and searching for paths, and proposed a path editing method based on motion graphs. With regard to generating transitions, this paper detected the motion clips which can be used to blend automatically by minimizing the average frame distance between blending frames, and proposed an Enhanced Dynamic Time Warping (EDTW) algorithm to solve this optimization problem. Concerning path search in the motion graph, this paper used the area between two curves as the target function and improved the strategy of incremental search and the strategy of branch and bound. The result shows that the proposed algorithm can edit and generate character motions that closely match the paths specified by users. --- paper_title: Motion editing with spacetime constraints paper_content: In this paper, we present a method for editing a pre-existing motion such that it meets new needs yet preserves as much of the original quality as possible. Our approach enables the user to interactively position characters using direct manipulation.
A spacetime constraints solver finds these positions while considering the entire motion. This paper discusses the three central challenges of creating such an approach: defining a constraint formulation that is rich enough to be effective, yet simple enough to afford fast solution; providing a solver that is fast enough to solve the constraint problems at interactive rates; and creating an interface that allows users to specify and visualize changes to entire motions. We present examples with a prototype system that permits interactive motion editing for articulated 3D characters on personal computers. --- paper_title: Snap-together motion: assembling run-time animations paper_content: Many virtual environments and games must be populated with synthetic characters to create the desired experience. These characters must move with sufficient realism, so as not to destroy the visual quality of the experience, yet be responsive, controllable, and efficient to simulate. In this paper we present an approach to character motion called Snap-Together Motion that addresses the unique demands of virtual environments. Snap-Together Motion (STM) preprocesses a corpus of motion capture examples into a set of short clips that can be concatenated to make continuous streams of motion. The resulting process is a simple graph structure that facilitates efficient planning of character motions. A user-guided process selects "common" character poses and the system automatically synthesizes multi-way transitions that connect through these poses. In this manner well-connected graphs can be constructed to suit a particular application, allowing for practical interactive control without the effort of manually specifying all transitions. --- paper_title: A hierarchical approach to interactive motion editing for human-like figures paper_content: This paper presents a technique for adapting existing motion of a human-like character to have the desired features that are specified by a set of constraints. This problem can be typically formulated as a spacetime constraint problem. Our approach combines a hierarchical curve fitting technique with a new inverse kinematics solver. Using the kinematics solver, we can adjust the configuration of an articulated figure to meet the constraints in each frame. Through the fitting technique, the motion displacement of every joint at each constrained frame is interpolated and thus smoothly propagated to frames. We are able to adaptively add motion details to satisfy the constraints within a specified tolerance by adopting a multilevel B-spline representation which also provides a speedup for the interpolation. The performance of our system is further enhanced by the new inverse kinematics solver. We present a closed-form solution to compute the joint angles of a limb linkage. This analytical method greatly reduces the burden of a numerical optimization to find the solutions for full degrees of freedom of a human-like articulated figure. We demonstrate that the technique can be used for retargetting a motion to compensate for geometric variations caused by both characters and environments. Furthermore, we can also use this technique for directly manipulating a motion clip through a graphical interface.
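The closed-form limb solution mentioned in the abstract above can be illustrated in a simplified planar setting. The sketch below solves a two-link (e.g., hip-knee) inverse kinematics problem analytically; it is only a toy version of the general idea, with arbitrary link lengths and an arbitrary target position, and the cited paper's full 3D limb formulation is more involved.

```python
# Planar two-link analytic inverse kinematics: given segment lengths l1 and
# l2, compute the two joint angles that place the end effector at (x, y).
# A minimal sketch of the closed-form limb IK idea; link lengths and the
# target below are arbitrary example values.
import math

def two_link_ik(x, y, l1, l2, bend_positive=True):
    d2 = x * x + y * y
    # Clamp to the reachable workspace to avoid math domain errors.
    cos_elbow = max(-1.0, min(1.0, (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)))
    elbow = math.acos(cos_elbow)
    if not bend_positive:
        elbow = -elbow
    base = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                         l1 + l2 * math.cos(elbow))
    return base, elbow

hip, knee = two_link_ik(0.4, -0.7, l1=0.45, l2=0.43)
print(math.degrees(hip), math.degrees(knee))
```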
--- paper_title: Continuous motion graph for crowd simulation paper_content: This paper presents a new constrained motion synthesis algorithm for crowds. The algorithm is based on a novel continuous motion graph where an infinite number of motion clips can be created by using a path editing technique. Because we can create motions with arbitrary trajectories, we can speed up the motion synthesizing time significantly as well as satisfying constraints exactly. The resulting artifacts of the motion, such as foot skating, are solved by the foot skating solver. The algorithm is tested on several different scenarios. --- paper_title: Verbs and adverbs: multidimensional motion interpolation paper_content: The article describes a system for real-time interpolated animation that addresses some of these problems. Through creating parameterized motions (which the authors call verbs parameterized by adverbs), a single authored verb produces a continuous range of subtle variations of a given motion at real-time rates. As a result, simulated figures alter their actions based on their momentary mood or in response to changes in their goals or environmental stimuli. For example, they demonstrate a walk verb that can show emotions such as happiness and sadness, and demonstrate subtle variations due to walking up or down hill while turning to the left and right. They also describe verb graphs, which act as the glue to assemble verbs and their adverbs into a runtime data structure. Verb graphs provide the means for seamless transition from verb to verb for the simulated figures within an interactive runtime system. Finally they briefly discuss the discrete event simulator that handles the runtime main loop. --- paper_title: Interactive motion generation from examples paper_content: There are many applications that demand large quantities of natural-looking motion. It is difficult to synthesize motion that looks natural, particularly when it is people who must move. In this paper, we present a framework that generates human motions by cutting and pasting motion capture data. Selecting a collection of clips that yields an acceptable motion is a combinatorial problem that we manage as a randomized search of a hierarchy of graphs. This approach can generate motion sequences that satisfy a variety of constraints automatically. The motions are smooth and human-looking. They are generated in real time so that we can author complex motions interactively. The algorithm generates multiple motions that satisfy a given set of constraints, allowing a variety of choices for the animator. It can easily synthesize multiple motions that interact with each other using constraints. This framework allows the extensive re-use of motion capture data for new purposes. --- paper_title: Motion path editing paper_content: In this paper we provide methods that allow for path-based editing of existing motion data. We begin by exploring the concept of path as an abstraction of motion, and show how many of the common motion editing tools fail to provide proper control over this useful property. We provide a simple extension to displacement mapping methods that provide better control over the path in a manner that is easy to implement in current systems. We then extend this simple method to avoid violation of geometric constraints such as foot-skate.
We demonstrate how path transformations work with constraint-based approaches to provide an interactive method for altering the path of a motion. This leads to several useful applications. A path is an abstraction of the positional movement of a character. The path encodes the direction of motion, which is different from, but related to, the orientation of the character as it moves along the path. This abstraction leads to the idea of representing a motion relative to the path, allowing the path to be altered and the motion to be adjusted accordingly. The methods we present maintain the relationship between the motion and the path. This paper is organized as follows. We begin by describing an example of what our techniques are capable of and useful for (Section 2). This discussion both motivates our methods as well as discusses their relationship to existing techniques. We then introduce the abstraction of a path for a motion (Section 3) including methods for automatically creating it. The most basic form of path transformation, presented in Section 4, can create a new motion that follows an arbitrary path and orients the character appropriately. However, this transformation may damage some of the fine details in the motion such as the crispness of footplants. Better results can be obtained by using constraint processing to explicitly maintain details, as described in Section 5. The motion is continuously updated as the user drags portions of the path. Even the most sophisticated of the methods presented works interactively in all of our examples. We conclude by discussing experiments with our prototype implementation. --- paper_title: Motion graphs paper_content: In this paper we present a novel method for creating realistic, controllable motion. Given a corpus of motion capture data, we automatically construct a directed graph called a motion graph that encapsulates connections among the database. The motion graph consists both of pieces of original motion and automatically generated transitions. Motion can be generated simply by building walks on the graph. We present a general framework for extracting particular graph walks that meet a user's specifications. We then show how this framework can be applied to the specific problem of generating different styles of locomotion along arbitrary paths. --- paper_title: Using an Intermediate Skeleton and Inverse Kinematics for Motion Retargeting paper_content: In this paper, we present a new method for solving the Motion Retargeting Problem, by using an intermediate skeleton. This allows us to convert movements between hierarchically and geometrically different characters. An Inverse Kinematics engine is then used to enforce Cartesian constraints while staying as close as possible to the captured motion. --- paper_title: Improv: a system for scripting interactive actors in virtual worlds paper_content: Improv is a system for the creation of real-time behavior-based animated actors. There have been several recent efforts to build network distributed autonomous agents. But in general these efforts do not focus on the author's view. To create rich interactive worlds inhabited by believable animated actors, authors need the proper tools. Improv provides tools to create actors that respond to users and to each other in real-time, with personalities and moods consistent with the author's goals and intentions. Improv consists of two subsystems.
The first subsystem is an Animation Engine that uses procedural techniques to enable authors to create layered, continuous, non-repetitive motions and smooth transitions between them. The second subsystem is a Behavior Engine that enables authors to create sophisticated rules governing how actors communicate, change, and make decisions. The combined system provides an integrated set of tools for authoring the "minds" and "bodies" of interactive actors. The system uses an English-style scripting language so that creative experts who are not primarily programmers can create powerful interactive applications. --- paper_title: Style translation for human motion paper_content: Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame. --- paper_title: Automated derivation of behavior vocabularies for autonomous humanoid motion paper_content: In this paper we address the problem of automatically deriving vocabularies of motion modules from human motion data, taking advantage of the underlying spatio-temporal structure in motion. We approach this problem with a data-driven methodology for modularizing a motion stream (or time-series of human motion) into a vocabulary of parameterized primitive motion modules and a set of meta-level behaviors characterizing extended combinations of the primitives. Central to this methodology is the discovery of spatio-temporal structure in a motion stream. We estimate this structure by extending an existing nonlinear dimension reduction technique, Isomap, to handle motion data with spatial and temporal dependencies. The motion vocabularies derived by our methodology provide a substrate of autonomous behavior and can be used in a variety of applications. We demonstrate the utility of derived vocabularies for the application of synthesizing new humanoid motion that is structurally similar to the original demonstrated motion. --- paper_title: Animating by multi-level sampling paper_content: We describe a method for synthesizing joint angle and translation data based on the information in motion capture data. The synthetic data is realistic not only in that it resembles the original training data, but in that it has random variations that are statistically similar to what one would find in repeated measurements of the motion. To achieve this result, the training data is broken into frequency bands using a wavelet decomposition, and the information in these bands is used to create the synthetic data one frequency band at a time. The method takes into account the fact that there are correlations among numerous features of the data. For example, a point characterized by a particular time and frequency band will depend upon points close to it in time in other frequency bands. Such correlations are modeled with a kernel-based representation of the joint probability distributions of the features.
The data is synthesized by sampling from these densities and improving the results using a new iterative maximization technique. We have applied this technique to the synthesis of joint angle and translation data of a wallaby hopping on a treadmill. The synthetic data was used to animate characters that have limbs proportional to the wallaby. --- paper_title: Learning Statistical Models of Human Motion paper_content: Non-linear statistical models of deformation provide methods to learn a priori shape and deformation for an object or class of objects by example. This paper extends these models of deformation to that of motion by augmenting the discrete representation of piecewise nonlinear principle component analysis of shape with a markov chain which represents the temporal dynamics of the model. In this manner, mean trajectories can be learnt and reproduced for either the simulation of movement or for object tracking. This paper demonstrates the use of these techniques in learning human motion from capture data. --- paper_title: Physically valid statistical models for human motion generation paper_content: This article shows how statistical motion priors can be combined seamlessly with physical constraints for human motion modeling and generation. The key idea of the approach is to learn a nonlinear probabilistic force field function from prerecorded motion data with Gaussian processes and combine it with physical constraints in a probabilistic framework. In addition, we show how to effectively utilize the new model to generate a wide range of natural-looking motions that achieve the goals specified by users. Unlike previous statistical motion models, our model can generate physically realistic animations that react to external forces or changes in physical quantities of human bodies and interaction environments. We have evaluated the performance of our system by comparing against ground-truth motion data and alternative methods. --- paper_title: Gaussian Process Dynamical Models for Human Motion paper_content: We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space.
We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces. --- paper_title: Optimized keyframe extraction for 3D character animations paper_content: In this paper, we propose a new method to automatically extract keyframes from animation sequences. Our method can be applied equally to both skeletal and mesh animations. It uses animation saliency computed on the original data to help select the group of keyframes that can reconstruct the input animation with less perception error. For computational efficiency, we perform nonlinear dimension reduction using locally linear embedding and then carry out the optimal search in a much lower-dimensional space. With this approach, reconstruction of the animation from the extracted keyframes shows much better results as compared with earlier approaches. --- paper_title: Optimization-based key frame extraction for motion capture animation paper_content: In this paper, we present a new solution for extracting key frames from motion capture data using an optimization algorithm to obtain compact and sparse key frame data that can represent the original dense human body motion capture animation. The use of the genetic algorithm helps determine the optimal solution with global exploration capability while the use of a probabilistic simplex method helps expedite the speed of convergence. By finding the chromosome that maximizes the fitness function, the algorithm provides the optimal number of key frames as well as a low reconstruction error with an ordinary interpolation technique. The reconstruction error is computed between the original motion and the reconstructed one by the weighted differences of joint positions and velocities. The resulting set of key frames is obtained by iterative application of the algorithm with initial populations generated randomly and intelligently. We also present experiments which demonstrate that the method can effectively extract key frames with a high compression ratio and reconstruct all other non-key frames with high quality. --- paper_title: Motion capture assisted animation: texturing and synthesis paper_content: We discuss a method for creating animations that allows the animator to sketch an animation by setting a small number of keyframes on a fraction of the possible degrees of freedom. Motion capture data is then used to enhance the animation. Detail is added to degrees of freedom that were keyframed, a process we call texturing. Degrees of freedom that were not keyframed are synthesized. The method takes advantage of the fact that joint motions of an articulated figure are often correlated, so that given an incomplete data set, the missing degrees of freedom can be predicted from those that are present. --- paper_title: 3D Human Motion Synthesis based on Nonlinear Manifold Learning paper_content: Due to the popularity of optical motion capture systems, more realistic human motion data can be acquired easily and widely used in various applications such as video games, animation films, sports simulation and virtual reality.
This paper proposes a framework and algorithm for 3D human motion synthesis based on nonlinear manifold learning. In this framework, high-dimensional motion samples are mapped into a low-dimensional manifold with a nonlinear dimensionality reduction method to obtain the intrinsic representation of motion semantic features. Furthermore, the sample which is generated by user interactions in the low-dimensional manifold can be reconstructed to obtain a 3D motion sequence with a new motion semantic feature by reverse mapping. The experimental results show that the method proposed in this paper can not only precisely control the physical features of motions (such as the location of a specific joint), but can also be used to synthesize new motion data with abstract motion semantics, such as motion styles. --- paper_title: The human motion database: A cognitive and parametric sampling of human motion paper_content: Motion databases have a strong potential to guide progress in the field of machine recognition and motion-based animation. Existing databases either have a very loose structure that does not sample the domain according to any controlled methodology or too few action samples which limits their potential to quantitatively evaluate the performance of motion-based techniques. The controlled sampling of the motor domain in the database may lead investigators to identify the fundamental difficulties of motion cognition problems and allow the addressing of these issues in a more objective way. In this paper, we describe the construction of our Human Motion Database using controlled sampling methods (parametric and cognitive sampling) to obtain the structure necessary for the quantitative evaluation of several motion-based research problems. The Human Motion Database is organized into several components: the praxicon dataset, the cross-validation dataset, the generalization dataset, the compositionality dataset, and the interaction dataset. The main contributions of this paper include (1) a survey of human motion databases describing data sources related to motion synthesis and analysis problems, (2) a sampling methodology that takes advantage of a systematic controlled capture, denoted as cognitive sampling and parametric sampling, and (3) a novel structured motion database organized into several datasets addressing a number of aspects in the motion domain. --- paper_title: Perception of Human Motion With Different Geometric Models paper_content: Human figures have been animated using a variety of geometric models, including stick figures, polygonal models and NURBS-based models with muscles, flexible skin or clothing. This paper reports on experimental results indicating that a viewer's perception of motion characteristics is affected by the geometric model used for rendering. Subjects were shown a series of paired motion sequences and asked if the two motions in each pair were the same or different. The motion sequences in each pair were rendered using the same geometric model. For the three types of motion variation tested, sensitivity scores indicate that subjects were better able to observe changes with the polygonal model than they were with the stick-figure model. --- paper_title: Hybrid control for interactive character animation paper_content: We implement a framework for animating interactive characters by combining kinematic animation with physical simulation. The combination of animation techniques allows the characters to exploit the advantages of each technique.
For example, characters can perform natural-looking kinematic gaits and react dynamically to unexpected situations. Kinematic techniques such as those based on motion capture data can create very natural-looking animation. However, motion capture based techniques are not suitable for modeling the complex interactions between dynamically interacting characters. Physical simulation, on the other hand, is well suited for such tasks. Our work develops kinematic and dynamic controllers and transition methods between the two control methods for interactive character animation. In addition, we utilize the motion graph technique to develop complex kinematic animation from shorter motion clips as a method of kinematic control. --- paper_title: Dynamic response for motion capture animation paper_content: Human motion capture embeds rich detail and style which is difficult to generate with competing animation synthesis technologies. However, such recorded data requires principled means for creating responses in unpredicted situations, for example reactions immediately following impact. This paper introduces a novel technique for incorporating unexpected impacts into a motion capture-driven animation system through the combination of a physical simulation which responds to contact forces and a specialized search routine which determines the best plausible re-entry into motion library playback following the impact. Using an actuated dynamic model, our system generates a physics-based response while connecting motion capture segments. Our method allows characters to respond to unexpected changes in the environment based on the specific dynamic effects of a given contact while also taking advantage of the realistic movement made available through motion capture. We show the results of our system under various conditions and with varying responses using martial arts motion capture as a testbed. --- paper_title: An Efficient Keyframe Extraction from Motion Capture Data paper_content: This paper proposes a keyframe extraction method based on a novel layered curve simplification algorithm for motion capture data. Bone angles are employed as motion features and keyframe candidates can be selected based on them. After that, the layered curve simplification algorithm will be used to refine those candidates and the keyframe collection can be gained. To meet different requirements for compression and level of detail of motion abstraction, adaptive extraction parameters are also applied. The experiments demonstrate that our method can not only compress and summarize the motion capture data efficiently, but also keep the consistency of keyframe collection between similar human motion sequences, which is of great benefit to further motion data retrieval or editing. --- paper_title: A Hierarchical Model Incorporating Segmented Regions and Pixel Descriptors for Video Background Subtraction paper_content: Background subtraction is important for detecting moving objects in videos. Currently, there are many approaches to performing background subtraction. However, they usually neglect the fact that the background images consist of different objects whose conditions may change frequently. In this paper, a novel hierarchical background model is proposed based on segmented background images. It first segments the background images into several regions by the mean-shift algorithm. Then, a hierarchical model, which consists of the region models and pixel models, is created. 
The region model is a kind of approximate Gaussian mixture model extracted from the histogram of a specific region. The pixel model is based on the cooccurrence of image variations described by histograms of oriented gradients of pixels in each region. Benefiting from the background segmentation, the region models and pixel models corresponding to different regions can be set to different parameters. The pixel descriptors are calculated only from neighboring pixels belonging to the same object. The experiments are carried out with a video database to demonstrate the effectiveness, which is applied to both static and dynamic scenes by comparing it with some well-known background subtraction methods. --- paper_title: Incomplete motion feature tracking algorithm in video sequences paper_content: To effectively track incomplete motion features, a novel feature tracking algorithm for motion capture is presented. According to feature attributes and relationship among features, extracted features are classified as four types of features. Then different strategies are applied to track different kinds of features. To verify the tracks, cross correlation test and predicted 3D model based test are used to test and remove outliers. Experimental results demonstrate the effectiveness of our algorithm. --- paper_title: Group Animation Based on Multiple Autonomous Agents paper_content: Group animation is a continuous challenge in computer animation. In this paper, a group animation framework based on multiple autonomous agents is presented. Each animated character in a group is modeled as an autonomous agent, which can perceive information from the virtual environment, generate appropriate intentions, manage rational behaviors, and create motions to realize the intention. Contrary to traditional intelligent animated character techniques, motions for the animated character are generated using motion capture data, instead of modeling the complex motion generation mechanism. First a fundamental motion library is built using a motion capture system, then appropriate fundamental motions are selected from the fundamental motion library, and desired motions are synthesized from these fundamental ones. The animated characters manage their movement autonomously in the virtual environment. To produce the required group animation, animators only need to specify some simple information and record the useful group motion. The experiment shows the effectiveness of this framework. --- paper_title: Interactive generation of falling motions paper_content: Interactive generation of falling motions for virtual characters with realistic responses to unexpected push, hit or collision with the environment is of interest for many applications, such as computer games, film production, and virtual training environments. In this paper, we propose a new method to simulate protective behaviors in response to the ways a human may fall to the ground as well as incorporate the reactive motions into motion capture animation. It is based on simulated trajectory prediction and biomechanics-inspired adjustment. According to the external perturbations, our system predicts a motion trajectory and uses it to select a desired transition-to sequence. At the same time, physically generated falling motions will fill in the gap between the two motion capture sequences before and after the transition.
Utilizing a parallel simulation, our method is able to predict a character's motion trajectory in real time under dynamics, which ensures that the character moves towards the target sequence and makes the character's behavior more life-like. Our controller is designed to generate physically plausible motion following an upcoming motion with adjustment from biomechanics rules, which is key to avoiding an unconscious look for a character during the transition. Based on a relatively small motion database, our system is effective in generating various interactive falling behaviors. --- paper_title: Style synthesis and editing of motion data in non-linear subspace paper_content: A new framework for automatic synthesis and editing of human motion style based on 3D human motion data and isometric feature mapping (ISOMAP) dimension reduction was proposed. In this framework, the generalized ISOMAP was extended to process out-of-sample data by building an optimal mapping function from the input high dimensional space to the embedding low dimensional nonlinear stylized space. The decomposable generative model was used to learn separate style parameters and content parameters of human motions. New motions with new style could be edited and reconstructed by adjusting and mapping these parameters from subspace to original motion data space. The experimental results show that the proposed method can generate new complex motion styles automatically and accurately in virtual reality scenes. --- paper_title: Real-time density-based crowd simulation paper_content: Virtual characters in games and simulations often need to plan visually convincing paths through a crowded environment. This paper describes how crowd density information can be used to guide a large number of characters through a crowded environment. Crowd density information helps characters avoid congested routes that could lead to traffic jams. It also encourages characters to use a wide variety of routes to reach their destination. Our technique measures the desirability of a route by combining distance information with crowd density information. We start by building a navigation mesh for the walkable regions in a polygonal two-dimensional (2-D) or multilayered three-dimensional (3-D) environment. The skeleton of this navigation mesh is the medial axis. Each walkable region in the navigation mesh maintains an up-to-date density value. This density value is equal to the area occupied by all the characters inside a given region divided by the total area of this region. These density values are mapped onto the medial axis to form a weighted graph. An A* search on this graph yields a backbone path for each character, and forces are used to guide the characters through the weighted environment. The characters periodically replan their routes as the density values are updated. Our experiments show that we can compute congestion-avoiding paths for tens of thousands of characters in real-time. --- paper_title: Obscuring length changes during animated motion paper_content: In this paper we examine to what extent the lengths of the links in an animated articulated figure can be changed without the viewer being aware of the change. This is investigated in terms of a framework that emphasizes the role of attention in visual perception.
We conducted a set of five experiments to establish bounds for the sensitivity to changes in length as a function of several parameters and the amount of attention available. We found that while length changes of 3% can be perceived when the relevant links are given full attention, changes of over 20% can go unnoticed when attention is not focused in this way. These results provide general guidelines for algorithms that produce or process character motion data and also bring to light some of the potential gains that stand to be achieved with attention-based algorithms. --- paper_title: Controllable real-time locomotion using mobility maps paper_content: Graph-based approaches for sequencing motion capture data have produced some of the most realistic and controllable character motion to date. Most previous graph-based approaches have employed a run-time global search to find paths through the motion graph that meet user-defined constraints such as a desired locomotion path. Such searches do not scale well to large numbers of characters. In this paper, we describe a locomotion approach that benefits from the realism of graph-based approaches while maintaining basic user control and scaling well to large numbers of characters. Our approach is based on precomputing multiple least cost sequences from every state in a state-action graph. We store these precomputed sequences in a data structure called a mobility map and perform a local search of this map at run-time to generate motion sequences in real time that achieve user constraints in a natural manner. We demonstrate the quality of the motion through various example locomotion tasks including target tracking and collision avoidance. We demonstrate scalability by animating crowds of up to 150 rendered articulated walking characters at real-time rates. --- paper_title: An efficient search algorithm for motion data using weighted PCA paper_content: Good motion data is costly to create. Such an expense often makes the reuse of motion data through transformation and retargetting a more attractive option than creating new motion from scratch. Reuse requires the ability to search automatically and efficiently a growing corpus of motion data, which remains a difficult open problem. We present a method for quickly searching long, unsegmented motion clips for subregions that most closely match a short query clip. Our search algorithm is based on a weighted PCA-based pose representation that allows for flexible and efficient pose-to-pose distance calculations. We present our pose representation and the details of the search algorithm. We evaluate the performance of a prototype search application using both synthetic and captured motion data. Using these results, we propose ways to improve the application's performance. The results inform a discussion of the algorithm's good scalability characteristics. --- paper_title: Compression of motion capture databases paper_content: We present a lossy compression algorithm for large databases of motion capture data. We approximate short clips of motion using Bezier curves and clustered principal component analysis. This approximation has a smoothing effect on the motion. Contacts with the environment (such as foot strikes) have important detail that needs to be maintained. 
We compress these environmental contacts using a separate, JPEG-like compression algorithm and ensure these contacts are maintained during decompression. Our method can compress 6 hours 34 minutes of human motion capture from 1080 MB of data into 35.5 MB with little visible degradation. Compression and decompression are fast: our research implementation can decompress at about 1.2 milliseconds/frame, 7 times faster than real-time (for 120 frames per second animation). Our method also yields a smaller compressed representation for the same error or produces smaller error for the same compressed size. --- paper_title: Human Motion Database with a Binary Tree and Node Transition Graphs paper_content: Databases of human motion have been widely used for recognizing human motion and synthesizing humanoid motions. In this paper, we propose a data structure for storing and extracting human motion data and demonstrate that the database can be applied to the recognition and motion synthesis problems in robotics. We develop an efficient method for building a human motion database from a collection of continuous, multi-dimensional motion clips. The database consists of a binary tree representing the hierarchical clustering of the states observed in the motion clips, as well as node transition graphs representing the possible transitions among the nodes in the binary tree. Using databases constructed from real human motion data, we demonstrate that the proposed data structure can be used for human motion recognition, state estimation and prediction, and robot motion planning. --- paper_title: Efficient content-based retrieval of motion capture data paper_content: The reuse of human motion capture data to create new, realistic motions by applying morphing and blending techniques has become an important issue in computer animation. This requires the identification and extraction of logically related motions scattered within some data set. Such content-based retrieval of motion capture data, which is the topic of this paper, constitutes a difficult and time-consuming problem due to significant spatio-temporal variations between logically related motions. In our approach, we introduce various kinds of qualitative features describing geometric relations between specified body points of a pose and show how these features induce a time segmentation of motion capture data streams. By incorporating spatio-temporal invariance into the geometric features and adaptive segments, we are able to adopt efficient indexing methods allowing for flexible and efficient content-based retrieval and browsing in huge motion capture databases. Furthermore, we obtain an efficient preprocessing method substantially accelerating the cost-intensive classical dynamic time warping techniques for the time alignment of logically similar motion data streams. We present experimental results on a test data set of more than one million frames, corresponding to 180 minutes of motion. The linearity of our indexing algorithms guarantees the scalability of our results to much larger data sets. --- paper_title: Human Motion Retrieval from Hand-Drawn Sketch paper_content: The rapid growth of motion capture data increases the importance of motion retrieval. The majority of the existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from the large motion clip database. In this work, a novel sketching interface for defining the query is presented.
This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users' expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation, and avoids the CPU/processing intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach can retrieve the motions well, and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. The user study shows that the proposed system can be a useful tool for inputting motion queries if the users are familiar with it. Finally, an application of generating a 3D animation from a hand-drawn comic strip is demonstrated. --- paper_title: Perceptual metrics for character animation: sensitivity to errors in ballistic motion paper_content: Motion capture data and techniques for blending, editing, and sequencing that data can produce rich, realistic character animation; however, the output of these motion processing techniques sometimes appears unnatural. For example, the motion may violate physical laws or reflect unreasonable forces from the character or the environment. While problems such as these can be fixed, doing so is not yet feasible in real-time environments. We are interested in developing ways to estimate perceived error in animated human motion so that the output quality of motion processing techniques can be better controlled to meet user goals. This paper presents results of a study of user sensitivity to errors in animated human motion. Errors were systematically added to human jumping motion, and the ability of subjects to detect these errors was measured. We found that users were able to detect motion with errors, and noted some interesting trends: errors in horizontal velocity were easier to detect than errors in vertical velocity, and added accelerations were easier to detect than added decelerations. On the basis of our results, we propose a perceptually based metric for measuring errors in ballistic human motion. --- paper_title: Automated extraction and parameterization of motions in large data sets paper_content: Large motion data sets often contain many variants of the same kind of motion, but without appropriate tools it is difficult to fully exploit this fact. This paper provides automated methods for identifying logically similar motions in a data set and using them to build a continuous and intuitively parameterized space of motions. To find logically similar motions that are numerically dissimilar, our search method employs a novel distance metric to find "close" motions and then uses them as intermediaries to find more distant motions. Search queries are answered at interactive speeds through a precomputation that compactly represents all possibly similar motion segments. Once a set of related motions has been extracted, we automatically register them and apply blending techniques to create a continuous space of motions. Given a function that defines relevant motion parameters, we present a method for extracting motions from this space that accurately possess new parameters requested by the user.
Our algorithm extends previous work by explicitly constraining blend weights to reasonable values and having a run-time cost that is nearly independent of the number of example motions. We present experimental results on a test data set of 37,000 frames, or about ten minutes of motion sampled at 60 Hz. --- paper_title: A puppet interface for retrieval of motion capture data paper_content: Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion and those of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors. --- paper_title: 3D motion retrieval with motion index tree paper_content: With the development of Motion capture techniques, more and more 3D motion libraries become available. In this paper, we present a novel content-based 3D motion retrieval algorithm. We partition the motion library and construct a motion index tree based on a hierarchical motion description. The motion index tree serves as a classifier to determine the sub-library that contains the promising similar motions to the query sample. The Nearest Neighbor rule-based dynamic clustering algorithm is adopted to partition the library and construct the motion index tree. The similarity between the sample and the motion in the sub-library is calculated through elastic match. To improve the efficiency of the similarity calculation, an adaptive clustering-based key-frame extraction algorithm is adopted. The experiment demonstrates the effectiveness of this algorithm. --- paper_title: A data-driven approach to quantifying natural human motion paper_content: In this paper, we investigate whether it is possible to develop a measure that quantifies the naturalness of human motion (as defined by a large database). Such a measure might prove useful in verifying that a motion editing operation had not destroyed the naturalness of a motion capture clip or that a synthetic motion transition was within the space of those seen in natural human motion. We explore the performance of mixture of Gaussians (MoG), hidden Markov models (HMM), and switching linear dynamic systems (SLDS) on this problem. We use each of these statistical models alone and as part of an ensemble of smaller statistical models. We also implement a Naive Bayes (NB) model for a baseline comparison. We test these techniques on motion capture data held out from a database, keyframed motions, edited motions, motions with noise added, and synthetic motion transitions. We present the results as receiver operating characteristic (ROC) curves and compare the results to the judgments made by subjects in a user study. 
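Several of the retrieval and parameterization papers above depend on some form of elastic time alignment between clips (dynamic time warping, or an "elastic match"). The following minimal dynamic time warping sketch with a plain Euclidean pose distance illustrates the principle; the pose dimensionality, clip lengths, and random data are placeholders rather than settings from any cited work.

```python
# Minimal dynamic time warping between two clips, where each frame is a
# pose vector (e.g., stacked joint angles) and frames are compared with a
# plain Euclidean distance.  Clip lengths and pose dimensionality below are
# synthetic placeholders.
import numpy as np

def dtw_cost(clip_a, clip_b):
    n, m = len(clip_a), len(clip_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(clip_a[i - 1] - clip_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of clip_a
                                 cost[i, j - 1],      # skip a frame of clip_b
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m]

rng = np.random.default_rng(0)
clip_a = rng.normal(size=(40, 30))   # 40 frames, 30-dimensional poses
clip_b = rng.normal(size=(55, 30))   # 55 frames to align against
print(dtw_cost(clip_a, clip_b))
```

The quadratic cost of this naive version is exactly why the indexing and segmentation schemes described in the retrieval papers above precompute coarser features before resorting to frame-level alignment.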
--- paper_title: A system for analyzing and indexing human-motion databases paper_content: We demonstrate a data-driven approach for representing, compressing, and indexing human-motion databases. Our modeling approach is based on piecewise-linear components that are determined via a divisive clustering method. Selection of the appropriate linear model is determined automatically via a classifier using a subspace of the most significant, or principal, features (markers). We show that, after offline training, our model can accurately estimate and classify human motions. We can also construct indexing structures for motion sequences according to their transition trajectories through these linear components. Our method not only provides indices for whole and/or partial motion sequences, but also serves as a compressed representation for the entire motion database. Our method also tends to be immune to temporal variations, and thus avoids the expense of time-warping. --- paper_title: Group motion graphs paper_content: We introduce Group Motion Graphs, a data-driven animation technique for groups of discrete agents, such as flocks, herds, or small crowds. Group Motion Graphs are conceptually similar to motion graphs constructed from motion-capture data, but have some important differences: we assume simulated motion; transition nodes are found by clustering group configurations from the input simulations; and clips to join transitions are explicitly constructed via constrained simulation. Graphs built this way offer known bounds on the trajectories that they generate, making it easier to search for particular output motions. The resulting animations show realistic motion at significantly reduced computational cost compared to simulation, and improved control. --- paper_title: Capturing and animating skin deformation in human motion paper_content: During dynamic activities, the surface of the human body moves in many subtle but visually significant ways: bending, bulging, jiggling, and stretching. We present a technique for capturing and animating those motions using a commercial motion capture system and approximately 350 markers. Although the number of markers is significantly larger than that used in conventional motion capture, it is only a sparse representation of the true shape of the body. We supplement this sparse sample with a detailed, actor-specific surface model. The motion of the skin can then be computed by segmenting the markers into the motion of a set of rigid parts and a residual deformation (approximated first as a quadratic transformation and then with radial basis functions). We demonstrate the power of this approach by capturing flexing muscles, high frequency motions, and abrupt decelerations on several actors. We compare these results both to conventional motion capture and skinning and to synchronized video of the actors. --- paper_title: Discover Novel Visual Categories From Dynamic Hierarchies Using Multimodal Attributes paper_content: Learning novel visual categories from observations and experiences in unexplored environments is a vitally important cognitive ability for human beings. A dynamic category hierarchy that is an inherent structure in a human mind is a key component for this ability. This paper develops a framework to build dynamic category hierarchy based on object attributes and a topic model.
Since humans tend to utilize multimodal information to learn novel categories, we also develop an algorithm to learn multimodal object attributes from multimodal data. The new multimodal attributes can describe objects efficiently and can generalize from learned categories to novel ones. By comparison with a state-of-the-art unimodal attribute, the multimodal attributes can achieve 4%-19% improvements on average. We also develop a constrained topic model, which can accurately construct category hierarchies for large-scale categories. Based on them, the novel framework can effectively detect novel categories and relate them to known categories for further category learning. Extensive experiments are conducted using a public multimodal dataset, i.e., color and point cloud data, to evaluate the multimodal attributes and the dynamic category hierarchy. The experimental results show the effectiveness of the multimodal attributes in describing objects and the satisfactory performance of the dynamic category hierarchy in discovering novel categories. By comparison with state-of-the-art methods, the dynamic category hierarchy achieves 7% improvements. --- paper_title: A robust method for analyzing the physical correctness of motion capture data paper_content: The physical correctness of motion capture data is important for human motion analysis and athlete training. However, until now there has been little work that wholly explores this problem of analyzing the physical correctness of motion capture data. In this paper, we carefully discuss this problem and solve two major issues in it. Firstly, a new form of the Newton-Euler equations, encoded by quaternions and Euler angles and well suited for analyzing motion capture data, is proposed. Secondly, a robust optimization method is proposed to correct the motion capture data to satisfy the physical constraints. We demonstrate the advantage of our method with several experiments. ---
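As a rough illustration of the physical-correctness checks discussed in the last reference (and of the ballistic-error sensitivity study cited earlier), the sketch below tests whether a center-of-mass trajectory is consistent with gravity during airborne frames. The frame rate, tolerance, and synthetic jump trajectory are assumptions made for the example; they are not taken from the cited methods, which use full Newton-Euler formulations rather than this center-of-mass shortcut.

```python
# Toy physical-plausibility test: during airborne (ballistic) frames the
# center-of-mass acceleration of a captured motion should stay close to
# gravity.  The frame rate, tolerance, and synthetic jump trajectory are
# assumptions for this example only.
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])   # m/s^2, y-up convention

def ballistic_violations(com_positions, fps, airborne, tol=1.5):
    dt = 1.0 / fps
    velocity = np.gradient(com_positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    error = np.linalg.norm(acceleration - GRAVITY, axis=1)
    return [i for i in range(len(error)) if airborne[i] and error[i] > tol]

# Synthetic parabolic jump sampled at 120 fps for half a second.
fps, frames = 120, 60
t = np.arange(frames) / fps
com = np.stack([2.0 * t,                             # forward drift
                1.0 + 3.0 * t - 0.5 * 9.81 * t**2,   # ballistic height
                np.zeros_like(t)], axis=1)
airborne = np.ones(frames, dtype=bool)
# Interior frames should pass; the first and last frames may be flagged only
# because of the one-sided finite differences at the boundaries.
print(ballistic_violations(com, fps, airborne))
```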
Title: 3D Human Motion Editing and Synthesis: A Survey
Section 1: Introduction
Description 1: Discusses the challenges and difficulties in achieving realistic 3D human motion data and introduces the four categories of motion synthesis methods, with a focus on motion capture data-driven methods.
Section 2: Classification of 3D Human Motion Synthesis
Description 2: Elaborates on the four main categories of motion synthesis methods: manual methods, physics-based methods, video-based methods, and motion capture data-driven methods, with detailed analysis of each.
Section 3: Motion Capture Data Representation
Description 3: Describes the various formats and hierarchical structures used to store and represent motion capture data, as well as the processing stages required to obtain structured information.
Section 4: Motion Capture Data-Driven Methods
Description 4: Details the motion capture data-driven methods, including the techniques for motion editing and synthesis to generate realistic motion, and discusses their advantages and limitations.
Section 5: Motion Editing Methods
Description 5: Discusses the different motion editing methods that focus on modifying attributes of motion data to meet specific animation requirements while maintaining other attributes unchanged.
Section 6: Motion Synthesis Methods
Description 6: Explains the concept of motion synthesis, focusing on techniques to synthesize continuous, long-time motions that conform to constraints and user requirements.
Section 7: Motion Synthesis Methods Based on Motion Graph
Description 7: Describes motion graph construction, detection of similarity, generation of transitions, graph construction methods, and goal-achieved graph search, including several notable implementations.
Section 8: The Statistical Motion Synthesis
Description 8: Discusses statistical motion synthesis techniques, covering various statistical models and their application to human motion synthesis with a focus on different approaches and their effectiveness.
Section 9: Other Motion Synthesis Methods Based on Motion Capture Data
Description 9: Explores additional motion synthesis methods that leverage motion capture data, aside from motion graphs and statistical models, for synthesizing detailed and constraint-satisfied motion.
Section 10: Discussion
Description 10: Provides an overview of the developments in 3D human motion synthesis technology, identifies ongoing problems and new research directions, and discusses solutions for database organization, compression, search, evaluation, group motion synthesis, and reactive human motion synthesis.
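Section 7 of this outline centres on motion graphs: detecting similar poses, generating transitions and searching the graph toward a goal. The sketch below shows that pipeline in its simplest form; the Euclidean pose distance, the transition threshold and the use of NumPy and networkx are assumptions made for illustration rather than any specific paper's algorithm.

```python
# Minimal sketch of a motion graph: nodes are poses, edges are either consecutive
# frames of a clip or candidate transitions between sufficiently similar poses.
import numpy as np
import networkx as nx

def build_motion_graph(poses, transition_threshold=0.5):
    """poses: (n_frames, pose_dim) array of joint angles or marker positions."""
    g = nx.DiGraph()
    n = len(poses)
    g.add_edges_from((i, i + 1) for i in range(n - 1))        # natural playback order
    # Pairwise pose similarity; O(n^2) is fine for a sketch, real systems prune heavily.
    dists = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 10 and dists[i, j] < transition_threshold:
                g.add_edge(i, j)                              # candidate transition edge
    return g

poses = np.cumsum(np.random.default_rng(1).normal(scale=0.05, size=(300, 20)), axis=0)
graph = build_motion_graph(poses)
path = nx.shortest_path(graph, source=0, target=len(poses) - 1)  # goal-directed search
```

Real systems prune the pairwise comparison, blend the transition clips and search with cost functions tied to user constraints, but the underlying graph structure is the same.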
A Review of Multi-material and Composite Parts Production by Modified Additive Manufacturing Methods
13
--- paper_title: Additive Manufacturing Technologies: Rapid Prototyping to Direct Digital Manufacturing paper_content: Additive Manufacturing Technologies: Rapid Prototyping to Direct Digital Manufacturing deals with various aspects of joining materials to form parts. Additive Manufacturing (AM) is an automated technique for direct conversion of 3D CAD data into physical objects using a variety of approaches. Manufacturers have been using these technologies in order to reduce development cycle times and get their products to the market quicker, more cost effectively, and with added value due to the incorporation of customizable features. Realizing the potential of AM applications, a large number of processes have been developed allowing the use of various materials ranging from plastics to metals for product development. Authors Ian Gibson, David W. Rosen and Brent Stucker explain these issues, as well as: Providing a comprehensive overview of AM technologies plus descriptions of support technologies like software systems and post-processing approaches Discussing the wide variety of new and emerging applications like micro-scale AM, medical applications, direct write electronics and Direct Digital Manufacturing of end-use components Introducing systematic solutions for process selection and design for AM Additive Manufacturing Technologies: Rapid Prototyping to Direct Digital Manufacturing is the perfect book for researchers, students, practicing engineers, entrepreneurs, and manufacturing industry professionals interested in additive manufacturing. --- paper_title: Modeling and Fabrication of Heterogeneous Three-Dimensional Objects Based on Additive Manufacturing paper_content: Heterogeneous object modeling and fabrication has been studied in the past few decades. Recently the idea of digital materials has been demonstrated by using Additive Manufacturing (AM) processes. Our previous study illustrated that the mask-image-projection based Stereolithography (MIP-SL) process is promising in fabricating such heterogeneous objects. In the paper, we present an integrated framework for modeling and fabricating heterogenous objects based on the MIP-SL process. Our approach can achieve desired grading transmission between different materials in the object by considering the fabrication constraints of the MIP-SL process. The MIP-SL process planning of a heterogeneous model and the hardware setup for its fabrication are also presented. Test cases including physical experiments are performed to demonstrate the possibility of using heterogeneous materials to achieve desired physical properties. Future work on the design and fabrication of objects with heterogeneous materials is also discussed.Copyright © 2013 by ASME --- paper_title: Modelling for randomly oriented multi material additive manufacturing component and its fabrication paper_content: Abstract Additive Manufacturing (AM) is one of the advanced manufacturing processes, which was initially used only for visualization purpose as Rapid Prototype (RP) components. In later stages due to the advancement of materials processing in AM technology it is also used to manufacture tools and functional parts. In material science field AM is very much useful in the development of multi material component such as functionally gradient materials, heterogeneous material structures and porous material structures. These structures have tremendous applications in the field of aeronautical, automobile and medical industries. 
But some of the traditional techniques, which are used for fabrication of these structures, have difficulties such as uniform & random distribution, size and shape control and maximum percentage of secondary materials to the primary materials. In this work a novel methodology is introduced for the fabrication of randomly oriented multi material (ROMM) using Polyjet 3D Printing (3DP) machine, which takes into account for the distribution of plastic reinforcement in matrix elastomer as modelled using Computer Aided Design (CAD) software. CATIA VB SCRIPT has been used for ROMM CAD modelling. Stress–strain behaviour of Polyjet 3DP component (with pure elastomer and with randomly oriented plastic reinforced elastomer) is carried out in Universal Testing Machine (UTM). It has been found that ROMM with plastic reinforcement provides significantly improved stiffness compared to pure elastomer component. In addition, the stiffness is consistent among different ROMM Polyjet 3DP components, which were taken at three different orientations (Horizontal, Inclined and Vertical) from the ROMM rectangular plate domain. It shows that reinforcement is uniformly distributed. Normal distribution curve and volumetric analysis is carried out in ROMM to verify uniform and random distribution of plastic reinforcement in elastomer. Based on the experimental results, this modelling and manufacturing technique can be used for the spatial orientation of reinforcement in the ROMM component and its fabrication with better stiffness for form & fit and functional parts applications. --- paper_title: Fabrication and mechanical properties of homogeneous zirconia toughened alumina ceramics via cyclic solution infiltration and in situ precipitation paper_content: Abstract Alumina ceramic composites toughened with various contents of fine-sized zirconia particulates were fabricated via cyclic infiltrating pre-sintered alumina preforms with zirconium oxychloride solution and immersion in ammonia solution to induce in situ precipitation. Homogeneous distribution of zirconia throughout the bulk material has been substantiated by line-scan analysis and backscattering images taken from sections with different distances from surface. It was found that a higher drying temperature and increase in infiltration numbers lead to a greater zirconia content and bigger grain size. The hardness of fabricated zirconia toughened alumina composite was firstly improved probably due to the microstructure refining effect, while a further increase in zirconia content results in the decrease of hardness. A significantly higher indentation toughness has been observed for samples containing >10 wt.% zirconia compared with other specimens, which could be attributed to the coarser zirconia grain size and the related greater tendency to transformation into monoclinic phase. --- paper_title: Preparation of zirconia–mullite composites by an infiltration route paper_content: Abstract In order to improve the mechanical properties of pure mullite ceramics, a process has been developed that incorporates Zr(Y)O 2 into mullite by infiltrating partially reaction sintered (porous) mullite compacts with mixed ZrOCl 2 ·8H 2 O and YCl 3 ·6H 2 O solutions, corresponding to ZrO 2 /Y 2 O 3 = 97/3 (mol.%). The infiltration of small amount (∼7.2 wt.%) of Zr(Y)O 2 significantly improved the densification behavior and mechanical properties of mullite bodies sintered at 1620 °C, for 10 h. 
The microstructure contained a few elongated mullite crystals (aspect ratio > 6) which were embedded in a fine-grained matrix. The relatively large Zr(Y)O 2 inclusions were mainly dispersed intergranularly in the matrix, while smaller inclusions, together with a phase of higher Al 2 O 3 content than that of the stoichiometric mullite composition, formed at triple points of the mullite/mullite grains. --- paper_title: Stereolithography of spatially controlled multi-material bioactive poly(ethylene glycol) scaffolds. paper_content: Abstract Challenges remain in tissue engineering to control the spatial, mechanical, temporal and biochemical architectures of scaffolds. Unique capabilities of stereolithography (SL) for fabricating multi-material spatially controlled bioactive scaffolds were explored in this work. To accomplish multi-material builds, a mini-vat setup was designed allowing for self-aligning X–Y registration during fabrication. The mini-vat setup allowed the part to be easily removed and rinsed, and different photocrosslinkable solutions to be easily removed and added to the vat. Two photocrosslinkable hydrogel biopolymers, poly(ethylene glycol) dimethacrylate (PEG-dma, MW 1000) and poly(ethylene glycol) diacrylate (PEG-da, MW 3400), were used as the primary scaffold materials. Multi-material scaffolds were fabricated by including controlled concentrations of fluorescently labeled dextran, fluorescently labeled bioactive PEG or bioactive PEG in different regions of the scaffold. The presence of the fluorescent component in specific regions of the scaffold was analyzed with fluorescent microscopy, while human dermal fibroblast cells were seeded on top of the fabricated scaffolds with selective bioactivity, and phase contrast microscopy images were used to show specific localization of cells in the regions patterned with bioactive PEG. Multi-material spatial control was successfully demonstrated in features down to 500 μm. In addition, the equilibrium swelling behavior of the two biopolymers after SL fabrication was determined and used to design constructs with the specified dimensions at the swollen state. The use of multi-material SL and the relative ease of conjugating different bioactive ligands or growth factors to PEG allows for the fabrication of tailored three-dimensional constructs with specified spatially controlled bioactivity. --- paper_title: Fabrication of fine-grained alumina ceramics by a novel process integrating stereolithography and liquid precursor infiltration processing paper_content: Abstract In this study, we report a novel approach, integrating stereolithography and liquid precursor infiltration techniques, to fabricate fine-grained alumina ceramics. The XRD patterns of the sample immersed with Zr 4+ or Zr 4+ (Y 3+ ) show that the sintered body contains Al 2 O 3 and t-ZrO 2 as the major phase and minor phase, respectively. Moreover, the t-ZrO 2 phase in the sample immersed with Zr 4+ (Y 3+ ) shows intense peaks compared to the composite immersed with Zr 4+ . On the other hand, the sample immersed with Mg 2+ contained Al 2 O 3 and MgAl 2 O 4 as the major phase and minor phase, respectively. The microstructure of the sample immersed with Zr 4+ shows that ZrO 2 particles are homogeneously distributed in the Al 2 O 3 matrix, thus inhibiting the grain growth of alumina particles. Moreover, the sample immersed with Mg 2+ shows a denser and more fine-grained structure. Compared to the non-infiltrated sample, the average grain size of the alumina sample immersed with Zr 4+ or Mg 2+ decreased.
The sample infiltrated with Zr 4+ (Y 3+ ) had the smallest alumina average grain size of 1.14 µm. The ceramics prepared by infiltration showed a higher hardness (19.54 GPa), but a slightly lower fracture toughness (4.02 MPa m 1/2 ) compared to the samples (17.2 GPa, 4.13 MPa m 1/2 ) without infiltration. --- paper_title: A novel quasicrystal-resin composite for stereolithography paper_content: Abstract Laser stereolithography (SL) is an additive manufacturing technology which is increasingly being used to produce customized end-user parts of any complex shape. It requires the use of a photo-curable resin which can be loaded with ceramic powders or carbon fibers to produce composite parts. However, the range of available materials compatible with the SL process is rather limited. In particular, photo-curable resins reinforced by metal particles are difficult to process, because of fundamental limitations related to the high reflectivity of intermetallics in the UV–visible range. In this work, the unique properties of Al-based quasicrystalline alloys are being used to develop a new UV-curable resin reinforced by such metal particles. The optical properties of the quasicrystalline particles and of the filled resin are studied and they are found to be compatible with the SL process. The volume fraction of the filler particles in the liquid resin is optimized to increase the polymerization depth while preserving suitable rheological behaviour. Finally, 3D composite parts are being built by SL. The composite parts have improved mechanical properties compared to the unfilled resin (higher hardness, reduced wear losses and lower friction coefficient) and compete favourably with the other commercial photo-curable resins. --- paper_title: Structure property relationship of metal matrix syntactic foams manufactured by a binder jet printing process paper_content: Abstract The present research work has investigated the synthesis of ceramic structures based on inorganic, spherical-hollow microballoons using a binder jet printing process. Binder jet printing is a process that allows the synthesis process of complex and intricate parts with minimal waste of the feedstock material. The ceramic microballoons here investigated were based on a mullite derivative. The printed ceramic parts were cured and sintered as the precursor templates for metal matrix syntactic foams (MMSFs). The MMSFs were manufactured by infiltrating the printed ceramic templates by molten aluminum. The flexural strength of the cured, sintered, and infiltrated structures were also investigated. It is proposed that binder jet printing followed by a sintering and pressureless infiltration process represents an advantageous technology for designing complex MMSF structures. --- paper_title: Fabrication of functionally graded reaction infiltrated SiC–Si composite by three-dimensional printing (3DP™) process paper_content: Carbon performs have been fabricated using the three-dimensional printing (3DP™) process for reaction-infiltrated SiC–Si composites. Starting with glassy carbon powders of 45–105 μm sizes, the preform was produced by printing acetone-based furfuryl resin binder. The bulk density and open porosity of the resulting preform was 0.6 g cm−3 and 48%, respectively. The binder printing conditions during preform fabrication mostly determined the preform microstructure. Pressureless reactive infiltration of such preforms at 1450°C in nitrogen atmosphere formed a SiC–Si composite with a coarse-SiC grain structure. 
Some residual carbon remained inside the SiC grains in this reaction bonded SiC–C due to sluggish reactivity of the larger carbon powder particles. Relatively complex-shaped carbon preforms with overhang, undercut, and inner channel structures were produced, demonstrating the capability of the 3DP process. A functionally graded SiC–Si composite was also fabricated, by varying carbon-yielding binder dosage during the preform fabrication, in order to control the spatial SiC concentration within the SiC–Si composite. --- paper_title: Fabrication of Al2O3-based composites by indirect 3D-printing paper_content: Abstract A powder mixture of alumina and dextrin was used as a precursor material for fabrication of porous alumina preforms by indirect three-dimensional printing. Post-pressureless infiltration of the fabricated preforms with copper alloys resulted in dense composites with interpenetrating microstructure. The fabrication procedure involves four steps: a) freeze-drying of Al2O3/dextrin blends, b) three-dimensional printing of the green bodies, c) drying, dextrin decomposition and sintering of the printed bodies and d) post-pressureless infiltration of Cu-alloy into as-fabricated Al2O3 porous preforms. As result of the dextrin decomposition and Al2O3 sintering an average linear shrinkage of 17.7% was measured. After sintering the Al2O3 preforms with ∼36 vol.% porosity were obtained. A post-infiltration with copper alloy at 1300 °C for 1.5 h led to formation of dense Al2O3/Cu parts. X-ray analysis showed the presence of α-Al2O3, Cu and Cu2O only. Al2O3/Cu composite exhibits a fracture toughness of ∼5.5 MPa m1 / 2 and bending strength of ∼236 MPa. Fractographic analysis showed that crack bridging by plastically deformed metal phase may control the fracture toughness of this composite. --- paper_title: A new physics-based model for equilibrium saturation determination in binder jetting additive manufacturing process paper_content: Abstract In binder jetting additive manufacturing (BJ-AM) process, the features are created through the interaction between droplets of the liquid binding agent and the layered powder bed. The amount of binder, which is termed binder saturation, depends strongly on the liquid binder and powder bed interaction including the spreading (i.e. lateral migration) and penetration (vertical migration) of the binder in powder bed, and is of crucial importance for determining the accuracy and strength of the printed parts. In the present study, a new physics-based model is developed to predict the optimal saturation levels for the green part printing, which is realized via capillary pressure estimation that is based on the binder and powder bed interactions in the equilibrium state. The proposed model was evaluated by both the Ti-6Al-4V and 420 stainless steel powders that exhibit different powder characteristics and packing densities. In order to estimate the equilibrium saturation using the proposed model, the physical characteristics such as average contact angle between the binder and powder material, specific surface area of powder particles, saturation and capillary pressure characterization curve were determined. Features with various degrees of dimensions (1-D, 2-D, 3-D) were printed out using M-Lab ExOne printer for determining the equilibrium saturation. Good agreement was observed between the theoretical predictions and experimentally measured saturation levels for the Ti-6Al-4V powder. 
On the other hand, the model underestimated the optimal saturation level for the 420 stainless steel powder, which was likely caused by the micro-surface areas from powder particle surface that do not contribute to the binder-powder bed interactions. --- paper_title: Mechanical modeling based on numerical homogenization of an Al2O3/Al composite manufactured via binder jet printing paper_content: Abstract The present research work takes advantage of a recently published numerical homogenization implementation in MATLAB to find the elasticity tensor of a ceramic–metallic composite (CMC) system to be compared to an experimental data. Numerical homogenization is an efficient way to determine effective macroscopic properties of a composite material. This technique represents an effective means to model a two-phase composite. In this work, an extension of a previously published numeric homogenization code was investigated in order to model the compressive elastic modulus of the ceramic–metallic composites. The extension to the numerical code makes use of physical micrograph images to accurately describe the phase distribution of the composite. Multiple micrographs were taken from each sample, to subsequently better represent the actual microstructure of the composite as a whole. The composites were created using a binder jet 3D printing technology, where a ceramic precursor material was initially assembled, followed by a molten metal infiltration process. It was found that the studied numerical homogenization yielded an elastic modulus approximately 11.5% lower than the experimental data, suggesting a reliable modeling technique for predicting the elastic tensor of CMCs. --- paper_title: Fabricating a pearl/PLGA composite scaffold by the low-temperature deposition manufacturing technique for bone tissue engineering paper_content: Here we developed a composite scaffold of pearl/poly(lactic-co-glycolic acid) (pearl/PLGA) utilizing the low-temperature deposition manufacturing (LDM). LDM makes it possible to fabricate scaffolds with designed microstructure and macrostructure, while keeping the bioactivity of biomaterials by working at a low temperature. Process optimization was carried out to fabricate a mixture of pearl powder, PLGA and 1,4-dioxane with the designed hierarchical structures, and freeze-dried at a temperature of −40 °C. Scaffolds with square and designated bone shape were fabricated by following the 3D model. Marrow stem cells (MSCs) were seeded on the pearl/PLGA scaffold and then cultured in a rotating cell culture system. The adhesion, proliferation and differentiation of MSCs into osteoblasts were determined using scanning electronic microscopy, WST-1 assay, alkaline phosphatase activity assay, immunofluorescence staining and real-time reverse transcription polymerase chain reaction. The results showed that the composite scaffold had high porosity (81.98 ± 3.75%), proper pore size (micropores: <10 µm; macropore: 495 ± 54 µm) and mechanical property (compressive strength: 0.81 ± 0.04 MPa; elastic modulus: 23.14 ± 0.75 MPa). The pearl/PLGA scaffolds exhibited better biocompatibility and osteoconductivity compared with the tricalcium phosphate/PLGA scaffold. All these results indicate that the pearl/PLGA scaffolds fulfill the basic requirements of bone tissue engineering scaffold. 
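The numerical-homogenization reference above computes an effective elasticity tensor for an Al2O3/Al composite from micrograph images using a published MATLAB implementation. As a much simpler stand-in for that finite-element homogenization, the sketch below estimates stiffness bounds from a segmented micrograph with Voigt/Reuss/Hill phase averages; the moduli, the thresholding step and the function name are illustrative assumptions, not the cited method.

```python
# Minimal sketch: estimate the effective stiffness of a two-phase Al2O3/Al composite
# from a segmented micrograph using Voigt/Reuss/Hill averages. This is a simpler
# stand-in for FE-based numerical homogenization; moduli and thresholding are assumed.
import numpy as np

E_CERAMIC, E_METAL = 370.0, 70.0     # GPa, nominal Al2O3 and Al moduli (assumed)

def hill_estimate(micrograph, threshold=0.5):
    """micrograph: 2D array in [0, 1]; pixels above threshold are treated as ceramic."""
    ceramic_fraction = float(np.mean(micrograph > threshold))
    metal_fraction = 1.0 - ceramic_fraction
    e_voigt = ceramic_fraction * E_CERAMIC + metal_fraction * E_METAL          # upper bound
    e_reuss = 1.0 / (ceramic_fraction / E_CERAMIC + metal_fraction / E_METAL)  # lower bound
    return ceramic_fraction, e_voigt, e_reuss, 0.5 * (e_voigt + e_reuss)       # Hill average

image = np.random.default_rng(2).random((512, 512))   # stand-in for a real micrograph
vf, upper, lower, hill = hill_estimate(image)
print(f"ceramic fraction {vf:.2f}: Reuss {lower:.0f} <= E_eff <= Voigt {upper:.0f} GPa (Hill {hill:.0f})")
```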
--- paper_title: Low-temperature deposition manufacturing: A novel and promising rapid prototyping technology for the fabrication of tissue-engineered scaffold paper_content: Abstract Developed in recent years, low-temperature deposition manufacturing (LDM) represents one of the most promising rapid prototyping technologies. It is not only based on a rapid deposition manufacturing process but is also combined with a phase separation process. Besides the controlled macropore size, a tissue-engineered scaffold fabricated by LDM has inter-connected micropores in the deposited lines. More importantly, it is a green manufacturing process that liquefies materials without heating. It has been employed to fabricate tissue-engineered scaffolds for bone, cartilage, blood vessel and nerve tissue regeneration. It is a promising technology for fabricating tissue-engineered scaffolds that approach the ideal scaffold and for designing complex organs. In the current paper, this novel LDM technology is introduced, and its control parameters, biomedical applications and challenges are discussed as well. --- paper_title: Multinozzle Low-Temperature Deposition System for Construction of Gradient Tissue Engineering Scaffolds paper_content: Tissue engineering is a technology that enables us to construct complicated hominine organs composed of many different types of cells. One of the key points to achieve this goal is to control the material composition and porous structure of the scaffold accurately. A disposable syringe based volume-driven injecting (VDI) nozzle was proposed and designed to extrude both naturally derived and synthetic polymers. A multinozzle low-temperature deposition and manufacturing (M-LDM) system is proposed to fabricate scaffolds with heterogeneous materials and gradient hierarchical porous structures. PLGA, collagen, gelatin and chitosan can be extruded without leaking to form hierarchical porous scaffolds for a primary study. Composite scaffolds with two kinds of materials were fabricated via two different nozzles to obtain both hydrophilic and mechanical properties. The results from scanning electron microscopy (SEM) demonstrated that the naturally derived biomaterials were strongly absorbed onto the synthetic biomaterials to form a stable network. Several gradient PLGA/TCP scaffolds were also fabricated to supply several samples. © 2008 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 2009 --- paper_title: A Novel Osteochondral Scaffold Fabricated via Multi-nozzle Low-temperature Deposition Manufacturing paper_content: A functional-region/separate-interface/single-cell-type tissue engineering pathway was evaluated to regenerate osteochondral defects that are deep in the marrow cavity. A gradient osteochondral scaffold fabricated via a rapid prototyping technology, called multi-nozzle low-temperature deposition manufacturing, was composed of three parts with different materials and pore structures, respectively, for bone, cartilage and a separate interface between them. The separate interface was composed of micro-pores that were less than 5 µm in size and of low porosity, to reduce or avoid the destruction of the micro-environment in vivo by preventing blood and cells and reducing the amount of oxygen and nutrients moving from the marrow cavity to the articulate marrow.
The preliminary results after 6 weeks of implantation into 4 mm diameter osteochondral defects in the knee joints of rabbits showed that the defects with the scaffold/cells composition had bone-like or cartilage-like tissue filling the defects with smo... --- paper_title: Dynamic Analysis of a Bi-stable Buckled Structure for Vibration Energy Harvester paper_content: Vibration energy harvesting offers a viable alternative to batteries for powering sensors in remote locations. In the past decade, the energy harvesting community has turned to nonlinear structures as an effective means for creating high-performance devices. In particular, researchers have used buckled structures to improve vibration scavenging power production at low frequencies (<100 Hz) and to broaden device operational bandwidths. To achieve these ends, accurate structural models are needed. These models are critical for carrying out a systematic and quantitative device design process. Specifically, the models enable the user to optimize device geometries, arrive at meaningful estimates of power production, and estimate device lifetimes, etc. This work focuses on the dynamic behavior of a bi-stable switching energy harvester made from a buckled beam structure, coupled to two cantilever beams with tip masses via a torsional rod. Results from experimental testing of the energy harvesting structure under different forced vibration conditions are compared with a nonlinear model created of the structure. For the model, linear equations of motion for free vibration of each component have been derived using Hamilton's principle, and shape functions for each individual component are determined by applying boundary conditions for the linear vibration. Nonlinear dynamic behavior effects are integrated through consideration of large deformation of the main beam. The effects of different parameters on the vibrational system, including the geometry of the structure, buckling load and natural frequency of the cantilever arms, have been investigated. These parameters can play an important role in the optimization process of energy harvesters. Finally, parametric results obtained from the presented method are compared with the experimental data in different aspects. --- paper_title: Feedstock material property – process relationships in fused deposition of ceramics (FDC) paper_content: Fused deposition of ceramics (FDC) is a solid freeform fabrication technique based on extrusion of highly loaded polymer systems. The process utilizes particle-loaded thermoplastic binder feedstock in the form of a filament. The filament acts both as the piston driving the extrusion and as the feedstock being deposited. Filaments can fail during FDC via buckling, when the extrusion pressure needed is higher than the critical buckling load that the filament can support. The compressive elastic modulus determines the load-carrying ability of the filament, and the viscosity determines the resistance to extrusion (or extrusion pressure). A methodology for characterizing the compressive mechanical properties of FDC filament feedstocks has been developed. It was found that feedstock materials with a ratio (E/ηa) greater than a critical value (3×10⁵ to 5×10⁵ s⁻¹) do not buckle during FDC, while those with a ratio less than this range buckle.
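The FDC feedstock reference above reports that filaments with a compressive-modulus-to-apparent-viscosity ratio E/ηa above roughly 3×10⁵ to 5×10⁵ s⁻¹ do not buckle, while those below that window do. A simple screening check based on that criterion is sketched below; the property values and the function name are placeholders rather than measured data.

```python
# Screening check for FDC filament buckling based on the reported critical E/eta_a
# window (about 3e5 to 5e5 s^-1). Property values below are illustrative placeholders.
CRITICAL_RATIO_LOW, CRITICAL_RATIO_HIGH = 3e5, 5e5   # s^-1

def buckling_risk(compressive_modulus_pa, apparent_viscosity_pa_s):
    """Classify a feedstock by its E/eta_a ratio against the reported critical window."""
    ratio = compressive_modulus_pa / apparent_viscosity_pa_s
    if ratio >= CRITICAL_RATIO_HIGH:
        return ratio, "unlikely to buckle"
    if ratio <= CRITICAL_RATIO_LOW:
        return ratio, "likely to buckle"
    return ratio, "marginal (inside the reported critical window)"

# Example: E = 1 GPa filament, apparent viscosity 1000 Pa*s at the process shear rate.
ratio, verdict = buckling_risk(1.0e9, 1.0e3)
print(f"E/eta_a = {ratio:.1e} s^-1 -> {verdict}")
```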
--- paper_title: Fabrication of Piezoelectric Ceramic / Polymer Composite Transducers using Fused Deposition of Ceramics paper_content: The Fused Deposition of Ceramics (FDC) technique was used to fabricate ceramic skeletons for development of piezoelectric composite transducers for medical imaging. The green parts were designed in order to have 30 vol% of PZT-5H ceramic in the final composites. Physical characterization of the sintered samples revealed that 96% of the theoretical density was achieved. Optical microscopy showed that defects, such as small roads and bubbles were eliminated due to powder processing improvements. The electromechanical properties of the final composites were found to be similar to properties obtained for conventionally made composites. --- paper_title: Extrusion-based additive manufacturing of ZrO2 using photoinitiated polymerization paper_content: Abstract Thanks to the exceptional combination of mechanical, thermal, chemical and biological properties, engineering ceramics are becoming increasingly important in the nowadays-industrial landscape. Traditionally, ceramic components are produced via near net-shape techniques, involving labor and expensive processing routes that use molds or dies such as injection molding, green machining, firing, sintering and finish machining. Hence, traditional ceramic manufacturing technologies lack the ability to compete in a customized and small series market, as it exists for e.g. aerospace and biomedical applications. The fabrication of miniature and complex parts as well as small features is also a limitation. In this context, Additive Manufacturing provides an important contribution, given the large design freedom and decoupling of production cost and complexity. Especially, nozzle based techniques offer new opportunities thanks to the efficient usage of material and economical advantage in prototyping and single/small series component production. In this work, a novel processing route for the production of AM ceramics components is proposed. It combines the advantages of AM syringe extrusion and UV curing into a single 3D printing technique, along with the potential for multi-material, complex and miniature ceramic component production. Different dispersions of ZrO 2 and commercially available UV resins (containing from 22.5 to 55 vol.% ZrO 2 ) are prepared using two different mixing techniques. The homogeneity, rheology and printability of those dispersions are subsequently investigated, along with firing and sintering trials. A sample density of about 92.1% has been obtained after sintering, proving the potential of the technology in development. A novel methodology for the process assessment is also proposed. --- paper_title: An accurate mathematical study on the free vibration of stepped thickness circular/annular Mindlin functionally graded plates paper_content: Abstract An analytical solution based on a new exact closed form procedure is presented for free vibration analysis of stepped circular and annular FG plates via first order shear deformation plate theory of Mindlin. The material properties change continuously through the thickness of the plate, which can vary according to a power-law distribution of the volume fraction of the constituents, whereas Poisson's ratio is set to be constant. Based on the domain decomposition technique, five highly coupled governing partial differential equations of motion for freely vibrating FG plates were exactly solved by introducing the new potential functions as well as using the method of separation of variables. Several comparison studies were presented by those reported in the literature and the FEM analysis, for various thickness values and combinations of stepped thickness variations of circular/annular FG plates to demonstrate highly stability and accuracy of present exact procedure. The effect of the geometrical and material plate parameters such as step thickness ratios, step locations and the power law index on the natural frequencies of FG plates is investigated. --- paper_title: Melt electrospinning of poly(ε-caprolactone) scaffolds: Phenomenological observations associated with collection and direct writing paper_content: Melt electrospinning and its additive manufacturing analogue, melt electrospinning writing (MEW), are two processes which can produce porous materials for applications where solvent toxicity and accumulation in solution electrospinning are problematic. This study explores the melt electrospinning of poly(e-caprolactone) (PCL) scaffolds, specifically for applications in tissue engineering. The research described here aims to inform researchers interested in melt electrospinning about technical aspects of the process. This includes rapid fiber characterization using glass microscope slides, allowing influential processing parameters on fiber morphology to be assessed, as well as observed fiber collection phenomena on different collector substrates. The distribution and alignment of melt electrospun PCL fibers can be controlled to a certain degree using patterned collectors to create large numbers of scaffolds with shaped macroporous architectures. However, the buildup of residual charge in the collected fibers limits the achievable thickness of the porous template through such scaffolds. One challenge identified for MEW is the ability to control charge buildup so that fibers can be placed accurately in close proximity, and in many centimeter heights. The scale and size of scaffolds produced using MEW, however, indicate that this emerging process will fill a technological niche in biofabrication.
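The functionally graded plate reference above assumes material properties that vary through the thickness according to a power-law distribution of the constituent volume fractions. The sketch below illustrates one commonly used form of that grading rule, V_c(z) = (z/h + 1/2)^p, combined with a simple rule of mixtures for the local modulus; the specific functional form, the moduli and the function name are assumptions for illustration.

```python
# Power-law through-thickness grading often used for FG plates: V_c(z) = (z/h + 1/2)^p,
# with a simple rule of mixtures for the local modulus. Values are illustrative only.
import numpy as np

def graded_modulus(z, thickness, p, e_ceramic=380.0, e_metal=70.0):
    """z measured from the mid-plane, -thickness/2 <= z <= thickness/2; moduli in GPa."""
    vc = (z / thickness + 0.5) ** p          # ceramic volume fraction at height z
    return vc * e_ceramic + (1.0 - vc) * e_metal

z = np.linspace(-0.5, 0.5, 5)                # unit-thickness plate
for p in (0.5, 1.0, 2.0):                    # power-law index controls the grading profile
    print(p, np.round(graded_modulus(z, 1.0, p), 1))
```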
--- paper_title: Toward fast and cost-effective ink-jet printing of solid electrolyte for lithium microbatteries paper_content: Abstract Ink-jet printing of an ionogel for low-cost microbatteries is presented. Such an approach makes it possible to provide liquid-like electrolyte performance in all-solid microdevices. The ink-jet printing process is possible thanks to the sol precursor of the ionogel. These fully silica-based ionogels confining an ionic liquid are known to be thermally resistant, which serves safety requirements and technologies requiring solder reflow. High ionic conductivity and compatibility with porous composite electrodes allow good electrochemical cycling performance to be reached: a full Li-ion cell with LiFePO 4 and Li 4 Ti 5 O 12 porous composite electrodes shows a surface capacity of 300 μAh cm −2 for more than 100 cycles. Such surface capacities are very competitive compared with those obtained for microdevices based on expensive PVD processes. --- paper_title: Laser melting functionally graded composition of Waspaloy® and Zirconia powders paper_content: An approach for fabricating functionally graded specimens of supernickel alloy and ceramic compositions via Selective Laser Melting (SLM) is presented. The focus was on using the functionally graded material (FGM) concept to gradually grade powdered compositions of Zirconia within a base material of Waspaloy®. A high power Nd:YAG laser was used to process the material compositions to a high density with gradual but discrete changes between layered compositions. The graded specimens initially consisted of 100% Waspaloy®, with subsequent layers containing increased volume compositions of Zirconia (0–10%). Specimens were examined for porosity and microstructure. It was found that specimens contained an average porosity of 0.34% with a gradual change between layers without any major interface defects. --- paper_title: Additive manufacturing of star poly(ε-caprolactone) wet-spun scaffolds for bone tissue engineering applications paper_content: Three-dimensional fibrous scaffolds made of a three-arm star poly(e-caprolactone) were developed by employing a novel computer-aided wet-spinning apparatus to precisely control the deposition pattern... --- paper_title: Metal nanoparticle direct inkjet printing for low-temperature 3D micro metal structure fabrication paper_content: Inkjet printing of functional materials is a key technology toward ultra-low-cost, large-area electronics. We demonstrate low-temperature 3D micro metal structure fabrication by direct inkjet printing of metal nanoparticles (NPs) as a versatile, direct 3D metal structuring approach representing an alternative to conventional vacuum deposition and photolithographic methods. Metal NP ink was inkjet-printed to exploit the large melting temperature drop of the nanomaterial and the ease of the NP ink formulation. Parametric studies on the basic conditions for stable 3D inkjet printing of NP ink were carried out. Furthermore, diverse 3D metal microstructures, including micro metal pillar arrays, helices, zigzags and micro bridges, were demonstrated and electrical characterization was performed. Since the process requires low temperature, it carries substantial potential for fabrication of electronics on a plastic substrate. --- paper_title: Electrospinning of Polymeric Nanofibers for Tissue Engineering Applications: A Review paper_content: Interest in electrospinning has recently escalated due to the ability to produce materials with nanoscale properties.
Electrospun fibers have been investigated as promising tissue engineering scaffolds since they mimic the nanoscale properties of native extracellular matrix. In this review, we examine electrospinning by providing a brief description of the theory behind the process, examining the effect of changing the process parameters on fiber morphology, and discussing the potential applications and impacts of electrospinning on the field of tissue engineering. --- paper_title: Frontiers of 3D Printing/Additive Manufacturing: from Human Organs to Aircraft Fabrication† paper_content: It has been more than three decades since stereolithography began to emerge in various forms of additive manufacturing and 3D printing. Today these technologies are proliferating worldwide in various forms of advanced manufacturing. The largest segment of the 3D printing market today involves various polymer component fabrications, particularly complex structures not attainable by other manufacturing methods. Conventional printer head systems have also been adapted to selectively print various speciated human cells and special molecules in attempts to construct human organs, beginning with skin and various tissue patches. These efforts are discussed along with metal and alloy fabrication of a variety of implant and bone replacement components by creating powder layers, which are selectively melted into complex forms (such as foams and other open-cellular structures) using laser and electron beams directed by CAD software. Efforts to create a “living implant” by bone ingrowth and eventual vascularization within these implants will be discussed briefly. Novel printer heads for direct metal droplet deposition as in other 3D printing systems are briefly described since these concepts will allow for the eventual fabrication of very large and complex products, including automotive and aerospace structures and components. --- paper_title: Interfacial characterization of SLM parts in multi-material processing: Metallurgical diffusion between 316L stainless steel and C18400 copper alloy paper_content: Abstract Multi-material processing in selective laser melting using a novel approach, by the separation of two different materials within a single dispensing coating system was investigated. 316L stainless steel and UNS C18400 Cu alloy multi-material samples were produced using selective laser melting and their interfacial characteristics were analyzed using focused ion beam, scanning electron microscopy, energy dispersive spectroscopy and electron back scattered diffraction techniques. A substantial amount of Fe and Cu element diffusion was observed at the bond interface suggesting good metallurgical bonding. Quantitative evidence of good bonding at the interface was also obtained from the tensile tests where the fracture was initiated at the copper region. Nevertheless, the tensile strength of steel/Cu SLM parts was evaluated to be 310 ± 18 MPa and the variation in microhardness values was found to be gradual along the bonding interface from the steel region (256 ± 7 HV 0.1 ) to the copper region (72 ± 3 HV 0.1 ). --- paper_title: Markov Decision Process for Image-Guided Additive Manufacturing paper_content: Additive manufacturing (AM) is a process to produce three-dimensional parts with complex and free-form geometries layer by layer from computer-aided-design models. However, real-time quality control is the main challenge that hampers the wide adoption of AM. Advancements in sensing systems facilitate AM monitoring and control. 
Realizing full potentials of sensing data for AM quality control depends to a great extent on effective analytical methods and tools that will handle complicated imaging data, and extract pertinent information about defect conditions and process dynamics. This letter considers the optimal control problem for AM parts whose layerwise defect states can be monitored using advanced sensing systems. Specifically, we formulate the in situ AM control problem as a Markov decision process and utilize the layerwise imaging data to find an optimal control policy. We take into account the stochastic uncertainty in the variations of layerwise defects and aim at mitigating the defects before they reach the nonrecoverable stage. Finally, the model is used to derive an optimal control policy by utilizing the defect-state signals estimated from layerwise images in a metal AM application. --- paper_title: Inkjet-Printed Flexible Graphene-Based Supercapacitor paper_content: Abstract A flexible supercapacitor is being developed for integrating with and powering flexible electronics for military and commercial applications. Graphene oxide dispersed in water was used as an ink for inkjet printing the electrode active material onto metal film on Kapton current collectors. After printing, the graphene oxide was thermally reduced at 200 °C to produce conductive graphene electrodes. These electrodes were heat sealed together with added electrolyte and separator, and the assembled supercapacitor performance was evaluated. The specific capacitance of the graphene is good, and the overall performance of the packaged device serves as a proof of concept. But in the future, thicker graphene electrodes and further package optimization will be required to obtain good device-level performance. A number of issues associated with using Kapton for packaging these devices are identified and discussed. --- paper_title: Macrostructures with hierarchical porosity produced from alumina–aluminum hydroxide–chitosan wet-spun fibers paper_content: Abstract This paper reports on the development of macrostructures with hierarchical porosity produced from Al 2 O 3 –Al(OH) 3 –chitosan wet-spun fibers. Aqueous suspensions (13 vol% of solids, 1.3 vol% of chitosan, 0.1 M acetic acid, pH–4) containing different Al 2 O 3 –Al(OH) 3 ratios were extruded through a 500 μm diameter syringe needle into a 2 M NaOH coagulation bath. After washing and drying, these continuous fibers were controllably chopped into 5 mm long staples and shaped under vacuum into 40×40 mm 2 cylindrical macroelements, using 2 wt% chitosan solution as binder. By varying the Al 2 O 3 –Al(OH) 3 content (100–0, 50–50 and 0–100 vol%) and sintering temperature (1100–1500 °C), structures with different levels of porosity (up to 84%), specific surface area (up to 7 m 2 g −1 ) and mechanical strength (up to 9 MPa in uniaxial compression) were obtained. The ratio between the porosity inside the solid part of the structure and the interfilament space was also affected by these parameters and was adjusted according to the numerous potential applications of this system. --- paper_title: Fabrication of inkjet printed organic photovoltaics on flexible Ag electrode with additives paper_content: Abstract In this paper, we describe organic photovoltaics (OPVs) based on flexible thin film Ag anodes that are fabricated using a controlled deposition of photoactive layer by inkjet printing. 
The inkjet printed OPV photo-active layer is a P3HT:PCBM blend incorporated with a high boiling point additive, 1,6-hexanedithiol which serves to allow improved morphology. The devices show comparable power conversion efficiency to those fabricated using spin-coating techniques. Optimization of procedures for OPV fabrication without ITO electrodes or spin-coating of the active layer is a vital step towards realizing the potential of OPVs for mass production. --- paper_title: Fabrication of Functionally Graded Materials Via Inkjet Color Printing paper_content: A new method for fabricating functionally graded materials (FGMs) via inkjet color printing is reported in this paper. Al 2 O 3 and ZrO 2 aqueous suspensions were stabilized electrostatically and placed in different color reservoirs in inkjet cartridges. The volume and composition of the suspensions printed in droplets at a small area were controlled by the inkjet cyan-magenta-yellow-black color printing principle. The analysis of energy-dispersive spectrometry shows that with multi-layer printing, the composition profile of the printed FGM is consistent with the designed profile. The new method shows the potential for fabricating FGMs with arbitrarily designed three-dimensional composition profiles. --- paper_title: Inkjet Printing Resolution Study for Multi-Material Rapid Prototyping paper_content: In addition to its application in media printing, inkjet printing is becoming an increasingly attractive option for the distribution and patterning of materials for a wide variety of applications. In this study a commercial inkjet printer was modified to study the resolution of fluid dot placement required to fabricate 3D multi-material patterns layer by layer. A Java-based computer program was developed to convert stereolithography (STL) data layer by layer, control ink cartridges individually and print ink with customized fluid dot placement arrangements. The study found that complement printing between nozzles which are 30µm in diameter and 144µm apart is essential to achieve a sufficiently dense 3D pattern. When printed with 36µm vertical spacing a layer thickness of 1.30µm is achievable, and when printing layer by layer, the thickness increases almost at a linear rate. --- paper_title: 3D printed microfluidic circuitry via multijet-based additive manufacturing paper_content: The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or “three-dimensional (3D) printing” technologies – predominantly stereolithography – as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) – a layer-by-layer, multi-material inkjetting process – for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. 
In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community. --- paper_title: Laser and electron-beam powder-bed additive manufacturing of metallic implants: A review on processes, materials and designs paper_content: Additive manufacturing (AM), also commonly known as 3D printing, allows the direct fabrication of functional parts with complex shapes from digital models. In this review, the current progress of two AM processes suitable for metallic orthopaedic implant applications, namely selective laser melting (SLM) and electron beam melting (EBM) are presented. Several critical design factors such as the need for data acquisition for patient-specific design, design dependent porosity for osteo-inductive implants, surface topology of the implants and design for reduction of stress-shielding in implants are discussed. Additive manufactured biomaterials such as 316L stainless steel, titanium-6aluminium-4vanadium (Ti6Al4V) and cobalt-chromium (CoCr) are highlighted. Limitations and future potential of such technologies are also explored. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:369–385, 2016. --- paper_title: Multi-materials drop-on-demand inkjet technology based on pneumatic diaphragm actuator paper_content: Micro-droplet jetting belongs to the field of precision fluid dispensing techniques. Unlike traditional subtraction manufacture process, micro-droplet jetting as an additive fabrication technique with features of non-contact and data-driven represents a new development trend of modern manufacturing process. In this paper, the design, fabrication and performance of a multi-materials drop-on-demand (DOD) inkjet system based on pneumatic diaphragm actuator were described. For capturing the droplet ejection process and measuring the droplet dimension, a self-made in situ imaging system based on time delayed external trigger was set up. The performance of the generator was studied by adjusting the structure and control parameters. Furthermore, the influence of fluid properties on the droplet ejection process was experimentally investigated. Micro-solderballs of 160.5 μm in diameter and UV curing adhesive micro-bumps of 346.94 μm in contact diameter with the substrate were produced. The results demonstrated that the DOD inkjet generator possesses characteristics of robust, easy to operate and maintain, and able to withstand high temperature as well as applicability to a wide variety of materials including polymers, low melting point resin and high melting point metal. The system has a great potential of being used in the fields of IC and MEMS packaging, 3D printing, organic semiconductor fabrication, and biological and chemical analysis. 
--- paper_title: Three-dimensional printed millifluidic devices for zebrafish embryo tests paper_content: Implementations of Lab-on-a-Chip technologies for in-situ analysis of small model organisms and embryos (both invertebrate and vertebrate) are attracting an increasing interest. A significant hurdle to widespread applications of microfluidic and millifluidic devices for in-situ analysis of small model organisms is the access to expensive clean room facilities and complex microfabrication technologies. Furthermore, these resources require significant investments and engineering know-how. For example, poly(dimethylsiloxane) soft lithography is still largely unattainable to the gross majority of biomedical laboratories willing to pursue development of chip-based platforms. They often turn instead to readily available but inferior classical solutions. We refer to this phenomenon as workshop-to-bench gap of bioengineering science. To tackle the above issues, we examined the capabilities of commercially available Multi-Jet Modelling (MJM) and Stereolithography (SLA) systems for low volume fabrication of optical-grade millifluidic devices designed for culture and biotests performed on millimetre-sized specimens such as zebrafish embryos. The selected 3D printing technologies spanned a range from affordable personal desktop systems to high-end professional printers. The main motivation of our work was to pave the way for off-the-shelf and user-friendly 3D printing methods in order to rapidly and inexpensively build optical-grade millifluidic devices for customized studies on small model organisms. Compared with other rapid prototyping technologies such as soft lithography and infrared laser micromachining in poly(methyl methacrylate), we demonstrate that selected SLA technologies can achieve user-friendly and rapid production of prototypes, superior feature reproduction quality, and comparable levels of optical transparency. A caution need to be, however, exercised as majority of tested SLA and MJM resins were found toxic and caused significant developmental abnormalities in zebrafish embryos. Taken together, our data demonstrate that SLA technologies can be used for rapid and accurate production of devices for biomedical research. However, polymer biotoxicity needs to be carefully evaluated. --- paper_title: Flexible organic phototransistors based on a combination of printing methods paper_content: Abstract Highly photosensitive organic phototransistors (OPTs) are successfully demonstrated on a flexible substrate using all-solution process as well as a combination of printing methods which consist of roll-to-plate reverse offset printing (ROP), inkjet printing and bar coating. Excellent electrical switching characteristics are obtained from heterogeneous interfacial properties of the reverse-offset-printed silver nanoparticle electrode and the inkjet-printed p -channel polymeric semiconductor. In particular, the OPTs exhibit remarkably photosensitivity with a photo-to-dark current ratio exceeding 5 orders. This optoelectronic properties of the combinational printed OPTs are theoretically and experimentally studied, and found the comparable tendency. In addition, excellent mechanical stability is observed with up to 0.5% of strain applied to the OPTs. 
Hence, manufactured with a combination of various graphic art printing methods such as roll-to-plate ROP, inkjet printing, and bar coating, these devices are very promising candidates for large-area and low-cost printed and flexible optoelectronics applications. --- paper_title: Biomimetic wet-stable fibres via wet spinning and diacid-based crosslinking of collagen triple helices paper_content: One of the limitations of electrospun collagen as a bone-like fibrous structure is the potential collagen triple helix denaturation in the fibre state and the corresponding inadequate wet stability even after crosslinking. Here, we have demonstrated the feasibility of accomplishing wet-stable fibres by wet spinning and diacid-based crosslinking of collagen triple helices, whereby the ability of the fibres to act as a bone-mimicking mineralisation system has also been explored. Circular dichroism (CD) demonstrated nearly complete triple helix retention in the resulting wet-spun fibres, and the corresponding chemically crosslinked fibres successfully preserved their fibrous morphology following 1-week incubation in phosphate buffer solution (PBS). The presented novel diacid-based crosslinking route imparted superior tensile modulus and strength to the resulting fibres, indicating that covalent functionalization of distant collagen molecules is unlikely to be accomplished by current state-of-the-art carbodiimide-based crosslinking. To mimic the constituents of the natural bone extracellular matrix (ECM), the crosslinked fibres were coated with carbonated hydroxyapatite (CHA) through biomimetic precipitation, resulting in an attractive biomaterial for guided bone regeneration (GBR), e.g. in bony defects of the maxillofacial region. --- paper_title: Hybrid printing of mechanically and biologically improved constructs for cartilage tissue engineering applications paper_content: Bioprinting is an emerging technique used to fabricate viable, 3D tissue constructs through the precise deposition of cells and hydrogels in a layer-by-layer fashion. Despite the ability to mimic the native properties of tissue, printed 3D constructs that are composed of naturally-derived biomaterials still lack structural integrity and adequate mechanical properties for use in vivo, thus limiting their development for use in load-bearing tissue engineering applications, such as cartilage. Fabrication of viable constructs using a novel multi-head deposition system provides the ability to combine synthetic polymers, which have higher mechanical strength than natural materials, with the favorable environment for cell growth provided by traditional naturally-derived hydrogels. However, the complexity and high cost associated with constructing the required robotic system hamper the widespread application of this approach. Moreover, the scaffolds fabricated by these robotic systems often lack flexibility, which further restricts their applications. To address these limitations, advanced fabrication techniques are necessary to generate complex constructs with controlled architectures and adequate mechanical properties. In this study, we describe the construction of a hybrid inkjet printing/electrospinning system that can be used to fabricate viable tissues for cartilage tissue engineering applications.
Electrospinning of polycaprolactone fibers was alternated with inkjet printing of rabbit elastic chondrocytes suspended in a fibrin–collagen hydrogel in order to fabricate a five-layer tissue construct of 1 mm thickness. The chondrocytes survived within the printed hybrid construct with more than 80% viability one week after printing. In addition, the cells proliferated and maintained their basic biological properties within the printed layered constructs. Furthermore, the fabricated constructs formed cartilage-like tissues both in vitro and in vivo as evidenced by the deposition of type II collagen and glycosaminoglycans. Moreover, the printed hybrid scaffolds demonstrated enhanced mechanical properties compared to printed alginate or fibrin–collagen gels alone. This study demonstrates the feasibility of constructing a hybrid inkjet printing system using off-the-shelf components to produce cartilage constructs with improved biological and mechanical properties. --- paper_title: Ink-jet printed porous composite LiFePO4 electrode from aqueous suspension for microbatteries paper_content: This work demonstrates ink-jet printed LiFePO4-based composite porous electrodes for microbattery application. As binder and dispersant, we found that aqueous inks with more suitable rheological properties with respect to ink-jet printing are prepared with the low molecular weight poly-acrylic-co-maleic acid copolymer, rather than with the carboxymethyl cellulose standard binder of the lithium-ion technology. The ink-jet printed thin and porous electrode shows very high rate charge/discharge behavior, both in LiPF6/ethylene carbonate-dimethyl carbonate (LP30) and lithium bis(trifluoromethane)sulfonylimide salt (Li-TFSI) in N-methyl-N-propylpyrrolidinium bis(trifluoromethane)sulfonylimide ionic liquid (PYR13-TFSI) electrolytes, as well as good cyclability. --- paper_title: Inkjet printed polymeric electron blocking and surface energy modifying layer for low dark current organic photodetectors paper_content: Abstract The reduction of dark current is required to enhance the signal-to-noise ratio and decrease the power consumption in photodetectors. This is typically achieved by introducing additional functional layers to suppress carrier injection, a task that proves to be challenging especially in printed devices. Here we report on the successful reduction of dark current below 100 nA cm−2 (at −1 V bias) in an inkjet printed photodetector by the insertion of an electron blocking layer based on poly[3-(3,5-di-tert-butyl-4-methoxyphenyl)-thiophene], while preserving a high quantum yield. Furthermore, the electron blocking layer strongly increases the surface energy of the hydrophobic photoactive layer, therefore simplifying the printing of transparent top electrodes from water based formulations without the addition of surfactants. --- paper_title: Process-level modeling and simulation for HP's Multi Jet Fusion 3D printing technology paper_content: The 3D printing technology is expected to revolutionize part manufacturing by enabling rapid and inexpensive production at a small scale. HP's Multi Jet Fusion 3D printing technology is developed to provide new levels of part quality in a fast and inexpensive way compared to existing 3D printing technologies. The printed part quality is determined by the interplay of the printing device and materials used for printing.
Thus, it is essential to have a proper cyber-physical system model for the printing system for process-level simulation of the HP's Multi Jet Fusion technology. In this paper, we propose an approach for the process-level modeling and simulation of HP's Multi Jet Fusion technology. Our approach can be used to carry out simulation of the 3D printing system, to provide guidance for optimization and development of the printing process and exploration of materials. Preliminary results potentially indicate that the simulation of our proposed model is significantly faster than the finite element method, which is a widely used technique for 3D printing simulation. --- paper_title: Electrospun cartilage‐derived matrix scaffolds for cartilage tissue engineering paper_content: Macroscale scaffolds created from cartilage-derived matrix (CDM) demonstrate chondroinductive or chondro-inductive properties, but many fabrication methods do not allow for control of nanoscale architecture. In this regard, electrospun scaffolds have shown significant promise for cartilage tissue engineering. However, nanofibrous materials generally exhibit a relatively small pore size and require techniques such as multilayering or the inclusion of sacrificial fibers to enhance cellular infiltration. The objectives of this study were (1) to compare multilayer to single-layer electrospun poly(ɛ-caprolactone) (PCL) scaffolds for cartilage tissue engineering, and (2) to determine whether incorporation of CDM into the PCL fibers would enhance chondrogenesis by human adipose-derived stem cells (hASCs). PCL and PCL–CDM scaffolds were prepared by sequential collection of 60 electrospun layers from the surface of a grounded saline bath into a single scaffold, or by continuous electrospinning onto the surface of a grounded saline bath and harvest as a single-layer scaffold. Scaffolds were seeded with hASCs and evaluated over 28 days in culture. The predominant effects on hASCs of incorporation of CDM into scaffolds were to stimulate sulfated glycosaminoglycan synthesis and COL10A1 gene expression. Compared with single-layer scaffolds, multilayer scaffolds enhanced cell infiltration and ACAN gene expression. However, compared with single-layer constructs, multilayer PCL constructs had a much lower elastic modulus, and PCL–CDM constructs had an elastic modulus approximately 1% that of PCL constructs. These data suggest that multilayer electrospun constructs enhance homogeneous cell seeding, and that the inclusion of CDM stimulates chondrogenesis-related bioactivity. © 2014 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 102A: 3998–4008, 2014. --- paper_title: Selective Laser Melting of Polymer Powder – Part Mechanics as Function of Exposure Speed☆ paper_content: Abstract The selective laser melting of polymer powders is a well-established technology for additive manufacturing applications, although there is still a deficit in basic process knowledge. Considering the demands of series production, the technique of selective laser melting of polymers is faced with various challenges concerning suitable material systems, process strategies and part properties. Consequently, basic research is necessary to understand and optimize processes in order to enable a shift from prototyping applications to serial production of small-lot sized series. 
A better understanding of the interaction between the sub-processes of selective laser melting and the resulting part properties is necessary for the derivation of new process strategies for increased part quality. Selective laser melting of polymers is mainly divided in the three phases of powder feeding, tempering and geometry exposure. By the interaction of these sub-processes, the resulting temperature fields determine the part properties through microstructural changes in the pore number and distribution. In addition to absolute temperature values, the time dependency of the thermal fields has an influence on the porosity of the molten parts. Current process strategies aim for a decrease in building time by increasing scan speed and laser power, although the absolute energy input into the material does not change when scan speed and laser power are increased at a constant ratio. In prior investigations, the authors showed a correlation between the heating rate and the shape of the resulting melt pool. Based on this correlation, the interaction between heating rates (on a fixed level of exposure energy) and mechanical part properties (tensile test) is analyzed within the paper. The study also implies additional results for other levels of energy input during geometry exposure, which allow for a cross-check of the results. Furthermore, part positioning in the build chamber as well as part density are taken into account. Based on these basic investigations, new process strategies considering the time dependent material behavior can be derived. --- paper_title: Additive Manufacturing of Ceramic‐Based Materials paper_content: This paper offers a review of present achievements in the field of processing of ceramic-based materials with complex geometry using the main additive manufacturing (AM) technologies. In AM, the geometrical design of a desired ceramic-based component is combined with the materials design. In this way, the fabrication times and the product costs of ceramic-based parts with required properties can be substantially reduced. However, dimensional accuracy and surface finish still remain crucial features in today's AM due to the layer-by-layer formation of the parts. In spite of the fact that significant progress has been made in the development of feedstock materials, the most difficult limitations for AM technologies are the restrictions set by material selection for each AM method and aspects considering the inner architectural design of the manufactured parts. Hence, any future progress in the field of AM should be based on the improvement of the existing technologies or, alternatively, the development of new approaches with an emphasis on parts allowing the near-net formation of ceramic structures, while optimizing the design of new materials and of the part architecture. --- paper_title: Interfacial characterization of SLM parts in multi-material processing: Intermetallic phase formation between AlSi10Mg and C18400 copper alloy paper_content: Abstract Multi-material processing in selective laser melting (SLM) using AlSi10Mg and UNS C18400 copper alloy was carried out. The interfacial characteristics were analyzed with FIB, SEM, XRD, EDS and EBSD techniques. Al 2 Cu intermetallic compound was formed at the Al/Cu bond interface after the SLM process. The tensile strength of Al/Cu SLM parts was evaluated to be 176 ± 31 MPa and flexural strength under a 3 point bending test was evaluated to be around 200 MPa for Cu at root and 500 MPa for Al at root. 
Further analysis suggested that the formation of intermetallic compounds translated the fracture mechanism at the interface from ductile to brittle cleavage. The microhardness values also varied along the interface with high microhardness at the interface due to the intermetallic compounds. --- paper_title: Additive manufacturing technologies: state of the art and trends paper_content: The rapid prototyping has been developed from the 1980s to produce models and prototypes until the technologies evolution today. Nowadays, these technologies have other names such as 3D printing or additive manufacturing, and so forth, but they all have the same origins from rapid prototyping. The design and manufacturing process stood the same until new requirements such as a better integration on production line, a largest series of manufacturing or the reduce weight of products due to heavy costs of machines and materials. The ability to produce complex geometries allows proposing of design and manufacturing solutions in the industrial field in order to be ever more effective. The additive manufacturing (AM) technology develops rapidly with news solutions and markets which sometimes need to demonstrate their reliability. The community needs to survey some evolutions such as the new exchange format, the faster 3D printing systems, the advanced numerical simulation or the emergence of new use. This review is addressed to persons who wish have a global view on the AM and improve their understanding. We propose to review the different AM technologies and the new trends to get a global overview through the engineering and manufacturing process. This article describes the engineering and manufacturing cycle with the 3D model management and the most recent technologies from the evolution of additive manufacturing. Finally, the use of AM resulted in new trends that are exposed below with the description of some new economic activities. --- paper_title: Combined micro and macro additive manufacturing of a swirling flow coaxial phacoemulsifier sleeve with internal micro-vanes. paper_content: Microstereolithography (microSL) technology can fabricate complex, three-dimensional (3D) microstructures, although microSL has difficulty producing macrostructures with micro-scale features. There are potentially many applications where 3D micro-features can benefit the overall function of the macrostructure. One such application involves a medical device called a coaxial phacoemulsifier where the tip of the phacoemulsifier is inserted into the eye through a relatively small incision and used to break the lens apart while removing the lens pieces and associated fluid from the eye through a small tube. In order to maintain the eye at a constant pressure, the phacoemulsifier also includes an irrigation solution that is injected into the eye during the procedure through a coaxial sleeve. It has been reported, however, that the impinging flow from the irrigation solution on the corneal endothelial cells in the inner eye can damage these cells during the procedure. As a result, a method for reducing the impinging flow velocities and the resulting shear stresses on the endothelial cells during this procedure was explored, including the design and development of a complex, 3D micro-vane within the sleeve. The micro-vane introduces swirl into the irrigation solution, producing a flow with rapidly dissipating flow velocities. 
Fabrication of the sleeve and fitting could not be accomplished using microSL alone, and thus, a two-part design was accomplished where a sleeve with the micro-vane was fabricated with microSL and a threaded fitting used to attach the sleeve to the phacoemulsifier was fabricated using an Objet Eden 333 rapid prototyping machine. The new combined device was tested within a water container using particle image velocimetry, and the results showed successful swirling flow with an ejection of the irrigation fluid through the micro-vane in three different radial directions corresponding to the three micro-vanes. As expected, the sleeve produced a swirling flow with rapidly dissipating streamwise flow velocities where the maximum measured streamwise flow velocities using the micro-vane were lower than those without the micro-vane by 2 mm from the tip where they remained at approximately 70% of those produced by the conventional sleeve as the flow continued to develop. It is believed that this new device will reduce damage to endothelial cells during cataract surgery and significantly improve patient outcomes from this procedure. This unique application demonstrates the utility of combining microSL with a macro rapid prototyping technology for fabricating a real macro-scale device with functional, 3D micro-scale features that would be difficult and costly to fabricate using alternative manufacturing methods. --- paper_title: Developing Gradient Metal Alloys through Radial Deposition Additive Manufacturing paper_content: Interest in additive manufacturing (AM) has dramatically expanded in the last several years, owing to the paradigm shift that the process provides over conventional manufacturing. Although the vast majority of recent work in AM has focused on three-dimensional printing in polymers, AM techniques for fabricating metal alloys have been available for more than a decade. Here, laser deposition (LD) is used to fabricate multifunctional metal alloys that have a strategically graded composition to alter their mechanical and physical properties. Using the technique in combination with rotational deposition enables fabrication of compositional gradients radially from the center of a sample. A roadmap for developing gradient alloys is presented that uses multi-component phase diagrams as maps for composition selection so as to avoid unwanted phases. Practical applications for the new technology are demonstrated in low-coefficient of thermal expansion radially graded metal inserts for carbon-fiber spacecraft panels. --- paper_title: An analytical closed-form solution for free vibration of stepped circular/annular Mindlin functionally graded plate paper_content: An exact solution based on a unique procedure is presented for free vibration of stepped circular and annular functionally graded (FG) plates via first-order shear deformation plate theory of Mindlin. A power-law distribution of the volume fraction of the components is considered for the Young's Modulus and Poisson's ratio of the studied FG plate. Free vibration of the plate is solved by introducing some new potential functions and the use of separation of variables method. Finally, several comparisons of the developed model were presented with the FEA analysis, to demonstrate the accuracy of the proposed exact procedure. The effect of the geometrical parameters such as step thickness ratios and step locations on the natural frequencies of FG plates is also investigated. 
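The functionally graded plate entry directly above describes a power-law distribution of the constituent volume fraction for Young's modulus and Poisson's ratio. As a rough illustration of that grading rule (not the paper's own code), the following sketch assumes the commonly used convention V_c(z) = (z/h + 1/2)^n together with a linear rule of mixtures; the exact form and material constants adopted in the cited work may differ.

```python
import numpy as np

def power_law_property(z, h, p_ceramic, p_metal, n):
    """Through-thickness property of a functionally graded plate.

    Assumes the common power-law volume fraction V_c(z) = (z/h + 1/2)**n for
    -h/2 <= z <= h/2 and a linear rule of mixtures; placeholder convention,
    not necessarily the one used in the cited paper.
    """
    v_ceramic = (z / h + 0.5) ** n
    return (p_ceramic - p_metal) * v_ceramic + p_metal

h = 0.02                                   # plate thickness in metres, illustrative
z = np.linspace(-h / 2, h / 2, 5)          # sample points through the thickness
E = power_law_property(z, h, p_ceramic=380e9, p_metal=70e9, n=2.0)   # Young's modulus (Pa)
nu = power_law_property(z, h, p_ceramic=0.26, p_metal=0.30, n=2.0)   # Poisson's ratio
for zi, Ei, nui in zip(z, E, nu):
    print(f"z = {zi:+.4f} m  E = {Ei / 1e9:6.1f} GPa  nu = {nui:.3f}")
```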
--- paper_title: Development of nickel-titanium graded composition components paper_content: Purpose – Three layer-additive manufacturing methods were evaluated for producing nickel-titanium graded composition material. One potential application is fabrication of attachment clips that join thermal protection systems to launch vehicle structure. Thermal gradients during flight generate excessive bending and shear loads that limit the service lifetime of the Inconel clips currently used. It is envisioned that a graded composition component could be tailored to reduce the stress concentrations. Design/methodology/approach – Deposits with nearly continuous composition grade were built from Ti-6-4 and Inconel 718 powder using laser direct metal deposition. Layered deposits were produced by flat wire welding from Ti-6-4 and Inconel 718 wire. Ultrasonic consolidation was used to produce layered deposits from pure nickel and commercially pure titanium foils. Microstructure, bond line morphology, chemical composition, and reaction phases were characterized. Findings – All three manufacturing methods require ... --- paper_title: Laser deposition of compositionally graded titanium–vanadium and titanium–molybdenum alloys paper_content: Compositionally graded binary titanium–vanadium and titanium–molybdenum alloys have been deposited using the laser engineered net-shaping (LENS™) process. A compositional gradient, from elemental Ti to Ti–25at.% V or Ti–25at.% Mo, has been achieved within a length of ∼25 mm. The feedstock used for depositing the graded alloy consists of elemental Ti and V (or Mo) powders. Though the microstructural features across the graded alloy correspond to those typically observed in α/β Ti alloys, the scale of the features is refined in a number of cases. Microhardness measurements across the graded samples exhibit an increase in hardness with increasing alloying content up to a composition of ∼12% in case of Ti–xV and up to a composition of ∼10% in case of the Ti–xMo alloys. Further increase in the alloying content resulted in a decrease in hardness for both the Ti–xV as well as the Ti–xMo alloys. A notable feature of these graded deposits is the large prior β grain size resulting from the directionally solidified nature of the microstructure. Thus, grains ∼10 mm in length grow in a direction perpendicular to the substrate. The ability to achieve such substantial changes in composition across a rather limited length makes this process a highly attractive candidate for combinatorial materials science studies. --- paper_title: Direct laser deposition of Ti-6Al-4V from elemental powder blends paper_content: Purpose – This paper aims to achieve Ti-6Al-4V from Ti, Al and V elemental powder blends using direct laser deposition (DLD) and to understand the effects of laser transverse speed and laser power on the initial fabrication of deposit's microstructure and Vickers hardness. Design/methodology/approach – Two sets of powder blends with different weight percentage ratios for the three elemental powders were used during the DLD process. Five experiments with different processing parameters were performed to evaluate how microstructure and Vickers hardness change with laser power and laser transverse speed. Energy dispersive X-ray spectroscopy, optical microscopy and Vickers hardness test were used to analyze the deposits' properties.
Findings – This paper reveals that significant variance of elemental powder's size and density would cause lack of weight percentage of certain elements in final part and using multiple coaxial powder nozzles design would be a solution. Also, higher laser power or slower laser transverse speed tend to benefit the formation of finer microstructures and increase Vickers hardness. Originality/value – This paper demonstrates a new method to fabricate Ti-6Al-4V and gives out a possible weight percentage ratio 87:7:6 for Ti:Al:V at powder blends during DLD process. The relationship between microstructure and Vickers hardness with laser power and laser transverse speed would provide valuable reference for people working on tailoring material properties using elemental powder method. --- paper_title: Functionally graded Co-Cr-Mo coating on Ti-6Al-4V alloy structures. paper_content: Abstract Functionally graded, hard and wear-resistant Co–Cr–Mo alloy was coated on Ti–6Al–4V alloy with a metallurgically sound interface using Laser Engineering Net Shaping (LENS™). The addition of the Co–Cr–Mo alloy onto the surface of Ti–6Al–4V alloy significantly increased the surface hardness without any intermetallic phases in the transition region. A 100% Co–Cr–Mo transition from Ti–6Al–4V was difficult to produce due to cracking. However, using optimized LENS™ processing parameters, crack-free coatings containing up to 86% Co–Cr–Mo were deposited on Ti–6Al–4V alloy with excellent reproducibility. Human osteoblast cells were cultured to test in vitro biocompatibility of the coatings. Based on in vitro biocompatibility, increasing the Co–Cr–Mo concentration in the coating reduced the live cell numbers after 14 days of culture on the coating compared with base Ti–6Al–4V alloy. However, coated samples always showed better bone cell proliferation than 100% Co–Cr–Mo alloy. Producing near net shape components with graded compositions using LENS™ could potentially be a viable route for manufacturing unitized structures for metal-on-metal prosthetic devices to minimize the wear-induced osteolysis and aseptic loosening that are significant problems in current implant design. --- paper_title: Friction Stir Additive Manufacturing: Route to High Structural Performance paper_content: Aerospace and automotive industries provide the next big opportunities for additive manufacturing. Currently, the additive industry is confronted with four major challenges that have been identified in this article. These challenges need to be addressed for the additive technologies to march into new frontiers and create additional markets. Specific potential success in the transportation sectors is dependent on the ability to manufacture complicated structures with high performance. Most of the techniques used for metal-based additive manufacturing are fusion based because of their ability to fulfill the computer-aided design to component vision. Although these techniques aid in fabrication of complex shapes, achieving high structural performance is a key problem due to the liquid–solid phase transformation. In this article, friction stir additive manufacturing (FSAM) is shown as a potential solid-state process for attaining high-performance lightweight alloys for simpler geometrical applications. To illustrate FSAM as a high-performance route, manufactured builds of Mg-4Y-3Nd and AA5083 are shown as examples.
In the Mg-based alloy, an average hardness of 120 HV was achieved in the built structure and was significantly higher than that of the base material (97 HV). Similarly for the Al-based alloy, compared with the base hardness of 88 HV, the average built hardness was 104 HV. A potential application of FSAM is illustrated by taking an example of a simple stiffener assembly. --- paper_title: A study of subgrain formation in Al 3003 H-18 foils undergoing ultrasonic additive manufacturing using a dislocation density based crystal plasticity finite element framework paper_content: A novel dislocation density based crystal plasticity finite element model (DDCP-FEM) framework has been extended to predict subgrain formation during ultrasonic additive manufacturing of Al 3003 H-18 tempered foils. The present study identifies various microstructural transitions such as recrystallization and dislocation density evolutions that occur during the processing of these foils as a function of input processing parameters such as normal force, ultrasonic oscillation amplitude, and initial microstructure. Furthermore, changes in average grain sizes in the Al 3003 H-18 foils have been calculated before and after processing from both microstructures and the simulation study. The simulation predictions were in good agreement with experimental results. This provides evidence that DDCP-FEM can be used as a tool for optimizing input processing parameters so that minimal grain fragmentation occurs during processing leading to better mechanical properties for 3 dimensional components made using ultrasonic... --- paper_title: Characterization of the laminated object manufacturing (LOM) process paper_content: Laminated object manufacturing (LOM) is a rapid prototyping process where a part is built sequentially from layers of paper. Studied in the present paper are the precision and accuracy of the LOM process and the dimensional stability of LOM parts. The process was found to exhibit both constant and random sources of error in the part dimensions. The dimensional error was the largest normal to the plane of the paper, exacerbated by the moisture absorption and subsequent swelling. The key process parameters were identified and optimized for sufficient bonding and cutting accuracy. --- paper_title: Ultrasonic Additive Manufacturing – A Hybrid Production Process for Novel Functional Products paper_content: Ultrasonic Additive Manufacturing (UAM), or Ultrasonic Consolidation as it is also referred, is a hybrid form of manufacture, primarily for metal components. The unique nature of the process permits extremely novel functionality to be realised such as multi-material structures with embedded componentry. UAM has been subject to research and investigation at Loughborough University since 2001. This paper introduces UAM then details a number of key findings in a number of areas that have been of particular focus at Loughborough in recent years. These include; the influence of pre-process material texture on interlaminar bonding, secure fibre positioning through laser machined channels, and freeform electrical circuitry integration. --- paper_title: Dissimilar metal friction welding of austenitic–ferritic stainless steels paper_content: Abstract Continuous drive friction welding studies on austenitic–ferritic stainless steel combination has been attempted in this investigation. Parameter optimization, microstructure–mechanical property correlation and fracture behaviour is a major contribution of the study. 
Sound welds are obtained at certain weld parameter combinations only. The mechanical properties of dissimilar metal welds are comparable to those of ferritic stainless steel welds. Evaluation of the joints for resistance to pitting corrosion revealed that the dissimilar welds exhibit lower resistance to pitting corrosion compared to the ferritic and austenitic stainless steel welds. Interface on the austenitic stainless steel side exhibited higher residual stress possibly due to its higher flow stress and higher coefficient of thermal expansion. --- paper_title: Statistical Characterization of Ultrasonic Additive Manufacturing Ti/Al Composites paper_content: Ultrasonic additive manufacturing (UAM) is an emerging solid-state fabrication process that can be used for layered creation of solid metal structures. In UAM, ultrasonic energy is used to induce plastic deformation and nascent surface formation at the interface between layers of metal foil, thus creating bonding between the layers. UAM is an inherently stochastic process with a number of unknown facets that can affect the bond quality. In order to take advantage of the unique benefits of UAM, it is necessary to understand the relationship between manufacturing parameters (machine settings) and bond quality by quantifying the mechanical strength of UAM builds. This research identifies the optimum combination of processing parameters, including normal force, oscillation amplitude, weld speed, and number of bilayers for the manufacture of commercially pure, grade 1 titanium+1100-O aluminum composites. A multifactorial experiment was designed to study the effect of the above factors on the outcome measures ultimate shear strength and ultimate transverse tensile strength. Generalized linear models were used to study the statistical significance of each factor. For a given factor, the operating levels were selected to cover the full range of machine capabilities. Transverse shear and transverse tensile experiments were conducted to quantify the bond strength of the builds. Optimum levels of each parameter were established based on statistical contrast trend analyses. The results from these analyses indicate that high mechanical strength can be achieved with a process window bounded by a 1500 N normal force, 30 μm oscillation amplitude, about 42 mm/s weld speed, and two bilayers. The effects of each process parameter on bond strength are discussed and explained. --- paper_title: Characterization of interfacial microstructures in 3003 aluminum alloy blocks fabricated by ultrasonic additive manufacturing paper_content: Ultrasonic additive manufacturing (UAM) is a solid-state processing technique that uses ultrasonic vibrations to bond metal tapes into near net-shaped components. The benefits of UAM include the production of complex geometries and the incorporation of smart materials to produce functional composites and join dissimilar metals. The majority of the current research focuses on processing parameter optimization to eliminate macroscopic void formation at the interface. The present study utilizes ion-channeling contrast imaging from a focused ion beam, electron backscattered diffraction and transmission electron microscopy to examine microstructural changes induced during the UAM process. The results indicate that there is a bonding mechanism due to localized plastic deformation of asperities that undergo recrystallization and grain growth across the interface. 
Evidence for localized solidification microstructures, generated due to frictional sliding between the sonotrode horn and the tape material, is also presented. --- paper_title: Rapid laminated tooling paper_content: Abstract Rapid laminating methods predates rapid prototyping by several years, indeed the first work on laminated tooling by Professor Nakagawa was initially published in the late 1980s. Since then, Nakagawa’s team has been joined by a number of other research groups, with interest being boosted by the advent of rapid prototyping. This paper presents some of the finding of a 3 years study undertaken to investigate the use of laminated steel tooling for a range of automotive and aerospace production processes. The results of the programme are illustrated using production tooling for injection moulding of automotive components. The benefits of laminated tooling are shown, not just in terms of reduced cost and lead-times but perhaps more importantly, through reduced cycle times and improved part quality by the use of conformal cooling. --- paper_title: NiTi–Al interface strength in ultrasonic additive manufacturing composites paper_content: Abstract Ultrasonic Additive Manufacturing (UAM) is a new rapid prototyping process for creating metal-matrix composites at or near room temperature. The low process temperatures enable composite materials that have tailored CTEs through utilizing recovery stresses generated by highly prestrained Shape Memory Alloy (SMA) fibers embedded within the matrix. The strength of the fiber–matrix interface, which is the limiting factor in UAM composites, has not been characterized. In this study, we characterize the shear strength of the fiber–matrix interface and study the bonding between the fiber and matrix in composites fabricated with prestrained NiTi embedded in an Al 3003-H18 matrix. In heating the composite, stresses develop due to the blocked behavior of NiTi and the difference in CTE of the matrix and fiber. Differential scanning calorimetry is used to observe composite failure temperatures; an average interface shear strength of 7.28 MPa is determined using constitutive models of the NiTi element and Al matrix. The constitutive models describe the thermally-induced strain of the composite, showing an effective CTE of zero at 135 °C. The models show that by increasing the embedded fiber length, interface failure temperatures can be increased so that zero CTE behaviors can be utilized without irreversibly changing the NiTi prestrain. Results from energy dispersive X-ray spectroscopy indicate that the bonding between the fiber and interface is mechanical in nature with no evidence to support chemical or metallurgical bonding. --- paper_title: Characterization of Process for Embedding SiC Fibers in Al 6061 O Matrix Through Ultrasonic Consolidation paper_content: In this paper, continuous SiC fibers were embedded in an Al 6061 O matrix through ultrasonic consolidation at room temperature. The optimum embedding parameters were determined through peel tests and metallographic analysis. The influence of the embedded fiber volume fraction and base metal thickness on the interface bond strength was studied, and the fiber/matrix bond strength was tested through fiber pullout test. 
The results showed that embedding ≥0.8% volume fraction of SiC fiber in a 6061 O matrix could significantly increase its interfacial strength, but there is a threshold for the embedded fiber volume fraction at specific parameters, above which the plastic flow and friction may be insufficient to form a strong bond at foil/foil interfaces between fibers. The study also showed that base metal thickness did not have a significant influence on the interfacial strength, with the exception of samples with a base metal thickness of 500 μm. Based on the results, it was proposed that microfriction at consolidation interfaces plays an important role in joint formation, and localized plastic flow around fibers is important to have the fibers fully and safely embedded. --- paper_title: Interfacial shear strength estimates of NiTi–Al matrix composites fabricated via ultrasonic additive manufacturing paper_content: The purpose of this study is to understand and improve the interfacial shear strength of metal matrix composites fabricated via ultrasonic additive manufacturing (UAM). NiTi–Al composites can exhibit dramatically lower thermal expansion compared to aluminum, yet blocking stresses developed during thermal cycling have been found to degrade and eventually cause interface failure in these composites. In this study, the strength of the interface was characterized with pullout tests. Since adhered aluminum was consistently observed on all pullout samples, the matrix yielded prior to the interface breaking. Measured pullout loads were utilized as an input to a finite element model for stress and shear lag analysis. The aluminum matrix experiences a calculated peak shear stress near 230 MPa, which is above its ultimate shear strength of 150–200 MPa, thus corroborating the experimentally-observed matrix failure. The influence of various fiber surface treatments and consolidation characteristics on bond mechanisms was studied with scanning electron microscopy, energy dispersive X-ray spectroscopy, optical microscopy, and focused ion beam microscopy. --- paper_title: 3D bioprinting of tissues and organs paper_content: Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.
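Several of the ultrasonic additive manufacturing entries above (the SiC–Al and NiTi–Al composites) estimate fiber–matrix interfacial shear strength from fiber pullout loads. The sketch below shows the usual uniform-shear pullout estimate, in which the peak load is divided by the embedded cylindrical interface area; this simplification is an assumption here — the cited studies refine such estimates with shear lag and finite element analyses — and the numbers are placeholders rather than data from those papers.

```python
import math

def pullout_shear_strength(peak_load_N, fiber_diameter_m, embedded_length_m):
    """Average interfacial shear strength from a single-fiber pullout test,
    assuming uniform shear over the embedded cylindrical interface:
        tau = F_max / (pi * d * L_embedded)
    This neglects the stress concentrations captured by shear lag / FE models."""
    return peak_load_N / (math.pi * fiber_diameter_m * embedded_length_m)

# Placeholder values, not measurements from the cited studies.
tau = pullout_shear_strength(peak_load_N=44.0,
                             fiber_diameter_m=200e-6,
                             embedded_length_m=10e-3)
print(f"average interfacial shear strength: {tau / 1e6:.2f} MPa")
```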
--- paper_title: Biofabrication of multi-material anatomically shaped tissue constructs paper_content: Additive manufacturing in the field of regenerative medicine aims to fabricate organized tissue-equivalents. However, the control over shape and composition of biofabricated constructs is still a challenge and needs to be improved. The current research aims to improve shape, by converging a number of biocompatible, quality construction materials into a single three-dimensional fiber deposition process. To demonstrate this, several models of complex anatomically shaped constructs were fabricated by combined deposition of poly(vinyl alcohol), poly(e-caprolactone), gelatin methacrylamide/gellan gum and alginate hydrogel. Sacrificial components were co-deposited as temporary support for overhang geometries and were removed after fabrication by immersion in aqueous solutions. Embedding of chondrocytes in the gelatin methacrylamide/gellan component demonstrated that the fabrication and the sacrificing procedure did not affect cell viability. Further, it was shown that anatomically shaped constructs can be successfully fabricated, yielding advanced porous thermoplastic polymer scaffolds, layered porous hydrogel constructs, as well as reinforced cell-laden hydrogel structures. In conclusion, anatomically shaped tissue constructs of clinically relevant sizes can be generated when employing multiple building and sacrificial materials in a single biofabrication session. The current techniques offer improved control over both internal and external construct architecture underscoring its potential to generate customized implants for human tissue regeneration. (Some figures may appear in colour only in the online journal) --- paper_title: Creating Perfused Functional Vascular Channels Using 3D Bio-Printing Technology paper_content: We developed a methodology using 3D bio-printing technology to create a functional in vitro vascular channel with perfused open lumen using only cells and biological matrices. The fabricated vasculature has a tight, confluent endothelium lining, presenting barrier function for both plasma protein and highmolecular weight dextran molecule. The fluidic vascular channel is capable of supporting the viability of tissue up to 5 mm in distance at 5 million cells/mL density under the physiological flow condition. In static-cultured vascular channels, active angiogenic sprouting from the vessel surface was observed whereas physiological flow strongly suppressed this process. Gene expression analysis was reported in this study to show the potential of this vessel model in vascular biology research. The methods have great potential in vascularized tissue fabrication using 3D bio-printing technology as the vascular channel is simultaneously created while cells and matrix are printed around the channel in desired 3D patterns. It can also serve as a unique experimental tool for investigating fundamental mechanisms of vascular remodeling with extracellular matrix and maturation process under 3D flow condition. --- paper_title: Bioprinting technology and its applications paper_content: Bioprinting technology has emerged as a powerful tool for building tissue and organ structures in the field of tissue engineering. This technology allows precise placement of cells, biomaterials and biomolecules in spatially predefined locations within confined three-dimensional (3D) structures. 
Various bioprinting technologies have been developed and utilized for applications in life sciences, ranging from studying cellular mechanisms to constructing tissues and organs for implantation, including heart valve, myocardial tissue, trachea and blood vessels. In this article, we introduce the general principles and limitations of the most widely used bioprinting technologies, including jetting- and extrusion-based systems. Application-based research focused on tissue regeneration is presented, as well as the current challenges that hamper clinical utility of bioprinting technology. --- paper_title: Low temperature additive manufacturing of three dimensional scaffolds for bone-tissue engineering applications: Processing related challenges and property assessment paper_content: Abstract In the last two decades, additive manufacturing (AM) has made significant progress towards the fabrication of biomaterials and tissue engineering constructs. One direction of research is focused on the development of mechanically stable implants with patient-specific size/shape and another direction has been to fabricate tissue-engineered scaffolds with designed porous architecture to facilitate vascularization. Among AM techniques, three dimensional powder printing (3DPP) is suitable for fabrication of bone related prosthetic devices, while three dimensional plotting (3DPL) is based on extrusion of biopolymers to create artificial tissues. In the present review, we aim to develop a better understanding of the science and engineering aspects of these low temperature AM techniques (3DPP and 3DPL) in the context of the bone-tissue engineering applications. While recognizing multiple property requirements of a 3D scaffold, the central theme is to discuss the critical roles played by the binder and powder properties together with the interplay among processing parameters in the context of the physics of binder-material interaction for the fabrication of implants with predefined architecture having structural complexity. An effort also has been exerted to discuss the existing challenges to translate the design concepts and material/binder formulations to develop implantable scaffolds with a more emphasis on bioceramics and biopolymers. Summarizing, this review highlights the need to adopt intelligent processing approaches and targeted application-specific biocompatibility characterization, while fabricating mechanically stable and biologically functionalized 3D tissue equivalents.
--- paper_title: Surface and Shape Deposition Manufacturing for the Fabrication of a Curved Surface Gripper paper_content: Biological systems such as the gecko are complex, involving a wide variety of materials and length scales. Bio-inspired robotic systems seek to emulate this complexity, leading to manufacturing challenges. A new design for a membrane-based gripper for curved surfaces requires the inclusion of microscale features, macroscale structural elements, electrically patterned thin films, and both soft and hard materials. Surface and shape deposition manufacturing (S2DM) is introduced as a process that can create parts with multiple materials, as well as integrated thin films and microtextures. It combines SDM techniques, laser cutting and patterning, and a new texturing technique, surface microsculpting. The process allows for precise registration of sequential additive/subtractive manufacturing steps. S2DM is demonstrated with the manufacture of a gripper that picks up common objects using a gecko-inspired adhesive. The process can be extended to other integrated robotic components that benefit from the integration of textures, thin films, and multiple materials. --- paper_title: Biomimetic Robotic Mechanisms via Shape Deposition Manufacturing paper_content: At small scales, the fabrication of robots from off-the-shelf structural materials, sensors and actuators becomes increasingly difficult. New manufacturing methods such as Shape Deposition Manufacturing offer an alternative approach in which sensors and actuators are embedded directly into three-dimensional structures without fasteners or connectors. In addition, structures can be fabricated with spatially varying material properties such as specific stiffness and damping. These capabilities allow us to consider biomimetic designs that draw their inspiration from crustaceans and insects. Recent research on insect physiology has revealed the importance of passive compliance and damping in achieving robustness and simplifying control. We describe the design and fabrication of small robot limbs with locally varying stiffness and embedded sensors and actuators. We discuss the process planning issues associated with creating such structures and present results obtained via Shape Deposition Manufacturing. --- paper_title: Shape Deposition Manufacturing of a Soft, Atraumatic, and Deployable Surgical Grasper paper_content: Laparoscopic pancreaticoduodenectomy (also known as the Whipple procedure) is a highly-complex minimallyinvasive surgical (MIS) procedure used to remove cancer from the head of the pancreas. While mortality rates of the MIS approach are comparable with those of open procedures, morbidity rates remain high due to the delicate nature of the pancreatic tissue, proximity of high-pressure vasculature, and the number of complex anastomoses required [1]. The sharp, rigid nature of the tools and forceps used to manipulate these structures, coupled with lack of haptic feedback, can result in leakage or hemorrhage, which can obfuscate the surgeon’s view and force the surgeon to convert to an open procedure. We present a deployable atraumatic grasper with onboard pressure sensing, allowing a surgeon to grasp and manipulate soft tissue during laparoscopic pancreatic surgery. 
Created using shape deposition manufacturing, with pressure sensors embedded in each finger enabling real-time grip force monitoring, the device offers the potential to reduce the risk of intraoperative hemorrhage by providing the surgeon with a soft, compliant interface between delicate pancreatic tissue structures and metal laparoscopic forceps that are currently used to manipulate and retract these structures on an ad-hoc basis. Initial manipulation tasks in a simulated environment have demonstrated that the device can be deployed though a 15mm trocar and develop a stable grasp on a pancreas analog using Intuitive Surgical’s daVinci robotic end-effectors. --- paper_title: Multimechanism oral dosage forms fabricated by three dimensional printing. paper_content: Four types of complex oral drug delivery devices have been fabricated using the three dimensional printing process. Immediate-extended release tablets were fabricated which were composed of two drug-containing sections of different pH-based release mechanisms. Pulsed release of chlorpheniramine maleate occurred after a lag time of 10 min followed by extended release of the compound over a period of 7 h. Breakaway tablets were fabricated composed of three sections. An interior fast-eroding section separating two drug-releasing sub-units eroded in 30-45 min in simulated gastric fluid. Enteric dual pulsatory tablets were constructed of one continuous enteric excipient phase into which diclofenac sodium was printed into two separated areas. These samples showed two pulses of release during in vitro USP dissolution at 1 and 8 h with a lag time between pulses of about 4 h. Dual pulsatory tablets were also fabricated. These samples were composed of two erosion based excipient sections of opposite pH based solubility. One section eroded immediately during the acid dissolution stage releasing diclofenac during the first 30 min, and the second section began eroding 5 h later during the high pH stage. --- paper_title: Inkjet printing for pharmaceutics – A review of research and manufacturing paper_content: Global regulatory, manufacturing and consumer trends are driving a need for change in current pharmaceutical sector business models, with a specific focus on the inherently expensive research costs, high-risk capital-intensive scale-up and the traditional centralised batch manufacturing paradigm. New technologies, such as inkjet printing, are being explored to radically transform pharmaceutical production processing and the end-to-end supply chain. This review provides a brief summary of inkjet printing technologies and their current applications in manufacturing before examining the business context driving the exploration of inkjet printing in the pharmaceutical sector. We then examine the trends reported in the literature for pharmaceutical printing, followed by the scientific considerations and challenges facing the adoption of this technology. We demonstrate that research activities are highly diverse, targeting a broad range of pharmaceutical types and printing systems. To mitigate this complexity we show that by categorising findings in terms of targeted business models and Active Pharmaceutical Ingredient (API) chemistry we have a more coherent approach to comparing research findings and can drive efficient translation of a chosen drug to inkjet manufacturing. --- paper_title: Three-Dimensional Printing of Carbamazepine Sustained-Release Scaffold. 
paper_content: Carbamazepine is the first-line anti-epileptic drug for focal seizures and generalized tonic-clonic seizures. Although sustained-release formulations exist, an initial burst of drug release is still present and this results in side effects. Zero-order release formulations reduce fluctuations in serum drug concentrations, thereby reducing side effects. Three-dimensional printing can potentially fabricate zero-order release formulations with complex geometries. 3D printed scaffolds with varying hole positions (side and top/bottom), number of holes (4, 8, and 12), and hole diameters (1, 1.5, and 2 mm) were designed. Dissolution tests and high performance liquid chromatography analysis were conducted. Good correlations in the linear release profiles of all carbamazepine-containing scaffolds with side holes (R(2) of at least 0.91) were observed. Increasing the hole diameters (1, 1.5, and 2 mm) resulted in increased rate of drug release in the scaffolds with 4 holes (0.0048, 0.0065, and 0.0074 mg/min) and 12 holes (0.0021, 0.0050, and 0.0092 mg/min), and the initial amount of carbamazepine released in the scaffolds with 8 holes (0.4348, 0.7246, and 1.0246 mg) and 12 holes (0.1995, 0.8598, and 1.4366 mg). The ultimate goal of this research is to improve the compliance of patients through a dosage form that provides a zero-order drug release profile for anti-epileptic drugs, so as to achieve therapeutic doses and minimize side effects. --- paper_title: 3D printing of tablets containing multiple drugs with defined release profiles. paper_content: We have employed three-dimensional (3D) extrusion-based printing as a medicine manufacturing technique for the production of multi-active tablets with well-defined and separate controlled release profiles for three different drugs. This 'polypill' made by a 3D additive manufacture technique demonstrates that complex medication regimes can be combined in a single tablet and that it is viable to formulate and 'dial up' this single tablet for the particular needs of an individual. The tablets used to illustrate this concept incorporate an osmotic pump with the drug captopril and sustained release compartments with the drugs nifedipine and glipizide. This combination of medicines could potentially be used to treat diabetics suffering from hypertension. The room temperature extrusion process used to print the formulations used excipients commonly employed in the pharmaceutical industry. Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR) and X-ray powder diffraction (XRPD) were used to assess drug-excipient interaction. The printed formulations were evaluated for drug release using USP dissolution testing. We found that the captopril portion showed the intended zero order drug release of an osmotic pump and noted that the nifedipine and glipizide portions showed either first order release or Korsmeyer-Peppas release kinetics dependent upon the active/excipient ratio used. --- paper_title: Analytical analysis of buckling and post-buckling of fluid conveying multi-walled carbon nanotubes paper_content: Abstract In this work, buckling and post-buckling analysis of fluid conveying multi-walled carbon nanotubes are investigated analytically. The nonlinear governing equations of motion and boundary conditions are derived based on Eringen nonlocal elasticity theory. The nanotube is modeled based on Euler–Bernoulli and Timoshenko beam theories. 
The Von Karman strain–displacement equation is used to model the structural nonlinearities. Furthermore, the Van der Waals interaction between adjacent layers is taken into account. An analytical approach is employed to determine the critical (buckling) fluid flow velocities and post-buckling deflection. The effects of the small-scale parameter, Van der Waals force, ends support, shear deformation and aspect ratio are carefully examined on the critical fluid velocities and post-buckling behavior. --- paper_title: Adaptation of pharmaceutical excipients to FDM 3D printing for the fabrication of patient-tailored immediate release tablets. paper_content: This work aims to employ fused deposition modelling 3D printing to fabricate immediate release pharmaceutical tablets with several model drugs. It investigates the addition of non-melting filler to methacrylic matrix to facilitate FDM 3D printing and explore the impact of (i) the nature of filler, (ii) compatibility with the gears of the 3D printer and iii) polymer: filler ratio on the 3D printing process. Amongst the investigated fillers in this work, directly compressible lactose, spray-dried lactose and microcrystalline cellulose showed a level of degradation at 135°C whilst talc and TCP allowed consistent flow of the filament and a successful 3D printing of the tablet. A specially developed universal filament based on pharmaceutically approved methacrylic polymer (Eudragit EPO) and thermally stable filler, TCP (tribasic calcium phosphate) was optimised. Four model drugs with different physicochemical properties were included into ready-to-use mechanically stable tablets with immediate release properties. Following the two thermal processes (hot melt extrusion (HME) and fused deposition modelling (FDM) 3D printing), drug contents were 94.22%, 88.53%, 96.51% and 93.04% for 5-ASA, captopril, theophylline and prednisolone respectively. XRPD indicated that a fraction of 5-ASA, theophylline and prednisolone remained crystalline whilst captopril was in amorphous form. By combining the advantages of thermally stable pharmaceutically approved polymers and fillers, this unique approach provides a low cost production method for on demand manufacturing of individualised dosage forms. --- paper_title: Dropwise additive manufacturing of pharmaceutical products for solvent-based dosage forms. paper_content: In recent years, the US Food and Drug Administration has encouraged pharmaceutical companies to develop more innovative and efficient manufacturing methods with improved online monitoring and control. Mini-manufacturing of medicine is one such method enabling the creation of individualized product forms for each patient. This work presents dropwise additive manufacturing of pharmaceutical products (DAMPP), an automated, controlled mini-manufacturing method that deposits active pharmaceutical ingredients (APIs) directly onto edible substrates using drop-on-demand (DoD) inkjet printing technology. The use of DoD technology allows for precise control over the material properties, drug solid state form, drop size, and drop dynamics and can be beneficial in the creation of high-potency drug forms, combination drugs with multiple APIs or individualized medicine products tailored to a specific patient. In this work, DAMPP was used to create dosage forms from solvent-based formulations consisting of API, polymer, and solvent carrier. 
The forms were then analyzed to determine the reproducibility of creating an on-target dosage form, the morphology of the API of the final form and the dissolution behavior of the drug over time. DAMPP is found to be a viable alternative to traditional mass-manufacturing methods for solvent-based oral dosage forms. --- paper_title: Control of the non-linear static deflection experienced by a fluid-carrying double-walled carbon nanotube using an external distributed load paper_content: In this work, an external distributed line load is used to control the postbuckled static deflection of a double-walled carbon nanotube within which a continuous non-viscous fluid flows at a constant velocity. The non-linear equation of motion and boundary conditions are derived using the non-local Timoshenko beam model for the double-walled carbon nanotube, incorporating Von Karman-type geometric non-linear behavior and taking account of the Van der Waals interaction between adjacent layers. First, in the absence of the control force, an analytical approach is used to determine the buckling fluid flow velocities and steady state non-linear static deflection at velocities greater than the buckling velocity for both the cases of a simple-simple and a simple-clamped end support. Then, the control force participates as an indeterminate parameter in the equations. The control force is obtained by resolving equations in which the postbuckled configuration is considered to be an exponential function of time wit... --- paper_title: 3-dimensional (3D) fabricated polymer based drug delivery systems. paper_content: Drug delivery from 3-dimensional (3D) structures is a rapidly growing area of research. It is essential to achieve structures wherein drug stability is ensured, the drug loading capacity is appropriate and the desired controlled release profile can be attained. Attention must also be paid to the development of appropriate fabrication machinery that allows 3D drug delivery systems (DDS) to be produced in a simple, reliable and reproducible manner. The range of fabrication methods currently being used to form 3D DDSs include electrospinning (solution and melt), wet-spinning and printing (3-dimensional). The use of these techniques enables production of DDSs from the macro-scale down to the nano-scale. This article reviews progress in these fabrication techniques to form DDSs that possess desirable drug delivery kinetics for a wide range of applications. --- paper_title: Hot-melt extruded filaments based on pharmaceutical grade polymers for 3D printing by fused deposition modeling. paper_content: Fused deposition modeling (FDM) is a 3D printing technique based on the deposition of successive layers of thermoplastic materials following their softening/melting. Such a technique holds huge potential for the manufacturing of pharmaceutical products and is currently under extensive investigation. Challenges in this field are mainly related to the paucity of adequate filaments composed of pharmaceutical grade materials, which are needed for feeding the FDM equipment. Accordingly, a number of polymers of common use in pharmaceutical formulation were evaluated as starting materials for fabrication via hot melt extrusion of filaments suitable for FDM processes. 
By using a twin-screw extruder, filaments based on insoluble (ethylcellulose, Eudragit® RL), promptly soluble (polyethylene oxide, Kollicoat® IR), enteric soluble (Eudragit® L, hydroxypropyl methylcellulose acetate succinate) and swellable/erodible (hydrophilic cellulose derivatives, polyvinyl alcohol, Soluplus®) polymers were successfully produced, and the possibility of employing them for printing 600 μm thick disks was demonstrated. The behavior of disks as barriers when in contact with aqueous fluids was shown consistent with the functional application of the relevant polymeric components. The produced filaments were thus considered potentially suitable for printing capsules and coating layers for immediate or modified release, and, when loaded with active ingredients, any type of dosage forms. --- paper_title: A flexible-dose dispenser for immediate and extended release 3D printed tablets paper_content: The advances in personalised medicine increased the demand for a fast, accurate and reliable production method of tablets that can be digitally controlled by healthcare staff. A flexible dose tablet system is presented in this study that proved to be suitable for immediate and extended release tablets with a realistic drug loading and an easy-to-swallow tablet design. The method bridges the affordable and digitally controlled Fused Deposition Modelling (FDM) 3D printing with a standard pharmaceutical manufacturing process, Hot Melt Extrusion (HME). The reported method was compatible with three methacrylic polymers (Eudragit RL, RS and E) as well as a cellulose-based one (hydroxypropyl cellulose, HPC SSL). The use of a HME based pharmaceutical filament preserved the linear relationship between the mass and printed volume and was utilized to digitally control the dose via an input from computer software with dose accuracy in the range of 91-95%. Higher resolution printing quality doubled the printing time, but showed little effect on the in vitro release pattern of theophylline and weight accuracy. Physical characterization studies indicated that the majority of the model drug (theophylline) in the 3D printed tablet exists in a crystal form. Owing to the small size, ease of use and the highly adjustable nature of FDM 3D printers, the method holds promise for future individualised treatment. --- paper_title: 3D printing in pharmaceutics: A new tool for designing customized drug delivery systems. paper_content: Three-dimensional printing includes a wide variety of manufacturing techniques, which are all based on digitally-controlled depositing of materials (layer-by-layer) to create freeform geometries. Therefore, three-dimensional printing processes are commonly associated with freeform fabrication techniques. For years, these methods were extensively used in the field of biomanufacturing (especially for bone and tissue engineering) to produce sophisticated and tailor-made scaffolds from patient scans. This paper aims to review the processes that can be used in pharmaceutics, including the parameters to be controlled. In practice, it is not straightforward for a formulator to be aware of the various technical advances made in this field, which is gaining more and more interest. Thus, a particular aim of this review is to give an overview on the pragmatic tools, which can be used for designing customized drug delivery systems using 3D printing.
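Several of the abstracts above characterise tablet performance in terms of standard release-kinetics models (zero-order release for the osmotic-pump and scaffold formulations, first-order and Korsmeyer-Peppas release for the printed compartments). As a purely illustrative aid, the short Python sketch below shows how cumulative dissolution data could be fitted to these three models; the time points, release values and initial guesses are hypothetical and are not taken from any of the cited papers.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative release data: time (h) vs. % drug released.
t = np.array([0.5, 1, 2, 3, 4, 5, 6, 7])
release = np.array([7, 14, 27, 39, 50, 60, 69, 77])

def zero_order(t, k0):
    # Constant release rate: Q(t) = k0 * t
    return k0 * t

def first_order(t, k1):
    # Exponential approach to 100 % released: Q(t) = 100 * (1 - exp(-k1 * t))
    return 100.0 * (1.0 - np.exp(-k1 * t))

def korsmeyer_peppas(t, k, n):
    # Power-law model, usually applied to the first ~60 % of release.
    return k * t ** n

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

for name, model, p0 in [("zero-order", zero_order, (10.0,)),
                        ("first-order", first_order, (0.2,)),
                        ("Korsmeyer-Peppas", korsmeyer_peppas, (10.0, 0.8))]:
    popt, _ = curve_fit(model, t, release, p0=p0)
    fit = model(t, *popt)
    print(f"{name}: parameters={np.round(popt, 3)}, R^2={r_squared(release, fit):.3f}")

In this framing, a near-unity R^2 for the zero-order fit corresponds to the kind of linear release profile reported above for the carbamazepine scaffolds.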
--- paper_title: Integration of additive manufacturing and inkjet printed electronics: a potential route to parts with embedded multifunctionality paper_content: Additive manufacturing, an umbrella term for a number of different manufacturing techniques, has attracted increasing interest recently for a number of reasons, such as the facile customisation of parts, reduced time to manufacture from initial design, and possibilities in distributed manufacturing and structural electronics. Inkjet printing is an additive manufacturing technique that is readily integrated with other manufacturing processes, eminently scalable and used extensively in printed electronics. It therefore presents itself as a good candidate for integration with other additive manufacturing techniques to enable the creation of parts with embedded electronics in a timely and cost effective manner. This review introduces some of the fundamental principles of inkjet printing, such as droplet generation, deposition, phase change and post-deposition processing. Particular focus is given to materials most relevant to incorporating structural electronics and how post-processing of these materials has been able to maintain compatibility with temperature sensitive substrates. Specific obstacles likely to be encountered in such an integration and potential strategies to address them will also be discussed. --- paper_title: Metal-based Inkjet Inks for Printed Electronics paper_content: A review on applications of metal-based inkjet inks for printed electronics with a particular focus on inks containing metal nanoparticles, complexes and metallo-organic compounds. The review describes the preparation of such inks and obtaining conductive patterns by using various sintering methods: thermal, photonic, microwave, plasma, electrical, and chemically triggered. Various applications of metal-based inkjet inks (metallization of solar cells, RFID antennas, OLEDs, thin film transistors, electroluminescence devices) are reviewed. --- paper_title: 3D Printing for the Rapid Prototyping of Structural Electronics paper_content: In new product development, time to market (TTM) is critical for the success and profitability of next generation products. When these products include sophisticated electronics encased in 3D packaging with complex geometries and intricate detail, TTM can be compromised - resulting in lost opportunity. The use of advanced 3D printing technology enhanced with component placement and electrical interconnect deposition can provide electronic prototypes that now can be rapidly fabricated in comparable time frames as traditional 2D bread-boarded prototypes; however, these 3D prototypes include the advantage of being embedded within more appropriate shapes in order to authentically prototype products earlier in the development cycle. The fabrication freedom offered by 3D printing techniques, such as stereolithography and fused deposition modeling, has recently been explored in the context of 3D electronics integration - referred to as 3D structural electronics or 3D printed electronics. Enhanced 3D printing may eventually be employed to manufacture end-use parts and thus offer unit-level customization with local manufacturing; however, until the materials and dimensional accuracies improve (an eventuality), 3D printing technologies can be employed to reduce development times by providing advanced geometrically appropriate electronic prototypes.
This paper describes the development process used to design a novelty six-sided gaming die. The die includes a microprocessor and accelerometer, which together detect motion and upon halting, identify the top surface through gravity and illuminate light-emitting diodes for a striking effect. By applying 3D printing of structural electronics to expedite prototyping, the development cycle was reduced from weeks to hours. --- paper_title: Hybrid additive manufacturing of 3D electronic systems paper_content: A novel hybrid additive manufacturing (AM) technology combining digital light projection (DLP) stereolithography (SL) with 3D micro-dispensing alongside conventional surface mount packaging is presented in this work. This technology overcomes the inherent limitations of individual AM processes and integrates seamlessly with conventional packaging processes to enable the deposition of multiple materials. This facilitates the creation of bespoke end-use products with complex 3D geometry and multi-layer embedded electronic systems. Through a combination of four-point probe measurement and non-contact focus variation microscopy, it was identified that there was no obvious adverse effect of the DLP SL embedding process on the electrical conductivity of printed conductors. The resistivity remained below 4 × 10⁻⁴ Ω·cm before and after DLP SL embedding when cured at 100 °C for 1 h. The mechanical strength of SL specimens with thick polymerized layers was also identified through tensile testing. It was found that the polymerization thickness should be minimised (less than 2 mm) to maximise the bonding strength. As a demonstrator, a polymer pyramid with embedded triple-layer 555 LED blinking circuitry was successfully fabricated to prove the technical viability. --- paper_title: Inkjet printing for flexible electronics: Materials, processes and equipments paper_content: Inkjet printing, known as a digital writing technique, can directly deposit functional materials to form patterns on a substrate. This paper provides an overview of inkjet printing technologies for flexible electronics. Firstly, we highlight materials challenges in implementing flexible devices into practical application, especially for the inkjet printing process. Then the micro/nano-patterning technologies of inkjet printing are discussed, including conventional inkjet printing techniques and electrohydrodynamic printing techniques. Thirdly, the related equipment for inkjet printing is shown. Finally, challenges for its future development are also discussed. The main purpose of the work is to condense the basic knowledge and highlight the challenges associated with the burgeoning and exciting field of inkjet printing for flexible electronics. --- paper_title: Aerosol based direct-write micro-additive fabrication method for sub-mm 3D metal-dielectric structures paper_content: The fabrication of 3D metal-dielectric structures at sub-mm length scale is highly important in order to realize low-loss passives and GHz wavelength antennas with applications in wearable and Internet-of-Things (IoT) devices. The inherent 2D nature of lithographic processes severely limits the available manufacturing routes to fabricate 3D structures. Further, the lithographic processes are subtractive and require the use of environmentally harmful chemicals. In this letter, we demonstrate an additive manufacturing method to fabricate 3D metal-dielectric structures at sub-mm length scale.
A UV curable dielectric is dispensed from an Aerosol Jet system at 10–100 µm length scale and instantaneously cured to build complex 3D shapes at a length scale <1 mm. A metal nanoparticle ink is then dispensed over the 3D dielectric using a combination of jetting action and tilted dispense head, also using the Aerosol Jet technique and at a length scale 10–100 µm, followed by the nanoparticle sintering. Simulation studies are carried out to demonstrate the feasibility of using such structures as mm-wave antennas. The manufacturing method described in this letter opens up the possibility of fabricating an entirely new class of custom-shaped 3D structures at a sub-mm length scale with potential applications in 3D antennas and passives. --- paper_title: Synthesis of Ag/RGO composite as effective conductive ink filler for flexible inkjet printing electronics paper_content: Abstract The inkjet printing technology has been an attractive alternative to conventional photolithography to fabricate flexible electronics, owing to its advantages including easy control and low cost. However, development of an appropriate ink possessing low cost, high conductivity and good dispersivity for inkjet printing is still a big challenge. In this work, the well dispersed Ag/RGO composite was obtained by anchoring silver nanoparticles (Ag NPs) on the surface of reduced graphene oxide (RGO) sheet, which served as one of promising conductive ink fillers for printable flexible electronics. The synthesized Ag/RGO composite improved the conductivity of ink with Ag NPs and promoted the dispersivity of RGO to avoid nozzle jam, which offered a proper solution to the two critical issues of conductive ink: conductivity and dispersivity. The Ag/RGO composite was attested to be a suitable material for fabricating printable flexible electronics at 100 °C and developing graphene-based functional electronic devices. --- paper_title: Aerosol Jet Printing of Nano Particle Based Electrical Chip Interconnects paper_content: Abstract Mask less fabricated 3D interconnects may have a big potential in future microelectronic applications by enhancing freedom of device design or power- and footprint saving capabilities along with performance improvements. Within this paper research activities are reported about evaluation of Aerosol Jet Printing (AJP) for feasibility of fabricating non-planar nano particle based electrical chip interconnects on a 3D-integrated System in a Package (SiP) including MEMS that is capable to wake up integrated electronics from power down mode using piezoelectric MEMS components. 3D-stacked multi-chip modules with a footprint of 9 mm x 10 mm are functional connected by AJP after single chips are mounted to printed circuit board (PCB) using underfiller adhesives. AJP, morphology of printed paths, and electrical resistances are investigated. Several challenging factors like printed line width, printed layer thickness, focus limiting standoff height of printhead, thermo-mechanical properties of printed interconnects along with SiP layout were identified and discussed during process development. --- paper_title: Formulation and processing of novel conductive solution inks in continuous inkjet printing of 3-D electric circuits paper_content: One of the greatest challenges for the inkjet printing electrical circuits is formulation and processing of conductive inks. 
In the present investigation, two different formulations of particle-free conductive solutions are introduced that are low in cost, easy to deposit, and possess good electrical properties. A novel aqueous solution consisting of silver nitrate and additives is initially described. This solution demonstrates excellent adherence to glass and polymers and has an electrical resistivity only 2.9 times that of bulk silver after curing. A metallo-organic decomposition (MOD) ink is subsequently introduced. This ink produces a close-packed silver crystal microstructure after low-temperature thermolysis and subsequent high-temperature annealing. The electrical conductance of the final consolidated trace produced with the MOD ink is very close to bulk silver. In addition, the traces produced with the MOD material exhibit excellent wear and fracture resistance. When utilized in a specialized continuous inkjet (CIJ) printing technology system, both particle-free solution inks are able to produce conductive traces in three dimensions. The importance of three-dimensional (3-D) printing of conductive traces is finally discussed in relation to the broad range of applications in the freeform fabrication industry. --- paper_title: Inkjet printed System-in-Package design and manufacturing paper_content: Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current non-additive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes, several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities to electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered. --- paper_title: Printed optics: 3D printing of embedded optical elements for interactive devices paper_content: We present an approach to 3D printing custom optical elements for interactive devices labelled Printed Optics. Printed Optics enable sensing, display, and illumination elements to be directly embedded in the casing or mechanical structure of an interactive device. Using these elements, unique display surfaces, novel illumination techniques, custom optical sensors, and embedded optoelectronic components can be digitally fabricated for rapid, high fidelity, highly customized interactive devices. Printed Optics is part of our long term vision for interactive devices that are 3D printed in their entirety. In this paper we explore the possibilities for this vision afforded by fabrication of custom optical elements using today's 3D printing technology. 
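The conductive-ink abstracts above quote trace quality in terms of resistivity relative to bulk silver (for example, 2.9 times bulk, or below 4 × 10⁻⁴ Ω·cm). The sketch below shows the underlying calculation, R = ρL/A, for a printed trace; the trace dimensions and the printed-ink resistivity are assumed values chosen only for illustration, with bulk silver taken as roughly 1.59 × 10⁻⁶ Ω·cm.

# Rough estimate of the DC resistance of a printed silver trace (R = rho * L / A).
# All dimensions and the printed-ink resistivity below are illustrative assumptions.

RHO_BULK_AG = 1.59e-6   # ohm*cm, approximate bulk silver resistivity

def trace_resistance(rho_ohm_cm, length_cm, width_um, thickness_um):
    area_cm2 = (width_um * 1e-4) * (thickness_um * 1e-4)  # convert um to cm
    return rho_ohm_cm * length_cm / area_cm2

rho_printed = 2.9 * RHO_BULK_AG          # e.g. an ink that sinters to 2.9x bulk resistivity
r_printed = trace_resistance(rho_printed, length_cm=5.0, width_um=100.0, thickness_um=5.0)
r_bulk = trace_resistance(RHO_BULK_AG, length_cm=5.0, width_um=100.0, thickness_um=5.0)
print(f"printed trace: {r_printed:.2f} ohm, ideal bulk-silver trace: {r_bulk:.2f} ohm")

This kind of estimate makes clear why both the sintered resistivity and the achievable trace cross-section matter when comparing inks and deposition processes.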
--- paper_title: Three dimensional printing of high dielectric capacitor using projection based stereolithography method paper_content: Abstract We report that efficient high dielectric polymer/ceramic composite materials can be optically printed into three-dimensional (3D) capacitor by the projection based stereolithography (SLA) method. Surface decoration of Ag on Pb(Zr,Ti)O3 (PZT@Ag) particles were used as filler to enhance the dielectric permittivity. Polymer nanocomposites were fabricated by incorporating PZT@Ag particles into the photocurable polymer solutions, followed by exposure to the digitally controlled optical masks to generate 3D structures. The dielectric permittivity of Flex/PZT@Ag composite reaches as high as 120 at 100 Hz with 18 vol% filler, which is about 30 times higher than that of pure Flex. Furthermore, the dielectric loss is as low as 0.028 at 100 Hz. The results are in good agreement with the effective medium theory (EMT) model. The calculated specific capacitance of our 3D printed capacitor is about 63 F g⁻¹ at the current density of 0.5 A g⁻¹. Cyclic voltammetry (CV) curves indicate 3D printed capacitors possess low resistance and ideal capacitive properties. These results not only provide a tool to fabricate capacitor with complex shapes but lay the groundwork for creating highly efficient polymer-based composites via 3D printing method for electronic applications. --- paper_title: Printing conformal electronics on 3D structures with Aerosol Jet technology paper_content: Fabrication of 3D mechanical structures is sometimes achieved by layer-wise printing of inks and resins in conjunction with treatments such as photonic curing and laser sintering. The non-treated material is typically dissolved leaving the final 3D part. Such techniques are generally limited to a single material which makes it difficult to integrate high resolution, conformal electronics over part surfaces. In this paper, we demonstrate a novel, non-contact technique for printing conformal circuits called Aerosol Jet printing. This technique creates a collimated jet of aerosol droplets that extend 2–5 mm from the nozzle to the target. The deposited features can be as small as 10 microns or as large as a centimeter wide. A variety of materials can be printed such as metal nanoparticle inks, polymers, adhesives, ceramics, and bio-active matter. The print head direction and XYZ positioning is controlled by CAD/CAM software which allows conformal printing onto 3D substrates having a high level of surface topography. For example, metallic traces can be printed into 3D shapes such as trenches and via holes, as well as onto sidewalls and convex and concave surfaces. We discuss the fabrication of a conformal phase array antenna, embedded circuitry and sensors, and electronic packaging. --- paper_title: Integrating stereolithography and direct print technologies for 3D structural electronics fabrication paper_content: Purpose – The purpose of this paper is to present a hybrid manufacturing system that integrates stereolithography (SL) and direct print (DP) technologies to fabricate three‐dimensional (3D) structures with embedded electronic circuits. A detailed process was developed that enables fabrication of monolithic 3D packages with electronics without removal from the hybrid SL/DP machine during the process.
Successful devices are demonstrated consisting of simple 555 timer circuits designed and fabricated in 2D (single layer of routing) and 3D (multiple layers of routing and component placement).Design/methodology/approach – A hybrid SL/DP system was designed and developed using a 3D Systems SL 250/50 machine and an nScrypt micro‐dispensing pump integrated within the SL machine through orthogonally‐aligned linear translation stages. A corresponding manufacturing process was also developed using this system to fabricate 2D and 3D monolithic structures with embedded electronic circuits. The process involved part de... --- paper_title: Aerosol jet printed top grids for organic optoelectronic devices paper_content: Abstract Aerosol jet deposited metallic grids are very promising as transparent electrodes for large area organic solar cells and organic light emitting diodes. However, the homogeneity and the printing speed remain a challenge. We report homogeneous and rapidly printed metallic lines based on a complex-based metal–organic silver ink using a processing temperature of 140 °C. We show that inhomogeneities, which are present in printed structures at increased printing speeds and mainly caused by drying effects, can be improved by adding high boiling point solvents. We demonstrate solution processed highly conductive and transparent hybrid electrodes on inverted organic solar cells comprising digitally printed top silver grids. --- paper_title: A facile method for integrating direct-write devices into three-dimensional printed parts paper_content: Integrating direct-write (DW) devices into three-dimensional (3D) printed parts is key to continuing innovation in engineering applications such as smart material systems and structural health monitoring. However, this integration is challenging because: (1) most 3D printing techniques leave rough or porous surfaces if they are untreated; (2) the thermal sintering process required for most conductive inks could degrade the polymeric materials of 3D printed parts; and (3) the extensive pause needed for the DW process during layer-by-layer fabrication may cause weaker interlayer bonding and create structural weak points. These challenges are rather common during the insertion of conductive patterns inside 3D printed structures. As an avoidance tactic, we developed a simple ‘print-stick-peel’ method to transfer the DW device from the polytetrafluoroethylene or perfluoroalkoxy alkanes film onto any layer of a 3D printed object. This transfer can be achieved using the self-adhesion of 3D printing materials or applying additional adhesive. We demonstrated this method by transferring Aerosol Jet® printed strain sensors into parts fabricated by PolyJet™ printing. This report provides an investigation and discussion on the sensitivity, reliability, and influence embedding the sensor has on mechanical properties. --- paper_title: Application of 3D Printing for Smart Objects with Embedded Electronic Sensors and Systems paper_content: Applications of a 3D printing process are presented. This process integrates liquid-state printed components and interconnects with IC chips in all three dimensions, various orientations, and multiple printing layers to deliver personalized system-level functionalities. As an example application, a form-fitting glove is demonstrated with embedded programmable heater, temperature sensor, and the associated control electronics for thermotherapeutic treatment. 
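The two preceding abstracts characterise embedded printed sensors mainly through their electrical response (strain-sensor sensitivity, temperature sensing for the heated glove). As an illustration of how such sensitivity is commonly quantified, the sketch below computes the gauge factor GF = (ΔR/R0)/ε of a resistive strain sensor; the resistance and strain values are hypothetical and are not drawn from the cited work.

# Gauge factor of a resistive strain sensor: GF = (delta_R / R0) / strain.
# The readings below are hypothetical example values, not data from the cited papers.

def gauge_factor(r0_ohm, r_ohm, strain):
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

r0 = 120.0                                                      # unstrained resistance, ohm
readings = [(0.001, 120.26), (0.002, 120.53), (0.003, 120.79)]  # (strain, resistance)

for strain, r in readings:
    print(f"strain={strain:.3f}  GF={gauge_factor(r0, r, strain):.2f}")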
--- paper_title: 3-dimensional circuit device fabrication process using stereolithography and direct writing paper_content: Additive manufacturing (AM) technology is a method to fabricate a 3-dimensional structure by stacking thin layers. Among AM technologies, it is known that stereolithography (SL) technology can fabricate a structure having high dimensional accuracy. Recently, this technology is being applied for real manufacturing. Meanwhile, direct writing (DW) technology has been used to apply a low-viscosity liquid material on a substrate. In this regard, in this research, we applied the DW technology to deposit a conductive material on the surface of a three-dimensional structure. Moreover, a three-dimensional circuit device (3DCD) which is different from conventional two-dimensional PCBs can be fabricated by the hybrid process of SL and DW technologies. An insulated circuit-board structure having high precision was fabricated using stereolithography. Furthermore, a circuit was fabricated on several layers using DW. Finally, a 3DCD sample that detects light was successfully fabricated. --- paper_title: Integration of Direct-Write (DW) and Ultrasonic Consolidation (UC) Technologies to Create Advanced Structures with Embedded Electrical Circuitry. paper_content: In many instances conductive traces are needed in small, compact and enclosed areas. However, with traditional manufacturing techniques, embedded electrical traces or antenna arrays have not been a possibility. By integrating Direct Write and Ultrasonic Consolidation technologies, electronic circuitry, antennas and other devices can be manufactured directly into a solid metal structure and subsequently completely enclosed. This can achieve a significant reduction in mass and volume of a complex electronic system without compromising performance. --- paper_title: Approaches for Additive Manufacturing of 3D Electronic Applications paper_content: Abstract Additive manufacturing processes typically used for mechanical parts can be combined with enhanced technologies for electronics production to enable a highly flexible manufacturing of personalized 3D electronic devices.
To illustrate different approaches for implementing electrical and electronic functionality, conductive paths and electronic components were embedded in a powder bed printed substrate using an enhanced 3D printer. In addition, a modified Aerosol Jet printing process and assembly technologies adapted from the technology of Molded Interconnect Devices were applied to print circuit patterns and to electrically interconnect components on the surface of the 3D substrates. --- paper_title: Printing Three-Dimensional Electrical Traces in Additive Manufactured Parts for Injection of Low Melting Temperature Metals paper_content: While techniques exist for the rapid prototyping of mechanical and electrical components separately, this paper describes a method where commercial additive manufacturing (AM) techniques can be used to concurrently construct the mechanical structure and electronic circuits in a robotic or mechatronic system. The technique involves printing hollow channels within 3D printed parts that are then filled with a low melting point liquid metal alloy that solidifies to form electrical traces. This method is compatible with most conventional fused deposition modeling and stereolithography (SLA) machines and requires no modification to an existing printer, though the technique could easily be incorporated into multimaterial machines. Three primary considerations are explored using a commercial fused deposition manufacturing (FDM) process as a testbed: material and manufacturing process parameters, simplified injection fluid mechanics, and automatic part generation using standard printed circuit board (PCB) software tools. Example parts demonstrate the ability to embed circuits into a 3D printed structure and populate the surface with discrete electronic components. [DOI: 10.1115/1.4029435] --- paper_title: Exploring the mechanical strength of additively manufactured metal structures with embedded electrical materials paper_content: Abstract Ultrasonic Additive Manufacturing (UAM) enables the integration of a wide variety of components into solid metal matrices due to the process induced high degree of metal matrix plastic flow at low bulk temperatures. Exploitation of this phenomenon allows the fabrication of previously unobtainable novel engineered metal matrix components. The feasibility of directly embedding electrical materials within UAM metal matrices was investigated in this work. Three different dielectric materials were embedded into UAM fabricated aluminium metal-matrices with, research derived, optimal processing parameters. The effect of the dielectric material hardness on the final metal matrix mechanical strength after UAM processing was investigated systematically via mechanical peel testing and microscopy. It was found that when the Knoop hardness of the dielectric film was increased from 12.1 HK/0.01 kg to 27.3 HK/0.01 kg, the mechanical peel testing and linear weld density of the bond interface were enhanced by 15% and 16%, respectively, at UAM parameters of 1600 N weld force, 25 µm sonotrode amplitude, and 20 mm/s welding speed. This work uniquely identified that the mechanical strength of dielectric containing UAM metal matrices improved with increasing dielectric material hardness. It was therefore concluded that any UAM metal matrix mechanical strength degradation due to dielectric embedding could be restricted by employing a dielectric material with a suitable hardness (larger than 20 HK/0.01 kg). 
This result is of great interest and a vital step for realising electronic-containing multifunctional smart metal composites for future industrial applications. --- paper_title: Exploring the mechanical performance and material structures of integrated electrical circuits within solid state metal additive manufacturing matrices paper_content: Ultrasonic Additive Manufacturing (UAM) enables the integration of a wide variety of components into solid metal matrices due to a high degree of metal plastic flow at low matrix bulk temperatures. ... --- paper_title: Ink-jet printed nanoparticle microelectromechanical systems paper_content: Reports a method to additively build three-dimensional (3-D) microelectromechanical systems (MEMS) and electrical circuitry by ink-jet printing nanoparticle metal colloids. Fabricating metallic structures from nanoparticles avoids the extreme processing conditions required for standard lithographic fabrication and molten-metal-droplet deposition. Nanoparticles typically measure 1 to 100 nm in diameter and can be sintered at plastic-compatible temperatures as low as 300 °C to form material nearly indistinguishable from the bulk material. Multiple ink-jet print heads mounted to a computer-controlled 3-axis gantry deposit the 10% by weight metal colloid ink layer-by-layer onto a heated substrate to make two-dimensional (2-D) and 3-D structures. We report a high-Q resonant inductive coil, linear and rotary electrostatic-drive motors, and in-plane and vertical electrothermal actuators. The devices, printed in minutes with a 100 µm feature size, were made out of silver and gold material with high conductivity, and feature as many as 400 layers, insulators, 10:1 vertical aspect ratios, and etch-released mechanical structure. These results suggest a route to a desktop or large-area MEMS fabrication system characterized by many layers, low cost, and data-driven fabrication for rapid turn-around time, and represent the first use of ink-jet printing to build active MEMS. --- paper_title: A MEMS-scale vibration energy harvester based on coupled component structure and bi-stable states paper_content: Due to the rapid growth in demand for power for sensing devices located in remote locations, scientists' attention has been drawn to vibration energy harvesting as an alternative to batteries. As a result of over two decades of micro-scale vibration energy harvester research, the use of mechanical nonlinearity in the dynamic behavior of the piezoelectric power generating structures had been recognized as one of the promising solutions to the challenges presented by chaotic, low-frequency vibration sources found in common application environments. In this study, the design and performance of a unique MEMS-scale nonlinear vibration energy harvester based on coupled component structures and bi-stable states are investigated. The coupled-components within the device consist of a main buckled beam bonded with piezoelectric layers, a torsional rod, and two cantilever arms with tip masses at their ends. These arms are connected to the main beam through the torsional rod and are designed to help the main beam snap between its buckled stable states when subjected to sufficient vibration loading. The fabrication of the device will be discussed, including use of plasma-enhanced chemical vapor deposition (PECVD) of silicon nitride under an alternating power field to control compressive stress development within the main buckled beam. After completing the fabrication process, the next step would be testing the device under a variety of vibration loading conditions for its potential use as a vibration energy harvester. ---
Title: A Review of Multi-material and Composite Parts Production by Modified Additive Manufacturing Methods
Section 1: Introduction
Description 1: Provide an overview of the unique capabilities of Additive Manufacturing (AM) in producing multi-material parts and composite materials, including examples and significant achievements.
Section 2: Multi-material and composite additive manufacturing methods
Description 2: Describe various modifications of AM techniques combined with other methods to produce multi-material and composite products.
Section 3: Stereolithography methods
Description 3: Discuss the adaptations of Stereolithography (SLA) for multi-material and composite production, including specific processes and applications.
Section 4: Binder jetting methods
Description 4: Explain the binder jetting process for creating multi-material composites, focusing on techniques to achieve the desired material properties and structures.
Section 5: Extrusion-based printing methods
Description 5: Cover the use of extrusion-based AM methods for multi-material components with tailored porosities, including examples in biomedical applications.
Section 6: Material jetting printing methods
Description 6: Explore the principles and applications of material jetting for multi-material production, including innovative combinations with other techniques.
Section 7: Directed energy deposition (DED) methods
Description 7: Review the DED process for multi-material and composite production, focusing on metallic parts and the methods for mixing and depositing different materials.
Section 8: Laminating metallic parts
Description 8: Discuss Laminated Object Manufacturing (LOM) and other lamination methods for creating metallic parts with layers bonded by various approaches.
Section 9: Bioprinting methods
Description 9: Outline bioprinting techniques that allow for the fabrication of structures with multiple cells and biomaterials, noting specific applications in tissue engineering.
Section 10: Shape deposition methods
Description 10: Describe Shape Deposition Manufacturing (SDM) and its capability to produce multi-material structures by combining additive and subtractive processes.
Section 11: 3D printed drug delivery systems
Description 11: Examine the use of 3D printing for creating drug delivery systems tailored to specific release profiles and patient needs, including various AM techniques.
Section 12: Electronics embedded 3D printed components
Description 12: Cover the integration of electronic components within 3D printed parts, including methods for printing conductive traces and embedding active/passive components.
Section 13: Summary and Conclusions
Description 13: Summarize the reviewed AM processes, their modifications, and their applications in different industries. Discuss the potential and current limitations of AM in multi-material and composite production.
Video Processing From Electro-Optical Sensors for Object Detection and Tracking in a Maritime Environment: A Survey
9
--- paper_title: Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis paper_content: This paper provides a review of the literature in on-road vision-based vehicle detection, tracking, and behavior understanding. Over the past decade, vision-based surround perception has progressed from its infancy into maturity. We provide a survey of recent works in the literature, placing vision-based vehicle detection in the context of sensor-based on-road surround analysis. We detail advances in vehicle detection, discussing monocular, stereo vision, and active sensor-vision fusion for on-road vehicle detection. We discuss vision-based vehicle tracking in the monocular and stereo-vision domains, analyzing filtering, estimation, and dynamical models. We discuss the nascent branch of intelligent vehicles research concerned with utilizing spatiotemporal measurements, trajectories, and various features to characterize on-road behavior. We provide a discussion on the state of the art, detail common performance metrics and benchmarks, and provide perspective on future research directions in the field. --- paper_title: Stereovision based obstacle detection system for unmanned surface vehicle paper_content: This paper presents the stereo based obstacle detection system for unmanned surface vehicle (USV). The system is designed and developed toward the aim of real-time and robust obstacle detection and tracking at sea. Stereo vision methods and techniques have been employed to offer the capacity of detecting, locating and tracking multiple obstacles in the near field. Field test in the real scenes has been conducted, and the obstacle detection system for USV is proven to provide stable and satisfactory performance. The valid range with good accuracy of depth estimation is from 20 to 200 meters for high speed USV. --- paper_title: Passive target tracking of marine traffic ships using onboard monocular camera for unmanned surface vessel paper_content: Enhancing the performance of passive target tracking and trajectory estimation of marine traffic ships is focused using a monocular camera mounted on an unmanned surface vessel. To accurately estimate the trajectory of a target traffic ship, the relative bearing and range information between the observing ship and the target ship is required. Monocular vision provides bearing information with reasonable accuracy but with no explicit range information. The relative range information can be extracted from the bearing changes induced by the relative ship motion in the framework of bearings-only tracking (BOT). BOT can be effective in crossing situations with large bearing angle changes. However, it often fails in head-on or overtaking situations due to small bearing angle changes and the resulting low observability of the tracking filter. To deal with the lack of observability, the vertical pixel distance between the horizon and the target ship in the image is used, which improves the overall target tracking performance. The feasibility and performance of the proposed tracking approach were validated through field experiments at sea. --- paper_title: Argos - a Video Surveillance System for boat Traffic Monitoring in Venice paper_content: Visual surveillance in dynamic scenes is currently one of the most active research topics in computer vision, many existing applications are available. 
However, difficulties in realizing effective video surveillance systems that are robust to the many different conditions that arise in real environments, make the actual deployment of such systems very challenging. In this article, we present a real, unique and pioneer video surveillance system for boat traffic monitoring, ARGOS. The system runs continuously 24 hours a day, 7 days a week, day and night in the city of Venice (Italy) since 2007 and it is able to build a reliable background model of the water channel and to track the boats navigating the channel with good accuracy in real-time. A significant experimental evaluation, reported in this article, has been performed in order to assess the real performance of the system. --- paper_title: Multispectral Target Detection and Tracking for Seaport Video Surveillance paper_content: In this paper, a video surveillance process is presented including target detection and tracking of ships at the entrance of a seaport in order to improve security and to prevent terrorist attacks. This process is helpful in the automatic analysis of movements inside the seaport. Steps of detection and tracking are completed using IR data whereas the pattern recognition stage is achieved on color data. A comparison of results of detection and tracking is presented on both IR and color data in order to justify the choice of IR images for these two steps. A draft description of the pattern recognition stage is finally drawn up as development prospect. --- paper_title: Attenuation of Electromagnetic Radiation by Haze, Fog, Clouds, and Rain paper_content: Abstract : The report assembles, under one cover, the values of aerosol attenuation coefficients of regions in the electromagnetic (EM) spectrum containing so-called 'atmospheric windows,' in which EM radiation suffers the least amount of atmospheric gaseous absorption. The purpose is to enable rapid quantitative assessment of target acquisition terminal guidance sensors using the windows during adverse weather. Both calculated and available measured values are presented. Being a compilation drawn from numerous sources, the report is intended more as a handbook for ready use than as a theoretical treatise. --- paper_title: SITUATION AWARENESS IN REMOTE CONTROL CENTRES FOR UNMANNED SHIPS paper_content: The feasibility of unmanned, autonomous merchant vessels is investigated by the EU project MUNIN (Maritime Unmanned Navigation through Intelligence in Networks). The ships will be manned during passage to and from port and unmanned during ocean-passage. When unmanned, the ships will be controlled by an automatic system informed by onboard sensors allowing the ship to make standard collision avoidance manoeuvres according to international regulation. The ship will be continuously monitored by a remote shore centre able to take remote control should the automatic systems falter. For the humans in the shore control centre the usual problems of automations remains as well as a pronounced problem of keeping up adequate situation awareness through remote sensing. --- paper_title: Challenges in video based object detection in maritime scenario using computer vision paper_content: This paper discusses the technical challenges in maritime image processing and machine vision problems for video streams generated by cameras. Even well documented problems of horizon detection and registration of frames in a video are very challenging in maritime scenarios. 
More advanced problems of background subtraction and object detection in video streams are very challenging. Challenges arising from the dynamic nature of the background, unavailability of static cues, presence of small objects at distant backgrounds, illumination effects, all contribute to the challenges as discussed here. --- paper_title: Integrated visual information for maritime surveillance paper_content: The main contribution of this chapter is to provide a data fusion (DF) scheme for combining in a unique view, radar and visual data. The experimental evaluation of the performance for the modules included in the framework has been carried out using publicly available data from the VOC dataset and the MarDT - Maritime Detection and Tracking (MarDT) data set, containing data coming from different real VTS systems, with ground truth information. Moreover, an operative scenario where traditional VTS systems can benefit from the proposed approach is presented. --- paper_title: Model-based segmentation of FLIR images paper_content: The use of gray-scale intensities together with the edge information present in a forward-looking infrared (FLIR) image to obtain a precise and accurate segmentation of a target is presented. A model of FLIR images based on gray-scale and edge information is incorporated in a gradient relaxation technique which explicitly maximizes a criterion function based on the inconsistency and ambiguity of classification of pixels with respect to their neighbors. Four variations of the basic technique which provide automatic selection of thresholds to segment FLIR images are considered. These methods are compared, and several examples of segmentation of ship images are given. > --- paper_title: Detection and classification of infrared decoys and small targets in a sea background paper_content: A combination of algorithms has been developed for the detection, tracking, and classification of targets at sea. In a flexible software setup, different methods of preprocessing and detection can be chosen for the processing of infrared and visible-light images. Two projects, in which the software is used, are discussed. In the SURFER project, the algorithms are used for the detection and classification of small targets, e.g., swimmers, dinghies, speedboats, and floating mines. Different detection methods are applied to recorded data. We will present a method to describe the background by fitting continuous functions to the data, and show that this provides a better separation between objects and clutter. The detection of targets using electro- optical systems is one part of this project, in which also algorithms for fusion of electro-optical data with radar data are being developed. In the second project, a simple infrared image-seeker has been built that is used to test the effectiveness of infrared decoys launched from a ship. In a more complicated image seeker algorithm, features such as contrast and size and characterization of trajectory are used to differentiate between ship, infrared decoys and false alarms resulting from clutter. In this paper, results for the detection of small targets in a sea background are shown for a number of detection methods. Further, a description is given of the simulator imaging seeker, and some results of the imaging seeker software applied to simulated and recorded data will be shown. --- paper_title: MuSCoWERT: multi-scale consistence of weighted edge Radon transform for horizon detection in maritime images. 
paper_content: This paper addresses the problem of horizon detection, a fundamental process in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image followed by Radon transform on a weighted edge map and computing the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, for 84 challenging maritime videos, containing over 33,000 frames, and captured using visible range and near-infrared range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos of visible, infrared cameras for onshore and onboard ship camera placement. --- paper_title: Clutter-adaptive infrared small target detection in infrared maritime scenarios paper_content: A clutter-adaptive target-detection method is proposed to detect small moving targets in midwave infrared maritime scenarios. In particular, we focus our attention on the sea-sky background and targets of interest are ships along the horizontal sea-sky line. In the distant sea-sky background infrared imagery, small targets frequently appear as weak intensity and do not have enough structure information in the vicinity of horizontal sea-sky-line, that is sea-sky region, and the complexity of background clutters has direct impact on the target detection performance. Thus, a fuzzy system constructed by the extracted image-based features is designed for the clutter classification. And then, based on the determined clutter type, low clutter or high clutter, the horizontal sea-sky line can be detected successfully. In the target detection stage, under the guidance of the sea-sky line, a modified multilevel filter method is applied to enhance the target in the sea-sky region. Finally, the recursive segmentation method and the target validation technique are adopted for the target extraction and validation. In the experiments, the target detection performance of proposed method is validated by extensive experiments on infrared images taken in different imaging conditions. It achieves high accuracy with a low false alarm rate. --- paper_title: Infrared small target enhancement by using sequential top-hat filters paper_content: Generally in the infrared images, the targets have low contrast with the background, which makes the detection of the small targets difficult. To improve the detectability of the infrared small targets, this paper presents a novel algorithm for infrared small target enhancement by using sequential top-hat filters. Moreover, the proposed algorithm has been compared with several existing algorithms. The experimental results indicate that sequential top-hat filters could well enhance the infrared small targets and effectively suppress the background clutters. 
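The small-target enhancement step described in the top-hat filtering abstract above can be illustrated with a few lines of OpenCV. The sketch below applies white top-hat filtering with increasing structuring-element sizes and thresholds the result; it is a minimal classical illustration rather than the specific sequential-filter formulation of the cited paper, and the file name, kernel sizes and threshold are assumptions.

import cv2
import numpy as np

# Minimal illustration of white top-hat enhancement of small bright targets
# in a grey-scale infrared frame ("ir_frame.png" is a placeholder file name).
img = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

enhanced = np.zeros_like(img)
for k in (5, 9, 15):  # increasing structuring-element sizes (assumed values)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
    enhanced = cv2.max(enhanced, tophat)  # keep the strongest response per pixel

# Simple fixed threshold to segment candidate small targets.
_, candidates = cv2.threshold(enhanced, 40, 255, cv2.THRESH_BINARY)
cv2.imwrite("tophat_candidates.png", candidates)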
--- paper_title: Horizon Detection Using Machine Learning Techniques paper_content: Detecting a horizon in an image is an important part of many image related applications such as detecting ships on the horizon, flight control, and port security. Most of the existing solutions for the problem only use image processing methods to identify a horizon line in an image. This results in good accuracy for many cases and is fast in computation. However, for some images with difficult environmental conditions like a foggy or cloudy sky these image processing methods are inherently inaccurate in identifying the correct horizon. This paper investigates how to detect the horizon line in a set of images using a machine learning approach. The performance of the SVM, J48, and Naïve Bayes classifiers, used for the problem, has been compared. Accuracy of 90-99% in identifying horizon was achieved on image data set of 20 images. --- paper_title: Polynomial background estimation using visible light video streams for robust automatic detection in a maritime environment paper_content: For naval surveillance, automatic detection of surface objects, like vessels, in a maritime environment is an important contribution of the Electro-Optical (EO) sensor systems on board. Based on previous research using single images, a background estimation approach using low-order polynomials is proposed for the automatic detection of objects in a maritime environment. The polynomials are fitted to the intensity values in the image after which the deviation between the fitted intensity values and the measured intensity values are used for detection. The research presented in this paper, includes the time information by using video streams instead of single images. Hereby, the level of fusing time information and the number of frames necessary for stable detection and tracking behaviour are analysed and discussed. The performance of the detection approach is tested on a, during the fall of 2007, collected extensive dataset of maritime pictures in the Mediterranean Sea and in the North Sea on board of an Air Defence Command frigate, HNLMS Tromp. --- paper_title: MSCM-LiFe: Multi-scale cross modal linear feature for horizon detection in maritime images paper_content: This paper proposes a new method for horizon detection called the multi-scale cross modal linear feature. This method integrates three different concepts related to the presence of horizon in maritime images to increase the accuracy of horizon detection. Specifically it uses the persistence of horizon in multi-scale median filtering, and its detection as a linear feature commonly detected by two different methods, namely the Hough transform of edgemap and the intensity gradient. We demonstrate the performance of the method over 13 videos comprising of more than 3000 frames and show that the proposed method detects horizon with small error in most of the cases, outperforming three state-of-the-art methods. --- paper_title: Vision-guided flight stability and control for micro air vehicles paper_content: Substantial progress has been made recently towards designing, building and test-flying remotely piloted micro air vehicles (MAVs) and small unmanned air vehicles. We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-guided flight stability and autonomy system, based on a robust horizon detection algorithm.
In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that given current sensor technology, vision may be the only practical approach to the problem. We then describe our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next, we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors, due to external factors such as video transmission noise. Finally, we discuss our feedback controller for selfstabilized flight and report result... --- paper_title: A vision system for intelligent mission profiles of micro air vehicles paper_content: Recently, much progress has been made toward the development of small-scale aircraft, known broadly as Micro Air Vehicles (MAVs). Until recently, these platforms were exclusively remotely piloted, with no autonomous or intelligent capabilities, due at least in part to stringent payload restrictions that limit onboard sensors. However, the one sensor that is critical to most conceivable MAV missions, such as remote surveillance, is an onboard video camera and transmitter that streams flight video to a nearby ground station. Exploitation of this key sensor is, therefore, desirable, since no additional onboard hardware (and weight) is required. As such, in this paper we develop a general and unified computer vision framework for MAVs that not only addresses basic flight stability and control, but enables more intelligent missions as well. This paper is organized as follows. We first develop a real-time feature extraction method called multiscale linear discriminant analysis (MLDA), which explicitly incorporates color into its feature representation, while implicitly encoding texture through a dynamic multiscale representation of image details. We demonstrate key advantages of MLDA over other possible multiscale approaches (e.g., wavelets), especially in dealing with transient video noise. Next, we show that MLDA provides a natural framework for performing real-time horizon detection. We report horizon-detection results for a range of images differing in lighting and scenery and quantify performance as a function of image noise. Furthermore, we show how horizon detection naturally leads to closed-loop flight stabilization. Then, we motivate the use of tree-structured belief networks (TSBNs) with MLDA features for sky/ground segmentation. This type of segmentation augments basic horizon detection and enables certain MAV missions where prior assumptions about the flight vehicle's orientation are not possible. Again, we report segmentation results for a range of images and quantify robustness to image noise. Finally, we demonstrate the seamless extension of this framework, through the idea of visual contexts, for the detection of artificial objects and/or structures and illustrate several examples of such additional segmentation. This extension thus enables mission profiles that require, for example, following a specific road or the tracking of moving ground objects. Throughout, our approach and algorithms are heavily influenced by real-time constraints and robustness to transient video noise. --- paper_title: Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system paper_content: Abstract Visual surveillance in the maritime domain has been explored for more than a decade. 
Although it has produced a number of working systems and resulted in a mature technology, surveillance has been restricted to the port facilities or areas close to the coast line assuming a fixed-camera scenario. This paper presents a novel algorithm for open-sea visual maritime surveillance. We explore a challenging situation with a forward-looking camera mounted on a buoy or other floating platform. The proposed algorithm detects, localizes, and tracks ships in the field of view of the camera. Specifically, developed algorithm is uniquely designed to handle rapidly moving camera. Its performance is robust in the presence of a random relatively-large camera motion. In the context of ship detection we developed a new horizon detection scheme for a complex maritime domain. The performance of our algorithm and its comprising elements is evaluated. Ship detection precision of 88% is achieved on a large dataset collected from a prototype system. --- paper_title: Omnidirectional vision on UAV for attitude computation paper_content: Unmanned aerial vehicles (UAVs) are the subject of an increasing interest in many applications. Autonomy is one of the major advantages of these vehicles. It is then necessary to develop particular sensors in order to provide efficient navigation functions. In this paper, we propose a method for attitude computation catadioptric images. We first demonstrate the advantages of the catadioptric vision sensor for this application. In fact, the geometric properties of the sensor permit to compute easily the roll and pitch angles. The method consists in separating the sky from the earth in order to detect the horizon. We propose an adaptation of the Markov random fields for catadioptric images for this segmentation. The second step consists in estimating the parameters of the horizon line thanks to a robust estimation algorithm. We also present the angle estimation algorithm and finally, we show experimental results on synthetic and real images captured from an airplane --- paper_title: Vision-based horizon extraction for micro air vehicle flight control paper_content: Recently, more and more research has been done on micro air vehicles (MAVs). An autonomous flight control system is necessary for developing practical MAVs to be used for a wide array of missions. Due to the limitations of size, weight, and power, MAVs have the very low payload capacity and moments of inertia. The current technologies with rate and acceleration sensors applied on larger aircrafts are impractical to MAVs, and they are difficult to be scaled down to satisfy the demands of MAVs. Since surveillance has been considered as the primary mission of MAVs, it is essential for MAVs to be equipped with on-board imaging sensors such as cameras, which have rich information content. So vision-based techniques, without increasing the MAVs payload, may be a feasible idea for flight autonomy of MAVs. In this paper, a new robust horizon extraction algorithm based on the orientation projection method is proposed, which is the foundation of a vision-based flight control system. The horizon extraction algorithm is effective for both color images and gray images. The horizon can be extracted not only from fine images captured in fair conditions but also from blurred images captured in cloudy, even foggy days. 
In order to raise the computational speed to meet real-time requirements, the algorithmic optimization is also discussed in the paper, which is timesaving by narrowing the seeking scope of orientations and adopting the table look-up method. According to the orientation and position of the horizon in the image, two important angular attitude parameters for stability and control, the roll angle and the pitch angle, could be calculated. Several experimental results demonstrate the feasibility and robustness of the algorithm. --- paper_title: Automatic detection of small surface targets with electro-optical sensors in a harbor environment paper_content: In modern warfare scenarios naval ships must operate in coastal environments. These complex environments, in bays and narrow straits, with cluttered littoral backgrounds and many civilian ships may contain asymmetric threats of fast targets, such as rhibs, cabin boats and jet-skis. Optical sensors, in combination with image enhancement and automatic detection, assist an operator to reduce the response time, which is crucial for the protection of the naval and land-based supporting forces. In this paper, we present our work on automatic detection of small surface targets which includes multi-scale horizon detection and robust estimation of the background intensity. To evaluate the performance of our detection technology, data was recorded with both infrared and visual-light cameras in a coastal zone and in a harbor environment. During these trials multiple small targets were used. Results of this evaluation are shown in this paper. --- paper_title: Real-world multisensor image alignment using edge focusing and Hausdorff distances paper_content: The area-based methods, such as using Laplacian pyramid and Fourier transform-based phase matching, benefit by highlighting high spatial frequencies to reduce sensitivity to the feature inconsistency problem in the multisensor image registration. The feature extraction and matching methods are more powerful and versatile to process poor quality IR images. We implement multi-scale hierarchical edge detection and edge focusing and introduce a new salience measure for the horizon, for multisensor image registration. The common features extracted from images of two modalities can be still different in detail. Therefore, the transformation space match methods with the Hausdorff distance measure is more suitable than the direct feature matching methods. We have introduced image quadtree partition technique to the Hausdorff distance matching, that dramatically reduces the size of the search space. Image registration of real world visible/IR images of battle fields is shown. --- paper_title: Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system paper_content: Abstract Visual surveillance in the maritime domain has been explored for more than a decade. Although it has produced a number of working systems and resulted in a mature technology, surveillance has been restricted to the port facilities or areas close to the coast line assuming a fixed-camera scenario. This paper presents a novel algorithm for open-sea visual maritime surveillance. We explore a challenging situation with a forward-looking camera mounted on a buoy or other floating platform. The proposed algorithm detects, localizes, and tracks ships in the field of view of the camera. Specifically, developed algorithm is uniquely designed to handle rapidly moving camera. 
Its performance is robust in the presence of a random relatively-large camera motion. In the context of ship detection we developed a new horizon detection scheme for a complex maritime domain. The performance of our algorithm and its comprising elements is evaluated. Ship detection precision of 88% is achieved on a large dataset collected from a prototype system. --- paper_title: Automatic detection of small surface targets with electro-optical sensors in a harbor environment paper_content: In modern warfare scenarios naval ships must operate in coastal environments. These complex environments, in bays and narrow straits, with cluttered littoral backgrounds and many civilian ships may contain asymmetric threats of fast targets, such as rhibs, cabin boats and jet-skis. Optical sensors, in combination with image enhancement and automatic detection, assist an operator to reduce the response time, which is crucial for the protection of the naval and land-based supporting forces. In this paper, we present our work on automatic detection of small surface targets which includes multi-scale horizon detection and robust estimation of the background intensity. To evaluate the performance of our detection technology, data was recorded with both infrared and visual-light cameras in a coastal zone and in a harbor environment. During these trials multiple small targets were used. Results of this evaluation are shown in this paper. --- paper_title: Model-based segmentation of FLIR images paper_content: The use of gray-scale intensities together with the edge information present in a forward-looking infrared (FLIR) image to obtain a precise and accurate segmentation of a target is presented. A model of FLIR images based on gray-scale and edge information is incorporated in a gradient relaxation technique which explicitly maximizes a criterion function based on the inconsistency and ambiguity of classification of pixels with respect to their neighbors. Four variations of the basic technique which provide automatic selection of thresholds to segment FLIR images are considered. These methods are compared, and several examples of segmentation of ship images are given. > --- paper_title: Detection and classification of infrared decoys and small targets in a sea background paper_content: A combination of algorithms has been developed for the detection, tracking, and classification of targets at sea. In a flexible software setup, different methods of preprocessing and detection can be chosen for the processing of infrared and visible-light images. Two projects, in which the software is used, are discussed. In the SURFER project, the algorithms are used for the detection and classification of small targets, e.g., swimmers, dinghies, speedboats, and floating mines. Different detection methods are applied to recorded data. We will present a method to describe the background by fitting continuous functions to the data, and show that this provides a better separation between objects and clutter. The detection of targets using electro- optical systems is one part of this project, in which also algorithms for fusion of electro-optical data with radar data are being developed. In the second project, a simple infrared image-seeker has been built that is used to test the effectiveness of infrared decoys launched from a ship. 
In a more complicated image seeker algorithm, features such as contrast and size and characterization of trajectory are used to differentiate between ship, infrared decoys and false alarms resulting from clutter. In this paper, results for the detection of small targets in a sea background are shown for a number of detection methods. Further, a description is given of the simulator imaging seeker, and some results of the imaging seeker software applied to simulated and recorded data will be shown. --- paper_title: Using histograms to detect and track objects in color video paper_content: Two methods of detecting and tracking objects in color video are presented. Color and edge histograms are explored as ways to model the background and foreground of a scene. The two types of methods are evaluated to determine their speed, accuracy and robustness. Histogram comparison techniques are used to compute similarity values that aid in identifying regions of interest. Foreground objects are detected and tracked by dividing each video frame into smaller regions (cells) and comparing the histogram of each cell to the background model. Results are presented for video sequences of human activity. --- paper_title: Target detection of maritime search and rescue: Saliency accumulation method paper_content: Target detection using visual image sequence is an important task of maritime search and rescue. A novel saliency accumulation approach is proposed. Each frame of image sequence is firstly transformed to LAB space. The saliency maps for two color channels and intensity channel are obtained by spectral residual in frequency domain and the local saliency in space domain taking advantages of both approaches. Then saliency map of every frame is fusion of those from each weighted feature. In order to reduce the sea clutters or detect the weak small target, the saliency maps from successive frames are accumulated to reach a binary saliency map by applying a threshold. The approach is simple and effective. Experiments are conducted to demonstrate the validity of proposed approach for target detection in maritime search and rescue. --- paper_title: Aquatic debris monitoring using smartphone-based robotic sensors paper_content: Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption. --- paper_title: Pfinder: Real-time tracking of the human body paper_content: Pfinder is a real-time system for tracking people and interpreting their behavior. 
It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding. --- paper_title: A Hybrid Color-Based Foreground Object Detection Method for Automated Marine Surveillance paper_content: This paper proposes a hybrid foreground object detection method suitable for the marine surveillance applications. Our approach combines an existing foreground object detection method with an image color segmentation technique to improve accuracy. The foreground segmentation method employs a Bayesian decision framework, while the color segmentation part is graph-based and relies on the local variation of edges. We also establish the set of requirements any practical marine surveillance algorithm should fulfill, and show that our method conforms to these requirements. Experiments show good results in the domain of marine surveillance sequences. --- paper_title: Saliency Detection: A Spectral Residual Approach paper_content: The ability of human visual system to detect visual saliency is extraordinarily fast and reliable. However, computational modeling of this basic intelligent behavior still remains a challenge. This paper presents a simple method for the visual saliency detection. Our model is independent of features, categories, or other forms of prior knowledge of the objects. By analyzing the log-spectrum of an input image, we extract the spectral residual of an image in spectral domain, and propose a fast method to construct the corresponding saliency map in spatial domain. We test this model on both natural pictures and artificial images such as psychological patterns. The result indicate fast and robust saliency detection of our method. --- paper_title: Adaptive background mixture models for real-time tracking paper_content: A common method for real-time segmentation of moving regions in image sequences involves "background subtraction", or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian, distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow. --- paper_title: Spatio-temporal Saliency detection using phase spectrum of quaternion fourier transform paper_content: Salient areas in natural scenes are generally regarded as the candidates of attention focus in human eyes, which is the key stage in object detection. 
In computer vision, many models have been proposed to simulate the behavior of eyes such as SaliencyToolBox (STB), neuromorphic vision toolkit (NVT) and etc., but they demand high computational cost and their remarkable results mostly rely on the choice of parameters. Recently a simple and fast approach based on Fourier transform called spectral residual (SR) was proposed, which used SR of the amplitude spectrum to obtain the saliency map. The results are good, but the reason is questionable. --- paper_title: Robust Computer Vision: An Interdisciplinary Challenge paper_content: This special issue is dedicated to examining the use of techniques from robust statistics in solving computer vision problems. It represents a milestone of recent progress within a subarea of our field that is nearly as old as the field itself, but has seen rapid growth over the past decade. Our Introduction considers the meaning of robustness in computer vision, summarizes the papers, and outlines the relationship between techniques in computer vision and statistics as a means of highlighting future directions. It complements the available reviews on this topic [12, 13]. --- paper_title: Detection and tracking of moving objects in a maritime environment using level set with shape priors paper_content: Over the years, maritime surveillance has become increasingly important due to the recurrence of piracy. While surveillance has traditionally been a manual task using crew members in lookout positions on parts of the ship, much work is being done to automate this task using digital cameras coupled with a computer that uses image processing techniques that intelligently track object in the maritime environment. One such technique is level set segmentation which evolves a contour to objects of interest in a given image. This method works well but gives incorrect segmentation results when a target object is corrupted in the image. This paper explores the possibility of factoring in prior knowledge of a ship’s shape into level set segmentation to improve results, a concept that is unaddressed in maritime surveillance problem. It is shown that the developed video tracking system outperforms level set-based systems that do not use prior shape knowledge, working well even where these systems fail. --- paper_title: Real-time automatic small infrared target detection using local spectral filtering in the frequency paper_content: Accurate and fast detection of small infrared target has very important meaning for infrared precise guidance, early warning, video surveillance, etc. Based on human visual attention mechanism, an automatic detection algorithm for small infrared target is presented. In this paper, instead of searching for infrared targets, we model regular patches that do not attract much attention by our visual system. This is inspired by the property that the regular patches in spatial domain turn out to correspond to the spikes in the amplitude spectrum. Unlike recent approaches using global spectral filtering, we define the concept of local maxima suppression using local spectral filtering to smooth the spikes in the amplitude spectrum, thereby producing the pop-out of the infrared targets. In the proposed method, we firstly compute the amplitude spectrum of an input infrared image. Second, we find the local maxima of the amplitude spectrum using cubic facet model.
Third, we suppress the local maxima using the convolution of the local spectrum with a low-pass Gaussian kernel of an appropriate scale. At last, the detection result in spatial domain is obtained by reconstructing the 2D signal using the original phase and the log amplitude spectrum by suppressing local maxima. The experiments are performed for some real-life IR images, and the results prove that the proposed method has satisfying detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be further used for real-time detection and tracking. --- paper_title: Argos - a Video Surveillance System for boat Traffic Monitoring in Venice paper_content: Visual surveillance in dynamic scenes is currently one of the most active research topics in computer vision, many existing applications are available. However, difficulties in realizing effective video surveillance systems that are robust to the many different conditions that arise in real environments, make the actual deployment of such systems very challenging. In this article, we present a real, unique and pioneer video surveillance system for boat traffic monitoring, ARGOS. The system runs continuously 24 hours a day, 7 days a week, day and night in the city of Venice (Italy) since 2007 and it is able to build a reliable background model of the water channel and to track the boats navigating the channel with good accuracy in real-time. A significant experimental evaluation, reported in this article, has been performed in order to assess the real performance of the system. --- paper_title: Multispectral Target Detection and Tracking for Seaport Video Surveillance paper_content: In this paper, a video surveillance process is presented including target detection and tracking of ships at the entrance of a seaport in order to improve security and to prevent terrorist attacks. This process is helpful in the automatic analysis of movements inside the seaport. Steps of detection and tracking are completed using IR data whereas the pattern recognition stage is achieved on color data. A comparison of results of detection and tracking is presented on both IR and color data in order to justify the choice of IR images for these two steps. A draft description of the pattern recognition stage is finally drawn up as development prospect. --- paper_title: Clutter-adaptive infrared small target detection in infrared maritime scenarios paper_content: A clutter-adaptive target-detection method is proposed to detect small moving targets in midwave infrared maritime scenarios. In particular, we focus our attention on the sea-sky background and targets of interest are ships along the horizontal sea-sky line. In the distant sea-sky background infrared imagery, small targets frequently appear as weak intensity and do not have enough structure information in the vicinity of horizontal sea-sky-line, that is sea-sky region, and the complexity of background clutters has direct impact on the target detection performance. Thus, a fuzzy system constructed by the extracted image-based features is designed for the clutter classification. And then, based on the determined clutter type, low clutter or high clutter, the horizontal sea-sky line can be detected successfully. In the target detection stage, under the guidance of the sea-sky line, a modified multilevel filter method is applied to enhance the target in the sea-sky region.
Finally, the recursive segmentation method and the target validation technique are adopted for the target extraction and validation. In the experiments, the target detection performance of proposed method is validated by extensive experiments on infrared images taken in different imaging conditions. It achieves high accuracy with a low false alarm rate. --- paper_title: Infrared small target enhancement by using sequential top-hat filters paper_content: Generally in the infrared images, the targets have low contrast with the background, which makes the detection of the small targets difficult. To improve the detectability of the infrared small targets, this paper presents a novel algorithm for infrared small target enhancement by using sequential top-hat filters. Moreover, the proposed algorithm has been compared with several existing algorithms. The experimental results indicate that sequential top-hat filters could well enhance the infrared small targets and effectively suppress the background clutters. --- paper_title: Foreground object detection from videos containing complex background paper_content: This paper proposes a novel method for detection and segmentation of foreground objects from a video which contains both stationary and moving background objects and undergoes both gradual and sudden "once-off" changes. A Bayes decision rule for classification of background and foreground from selected feature vectors is formulated. Under this rule, different types of background objects will be classified from foreground objects by choosing a proper feature vector. The stationary background object is described by the color feature, and the moving background object is represented by the color co-occurrence feature. Foreground objects are extracted by fusing the classification results from both stationary and moving pixels. Learning strategies for the gradual and sudden "once-off" background changes are proposed to adapt to various changes in background through the video. The convergence of the learning process is proved and a formula to select a proper learning rate is also derived. Experiments have shown promising results in extracting foreground objects from many complex backgrounds including wavering tree branches, flickering screens and water surfaces, moving escalators, opening and closing doors, switching lights and shadows of moving objects. --- paper_title: Horizon Detection Using Machine Learning Techniques paper_content: Detecting a horizon in an image is an important part of many image related applications such as detecting ships on the horizon, flight control, and port security. Most of the existing solutions for the problem only use image processing methods to identify a horizon line in an image. This results in good accuracy for many cases and is fast in computation. However, for some images with difficult environmental conditions like a foggy or cloudy sky these image processing methods are inherently inaccurate in identifying the correct horizon. This paper investigates how to detect the horizon line in a set of images using a machine learning approach. The performance of the SVM, J48, and Naïve Bayes classifiers, used for the problem, has been compared. Accuracy of 90-99% in identifying horizon was achieved on image data set of 20 images.
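Several of the horizon-detection entries above (e.g., MuSCoWERT and MSCM-LiFe) share a common core: find the dominant long linear feature in a smoothed edge map. The sketch below illustrates only that shared core with a plain Canny-plus-Hough pipeline; it is not a reimplementation of any cited method, and the smoothing, thresholds, angular tolerance, and input path are all assumptions.

```python
# Minimal sketch: horizon as a long, roughly horizontal line in the edge map.
# Assumptions: OpenCV + NumPy installed; 'sea_frame.png' is a placeholder path.
import cv2
import numpy as np

def detect_horizon(gray):
    """Return (rho, theta) of the first sufficiently horizontal Hough line."""
    blurred = cv2.medianBlur(gray, 5)            # suppress wave/glint texture
    edges = cv2.Canny(blurred, 50, 150)          # assumed Canny thresholds
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=150)
    if lines is None:
        return None
    # Keep lines whose normal is roughly vertical, i.e. the line itself is
    # roughly horizontal (theta near 90 degrees).
    for rho, theta in lines[:, 0, :]:
        if abs(theta - np.pi / 2) < np.deg2rad(20):
            return rho, theta
    return None

if __name__ == "__main__":
    gray = cv2.imread("sea_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
    print(detect_horizon(gray))
```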
--- paper_title: Small target detection under the sea using multi-scale spectral residual and maximum symmetric surround paper_content: Detection of small salient targets under the sea is useful for maritime visual surveillance, maritime search and rescue, and ship collision avoidance. In this paper, we propose a small target detection method based on visual saliency map, which is generated by the multi-scale spectral residual and maximum symmetric surround methods. First, the multi-scale spectral residual method is used to detect the targets of different sizes. Then the maximum symmetric surround method is employed to improve the resolution and suppress background for each scale. The final saliency map is obtained by finding the best scale using the 2D entropy. We compare our approach to three salient region detection methods with ground truth and a salient target segmentation application. The experimental results show that the proposed method outperforms the other three methods in both qualitative and quantitative terms. --- paper_title: Detection and location of people in video images using adaptive fusion of color and edge information paper_content: A new method of finding people in video images is presented. The detection is based on a novel background modeling and subtraction approach which uses both color and edge information. We introduce confidence maps gray-scale images whose intensity is a function of confidence that a pixel has changed - to fuse intermediate results and represent the results of background subtraction. The latter is used to delineate a person's body by guiding contour collection to segment the person from the background. The method is tolerant to scene clutter, slow illumination changes, and camera noise, and runs in near real time on a standard platform. --- paper_title: Identification and tracking of maritime objects in near-infrared image sequences for collision avoidance paper_content: This paper describes the continuing development of an image processing system for use on high-speed passenger ferries. The system automatically identifies objects in a maritime scene and uses the detected motion to alert a human observer to potential collision situations. Three integrated image-processing algorithms, namely an image preprocessor, a motion cue generator, and a target tracker, perform the identification and tracking of maritime objects. The pre-processing filters the image and applies a histogram technique to segment the sea from potential objects of interest. The segmented image is passed to the motion cue generator, which provides motion cues based on the differences between consecutive frames of segmented image data. The target tracker applies dynamic constraints on object motion to solve the correspondence problem, thus increasing the confidence that an identified object is a target. Identified and tracked objects are highlighted to a human observer using a white box viewing cue placed directly around the object of interest. --- paper_title: Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system paper_content: Abstract Visual surveillance in the maritime domain has been explored for more than a decade. Although it has produced a number of working systems and resulted in a mature technology, surveillance has been restricted to the port facilities or areas close to the coast line assuming a fixed-camera scenario. This paper presents a novel algorithm for open-sea visual maritime surveillance. 
We explore a challenging situation with a forward-looking camera mounted on a buoy or other floating platform. The proposed algorithm detects, localizes, and tracks ships in the field of view of the camera. Specifically, developed algorithm is uniquely designed to handle rapidly moving camera. Its performance is robust in the presence of a random relatively-large camera motion. In the context of ship detection we developed a new horizon detection scheme for a complex maritime domain. The performance of our algorithm and its comprising elements is evaluated. Ship detection precision of 88% is achieved on a large dataset collected from a prototype system. --- paper_title: Adaptive maritime video surveillance paper_content: Maritime assets such as ports, harbors, and vessels are vulnerable to a variety of near-shore threats such as small-boat attacks. Currently, such vulnerabilities are addressed predominantly by watchstanders and manual video surveillance, which is manpower intensive. Automatic maritime video surveillance techniques are being introduced to reduce manpower costs, but they have limited functionality and performance. For example, they only detect simple events such as perimeter breaches and cannot predict emerging threats. They also generate too many false alerts and cannot explain their reasoning. To overcome these limitations, we are developing the Maritime Activity Analysis Workbench (MAAW), which will be a mixed-initiative real-time maritime video surveillance tool that uses an integrated supervised machine learning approach to label independent and coordinated maritime activities. It uses the same information to predict anomalous behavior and explain its reasoning; this is an important capability for watchstander training and for collecting performance feedback. In this paper, we describe MAAW's functional architecture, which includes the following pipeline of components: (1) a video acquisition and preprocessing component that detects and tracks vessels in video images, (2) a vessel categorization and activity labeling component that uses standard and relational supervised machine learning methods to label maritime activities, and (3) an ontology-guided vessel and maritime activity annotator to enable subject matter experts (e.g., watchstanders) to provide feedback and supervision to the system. We report our findings from a preliminary system evaluation on river traffic video. --- paper_title: Traditional and Recent Approaches in Background Modeling for Foreground Detection: An Overview paper_content: Background modeling for foreground detection is often used in different applications to model the background and then detect the moving objects in the scene like in video surveillance. The last decade witnessed very significant publications in this field. Furthermore, several surveys can be found in literature but none of them addresses an overall review in this field. So, the purpose of this paper is to provide a complete survey of the traditional and recent approaches. First, we categorize the different approaches found in literature. We have classified them in terms of the mathematical models used and we have discussed them in terms of the critical situations that they claim to handle. Furthermore, we present the available resources, datasets and libraries. Then, we conclude with several promising directions for future research.
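The background-modeling survey entry above, together with the adaptive mixture-of-Gaussians entry cited earlier in this section, describes per-pixel statistical background models for foreground detection. As a hedged illustration, the sketch below runs OpenCV's MOG2 subtractor, one standard implementation of the mixture-of-Gaussians idea, over a video; the history length, variance threshold, post-filtering, and video path are assumptions rather than values taken from the cited papers.

```python
# Minimal sketch: per-pixel Gaussian-mixture background subtraction with
# OpenCV's MOG2 implementation, applied frame by frame to a video.
# Assumptions: OpenCV installed; 'harbor.mp4' is a placeholder video path.
import cv2

def run_mog2(video_path, history=500, var_threshold=16.0):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=history, varThreshold=var_threshold, detectShadows=True)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)   # 255 = foreground, 127 = detected shadow
        fg = cv2.medianBlur(fg, 5)     # simple cleanup of isolated wave pixels
        masks.append(fg)
    cap.release()
    return masks

if __name__ == "__main__":
    foreground_masks = run_mog2("harbor.mp4")  # placeholder input
    print(f"processed {len(foreground_masks)} frames")
```

In maritime scenes a raw per-pixel mask is typically cleaned further, for example with morphological opening or a minimum blob-area check, before any object is declared, since wakes and waves produce many spurious foreground pixels.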
--- paper_title: Automatic detection of small surface targets with electro-optical sensors in a harbor environment paper_content: In modern warfare scenarios naval ships must operate in coastal environments. These complex environments, in bays and narrow straits, with cluttered littoral backgrounds and many civilian ships may contain asymmetric threats of fast targets, such as rhibs, cabin boats and jet-skis. Optical sensors, in combination with image enhancement and automatic detection, assist an operator to reduce the response time, which is crucial for the protection of the naval and land-based supporting forces. In this paper, we present our work on automatic detection of small surface targets which includes multi-scale horizon detection and robust estimation of the background intensity. To evaluate the performance of our detection technology, data was recorded with both infrared and visual-light cameras in a coastal zone and in a harbor environment. During these trials multiple small targets were used. Results of this evaluation are shown in this paper. --- paper_title: Tracking Ships from Fast Moving Camera through Image Registration paper_content: This paper presents an algorithm that detects and tracks marine vessels in video taken by a nonstationary camera installed on an untethered buoy. The video is characterized by large inter-frame motion of the camera, cluttered background, and presence of compression artifacts. Our approach performs segmentation of ships in individual frames processed with a color-gradient filter. The threshold selection is based on the histogram of the search region. Tracking of ships in a sequence is enabled by registering the horizon images in one coordinate system and by using a multihypothesis framework. Registration step uses an area-based technique to correlate a processed strip of the image over the found horizon line. The results of evaluation of detection, localization, and tracking of the ships show significant increase in performance in comparison to the previously used technique. --- paper_title: Automatic Obstacle Detection for USV’s Navigation Using Vision Sensors paper_content: This paper presents an automatic method to acquire, identify, and track obstacles from an Unmanned Surface Vehicle (USV) location in marine environments using 2D Commercial Of The Shelf (COTS) video sensors, and analyzing video streams as input. The guiding line of this research is to develop real-time automatic identification and tracking abilities in marine environment with COTS sensors. The output of this algorithm provides obstacle’s location in x-y coordinates. The ability to recognize and identify obstacles becomes more essential for USV’s autonomous capabilities, such as obstacle avoidance, decision modules, and other Artificial Intelligence (AI) abilities using low cost sensors. Our algorithm is not based on obstacles characterization. Algorithm performances were tested in various scenarios with real-time USV’s video streams, indicating that the algorithm can be used for real-time applications with high success rate and fast time computation. --- paper_title: A Novel Hierarchical Method of Ship Detection from Spaceborne Optical Image Based on Shape and Texture Features paper_content: Ship detection from remote sensing imagery is very important, with a wide array of applications in areas such as fishery management, vessel traffic services, and naval warfare. This paper focuses on the issue of ship detection from spaceborne optical images (SDSOI). 
Although advantages of synthetic-aperture radar (SAR) result in that most of current ship detection approaches are based on SAR images, disadvantages of SAR still exist, such as the limited number of SAR sensors, the relatively long revisit cycle, and the relatively lower resolution. With the increasing number of and the resulting improvement in continuous coverage of the optical sensors, SDSOI can partly overcome the shortcomings of SAR-based approaches and should be investigated to help satisfy the requirements of real-time ship monitoring. In SDSOI, several factors such as clouds, ocean waves, and small islands affect the performance of ship detection. This paper proposes a novel hierarchical complete and operational SDSOI approach based on shape and texture features, which is considered a sequential coarse-to-fine elimination process of false alarms. First, simple shape analysis is adopted to eliminate evident false candidates generated by image segmentation with global and local information and to extract ship candidates with missing alarms as low as possible. Second, a novel semisupervised hierarchical classification approach based on various features is presented to distinguish between ships and nonships to remove most false alarms. Besides a complete and operational SDSOI approach, the other contributions of our approach include the following three aspects: 1) it classifies ship candidates by using their class probability distributions rather than the direct extracted features; 2) the relevant classes are automatically built by the samples' appearances and their feature attribute in a semisupervised mode; and 3) besides commonly used shape and texture features, a new texture operator, i.e., local multiple patterns, is introduced to enhance the representation ability of the feature set in feature extraction. Experimental results of SDSOI on a large image set captured by optical sensors from multiple satellites show that our approach is effective in distinguishing between ships and nonships, and obtains a satisfactory ship detection performance. --- paper_title: Model-based segmentation of FLIR images paper_content: The use of gray-scale intensities together with the edge information present in a forward-looking infrared (FLIR) image to obtain a precise and accurate segmentation of a target is presented. A model of FLIR images based on gray-scale and edge information is incorporated in a gradient relaxation technique which explicitly maximizes a criterion function based on the inconsistency and ambiguity of classification of pixels with respect to their neighbors. Four variations of the basic technique which provide automatic selection of thresholds to segment FLIR images are considered. These methods are compared, and several examples of segmentation of ship images are given. > --- paper_title: A Hybrid Color-Based Foreground Object Detection Method for Automated Marine Surveillance paper_content: This paper proposes a hybrid foreground object detection method suitable for the marine surveillance applications. Our approach combines an existing foreground object detection method with an image color segmentation technique to improve accuracy. The foreground segmentation method employs a Bayesian decision framework, while the color segmentation part is graph-based and relies on the local variation of edges. We also establish the set of requirements any practical marine surveillance algorithm should fulfill, and show that our method conforms to these requirements. 
Experiments show good results in the domain of marine surveillance sequences. --- paper_title: Evaluation of Maritime Vision Techniques for Aerial Search of Humans in Maritime Environments paper_content: Searching for humans lost in vast stretches of ocean has always been a difficult task. In this paper, a range of machine vision approaches are investigated as candidate tools to mitigate the risk of human fatigue and complacency after long hours performing these kind of search tasks. Our two-phased approach utilises point target detection followed by temporal tracking of these targets. Four different point target detection techniques and two tracking techniques are evaluated. We also evaluate the use of different colour spaces for target detection. This paper has a particular focus on Hidden Markov Model based tracking techniques, which seem best able to incorporate a priori knowledge about the maritime search problem, to improve detection performance. --- paper_title: Machine vision for detection of the rescue target in the marine casualty paper_content: In marine rescue, the detection of the target such as life rafts depends on the visual search by man as yet. However, human eyes are sometimes inadequate owing to a long flight and wide views. In order to carry out the prompt rescue of human life, development of the searching system in place of the human eyes is surely required. This paper deals with a new search method for detection of the rescue target using image processing techniques. To detect the small target in the wide views over the sea, we have proposed a new method including the image processing techniques based on the color information and the composite image sensor which increases about the measurement accuracy and the image processing speed at actual field. At the first step of the study, we attempt to extract the image data of the rescue target with the orange color in an experimental sea. --- paper_title: Max-mean and max-median filters for detection of small targets paper_content: This paper deals with the problem of detection and tracking of low observable small-targets from a sequence of IR images against structural background and non-stationary clutter. There are many algorithms reported in the open literature for detection and tracking of targets of significant size in the image plane with good results. However, the difficulties of detecting small-targets arise from the fact that they are not easily discernable from clutter. The focus of research in this area is to reduce the false alarm rate to an acceptable level. Triple Temporal Filter reported by Jerry Silverman et al., is one of the promising algorithms in this area. In this paper, we investigate the usefulness of Max-Mean and Max-Median filters in preserving the edges of clouds and structural backgrounds, which helps in detecting small-targets. Subsequently, anti-mean and anti-median operations result in good performance of detecting targets against moving clutter. The raw image is first filtered by max-mean/max-median filter. Then the filtered output is subtracted from the original image to enhance the potential targets. A thresholding step is incorporated in order to limit the number of potential target pixels. The threshold is obtained by using the statistics of the image. Finally, the thresholded images are accumulated so that the moving target forms a continuous trajectory and can be detected by using the post-processing algorithm. It is assumed that most of the targets occupy a couple of pixels.
Head-on moving and maneuvering targets are not considered. These filters have been tested successfully with the available database and the results are presented. --- paper_title: Detection filters and algorithm fusion for ATR paper_content: Detection involves locating all candidate regions of interest (objects) in a scene independent of the object class with object distortions and contrast differences, etc., present. It is one of the most formidable problems in automatic target recognition, since it involves analysis of every local scene region. We consider new detection algorithms and the fusion of their outputs to reduce the probability of false alarm P_FA while maintaining high probability of detection P_D. Emphasis is given to detecting obscured targets in infrared imagery. --- paper_title: Identification and tracking of maritime objects in near-infrared image sequences for collision avoidance paper_content: This paper describes the continuing development of an image processing system for use on high-speed passenger ferries. The system automatically identifies objects in a maritime scene and uses the detected motion to alert a human observer to potential collision situations. Three integrated image-processing algorithms, namely an image preprocessor, a motion cue generator, and a target tracker, perform the identification and tracking of maritime objects. The pre-processing filters the image and applies a histogram technique to segment the sea from potential objects of interest. The segmented image is passed to the motion cue generator, which provides motion cues based on the differences between consecutive frames of segmented image data. The target tracker applies dynamic constraints on object motion to solve the correspondence problem, thus increasing the confidence that an identified object is a target. Identified and tracked objects are highlighted to a human observer using a white box viewing cue placed directly around the object of interest. --- paper_title: Nautical Scene Segmentation Using Variable Size Image Windows and Feature Space Reclustering paper_content: This paper describes the development of a system for the segmentation of small vessels and objects present in a maritime environment. The system assumes no a priori knowledge of the sea, but uses statistical analysis within variable size image windows to determine a characteristic vector that represents the current sea state. A space of characteristic vectors is searched and a main group of characteristic vectors and its centroid found automatically by using a new method of iterative reclustering. This method is an extension and improvement of the work described in [9]. A Mahalanobis distance measure from the centroid is calculated for each characteristic vector and is used to determine inhomogeneities in the sea caused by the presence of a rigid object. The system has been tested using several input image sequences of static small objects such as buoys and small and large maritime vessels moving into and out of a harbour scene and the system successfully segmented these objects. --- paper_title: Image quality assessment: from error visibility to structural similarity paper_content: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system.
Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. --- paper_title: Discriminating small extended targets at sea from clutter and other classes of boats in infrared and visual light imagery paper_content: Operating in a coastal environment, with a multitude of boats of different sizes, detection of small extended targets is only one problem. A further difficulty is in discriminating detections of possible threats from alarms due to sea and coastal clutter, and from boats that are neutral for a specific operational task. Adding target features to detections allows filtering out clutter before tracking. Features can also be used to add labels resulting from a classification step. Both will help tracking by facilitating association. Labeling and information from features can be an aid to an operator, or can reduce the number of false alarms for more automatic systems. In this paper we present work on clutter reduction and classification of small extended targets from infrared and visual light imagery. Several methods for discriminating between classes of objects were examined, with an emphasis on less complex techniques, such as rules and decision trees. Similar techniques can be used to discriminate between targets and clutter, and between different classes of boats. Different features are examined that possibly allow discrimination between several classes. Data recordings are used, in infrared and visual light, with a range of targets including rhibs, cabin boats and jet-skis. --- paper_title: Algorithms for Visual Maritime Surveillance with Rapidly Moving Camera paper_content: Visual surveillance in the maritime domain has been explored for more than a decade. Although it has produced a number of working systems and resulted in a mature technology, surveillance has been restricted to the port facilities or areas close to the coastline assuming a fixed-camera scenario. This dissertation presents several contributions in the domain of maritime surveillance. First, a novel algorithm for open-sea visual maritime surveillance is introduced. We explore a challenging situation with a camera mounted on a buoy or other floating platform. The developed algorithm detects, localizes, and tracks ships in the field of view of the camera. Specifically, our method is uniquely designed to handle a rapidly moving camera. Its performance is robust in the presence of a random relatively-large camera motion. In the context of ship detection, a new horizon detection scheme for a complex maritime domain is also developed. Second, the performance of the ship detection algorithm is evaluated on a dataset of 55,000 images. Accuracy of detection of up to 88% of ships is achieved. Lastly, we consider the topic of detection of the vanishing line of the ocean surface plane as a way to estimate the horizon in difficult situations. This allows extension of the ship-detection algorithm to beyond open-sea scenarios.
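The max-mean and max-median filter entry above spells out a concrete pipeline: filter the image with directional means (or medians), subtract the filtered output from the original to enhance point targets, and threshold using image statistics. The sketch below is a rough NumPy/SciPy rendering of the mean variant only; the window length and threshold factor are assumed values, and it is not the cited authors' code.

```python
# Minimal sketch of a max-mean small-target detector: directional means,
# per-pixel maximum, residual enhancement, statistics-based threshold.
# Assumptions: NumPy + SciPy installed; window length L=5; mean variant only.
import numpy as np
from scipy.ndimage import convolve

def max_mean_filter(img, L=5):
    img = img.astype(np.float32)
    eye = np.eye(L, dtype=np.float32) / L
    kernels = [
        np.ones((1, L), dtype=np.float32) / L,  # horizontal mean
        np.ones((L, 1), dtype=np.float32) / L,  # vertical mean
        eye,                                    # main-diagonal mean
        np.fliplr(eye),                         # anti-diagonal mean
    ]
    directional = [convolve(img, k, mode="nearest") for k in kernels]
    return np.max(np.stack(directional, axis=0), axis=0)

def detect_small_targets(img, L=5, k_sigma=5.0):
    background = max_mean_filter(img, L)
    residual = img.astype(np.float32) - background    # enhance point targets
    thr = residual.mean() + k_sigma * residual.std()  # statistics-based threshold
    return residual > thr                             # boolean candidate mask

if __name__ == "__main__":
    frame = np.random.rand(240, 320).astype(np.float32)  # stand-in for an IR frame
    print(detect_small_targets(frame).sum(), "candidate pixels")
```

Replacing the directional convolutions with directional medians would give the max-median variant described in the same entry, at a higher computational cost.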
--- paper_title: Tracking Ships from Fast Moving Camera through Image Registration paper_content: This paper presents an algorithm that detects and tracks marine vessels in video taken by a nonstationary camera installed on an untethered buoy. The video is characterized by large inter-frame motion of the camera, cluttered background, and presence of compression artifacts. Our approach performs segmentation of ships in individual frames processed with a color-gradient filter. The threshold selection is based on the histogram of the search region. Tracking of ships in a sequence is enabled by registering the horizon images in one coordinate system and by using a multihypothesis framework. Registration step uses an area-based technique to correlate a processed strip of the image over the found horizon line. The results of evaluation of detection, localization, and tracking of the ships show significant increase in performance in comparison to the previously used technique. --- paper_title: Small infrared target fusion detection based on support vector machines in the wavelet domain paper_content: A novel method for fusion detection of small infrared targets based on support vector machines (SVM) in the wavelet domain is presented. Target detection task plays an important role in automatic target recognition (ATR) systems because overall ATR performance depends closely on detection results. SVM is a powerful methodology for solving problems in nonlinear classification, function estimation and density estimation. Least-squares support vector machines (LS-SVMs) are reformulations to standard SVMs. The proposed algorithm can be divided into four steps. First, each frame of the image sequence is decomposed by the discrete wavelet frame (DWF). Second, the components with low frequency are performed by regression based on LS-SVM. The one-order partial derivatives in row and column directions are derived. Therefore, feature images of the gradient strength can be obtained. Third, feature images of five consecutive frames are fused to accumulate the energy of target of interest and greatly reduce false alarms. Finally, the segmentation method based on contrast between target and background is utilized to extract the target. In terms of connectivity of moving targets, the majority of residual clutter and false alarms that survive are removed based on 3-D morphological dilation across three consecutive frames along the motion direction of the moving targets.
Actual infrared image sequences in backgrounds of sea and sky are applied to validate the proposed approach. Experimental results demonstrate the robustness of the proposed method with high performance. --- paper_title: Infrared Image Segmentation by Combining Fractal Geometry with Wavelet Transformation paper_content: An infrared image is decomposed into three levels by discrete stationary wavelet transform (DSWT). Noise is reduced by wiener filter in the high resolution levels in the DSWT domain. Nonlinear gray transformation operation is used to enhance details in the low resolution levels in the DSWT domain. Enhanced infrared image is obtained by inverse DSWT. The enhanced infrared image is divided into many small blocks. The fractal dimensions of all the blocks are computed. Region of interest (ROI) is extracted by combining all the blocks, which have similar fractal dimensions. ROI is segmented by global threshold method. The man-made objects are efficiently separated from the infrared image by the proposed method. --- paper_title: Aquatic debris monitoring using smartphone-based robotic sensors paper_content: Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption. --- paper_title: Segmentation of FLIR images by target enhancement and image model paper_content: A new segmentation algorithm of forward-looking infrared (FLIR) images is presented, which first uses median subtraction filter to enhance targets and suppress backgrounds, then uses MBS algorithm to perform segmentation. This algorithm can obtain a precise and accurate segmentation of a target from low contrast FLIR images in complex background. Experimental results compared with MBS algorithm are given. We find that the proposed algorithm shows much better segmentation performance than MBS algorithm in complex background. --- paper_title: Robust real-time ship detection and tracking for visual surveillance of cage aquaculture paper_content: This paper presents a visual surveillance scheme for cage aquaculture that automatically detects and tracks ships (intruders). For ship detection and tracking, we propose a robust foreground detection and background updating to effectively reduce the influence of sea waves.
Furthermore, we propose a fast 4-connected component labeling method to greatly reduce the computational cost associated with the conventional method. Wave ripples are removed from regions with ships. An improved full search algorithm based on adaptive template block matching with a wave ripple removal is presented to quickly, accurately, and reliably track overlapping ships whose scales change. Experimental results demonstrate that the proposed schemes have outstanding performance in ship detection and tracking. The proposed visual surveillance system for cage aquaculture triggers an alarm if intruders are detected. The security of cage aquaculture can be increased. The proposed visual surveillance can thus greatly help the popularization of cage aquaculture for ocean farming. --- paper_title: Detecting Moving Objects, Ghosts, and Shadows in Video Streams paper_content: Background subtraction methods are widely exploited for moving object detection in videos in many applications, such as traffic monitoring, human motion capture, and video surveillance. How to correctly and efficiently model and update the background model and how to deal with shadows are two of the most distinguishing and challenging aspects of such approaches. The article proposes a general-purpose method that combines statistical assumptions with the object-level knowledge of moving objects, apparent objects (ghosts), and shadows acquired in the processing of the previous frames. Pixels belonging to moving objects, ghosts, and shadows are processed differently in order to supply an object-based selective update. The proposed approach exploits color information for both background subtraction and shadow detection to improve object segmentation and background update. The approach proves fast, flexible, and precise in terms of both pixel accuracy and reactivity to background changes. --- paper_title: Detection and tracking of moving objects in a maritime environment using level set with shape priors paper_content: Over the years, maritime surveillance has become increasingly important due to the recurrence of piracy. While surveillance has traditionally been a manual task using crew members in lookout positions on parts of the ship, much work is being done to automate this task using digital cameras coupled with a computer that uses image processing techniques that intelligently track object in the maritime environment. One such technique is level set segmentation which evolves a contour to objects of interest in a given image. This method works well but gives incorrect segmentation results when a target object is corrupted in the image. This paper explores the possibility of factoring in prior knowledge of a ship’s shape into level set segmentation to improve results, a concept that is unaddressed in maritime surveillance problem. It is shown that the developed video tracking system outperforms level set-based systems that do not use prior shape knowledge, working well even where these systems fail. --- paper_title: Region-based Mixture of Gaussians modelling for foreground detection in dynamic scenes paper_content: One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or a slowly varying background, it fails to handle any fast, dynamic movement in the background. 
In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes into consideration neighbouring pixels while generating the model of the observed scene. The model equations are derived from expectation maximisation theory for batch mode, and stochastic approximation is used for online mode updates. We evaluate our region-based approach against ten sequences containing dynamic backgrounds, and show that the region-based approach provides a performance improvement over the traditional single pixel MoG. For feature and region sizes that are equal, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes. --- paper_title: Argos - a Video Surveillance System for boat Traffic Monitoring in Venice paper_content: Visual surveillance in dynamic scenes is currently one of the most active research topics in computer vision, many existing applications are available. However, difficulties in realizing effective video surveillance systems that are robust to the many different conditions that arise in real environments, make the actual deployment of such systems very challenging. In this article, we present a real, unique and pioneer video surveillance system for boat traffic monitoring, ARGOS. The system runs continuously 24 hours a day, 7 days a week, day and night in the city of Venice (Italy) since 2007 and it is able to build a reliable background model of the water channel and to track the boats navigating the channel with good accuracy in real-time. A significant experimental evaluation, reported in this article, has been performed in order to assess the real performance of the system. --- paper_title: A practical adaptive approach for dynamic background subtraction using an invariant colour model and object tracking paper_content: In this paper, three dynamic background subtraction algorithms for colour images are presented and compared. The performances of these algorithms defined as 'Selective Update using Temporal Averaging', 'Selective Update using Non-foreground Pixels of the Input Image' and 'Selective Update using Temporal Median' are only different for background pixels. Then using an invariant colour filter and a suitable motion tracking technique, an object-level classification is offered that recognises the behaviours of all foreground blobs. This novel approach, which selectively excludes foreground blobs from the background frames, is included in all three methods. It is shown that the 'Selective Update using Temporal Median' produces the correct background image for each input frame. The advantages of the third algorithm are: it operates in unconstrained outdoor and indoor scenes.
Also it is able to handle difficult situations such as removing ghosts and including stationary objects in the background image efficiently. Meanwhile, the algorithm's parameters are computed automatically or are fixed. The efficiency of the new algorithm is confirmed by the results obtained on a number of image sequences. --- paper_title: Wallflower: principles and practice of background maintenance paper_content: Background maintenance is a frequent element of video surveillance systems. We develop Wallflower, a three-component system for background maintenance: the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background; the region-level component fills in homogeneous regions of foreground objects; and the frame-level component detects sudden, global changes in the image and swaps in better approximations of the background. We compare our system with 8 other background subtraction algorithms. Wallflower is shown to outperform previous algorithms by handling a greater set of the difficult situations that can occur. Finally, we analyze the experimental results and propose normative principles for background maintenance. --- paper_title: Towards robust automatic traffic scene analysis in real-time paper_content: Automatic symbolic traffic scene analysis is essential to many areas of IVHS (Intelligent Vehicle Highway Systems). Traffic scene information can be used to optimize traffic flow during busy periods, identify stalled vehicles and accidents, and aid the decision-making of an autonomous vehicle controller. Improvements in technologies for machine vision-based surveillance and high-level symbolic reasoning have enabled the authors to develop a system for detailed, reliable traffic scene analysis. The machine vision component of the system employs a contour tracker and an affine motion model based on Kalman filters to extract vehicle trajectories over a sequence of traffic scene images. The symbolic reasoning component uses a dynamic belief network to make inferences about traffic events such as vehicle lane changes and stalls. In this paper, the authors discuss the key tasks of the vision and reasoning components as well as their integration into a working prototype. Preliminary results of an implementation on special purpose hardware using C-40 Digital Signal Processors show that near real-time performance can be achieved without further improvements. --- paper_title: Outdoor infrared video surveillance: A novel dynamic technique for the subtraction of a changing background of IR images paper_content: For security applications, automatic detection and tracking of moving objects is an important and challenging issue especially in uncontrolled environments. Recently, due to the decreasing costs and increasing miniaturization of infrared sensors, the use of infrared imaging technology has become an interesting alternative in such applications. In this paper, a framework is proposed to detect, track and classify both pedestrians and vehicles in realistic scenarios using a stationary infrared camera. More specifically, a novel dynamic background-subtraction technique to robustly adapt detection to illumination changes in outdoor scenes is proposed. We noticed that combining results with edge detection enables to reduce considerably false alarms while this reinforces also tracking efficiency. The proposed system was implemented and tested successfully in various environmental conditions.
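The 'Selective Update using Temporal Median' scheme described two entries above builds the background as a per-pixel temporal median and keeps detected foreground out of the update. The grayscale sketch below captures that idea under stated assumptions: the buffer length and difference threshold are illustrative values, and the cited work additionally uses an invariant colour model and blob-level tracking to decide what to exclude.

```python
import numpy as np
from collections import deque

class MedianBackgroundSubtractor:
    """Simplified selective-update temporal-median background model.

    A per-pixel median over a sliding buffer of samples forms the background;
    pixels differing from it by more than `thresh` are marked foreground.
    Foreground pixels are not written into the buffer, so moving objects do
    not corrupt the background estimate.
    """

    def __init__(self, buffer_size: int = 25, thresh: float = 30.0):
        self.frames = deque(maxlen=buffer_size)  # recent background samples
        self.thresh = thresh
        self.background = None

    def apply(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(np.float32)
        if self.background is None:              # bootstrap on first frame
            self.background = frame.copy()
            self.frames.append(frame)
            return np.zeros(frame.shape, dtype=bool)

        foreground = np.abs(frame - self.background) > self.thresh

        # Selective update: keep the old background value under foreground
        # pixels and take the new observation everywhere else.
        sample = np.where(foreground, self.background, frame)
        self.frames.append(sample)
        self.background = np.median(np.stack(self.frames), axis=0)
        return foreground
```

In this simplified form a stopped object stays foreground indefinitely; the cited approach resolves such cases with object-level reasoning about blobs.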
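Several trackers in the surrounding entries, including the traffic-scene analyser above, rely on linear Kalman filtering of object state. For reference, here is a minimal constant-velocity Kalman filter for a tracked centroid; the state layout, noise variances and initial covariance are assumptions for illustration and are far simpler than the affine contour models used in the cited systems.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter for a tracked centroid.

    State x = [px, py, vx, vy]; measurement z = [px, py].
    """

    def __init__(self, dt: float = 1.0, process_var: float = 1.0,
                 meas_var: float = 4.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # observation model
        self.Q = process_var * np.eye(4)                 # process noise
        self.R = meas_var * np.eye(2)                    # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0                       # initial uncertainty

    def predict(self) -> np.ndarray:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                # predicted position

    def update(self, z: np.ndarray) -> None:
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In use, predict() is called every frame and update() only when a detection is associated with the track, so the filter can coast through brief occlusions.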
--- paper_title: Polynomial background estimation using visible light video streams for robust automatic detection in a maritime environment paper_content: For naval surveillance, automatic detection of surface objects, like vessels, in a maritime environment is an important contribution of the Electro-Optical (EO) sensor systems on board. Based on previous research using single images, a background estimation approach using low-order polynomials is proposed for the automatic detection of objects in a maritime environment. The polynomials are fitted to the intensity values in the image after which the deviation between the fitted intensity values and the measured intensity values are used for detection. The research presented in this paper, includes the time information by using video streams instead of single images. Hereby, the level of fusing time information and the number of frames necessary for stable detection and tracking behaviour are analysed and discussed. The performance of the detection approach is tested on a, during the fall of 2007, collected extensive dataset of maritime pictures in the Mediterranean Sea and in the North Sea on board of an Air Defence Command frigate, HNLMS Tromp. --- paper_title: Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system paper_content: Visual surveillance in the maritime domain has been explored for more than a decade. Although it has produced a number of working systems and resulted in a mature technology, surveillance has been restricted to the port facilities or areas close to the coast line assuming a fixed-camera scenario. This paper presents a novel algorithm for open-sea visual maritime surveillance. We explore a challenging situation with a forward-looking camera mounted on a buoy or other floating platform. The proposed algorithm detects, localizes, and tracks ships in the field of view of the camera. Specifically, developed algorithm is uniquely designed to handle rapidly moving camera. Its performance is robust in the presence of a random relatively-large camera motion. In the context of ship detection we developed a new horizon detection scheme for a complex maritime domain. The performance of our algorithm and its comprising elements is evaluated.
Ship detection precision of 88% is achieved on a large dataset collected from a prototype system. --- paper_title: Adaptive maritime video surveillance paper_content: Maritime assets such as ports, harbors, and vessels are vulnerable to a variety of near-shore threats such as small-boat attacks. Currently, such vulnerabilities are addressed predominantly by watchstanders and manual video surveillance, which is manpower intensive. Automatic maritime video surveillance techniques are being introduced to reduce manpower costs, but they have limited functionality and performance. For example, they only detect simple events such as perimeter breaches and cannot predict emerging threats. They also generate too many false alerts and cannot explain their reasoning. To overcome these limitations, we are developing the Maritime Activity Analysis Workbench (MAAW), which will be a mixed-initiative real-time maritime video surveillance tool that uses an integrated supervised machine learning approach to label independent and coordinated maritime activities. It uses the same information to predict anomalous behavior and explain its reasoning; this is an important capability for watchstander training and for collecting performance feedback. In this paper, we describe MAAW's functional architecture, which includes the following pipeline of components: (1) a video acquisition and preprocessing component that detects and tracks vessels in video images, (2) a vessel categorization and activity labeling component that uses standard and relational supervised machine learning methods to label maritime activities, and (3) an ontology-guided vessel and maritime activity annotator to enable subject matter experts (e.g., watchstanders) to provide feedback and supervision to the system. We report our findings from a preliminary system evaluation on river traffic video. --- paper_title: Detection of Dynamic Background Due to Swaying Movements From Motion Features paper_content: Dynamically changing background (dynamic background) still presents a great challenge to many motion-based video surveillance systems. In the context of event detection, it is a major source of false alarms. There is a strong need from the security industry either to detect and suppress these false alarms, or dampen the effects of background changes, so as to increase the sensitivity to meaningful events of interest. In this paper, we restrict our focus to one of the most common causes of dynamic background changes: 1) that of swaying tree branches and 2) their shadows under windy conditions. Considering the ultimate goal in a video analytics pipeline, we formulate a new dynamic background detection problem as a signal processing alternative to the previously described but unreliable computer vision-based approaches. Within this new framework, we directly reduce the number of false alarms by testing if the detected events are due to characteristic background motions. In addition, we introduce a new data set suitable for the evaluation of dynamic background detection. It consists of real-world events detected by a commercial surveillance system from two static surveillance cameras. The research question we address is whether dynamic background can be detected reliably and efficiently using simple motion features and in the presence of similar but meaningful events, such as loitering. 
Inspired by the tree aerodynamics theory, we propose a novel method named local variation persistence (LVP), that captures the key characteristics of swaying motions. The method is posed as a convex optimization problem, whose variable is the local variation. We derive a computationally efficient algorithm for solving the optimization problem, the solution of which is then used to form a powerful detection statistic. On our newly collected data set, we demonstrate that the proposed LVP achieves excellent detection results and outperforms the best alternative adapted from existing art in the dynamic background literature. --- paper_title: Wavelet transform methods for object detection and recovery paper_content: We show that a biorthogonal spline wavelet closely approximates the prewhitening matched filter for detecting Gaussian objects in Markov noise. The filterbank implementation of the wavelet transform acts as a hierarchy of such detectors operating at discrete object scales. If the object to be detected is Gaussian and its scale happens to coincide with one of those computed by the wavelet transform, and if the background noise is truly Markov, then optimum detection is realized by thresholding the appropriate subband image. In reality, the Gaussian may be a rather coarse approximation of the object, and the background noise may deviate from the Markov assumption. In this case, we may view the wavelet decomposition as a means for computing an orthogonal feature set for input to a classifier. We use a supervised linear classifier applied to feature vectors comprised of samples taken from the subbands of an N-octave, undecimated wavelet transform. The resulting map of test statistic values indicates the presence and location of objects. The object itself is reconstructed by using the test statistic to emphasize wavelet subbands, followed by computing the inverse wavelet transform. We show two contrasting applications of the wavelets-based object recovery algorithm. For detecting microcalcifications in digitized mammograms, the object and noise models closely match the real image data, and the multiscale matched filter paradigm is highly appropriate. The second application, extracting ship outlines in noisy forward-looking infrared images, is presented as a case where good results are achieved despite the data models being less well matched to the assumptions of the algorithm. --- paper_title: Video object extraction based on adaptive background and statistical change detection paper_content: This paper introduces a system for video object extraction useful for general applications where foreground objects move within a slow changing background. Surveillance of indoor and outdoor sequences is a typical example. The originality of the approach resides in two related components. First, the statistical change detection used in the system does not require any sophisticated parametric tuning as it is based on a probabilistic method. Second, the change is detected between a current instance of the scene and a reference that is updated continuously to take into account slow variation of the background. Simulation results show that the proposed scheme performs well in extracting video objects, with stability and good accuracy, while being of relative reduced complexity. --- paper_title: Target identification in a complex maritime scene paper_content: This paper describes new methods applied to the problem of target identification in complex maritime scenes. 
The algorithm described here automatically identifies rigid objects moving in a maritime scene. The methods for target identification proposed here are based on the change in statistical descriptors of image segments and on axis projections of the difference image. Results from this stage of processing are passed to a decision maker which determines the resulting regions of interest in the image. Different sequences of maritime scenes were passed to the algorithm and results show that it was capable of identifying most of the rigid objects moving in the scenes despite the variation of the scenes. --- paper_title: Motion-based background subtraction using adaptive kernel density estimation paper_content: Background modeling is an important component of many vision systems. Existing work in the area has mostly addressed scenes that consist of static or quasi-static structures. When the scene exhibits a persistent dynamic behavior in time, such an assumption is violated and detection performance deteriorates. In this paper, we propose a new method for the modeling and subtraction of such scenes. Towards the modeling of the dynamic characteristics, optical flow is computed and utilized as a feature in a higher dimensional space. Inherent ambiguities in the computation of features are addressed by using a data-dependent bandwidth for density estimation using kernels. Extensive experiments demonstrate the utility and performance of the proposed approach. --- paper_title: Background models for tracking objects in water paper_content: This paper presents a novel background analysis technique to enable robust tracking of objects in water- based scenarios. Current pixel-wise statistical background models support automatic change detection in many outdoor situations, but are limited to background changes which can be modeled via a set of per-pixel spatially uncorrelated processes. In water-based scenarios, waves caused by wind or by moving vessels (wakes) form highly correlated moving patterns that confuse traditional background analysis models. In this work we introduce a framework that explicitly models this type of background variation. The framework combines the output of a statistical background model with localized optical flow analysis to produce two motion maps. In the final stage we apply object-level fusion to filter out moving regions that are most likely caused by wave clutter. A tracking algorithm can now handle the resulting set of objects. --- paper_title: Background modeling in the maritime domain paper_content: Maritime environment represents a challenging scenario for automatic video surveillance due to the complexity of the observed scene: waves on the water surface, boat wakes, and weather issues contribute to generate a highly dynamic background. Moreover, an appropriate background model has to deal with gradual and sudden illumination changes, camera jitter, shadows, and reflections that can provoke false detections. Using a predefined distribution (e.g., Gaussian) for generating the background model can result ineffective, due to the need of modeling non-regular patterns. In this paper, a method for creating a "discretization" of an unknown distribution that can model highly dynamic background such as water is described. A quantitative evaluation carried out on two publicly available datasets of videos and images, containing data recorded in different maritime scenarios, with varying light and weather conditions, demonstrates the effectiveness of the approach. 
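The kernel-density background models in the entries above score each new pixel value against a non-parametric density built from recently observed values. The sketch below shows that test for a grayscale frame stack; the Gaussian bandwidth and likelihood threshold are illustrative assumptions, whereas the cited methods choose bandwidths adaptively and, in the motion-based variant, include optical-flow features in the density.

```python
import numpy as np

def kde_foreground_mask(history: np.ndarray, frame: np.ndarray,
                        bandwidth: float = 15.0,
                        threshold: float = 1e-4) -> np.ndarray:
    """Per-pixel non-parametric background test (grayscale, simplified).

    `history` holds the N most recent background samples, shape (N, H, W).
    The background likelihood of the new frame at each pixel is the average
    of Gaussian kernels centred on the stored samples; a low likelihood
    marks the pixel as foreground.
    """
    history = history.astype(np.float32)
    frame = frame.astype(np.float32)

    diff = frame[None, :, :] - history                  # (N, H, W)
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * bandwidth)
    kernels = norm * np.exp(-0.5 * (diff / bandwidth) ** 2)
    likelihood = kernels.mean(axis=0)                   # KDE estimate per pixel
    return likelihood < threshold
```

Keeping the history as a rolling buffer that is updated only with background-labelled pixels prevents foreground objects from being absorbed into the model.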
--- paper_title: A Hybrid Color-Based Foreground Object Detection Method for Automated Marine Surveillance paper_content: This paper proposes a hybrid foreground object detection method suitable for the marine surveillance applications. Our approach combines an existing foreground object detection method with an image color segmentation technique to improve accuracy. The foreground segmentation method employs a Bayesian decision framework, while the color segmentation part is graph-based and relies on the local variation of edges. We also establish the set of requirements any practical marine surveillance algorithm should fulfill, and show that our method conforms to these requirements. Experiments show good results in the domain of marine surveillance sequences. --- paper_title: Foreground object detection from videos containing complex background paper_content: This paper proposes a novel method for detection and segmentation of foreground objects from a video which contains both stationary and moving background objects and undergoes both gradual and sudden "once-off" changes. A Bayes decision rule for classification of background and foreground from selected feature vectors is formulated. Under this rule, different types of background objects will be classified from foreground objects by choosing a proper feature vector. The stationary background object is described by the color feature, and the moving background object is represented by the color co-occurrence feature. Foreground objects are extracted by fusing the classification results from both stationary and moving pixels. Learning strategies for the gradual and sudden "once-off" background changes are proposed to adapt to various changes in background through the video. The convergence of the learning process is proved and a formula to select a proper learning rate is also derived. Experiments have shown promising results in extracting foreground objects from many complex backgrounds including wavering tree branches, flickering screens and water surfaces, moving escalators, opening and closing doors, switching lights and shadows of moving objects. --- paper_title: Daytime Water Detection by Fusing Multiple Cues for Autonomous Off-Road Navigation paper_content: Detecting water hazards is a significant challenge to unmanned ground vehicle autonomous off-road navigation.
This paper focuses on detecting the presence of water during the daytime using color cameras. A multi-cue approach is taken. Evidence of the presence of water is generated from color, texture, and the detection of reflections in stereo range data. A rule base for fusing water cues was developed by evaluating detection results from an extensive archive of data collection imagery containing water. This software has been implemented into a run-time passive perception subsystem and tested thus far under Linux on a Pentium based processor. --- paper_title: Segmentation and tracking of piglets in images paper_content: An algorithm was developed for the segmentation and tracking of piglets and tested on a 200-image sequence of 10 piglets moving on a straw background. The image-capture rate was 1 image/140 ms. The segmentation method was a combination of image differencing with respect to a median background and a Laplacian operator. The features tracked were blob edges in the segmented image. During tracking, the piglets were modelled as ellipses initialised on the blobs. Each piglet was tracked by searching for blob edges in an elliptical window about the piglet's position, which was predicted from its previous two positions. --- paper_title: Efficient adaptive density estimation per image pixel for the task of background subtraction paper_content: We analyze the computer vision task of pixel-level background subtraction. We present recursive equations that are used to constantly update the parameters of a Gaussian mixture model and to simultaneously select the appropriate number of components for each pixel. We also present a simple non-parametric adaptive density estimation method. The two methods are compared with each other and with some previously proposed algorithms. --- paper_title: An iterative image registration technique with an application to stereo vision paper_content: Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing.
We show how our technique can be adapted for use in a stereo vision system. --- paper_title: Non-parametric model for background subtraction paper_content: Background subtraction is a method typically used to segment moving regions in image sequences taken from a static camera by comparing each new frame to a model of the scene background. We present a novel non-parametric background model and a background subtraction approach. The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions such as tree branches and bushes. The model estimates the probability of observing pixel intensity values based on a sample of intensity values for each pixel. The model adapts quickly to changes in the scene which enables very sensitive detection of moving targets. We also show how the model can use color information to suppress detection of shadows. The implementation of the model runs in real-time for both gray level and color imagery. Evaluation shows that this approach achieves very sensitive detection with very low false alarm rates.
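The iterative registration technique summarised just above matches images by a Newton-Raphson style iteration on spatial intensity gradients. Below is a translation-only, single-window sketch of that idea; the original generalises to rotation, scaling and shearing and is usually run coarse-to-fine. It uses SciPy's ndimage.shift for sub-pixel warping, and the function name, iteration limit and tolerance are our choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def lk_translation(ref: np.ndarray, moving: np.ndarray,
                   num_iters: int = 30, tol: float = 1e-3) -> np.ndarray:
    """Estimate the (row, col) shift that, applied to `moving` with
    scipy.ndimage.shift, best aligns it with `ref` (Gauss-Newton on the
    linearised sum-of-squared-differences cost)."""
    ref = ref.astype(np.float64)
    moving = moving.astype(np.float64)
    d = np.zeros(2)

    for _ in range(num_iters):
        warped = nd_shift(moving, d, order=1, mode="nearest")
        gy, gx = np.gradient(warped)          # spatial intensity gradients
        r = ref - warped                      # residual image

        A = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                      [np.sum(gx * gy), np.sum(gx * gx)]])
        b = np.array([np.sum(gy * r), np.sum(gx * r)])
        step = np.linalg.solve(A, b)          # assumes textured content
        d -= step
        if np.linalg.norm(step) < tol:
            break
    return d

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = gaussian_filter(rng.random((128, 128)), sigma=3)
    moving = nd_shift(ref, (2.0, -3.0), order=1, mode="nearest")
    print(lk_translation(ref, moving))        # approximately (-2.0, 3.0)
```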
--- paper_title: Multispectral Target Detection and Tracking for Seaport Video Surveillance paper_content: In this paper, a video surveillance process is presented including target detection and tracking of ships at the entrance of a seaport in order to improve security and to prevent terrorist attacks. This process is helpful in the automatic analysis of movements inside the seaport. Steps of detection and tracking are completed using IR data whereas the pattern recognition stage is achieved on color data. A comparison of results of detection and tracking is presented on both IR and color data in order to justify the choice of IR images for these two steps. A draft description of the pattern recognition stage is finally drawn up as development prospect. --- paper_title: Evaluation of Maritime Vision Techniques for Aerial Search of Humans in Maritime Environments paper_content: Searching for humans lost in vast stretches of ocean has always been a difficult task. In this paper, a range of machine vision approaches are investigated as candidate tools to mitigate the risk of human fatigue and complacency after long hours performing these kind of search tasks. Our two-phased approach utilises point target detection followed by temporal tracking of these targets. Four different point target detection techniques and two tracking techniques are evaluated. We also evaluate the use of different colour spaces for target detection. This paper has a particular focus on Hidden Markov Model based tracking techniques, which seem best able to incorporate a priori knowledge about the maritime search problem, to improve detection performance. --- paper_title: Machine vision for detection of the rescue target in the marine casualty paper_content: In marine rescue, the detection of the target such as life rafts depends on the visual search by man as yet. However, human eyes are sometimes inadequate owing to a long flight and wide views. In order to carry out the prompt rescue of human life, development of the searching system in place of the human eyes is surely required. This paper deals with a new search method for detection of the rescue target using image processing techniques. To detect the small target in the wide views over the sea, we have proposed a new method including the image processing techniques based on the color information and the composite image sensor which increases about the measurement accuracy and the image processing speed at actual field. At the first step of the study, we attempt to extract the image data of the rescue target with the orange color in an experimental sea. --- paper_title: Persistent maritime surveillance using multi-sensor feature association and classification paper_content: In maritime operational scenarios, such as smuggling, piracy, or terrorist threats, it is not only relevant who or what an observed object is, but also where it is now and in the past in relation to other (geographical) objects. In situation and impact assessment, this information is used to determine whether an object is a threat. Single platform (ship, harbor) or single sensor information will not provide all this information. The work presented in this paper focuses on the sensor and object levels that provide a description of currently observed objects to situation assessment. For use of information of objects at higher information levels, it is necessary to have not only a good description of observed objects at this moment, but also from its past.
Therefore, currently observed objects have to be linked to previous occurrences. Kinematic features, as used in tracking, are of limited use, as uncertainties over longer time intervals are so large that no unique associations can be made. Features extracted from different sensors (e.g., ESM, EO/IR) can be used for both association and classification. Features and classifications are used to associate current objects to previous object descriptions, allowing objects to be described better, and provide position history. In this paper a description of a high level architecture in which such a multi-sensor association is used is described. Results of an assessment of the usability of several features from ESM (from spectrum), EO and IR (shape, contour, keypoints) data for association and classification are shown. --- paper_title: Maritime Surveillance: Tracking Ships inside a Dynamic Background Using a Fast Level-Set paper_content: Surveillance in a maritime environment is indispensable in the fight against a wide range of criminal activities, including pirate attacks, unlicensed fishing trailers and human trafficking. Computer vision systems can be a useful aid in the law enforcement process, by for example tracking and identifying moving vessels on the ocean. However, the maritime domain poses many challenges for the design of an effective maritime surveillance system. One such challenge is the tracking of moving vessels in the presence of a moving dynamic background (the ocean). We present techniques that address this particular problem. We use a background subtraction method and employ a real-time approximation of level-set-based curve evolution to demarcate the outline of moving vessels in the ocean. We report promising results on both small and large vessels, based on two field trials. --- paper_title: AUTOMATIC MARITIME SURVEILLANCE WITH VISUAL TARGET DETECTION paper_content: In this paper an automatic maritime surveillance system is presented. Boat detection is performed by means of an Haar-like classifier in order to obtain robustness with respect to targets having very different size, reflections and wakes on the water surface, and apparently motionless boats anchored off the coast. Detection results are filtered over the time in order to reduce the false alarm rate. Experimental results show the effectiveness of the approach with different light conditions and camera positions. The system is able to provide the user a global view adding a visual dimension to AIS data. --- paper_title: Automated intelligent video surveillance system for ships paper_content: To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track and alert the crew on small watercrafts that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save the manpower while increasing the accuracy in detection and identification of asymmetric attacks for ship protection. --- paper_title: Robust Multiple Car Tracking with Occlusion Reasoning paper_content: In this work we address the problem of occlusion in tracking multiple 3D objects in a known environment and propose a new approach for tracking vehicles in road traffic scenes using an explicit occlusion reasoning step. We employ a contour tracker based on intensity and motion boundaries. The motion of the contour of the vehicles in the image is assumed to be well describable by an affine motion model with a translation and a change in scale. A vehicle contour is represented by closed cubic splines the position and motion of which is estimated along the image sequence.
In order to employ linear Kalman Filters we decompose the estimation process into two filters: one for estimating the affine motion parameters and one for estimating the shape of the contours of the vehicles. Occlusion detection is performed by intersecting the depth ordered regions associated to the objects. The intersection part is then excluded in the motion and shape estimation. This procedure also improves the shape estimation in case of adjacent objects since occlusion detection is performed on slightly enlarged regions. In this way we obtain robust motion estimates and trajectories for vehicles even in the case of occlusions, as we show in some experiments with real world traffic scenes. --- paper_title: Contour Tracking By Stochastic Propagation of Conditional Density paper_content: The problem of tracking curves in dense visual clutter is a challenging one. Trackers based on Kalman filters are of limited use; because they are based on Gaussian densities which are unimodal, they cannot represent simultaneous alternative hypotheses. Extensions to the Kalman filter to handle multiple data associations work satisfactorily in the simple case of point targets, but do not extend naturally to continuous curves. A new, stochastic algorithm is proposed here, the Condensation algorithm — Conditional Density Propagation over time. It uses ‘factored sampling’, a method previously applied to interpretation of static images, in which the distribution of possible interpretations is represented by a randomly generated set of representatives. The Condensation algorithm combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time. The result is highly robust tracking of agile motion in clutter, markedly superior to what has previously been attainable from Kalman filtering. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time. --- paper_title: Extended Object Tracking Using Monte Carlo Methods paper_content: This correspondence addresses the problem of tracking extended objects, such as ships or a convoy of vehicles moving in urban environment. Two Monte Carlo techniques for extended object tracking are proposed: an interacting multiple model data augmentation (IMM-DA) algorithm and a modified version of the mixture Kalman filter (MKF) of Chen and Liu, called the mixture Kalman filter modified (MKFm). The data augmentation (DA) technique with finite mixtures estimates the object extent parameters, whereas an interacting multiple model (IMM) filter estimates the kinematic states (position and speed) of the manoeuvring object. Next, the system model is formulated in a partially conditional dynamic linear (PCDL) form. This affords us to propose two latent indicator variables characterizing, respectively, the motion mode and object size. Then, an MKFm is developed with the PCDL model. The IMM-DA and the MKFm performance is compared with a combined IMM-particle filter (IMM-PF) algorithm with respect to accuracy and computational complexity. The most accurate parameter estimates are obtained by the DA algorithm, followed by the MKFm and PF. --- paper_title: Segmenting foreground objects from a dynamic textured background via a robust Kalman filter paper_content: The algorithm presented aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the nonstationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an autoregressive moving average model (ARMA). A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results. --- paper_title: Efficient Implementation of Reid's Multiple Hypothesis Tracking Algorithm and Its Evaluation for the Purpose of Visual Tracking paper_content: An efficient implementation of Reid's multiple hypothesis tracking (MHT) algorithm is presented in which the k-best hypotheses are determined in polynomial time using an algorithm due to Murty (1968). The MHT algorithm is then applied to several motion sequences. The MHT capabilities of track initiation, termination, and continuation are demonstrated together with the latter's capability to provide low level support of temporary occlusion of tracks. Between 50 and 150 corner features are simultaneously tracked in the image plane over a sequence of up to 51 frames. Each corner is tracked using a simple linear Kalman filter and any data association uncertainty is resolved by the MHT. Kalman filter parameter estimation is discussed, and experimental results show that the algorithm is robust to errors in the motion model. An investigation of the performance of the algorithm as a function of look-ahead (tree depth) indicates that high accuracy can be obtained for tree depths as shallow as three. Experimental results suggest that a real-time MHT solution to the motion correspondence problem is possible for certain classes of scenes. --- paper_title: Acoustic approaches to remote species identification: a review paper_content: Noninvasive species identification remains a longterm goal of fishers, researchers, and resource managers who use sound to locate, map, and count aquatic organisms. Since the first biological applications of underwater acoustics, four approaches have been used singly or in combination to survey marine and freshwater environments: passive sonar; prior knowledge and direct sampling; echo statistics from high-frequency measures; and matching models to low-frequency measures. Echo amplitudes or targets measured using any sonar equipment are variable signals. Variability in reflected sound is influenced by physical factors associated with the transmission of sound through a compressible fluid, and by biological factors associated with the location, reflective properties, and behaviour of a target. The current trend in acoustic target identification is to increase the amount of information collected through increases in frequency bandwidth or in the number of acoustic beams.
Exclusive use of acoustics to identify aquatic organisms reliably will require a set of statistical metrics that discriminate among a wide range of similar body types at any packing density, and the incorporation of these algorithms in routine data processing. --- paper_title: An iterative image registration technique with an application to stereo vision paper_content: Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system. --- paper_title: Performance of optical flow techniques paper_content: The performance of six optical flow techniques is compared, emphasizing measurement accuracy. The most accurate methods are found to be the local differential approaches, where the velocity is computed explicitly in terms of a locally constant or linear model. Techniques using global smoothness constraints appear to produce visually attractive flow fields, but in general seem to be accurate enough for qualitative use only, and insufficient as precursors to the computation of egomotion and 3D structure. It is found that some form of confidence measure/threshold is crucial for all techniques in order to separate the inaccurate from the accurate. Drawbacks of the six techniques are discussed. --- paper_title: Moving object detection in spatial domain using background removal techniques -- State-of-art paper_content: Identifying moving objects is a critical task for many computer vision applications; it provides a classification of the pixels into either foreground or background. A common approach used to achieve such classification is background removal. Even though there exist numerous background removal algorithms in the literature, most of them follow a simple flow diagram, passing through four major steps, which are pre-processing, background modelling, foreground detection and data validation. In this paper, we survey many existing schemes in the literature of background removal, surveying the common pre-processing algorithms used in different situations, presenting different background models, and the most commonly used ways to update such models and how they can be initialized. We also survey how to measure the performance of any moving object detection algorithm, whether the ground truth data is available or not, presenting performance metrics commonly used in both cases. --- paper_title: Efficient adaptive density estimation per image pixel for the task of background subtraction paper_content: We analyze the computer vision task of pixel-level background subtraction. We present recursive equations that are used to constantly update the parameters of a Gaussian mixture model and to simultaneously select the appropriate number of components for each pixel. We also present a simple non-parametric adaptive density estimation method. The two methods are compared with each other and with some previously proposed algorithms.
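The adaptive per-pixel Gaussian-mixture model summarized in the last abstract above is available, in essentially the form proposed by Zivkovic, as OpenCV's MOG2 background subtractor. The following minimal sketch only illustrates how such a model is typically applied to a video stream; the file name, parameter values, and median-filter post-processing are illustrative assumptions, not settings taken from the paper.

```python
import cv2

# Minimal sketch: adaptive per-pixel Gaussian-mixture background subtraction.
# OpenCV's MOG2 implements the recursive parameter update and automatic
# selection of the number of components per pixel described above.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("harbour.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)     # 0 = background, 127 = shadow, 255 = foreground
    fg_mask = cv2.medianBlur(fg_mask, 5)  # simple speckle suppression
    cv2.imshow("foreground mask", fg_mask)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

In this setting, history controls how quickly the recursive update forgets old samples, and varThreshold acts as the per-pixel distance test that decides whether a new observation is explained by an existing mixture component.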
--- paper_title: Efficient Graph-Based Image Segmentation paper_content: This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. --- paper_title: An HMM-based segmentation method for traffic monitoring movies paper_content: Shadows of moving objects often obstruct robust visual tracking. We propose an HMM-based segmentation method which classifies in real time each pixel or region into three categories: shadows, foreground, and background objects. In the case of traffic monitoring movies, the effectiveness of the proposed method has been proven through experimental results. --- paper_title: Determining Optical Flow paper_content: Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantified rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image. --- paper_title: Robust Computer Vision through Kernel Density Estimation paper_content: Two new techniques based on nonparametric estimation of probability densities are introduced which improve on the performance of equivalent robust methods currently employed in computer vision. The first technique draws from the projection pursuit paradigm in statistics, and carries out regression M-estimation with a weak dependence on the accuracy of the scale estimate. The second technique exploits the properties of the multivariate adaptive mean shift, and accomplishes the fusion of uncertain measurements arising from an unknown number of sources. As an example, the two techniques are extensively used in an algorithm for the recovery of multiple structures from heavily corrupted data. --- paper_title: Moving object detection in dynamic scenes based on optical flow and superpixels paper_content: Moving object detection under a dynamic background has been a serious challenge in real-time computer vision applications. Global motion compensation approaches, a popular existing technique, aims at compensating the moving background for moving target segmentation. However, it suffers from inaccurate global motion parameters estimation. 
The paper presents a moving object detection technique that combines TV-L1 optical flow with SLIC superpixel segmentation to characterize moving objects from a dynamic background. SLIC superpixel segmentation can adhere to boundaries of objects, and thus improve the segmentation performance. TV-L1 optical flow implemented on GPU reports competitive smooth flow field with real-time performance. Experimental results on various challenging sequences demonstrate that the proposed approach achieve impressive performance. --- paper_title: Background modeling and subtraction by codebook construction paper_content: We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques. --- paper_title: W4: Real-time surveillance of people and their activities paper_content: W/sup 4/ is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W/sup 4/ employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W/sup 4/ can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W/sup 4/ can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320/spl times/240 resolution images on a 400 MHz dual-Pentium II PC. --- paper_title: Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures paper_content: A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time- series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). 
When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes. --- paper_title: A texture-based method for modeling the background and detecting moving objects paper_content: This paper presents a novel and efficient texture-based method for modeling the background and detecting moving objects from a video sequence. Each pixel is modeled as a group of adaptive local binary pattern histograms that are calculated over a circular region around the pixel. The approach provides us with many advantages compared to the state-of-the-art. Experimental results clearly justify our model. --- paper_title: Background subtraction driven seeds selection for moving objects segmentation and matting paper_content: In this paper, we address the difficult task of moving objects segmentation and matting in dynamic scenes. Toward this end, we propose a new automatic way to integrate a background subtraction (BGS) and an alpha matting technique via a heuristic seeds selection scheme. Specifically, our method can be divided into three main steps. First, we use a novel BGS method as attention mechanisms, generating many possible foreground pixels by tuning it for low false-positives and false-negatives as much as possible. Second, a connected components algorithm is used to give the bounding boxes of the labeled foreground pixels. Finally, matting of the object associated to a given bounding box is performed using a heuristic seeds selection scheme. This matting task is guided by top-down knowledge. Experimental results demonstrate the efficiency and effectiveness of our method. --- paper_title: Complex Background Subtraction by Pursuing Dynamic Spatio-Temporal Models paper_content: Although it has been widely discussed in video surveillance, background subtraction is still an open problem in the context of complex scenarios, e.g., dynamic backgrounds, illumination variations, and indistinct foreground objects. To address these challenges, we propose an effective background subtraction method by learning and maintaining an array of dynamic texture models within the spatio-temporal representations. At any location of the scene, we extract a sequence of regular video bricks, i.e., video volumes spanning over both spatial and temporal domain. The background modeling is thus posed as pursuing subspaces within the video bricks while adapting the scene variations. For each sequence of video bricks, we pursue the subspace by employing the auto regressive moving average model that jointly characterizes the appearance consistency and temporal coherence of the observations. During online processing, we incrementally update the subspaces to cope with disturbances from foreground objects and scene changes. In the experiments, we validate the proposed method in several complex scenarios, and show superior performances over other state-of-the-art approaches of background subtraction. The empirical studies of parameter setting and component analysis are presented as well. --- paper_title: Modeling pixel process with scale invariant local patterns for background subtraction in complex scenes paper_content: Background modeling plays an important role in video surveillance, yet in complex scenes it is still a challenging problem. 
Among many difficulties, problems caused by illumination variations and dynamic backgrounds are the key aspects. In this work, we develop an efficient background subtraction framework to tackle these problems. First, we propose a scale invariant local ternary pattern operator, and show that it is effective for handling illumination variations, especially for moving soft shadows. Second, we propose a pattern kernel density estimation technique to effectively model the probability distribution of local patterns in the pixel process, which utilizes only one single LBP-like pattern instead of histogram as feature. Third, we develop multimodal background models with the above techniques and a multiscale fusion scheme for handling complex dynamic backgrounds. Exhaustive experimental evaluations on complex scenes show that the proposed method is fast and effective, achieving more than 10% improvement in accuracy compared over existing state-of-the-art algorithms. --- paper_title: Layered Dynamic Textures paper_content: A novel video representation, the layered dynamic texture (LDT), is proposed. The LDT is a generative model, which represents a video as a collection of stochastic layers of different appearance and dynamics. Each layer is modeled as a temporal texture sampled from a different linear dynamical system. The LDT model includes these systems, a collection of hidden layer assignment variables (which control the assignment of pixels to layers), and a Markov random field prior on these variables (which encourages smooth segmentations). An EM algorithm is derived for maximum-likelihood estimation of the model parameters from a training video. It is shown that exact inference is intractable, a problem which is addressed by the introduction of two approximate inference procedures: a Gibbs sampler and a computationally efficient variational approximation. The trade-off between the quality of the two approximations and their complexity is studied experimentally. The ability of the LDT to segment videos into layers of coherent appearance and dynamics is also evaluated, on both synthetic and natural videos. These experiments show that the model possesses an ability to group regions of globally homogeneous, but locally heterogeneous, stochastic dynamics currently unparalleled in the literature. --- paper_title: A texture-based method for detecting moving objects paper_content: The detection of moving objects from video frames plays an important and often very critical role in different kinds of machine vision applications including human detection and tracking, traffic monitoring, humanmachine interfaces and military applications, since it usually is one of the first phases in a system architecture. A common way to detect moving objects is background subtraction. In background subtraction, moving objects are detected by comparing each video frame against an existing model of the scene background. In this paper, we propose a novel block-based algorithm for background subtraction. The algorithm is based on the Local Binary Pattern (LBP) texture measure. Each image block is modelled as a group of weighted adaptive LBP histograms. The algorithm operates in real-time under the assumption of a stationary camera with fixed focal length. It can adapt to inherent changes in scene background and can also handle multimodal backgrounds. 
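As a rough illustration of the block-wise LBP idea in the last abstract above, the sketch below computes basic 8-neighbour LBP codes, builds one normalized histogram per image block, and flags blocks whose histogram intersection with a background model falls below a threshold. It is a deliberate simplification: the published method maintains a group of weighted, adaptively updated histograms per block and uses circular, interpolated neighbourhoods, whereas the block size, threshold, single-histogram model, and 3x3 neighbourhood here are assumptions made only for illustration.

```python
import numpy as np

def lbp8(gray):
    """Basic 8-neighbour LBP codes for the interior of a grayscale image."""
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]
    neighbours = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
                  g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def block_histograms(codes, block=16):
    """Normalized 256-bin LBP histogram for each non-overlapping block."""
    h, w = codes.shape
    hists = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist = np.bincount(codes[y:y+block, x:x+block].ravel(), minlength=256)
            hists[(y, x)] = hist / hist.sum()
    return hists

def foreground_blocks(model, current, threshold=0.7):
    """Blocks whose histogram intersection with the background model is too low."""
    return [pos for pos, h in current.items()
            if np.minimum(model[pos], h).sum() < threshold]

def update_model(model, current, alpha=0.05):
    """Simplified adaptive update: blend current histograms into the model."""
    for pos, h in current.items():
        model[pos] = (1.0 - alpha) * model[pos] + alpha * h
```

A typical loop would initialize the model from the first frame's block histograms, then for each new frame call block_histograms, foreground_blocks, and update_model in turn.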
--- paper_title: Improving background subtraction using Local Binary Similarity Patterns paper_content: Most of the recently published background subtraction methods can still be classified as pixel-based, as most of their analysis is still only done using pixel-by-pixel comparisons. Few others might be regarded as spatial-based (or even spatiotemporal-based) methods, as they take into account the neighborhood of each analyzed pixel. Although the latter types can be viewed as improvements in many cases, most of the methods that have been proposed so far suffer in complexity, processing speed, and/or versatility when compared to their simpler pixel-based counterparts. In this paper, we present an adaptive background subtraction method, derived from the low-cost and highly efficient ViBe method, which uses a spatiotemporal binary similarity descriptor instead of simply relying on pixel intensities as its core component. We then test this method on multiple video sequences and show that by only replacing the core component of a pixel-based method it is possible to dramatically improve its overall performance while keeping memory usage, complexity and speed at acceptable levels for online applications. --- paper_title: A Biological Hierarchical Model Based Underwater Moving Object Detection paper_content: Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, localization, and tracking. Given the superior visual sensing abilities of underwater inhabitants, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, low accuracy and the absence of prior-knowledge learning limit their adoption in underwater applications. To address the problems originating from inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several subblocks. The intensity information is extracted to establish a background model that roughly identifies the object and background regions. The texture feature of each pixel in the rough object region is then further analyzed to generate a precise object contour. Experimental results demonstrate that the proposed method performs better: compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results. --- paper_title: HMM topology design using maximum likelihood successive state splitting paper_content: Modelling contextual variations of phones is widely accepted as an important aspect of a continuous speech recognition system, and HMM distribution clustering has been successfully used to obtain robust models of context through distribution tying. However, as systems move to the challenge of spontaneous speech, temporal variation also becomes important. This paper describes a method for designing HMM topologies that learn both temporal and contextual variation, extending previous work on successive state splitting (SSS). The new approach uses a maximum likelihood criterion consistently at each step, overcoming the previous SSS limitation to speaker-dependent training.
Initial experiments show both performance gains and training cost reduction over SSS with the reformulated algorithm. --- paper_title: A Hybrid Color-Based Foreground Object Detection Method for Automated Marine Surveillance paper_content: This paper proposes a hybrid foreground object detection method suitable for marine surveillance applications. Our approach combines an existing foreground object detection method with an image color segmentation technique to improve accuracy. The foreground segmentation method employs a Bayesian decision framework, while the color segmentation part is graph-based and relies on the local variation of edges. We also establish the set of requirements any practical marine surveillance algorithm should fulfill, and show that our method conforms to these requirements. Experiments show good results in the domain of marine surveillance sequences. --- paper_title: A Probabilistic Background Model for Tracking paper_content: A new probabilistic background model based on a Hidden Markov Model is presented. The hidden states of the model enable discrimination between foreground, background and shadow. This model functions as a low-level process for a car tracker. A particle filter is employed as a stochastic filter for the car tracker. The use of a particle filter allows the incorporation of the information from the low-level process via importance sampling. A novel observation density for the particle filter which models the statistical dependence of neighboring pixels based on a Markov random field is presented. The effectiveness of both the low-level process and the observation likelihood is demonstrated. --- paper_title: Bayesian modeling of dynamic scenes for object detection paper_content: Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged and it is asserted that useful correlation exists in the intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are directly modeled. We propose a model of the background as a single probability density. Second, temporal persistence is proposed as a detection criterion. Unlike previous approaches to object detection, which detect objects by building adaptive models of the background only, here the foreground is also modeled to augment the detection of objects (without explicit tracking), since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework, stressing spatial context as a condition of detecting interesting objects, and the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes.
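The nonparametric background model in the last abstract above is easier to picture with a small amount of code. The sketch below shows only its simplest ingredient, a per-pixel Gaussian-kernel density estimate over a buffer of recent frames, thresholded to produce a foreground mask; the joint domain-range formulation, the competing foreground model, and the MAP-MRF graph-cut step of the paper are omitted, and the bandwidth and threshold values are illustrative assumptions.

```python
import numpy as np

def kde_background_likelihood(history, frame, bandwidth=20.0):
    """Per-pixel Gaussian-kernel density of the current colour under stored samples.

    history: (N, H, W, 3) array of recent frames treated as kernel centres.
    frame:   (H, W, 3) current frame.
    Returns an (H, W) map of (unnormalised) background likelihoods.
    """
    diff = history.astype(np.float32) - frame.astype(np.float32)    # (N, H, W, 3)
    sq_dist = (diff ** 2).sum(axis=-1) / (2.0 * bandwidth ** 2)     # (N, H, W)
    return np.exp(-sq_dist).mean(axis=0)                            # average over the N kernels

def foreground_mask(history, frame, threshold=1e-3):
    """Pixels poorly explained by the kernel density estimate are declared foreground."""
    return kde_background_likelihood(history, frame) < threshold
```

In the full method the decision is not made independently per pixel; the likelihood map would instead feed a spatially regularized MAP-MRF labeling.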
--- paper_title: A Bayesian computer vision system for modeling human interactions paper_content: We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system deals in particularly with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. Finally, a synthetic "Alife-style" training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training. --- paper_title: Image Segmentation in Video Sequences: A Probabilistic Approach paper_content: "Background subtraction" is an old technique for finding moving objects in a video sequence--for example, cars driving on a freeway. The idea is that subtracting the current image from a time-averaged background image will leave only nonstationary objects. It is, however, a crude approximation to the task of classifying each pixel of the current image; it fails with slow-moving objects and does not distinguish shadows from moving objects. The basic idea of this paper is that we can classify each pixel using a model of how that pixel looks when it is part of different classes. We learn a mixture-of-Gaussians classification model for each pixel using an unsupervised technique--an efficient, incremental version of EM. Unlike the standard image-averaging approach, this automatically updates the mixture component for each class according to likelihood of membership; hence slow-moving objects are handled perfectly. Our approach also identifies and eliminates shadows much more effectively than other techniques such as thresholding. Application of this method as part of the Roadwatch traffic surveillance project is expected to result in significant improvements in vehicle identification and tracking. --- paper_title: Layered Dynamic Textures paper_content: A novel video representation, the layered dynamic texture (LDT), is proposed. The LDT is a generative model, which represents a video as a collection of stochastic layers of different appearance and dynamics. Each layer is modeled as a temporal texture sampled from a different linear dynamical system. The LDT model includes these systems, a collection of hidden layer assignment variables (which control the assignment of pixels to layers), and a Markov random field prior on these variables (which encourages smooth segmentations). An EM algorithm is derived for maximum-likelihood estimation of the model parameters from a training video. It is shown that exact inference is intractable, a problem which is addressed by the introduction of two approximate inference procedures: a Gibbs sampler and a computationally efficient variational approximation. The trade-off between the quality of the two approximations and their complexity is studied experimentally. The ability of the LDT to segment videos into layers of coherent appearance and dynamics is also evaluated, on both synthetic and natural videos. 
These experiments show that the model possesses an ability to group regions of globally homogeneous, but locally heterogeneous, stochastic dynamics currently unparalleled in the literature. --- paper_title: Soft competitive adaptation: neural network learning algorithms based on fitting statistical mixtures paper_content: In this thesis, we consider learning algorithms for neural networks which are based on fitting a mixture probability density to a set of data. We begin with an unsupervised algorithm which is an alternative to the classical winner-take-all competitive algorithms. Rather than updating only the parameters of the "winner" on each case, the parameters of all competitors are updated in proportion to their relative responsibility for the case. Use of such a "soft" competitive algorithm is shown to give better performance than the more traditional algorithms, with little additional cost. We then consider a supervised modular architecture in which a number of simple "expert" networks compete to solve distinct pieces of a large task. A soft competitive mechanism is used to determine how much an expert learns on a case, based on how well the expert performs relative to the other expert networks. At the same time, a separate gating network learns to weight the output of each expert according to a prediction of its relative performance based on the input to the system. Experiments on a number of tasks illustrate that this architecture is capable of uncovering interesting task decompositions and of generalizing better than a single network with small training sets. Finally, we consider learning algorithms in which we assume that the actual output of the network should fall into one of a small number of classes or clusters. The objective of learning is to make the variance of these classes as small as possible. In the classical decision-directed algorithm, we decide that an output belongs to the class it is closest to and minimize the squared distance between the output and the center (mean) of this closest class. In the "soft" version of this algorithm, we minimize the squared distance between the actual output and a weighted average of the means of all of the classes. The weighting factors are the relative probability that the output belongs to each class. This idea may also be used to model the weights of a network, to produce networks which generalize better from small training sets. --- paper_title: Dynamic texture segmentation paper_content: We address the problem of segmenting a sequence of images of natural scenes into disjoint regions that are characterized by constant spatio-temporal statistics. We model the spatio-temporal dynamics in each region by Gauss-Markov models, and infer the model parameters as well as the boundary of the regions in a variational optimization framework. Numerical results demonstrate that, in contrast to purely texture-based segmentation schemes, our method is effective in segmenting regions that differ in their dynamics even when spatial statistics are identical. --- paper_title: A tutorial on hidden Markov models and selected applications in speech recognition paper_content: This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition.
Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described. > --- paper_title: An HMM-based segmentation method for traffic monitoring movies paper_content: Shadows of moving objects often obstruct robust visual tracking. We propose an HMM-based segmentation method which classifies in real time each pixel or region into three categories: shadows, foreground, and background objects. In the case of traffic monitoring movies, the effectiveness of the proposed method has been proven through experimental results. --- paper_title: A dynamic conditional random field model for foreground and shadow segmentation paper_content: This paper proposes a dynamic conditional random field (DCRF) model for foreground object and moving shadow segmentation in indoor video scenes. Given an image sequence, temporal dependencies of consecutive segmentation fields and spatial dependencies within each segmentation field are unified by a dynamic probabilistic framework based on the conditional random field (CRF). An efficient approximate filtering algorithm is derived for the DCRF model to recursively estimate the segmentation field from the history of observed images. The foreground and shadow segmentation method integrates both intensity and gradient features. Moreover, models of background, shadow, and gradient information are updated adaptively for nonstationary background processes. Experimental results show that the proposed approach can accurately detect moving objects and their cast shadows even in monocular grayscale video sequences. --- paper_title: Sparse signal recovery using Markov Random Fields paper_content: Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms. --- paper_title: Segmenting foreground objects from a dynamic textured background via a robust Kalman filter paper_content: The algorithm presented aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the nonstationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an autoregressive moving average model (ARMA). 
A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results. --- paper_title: Discovery and Segmentation of Activities in Video paper_content: Hidden Markov models (HMMs) have become the workhorses of the monitoring and event recognition literature because they bring to time-series analysis the utility of density estimation and the convenience of dynamic time warping. Once trained, the internals of these models are considered opaque; there is no effort to interpret the hidden states. We show that by minimizing the entropy of the joint distribution, an HMM's internal state machine can be made to organize observed activity into meaningful states. This has uses in video monitoring and annotation, low bit-rate coding of scene activity, and detection of anomalous behavior. We demonstrate with models of office activity and outdoor traffic, showing how the framework learns principal modes of activity and patterns of activity change. We then show how this framework can be adapted to infer hidden state from extremely ambiguous images, in particular, inferring 3D body orientation and pose from sequences of low-resolution silhouettes. --- paper_title: Topology free hidden Markov models: application to background modeling paper_content: Hidden Markov models (HMMs) are increasingly being used in computer vision for applications such as: gesture analysis, action recognition from video, and illumination modeling. Their use involves an off-line learning step that is used as a basis for on-line decision making (i.e. a stationarity assumption on the model parameters). But, real-world applications are often non-stationary in nature. This leads to the need for a dynamic mechanism to learn and update the model topology as well as its parameters. This paper presents a new framework for HMM topology and parameter estimation in an online, dynamic fashion. The topology and parameter estimation is posed as a model selection problem with an MDL prior. Online modifications to the topology are made possible by incorporating a state splitting criterion. To demonstrate the potential of the algorithm, the background modeling problem is considered. Theoretical validation and real experiments are presented. --- paper_title: Joint Motion Segmentation and Background Estimation in Dynamic Scenes paper_content: We propose a joint foreground-background mixture model (FBM) that simultaneously performs background estimation and motion segmentation in complex dynamic scenes. Our FBM consist of a set of location-specific dynamic texture (DT) components, for modeling local background motion, and set of global DT components, for modeling consistent foreground motion. We derive an EM algorithm for estimating the parameters of the FBM. We also apply spatial constraints to the FBM using an Markov random field grid, and derive a corresponding variational approximation for inference. Unlike existing approaches to background subtraction, our FBM does not require a manually selected threshold or a separate training video. Unlike existing motion segmentation techniques, our FBM can segment foreground motions over complex background with mixed motions, and detect stopped objects. 
Since most dynamic scene datasets only contain videos with a single foreground object over a simple background, we develop a new challenging dataset with multiple foreground objects over complex dynamic backgrounds. In experiments, we show that jointly modeling the background and foreground segments with FBM yields significant improvements in accuracy on both background estimation and motion segmentation, compared to state-of-the-art methods. --- paper_title: A View Of The Em Algorithm That Justifies Incremental, Sparse, And Other Variants paper_content: The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. --- paper_title: Moving object detection in spatial domain using background removal techniques -- State-of-art paper_content: Identifying moving objects is a critical task for many computer vision applications; it provides a classification of the pixels into either foreground or background. A common approach used to achieve such classification is background removal. Even though there exist numerous of background removal algorithms in the literature, most of them follow a simple flow diagram, passing through four major steps, which are pre-processing, background modelling, foreground de- tection and data validation. In this paper, we survey many existing schemes in the literature of background removal, sur- veying the common pre-processing algorithms used in different situations, presenting different background models, and the most commonly used ways to update such models and how they can be initialized. We also survey how to measure the performance of any moving object detection algorithm, whether the ground truth data is available or not, presenting per- formance metrics commonly used in both cases. --- paper_title: Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures paper_content: A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time- series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). 
When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes. --- paper_title: A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications paper_content: Detection of moving objects in video streams is the first relevant step of information extraction in many computer vision applications. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving objects provides a focus of attention for recognition, classification, and activity analysis, making these later steps more efficient. We propose an approach based on self organization through artificial neural networks, widely applied in human image processing systems and more generally in cognitive science. The proposed approach can handle scenes containing moving backgrounds, gradual illumination variations and camouflage, has no bootstrapping limitations, can include into the background model shadows cast by moving objects, and achieves robust detection for different types of videos taken with stationary cameras. We compare our method with other modeling techniques and report experimental results, both in terms of detection accuracy and in terms of processing speed, for color video sequences that represent typical situations critical for video surveillance systems. --- paper_title: A Model of Saliency-Based Visual Attention for Rapid Scene Analysis paper_content: A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail. --- paper_title: Spatiotemporal Saliency in Dynamic Scenes paper_content: A spatiotemporal saliency algorithm based on a center-surround framework is proposed. The algorithm is inspired by biological mechanisms of motion-based perceptual grouping and extends a discriminant formulation of center-surround saliency previously proposed for static imagery. Under this formulation, the saliency of a location is equated to the power of a predefined set of features to discriminate between the visual stimuli in a center and a surround window, centered at that location. The features are spatiotemporal video patches and are modeled as dynamic textures, to achieve a principled joint characterization of the spatial and temporal components of saliency. The combination of discriminant center-surround saliency with the modeling power of dynamic textures yields a robust, versatile, and fully unsupervised spatiotemporal saliency algorithm, applicable to scenes with highly dynamic backgrounds and moving cameras. The related problem of background subtraction is treated as the complement of saliency detection, by classifying nonsalient (with respect to appearance and motion dynamics) points in the visual field as background.
The algorithm is tested for background subtraction on challenging sequences, and shown to substantially outperform various state-of-the-art techniques. Quantitatively, its average error rate is almost half that of the closest competitor. --- paper_title: Background Subtraction Based on Low-Rank and Structured Sparse Decomposition paper_content: Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos. --- paper_title: Video saliency incorporating spatiotemporal cues and uncertainty weighting paper_content: We propose a method to detect visual saliency from video signals by combing both spatial and temporal information and statistical uncertainty measures. The main novelty of the proposed method is twofold. First, separate spatial and temporal saliency maps are generated, where the computation of temporal saliency incorporates a recent psychological study of human visual speed perception, where the perceptual prior probability distribution of the speed of motion is measured through a series of psychovisual experiments. Second, the spatial and temporal saliency maps are merged into one using a spatiotemporally adaptive entropy-based uncertainty weighting approach. Experimental results show that the proposed method significantly outperforms state-of-the-art video saliency detection models. --- paper_title: Detecting salient motion by accumulating directionally-consistent flow paper_content: Motion detection can play an important role in many vision tasks. Yet image motion can arise from "uninteresting" events as well as interesting ones. In this paper, salient motion is defined as motion that is likely to result from a typical surveillance target (e.g., a person or vehicle traveling with a sense of direction through a scene) as opposed to other distracting motions (e.g., the scintillation of specularities on water, the oscillation of vegetation in the wind). We propose an algorithm for detecting this salient motion that is based on intermediate-stage vision integration of optical flow. Empirical results are presented that illustrate the applicability of the proposed methods to real-world video. Unlike many motion detection schemes, no knowledge about expected object size or shape is necessary for rejecting the distracting motion. 
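A simplified way to approximate the directionally-consistent-flow idea in the last abstract above is to accumulate dense frame-to-frame optical flow per pixel and compare the magnitude of the accumulated displacement with the sum of the per-frame magnitudes: consistent motion keeps the ratio near one, while oscillating clutter such as water or foliage largely cancels out. The sketch below uses OpenCV's Farneback flow; it ignores the trajectory warping and reset logic of the original method, so it is an assumption-laden illustration rather than a faithful implementation, and the Farneback parameters are illustrative defaults.

```python
import cv2
import numpy as np

def salient_motion_map(frames, eps=1e-6):
    """Score per-pixel directional consistency of accumulated flow over >= 2 frames.

    Pixels whose displacements keep pointing the same way score close to 1;
    oscillating motion scores close to 0. The score is weighted by total motion
    magnitude so that static pixels do not appear salient.
    """
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    sum_flow = np.zeros(grays[0].shape + (2,), np.float32)
    sum_mag = np.zeros(grays[0].shape, np.float32)
    for prev, cur in zip(grays[:-1], grays[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        sum_flow += flow
        sum_mag += np.linalg.norm(flow, axis=2)
    consistency = np.linalg.norm(sum_flow, axis=2) / (sum_mag + eps)
    return consistency * sum_mag
```

Thresholding the returned map (for example, keeping values above a chosen quantile) gives a rough salient-motion mask for the sequence.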
--- paper_title: Robust principal component analysis for computer vision paper_content: Principal Component Analysis (PCA) has been widely used for the representation of shape, appearance and motion. One drawback of typical PCA methods is that they are least squares estimation techniques and hence fail to account for "outliers" which are common in realistic training sets. In computer vision applications, outliers typically occur within a sample (image) due to pixels that are corrupted by noise, alignment errors, or occlusion. We review previous approaches for making PCA robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Principal Component Analysis (RPCA) and describe a robust M-estimation algorithm for learning linear multi-variate representations of high dimensional data such as images. Quantitative comparisons with traditional PCA and previous robust algorithms illustrate the benefits of RPCA when outliers are present. Details of the algorithm are described and a software implementation is being made publically available. --- paper_title: A dynamic conditional random field model for foreground and shadow segmentation paper_content: This paper proposes a dynamic conditional random field (DCRF) model for foreground object and moving shadow segmentation in indoor video scenes. Given an image sequence, temporal dependencies of consecutive segmentation fields and spatial dependencies within each segmentation field are unified by a dynamic probabilistic framework based on the conditional random field (CRF). An efficient approximate filtering algorithm is derived for the DCRF model to recursively estimate the segmentation field from the history of observed images. The foreground and shadow segmentation method integrates both intensity and gradient features. Moreover, models of background, shadow, and gradient information are updated adaptively for nonstationary background processes. Experimental results show that the proposed approach can accurately detect moving objects and their cast shadows even in monocular grayscale video sequences. --- paper_title: On the plausibility of the discriminant center-surround hypothesis for visual saliency paper_content: It has been suggested that saliency mechanisms play a role in perceptual organization. This work evaluates the plausibility of a recently proposed generic principle for visual saliency: that all saliency decisions are optimal in a decision-theoretic sense. The discriminant saliency hypothesis is combined with the classical assumption that bottom-up saliency is a center-surround process to derive a (decision-theoretic) optimal saliency architecture. Under this architecture, the saliency of each image location is equated to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround. The optimal saliency detector is derived for various stimulus modalities, including intensity, color, orientation, and motion, and shown to make accurate quantitative predictions of various psychophysics of human saliency for both static and motion stimuli. These include some classical nonlinearities of orientation and motion saliency and a Weber law that governs various types of saliency asymmetries. 
The discriminant saliency detectors are also applied to various saliency problems of interest in computer vision, including the prediction of human eye fixations on natural scenes, motion-based saliency in the presence of ego-motion, and background subtraction in highly dynamic scenes. In all cases, the discriminant saliency detectors outperform previously proposed methods from both the saliency and the general computer vision literatures. --- paper_title: Segmentation of infrared image using fuzzy thresholding via local region analysis paper_content: According to the characteristic of infrared images, a target extraction method based on fuzzy thresholding is proposed for vehicle target images. A membership function composed of the modified bi-modality and the inverse S adjacency is used. In order to meet the requirement of real time, the bi-modality measure is calculated only in the boundary regions so that the execution time can be greatly reduced. The inverse S adjacency function is used to take full advantage of the position information of the pixels in the reference region. Our method is processed as follows. First, we calculate the membership values consisting of the modified bi-modality and the new adjacency. And then we perform the fuzzy thresholding and the post-processing to extract the precise target from the background. In order to evaluate the performance of our method, the proposed method is compared with other segmentation methods. The results of experiments prove that the presented algorithm is fast and has a good segmentation performance. --- paper_title: A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection paper_content: The detection of moving objects from stationary cameras is usually approached by background subtraction, i.e. by constructing and maintaining an up-to-date model of the background and detecting moving objects as those that deviate from such a model. We adopt a previously proposed approach to background subtraction based on self-organization through artificial neural networks, that has been shown to well cope with several of the well known issues for background maintenance. Here, we propose a spatial coherence variant to such approach to enhance robustness against false detections and formulate a fuzzy model to deal with decision problems typically arising when crisp settings are involved. We show through experimental results and comparisons that higher accuracy values can be reached for color video sequences that represent typical situations critical for moving object detection. --- paper_title: Motion saliency detection using low-rank and sparse decomposition paper_content: Motion saliency detection has an important impact on further video processing tasks, such as video segmentation, object recognition and adaptive compression. Different to image saliency, in videos, moving regions (objects) catch human beings' attention much easier than static ones. Based on this observation, we propose a novel method of motion saliency detection, which makes use of the low-rank and sparse decomposition on video slices along X-T and Y-T planes to achieve the goal, i.e. separating foreground moving objects from backgrounds. In addition, we adopt the spatial information to preserve the completeness of the detected motion objects. In virtue of adaptive threshold selection and efficient noise elimination, the proposed approach is suitable for different video scenes, and robust to low resolution and noisy cases. 
The experiments demonstrate the performance of our method compared with the state of the art. --- paper_title: Fuzzy statistical modeling of dynamic backgrounds for moving object detection in infrared videos paper_content: Mixture of Gaussians (MOG) is the most popular technique for background modeling and presents some limitations when dynamic changes occur in the scene like camera jitter and movement in the background. Furthermore, the MOG is initialized using a training sequence which may be noisy and/or insufficient to model correctly the background. All these critical situations generate false classification in the foreground detection mask due to the related uncertainty. In this context, we present a background modeling algorithm based on Type-2 Fuzzy Mixture of Gaussians which is particularly suitable for infrared videos. The use of the Type-2 Fuzzy Set Theory allows to take into account the uncertainty. The results using the OTCBVS benchmark/test dataset videos show the robustness of the proposed method in presence of dynamic backgrounds. --- paper_title: Method for building recognition from FLIR images paper_content: Herein, a method for building recognition in forward looking infrared (FLIR) images with cluttered background is presented, which is composed of several sub-procedures. In the first phase, a three-dimensional (3-D) target model is generated and the model features are predicted based on the sensor's perspective relative to the 3-D target model. In the second phase of the process, multi-scale structuring elements are generated pertaining to the 3-D target model and flight trajectory. Structuring elements for the infrared image are selected by a look-up-table approach based on the parameters of the sensor's view, and the morphology-based filters can respond to the size and shape of the target to suppress the cluttered background. In the following step, iterative segmentation of the background-suppressed image is used to obtain regions of interest (ROIs), and feature extraction and matching retain the ROIs that are closest to the predicted features. Lastly, the target is identified by fusing line features with multi-frame integration. Experimental results show that the proposed algorithm can precisely recognize the target in FLIR images with a complicated background. --- paper_title: Using histograms to detect and track objects in color video paper_content: Two methods of detecting and tracking objects in color video are presented. Color and edge histograms are explored as ways to model the background and foreground of a scene. The two types of methods are evaluated to determine their speed, accuracy and robustness. Histogram comparison techniques are used to compute similarity values that aid in identifying regions of interest. Foreground objects are detected and tracked by dividing each video frame into smaller regions (cells) and comparing the histogram of each cell to the background model. Results are presented for video sequences of human activity. --- paper_title: A Bayesian computer vision system for modeling human interactions paper_content: We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system deals in particularly with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth.
Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. Finally, a synthetic "Alife-style" training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training. --- paper_title: Efficient background subtraction for real-time tracking in embedded camera networks paper_content: Background subtraction is often the first step of many computer vision applications. For a background subtraction method to be useful in embedded camera networks, it must be both accurate and computationally efficient because of the resource constraints on embedded platforms. This makes many traditional background subtraction algorithms unsuitable for embedded platforms because they use complex statistical models to handle subtle illumination changes. These models make them accurate, but their computational requirements are often too high for embedded platforms. In this paper, we propose a new background subtraction method which is both accurate and computationally efficient. The key idea is to use compressive sensing to reduce the dimensionality of the data while retaining most of the information. By using multiple datasets, we show that the accuracy of our proposed background subtraction method is comparable to that of the traditional background subtraction methods. Moreover, a real implementation on an embedded camera platform shows that our proposed method is at least 5 times faster, and consumes significantly less energy and memory resources than the conventional approaches. Finally, we demonstrate the feasibility of the proposed method by implementing and evaluating an end-to-end real-time embedded camera network target tracking application. --- paper_title: Independent Component Analysis-Based Background Subtraction for Indoor Surveillance paper_content: In video surveillance, detection of moving objects from an image sequence is very important for target tracking, activity recognition, and behavior understanding. Background subtraction is a very popular approach for foreground segmentation in a still scene image. In order to compensate for illumination changes, a background model updating process is generally adopted, which leads to extra computation time. In this paper, we propose a fast background subtraction scheme using independent component analysis (ICA), aimed particularly at indoor surveillance for possible applications in home-care and health-care monitoring, where moving and motionless persons must be reliably detected. The proposed method is as computationally fast as the simple image difference method, and yet is highly tolerant to changes in room lighting. The proposed background subtraction scheme involves two stages, one for training and the other for detection. In the training stage, we first propose an ICA model that directly measures statistical independence based on estimates of the joint and marginal probability density functions obtained from relative frequency distributions. The proposed ICA model can effectively separate two highly correlated images.
In the detection stage, the trained de-mixing vector is used to separate the foreground in a scene image with respect to the reference background image. Two sets of indoor examples that involve switching on/off room lights and opening/closing a door are demonstrated in the experiments. The performance of the proposed ICA model for background subtraction is also compared with that of the well-known FastICA algorithm. --- paper_title: Complex Background Subtraction by Pursuing Dynamic Spatio-Temporal Models paper_content: Although it has been widely discussed in video surveillance, background subtraction is still an open problem in the context of complex scenarios, e.g., dynamic backgrounds, illumination variations, and indistinct foreground objects. To address these challenges, we propose an effective background subtraction method by learning and maintaining an array of dynamic texture models within the spatio-temporal representations. At any location of the scene, we extract a sequence of regular video bricks, i.e., video volumes spanning over both spatial and temporal domain. The background modeling is thus posed as pursuing subspaces within the video bricks while adapting the scene variations. For each sequence of video bricks, we pursue the subspace by employing the auto regressive moving average model that jointly characterizes the appearance consistency and temporal coherence of the observations. During online processing, we incrementally update the subspaces to cope with disturbances from foreground objects and scene changes. In the experiments, we validate the proposed method in several complex scenarios, and show superior performances over other state-of-the-art approaches of background subtraction. The empirical studies of parameter setting and component analysis are presented as well. --- paper_title: Background modeling and subtraction of dynamic scenes paper_content: Background modeling and subtraction is a core component in motion analysis. The central idea behind such module is to create a probabilistic representation of the static scene that is compared with the current input to perform subtraction. Such approach is efficient when the scene to be modeled refers to a static structure with limited perturbation. In this paper, we address the problem of modeling dynamic scenes where the assumption of a static background is not valid. Waving trees, beaches, escalators, natural scenes with rain or snow are examples. Inspired by the work proposed by Doretto et al. (2003), we propose an on-line auto-regressive model to capture and predict the behavior of such scenes. Towards detection of events we introduce a new metric that is based on a state-driven comparison between the prediction and the actual frame. Promising results demonstrate the potentials of the proposed framework. --- paper_title: Wallflower: principles and practice of background maintenance paper_content: Background maintenance is a frequent element of video surveillance systems. We develop Wallflower, a three-component system for background maintenance: the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background; the region-level component fills in homogeneous regions of foreground objects; and the frame-level component detects sudden, global changes in the image and swaps in better approximations of the background. We compare our system with 8 other background subtraction algorithms. 
Wallflower is shown to outperform previous algorithms by handling a greater set of the difficult situations that can occur. Finally, we analyze the experimental results and propose normative principles for background maintenance. --- paper_title: Spatiotemporal Saliency Detection and Its Applications in Static and Dynamic Scenes paper_content: This paper presents a novel method for detecting salient regions in both images and videos based on a discriminant center-surround hypothesis that the salient region stands out from its surroundings. To this end, our spatiotemporal approach combines the spatial saliency, computed as distances between ordinal signatures of edge and color orientations obtained from the center and the surrounding regions, with the temporal saliency, computed as the sum of absolute differences between temporal gradients of the center and the surrounding regions. Our proposed method is computationally efficient, reliable, and simple to implement, and thus it can be easily extended to various applications such as image retargeting and moving object extraction. The proposed method has been extensively tested and the results show that the proposed scheme is effective in detecting saliency compared to various state-of-the-art methods. --- paper_title: Background subtraction for static & moving camera paper_content: Background subtraction is one of the most commonly used components in machine vision systems. Despite the numerous algorithms proposed in the literature and used in practical applications, key challenges remain in designing a single system that can handle diverse environmental conditions. In this paper we present the Multiple Background Model based Background Subtraction Algorithm as such a candidate. The algorithm was originally designed for handling sudden illumination changes. The new version has been refined with changes at different steps of the process, specifically in terms of selecting an optimal color space, clustering the training images for the Background Model Bank, and setting a parameter for each channel of the color space. This has extended the algorithm's applicability to a wide variety of challenges associated with change detection, including camera jitter, dynamic background, intermittent object motion, shadows, bad weather, thermal and night videos, etc. Comprehensive evaluation demonstrates the superiority of the algorithm against the state of the art. --- paper_title: A visual sensing platform for creating a smarter multi-modal marine monitoring network paper_content: Demands from various scientific and management communities along with legislative requirements at national and international levels have led to a need for innovative research into large-scale, low-cost, reliable monitoring of our marine and freshwater environments. In this paper we demonstrate the benefits of a multi-modal approach to monitoring and how an in-situ sensor network can be enhanced with the use of contextual image data. We provide an outline of the deployment of a visual sensing system at a busy port and the need for monitoring shipping traffic at the port. Subsequently we present an approach for detecting ships in a challenging image dataset and discuss how this can help to create an intelligent marine monitoring network.
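Most of the background-maintenance entries above (Wallflower, the multiple-background-model bank, the self-organizing and fuzzy variants) build on the same per-pixel statistical subtraction step and differ mainly in how the background model is represented and how the raw mask is post-processed. A minimal sketch of that common baseline, using OpenCV's adaptive Gaussian-mixture subtractor; the file name, thresholds, and kernel size are illustrative assumptions, not parameters taken from any of the cited papers:

```python
import cv2
import numpy as np

# Adaptive per-pixel Gaussian-mixture background subtraction (the common baseline).
cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # 255 = foreground, 127 = shadow, 0 = background
    mask[mask == 127] = 0                 # discard pixels flagged as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((3, 3), np.uint8))   # suppress isolated noise pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving_regions = [c for c in contours if cv2.contourArea(c) > 100]

cap.release()
```

The cited methods replace or augment the per-pixel mixture model (with texture histograms, model banks, self-organizing maps, or fuzzy memberships) and apply more sophisticated post-processing than the single morphological opening shown here.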
--- paper_title: Spatio-temporal patches for night background modeling by subspace learning paper_content: In this paper, a novel background model on spatio-temporal patches is introduced for video surveillance, especially for night outdoor scene, where extreme lighting conditions often cause troubles. The spatio-temporal patch, called brick, is presented to simultaneously capture spatio-temporal information in surveillance video. The set of bricks of a given background patch, under all possible lighting conditions, lies in a low-dimensional subspace, which can be learned by online subspace learning. The proposed method can efficiently model the background and detect the appearance and motion variance caused by foreground. Experimental results on real data show that the proposed method is insensitive to dramatic lighting changes and achieves superior performance to two classical methods. --- paper_title: Robust principal component analysis for computer vision paper_content: Principal Component Analysis (PCA) has been widely used for the representation of shape, appearance and motion. One drawback of typical PCA methods is that they are least squares estimation techniques and hence fail to account for "outliers" which are common in realistic training sets. In computer vision applications, outliers typically occur within a sample (image) due to pixels that are corrupted by noise, alignment errors, or occlusion. We review previous approaches for making PCA robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Principal Component Analysis (RPCA) and describe a robust M-estimation algorithm for learning linear multi-variate representations of high dimensional data such as images. Quantitative comparisons with traditional PCA and previous robust algorithms illustrate the benefits of RPCA when outliers are present. Details of the algorithm are described and a software implementation is being made publically available. --- paper_title: On the plausibility of the discriminant center-surround hypothesis for visual saliency paper_content: It has been suggested that saliency mechanisms play a role in perceptual organization. This work evaluates the plausibility of a recently proposed generic principle for visual saliency: that all saliency decisions are optimal in a decision-theoretic sense. The discriminant saliency hypothesis is combined with the classical assumption that bottom-up saliency is a center-surround process to derive a (decision-theoretic) optimal saliency architecture. Under this architecture, the saliency of each image location is equated to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround. The optimal saliency detector is derived for various stimulus modalities, including intensity, color, orientation, and motion, and shown to make accurate quantitative predictions of various psychophysics of human saliency for both static and motion stimuli. These include some classical nonlinearities of orientation and motion saliency and a Weber law that governs various types of saliency asymmetries. The discriminant saliency detectors are also applied to various saliency problems of interest in computer vision, including the prediction of human eye fixations on natural scenes, motion-based saliency in the presence of ego-motion, and background subtraction in highly dynamic scenes. 
In all cases, the discriminant saliency detectors outperform previously proposed methods from both the saliency and the general computer vision literatures. --- paper_title: Background subtraction under sudden illumination change paper_content: In this paper, we propose a Multiple Background Model based Background Subtraction (MB2S) algorithm that is robust against sudden illumination changes in indoor environment. It uses multiple background models of expected illumination changes followed by both pixel and frame based background subtraction on both RGB and YCbCr color spaces. The masks generated after processing these input images are then combined in a framework to classify background and foreground pixels. Evaluation of proposed approach on publicly available test sequences show higher precision and recall than other state-of-the-art algorithms. --- paper_title: Sparse signal recovery using Markov Random Fields paper_content: Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms. --- paper_title: A Biologically Inspired Vision-Based Approach for Detecting Multiple Moving Objects in Complex Outdoor Scenes paper_content: In the human brain, independent components of optical flows from the medial superior temporal area are speculated for motion cognition. Inspired by this hypothesis, a novel approach combining independent component analysis (ICA) with principal component analysis (PCA) is proposed in this paper for multiple moving objects detection in complex scenes—a major real-time challenge as bad weather or dynamic background can seriously influence the results of motion detection. In the proposed approach, by taking advantage of ICA’s capability of separating the statistically independent features from signals, the ICA algorithm is initially employed to analyze the optical flows of consecutive visual image frames. As a result, the optical flows of background and foreground can be approximately separated. Since there are still many disturbances in the foreground optical flows in the complex scene, PCA is then applied to the optical flows of foreground components so that major optical flows corresponding to multiple moving objects can be enhanced effectively and the motions resulted from the changing background and small disturbances are relatively suppressed at the same time. Comparative experimental results with existing popular motion detection methods for challenging imaging sequences demonstrate that our proposed biologically inspired vision-based approach can extract multiple moving objects effectively in a complex scene. --- paper_title: Segmenting foreground objects from a dynamic textured background via a robust Kalman filter paper_content: The algorithm presented aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. 
We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the nonstationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an autoregressive moving average model (ARMA). A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results. --- paper_title: Total Variation Regularized RPCA for Irregularly Moving Object Detection Under Dynamic Background paper_content: Moving object detection is one of the most fundamental tasks in computer vision. Many classic and contemporary algorithms work well under the assumption that backgrounds are stationary and movements are continuous, but degrade sharply when they are used in a real detection system, mainly due to: 1) the dynamic background (e.g., swaying trees, water ripples and fountains in real scenarios, as well as raindrops and snowflakes in bad weather) and 2) the irregular object movement (like lingering objects). This paper presents a unified framework for addressing the difficulties mentioned above, especially the one caused by irregular object movement. This framework separates dynamic background from moving objects using the spatial continuity of foreground, and detects lingering objects using the temporal continuity of foreground. The proposed framework assumes that the dynamic background is sparser than the moving foreground that has smooth boundary and trajectory. We regard the observed video as being made up of the sum of a low-rank static background, a sparse and smooth foreground, and a sparser dynamic background. To deal with this decomposition, i.e., a constrained minimization problem, the augmented Lagrangian multiplier method is employed with the help of the alternating direction minimizing strategy. Extensive experiments on both simulated and real data demonstrate that our method significantly outperforms the state-of-the-art approaches, especially for the cases with dynamic backgrounds and discontinuous movements. --- paper_title: Background subtraction based on cooccurrence of image variations paper_content: This paper presents a novel background subtraction method for detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags. Most methods proposed so far adjust the permissible range of the background image variations according to the training samples of background images. Thus, the detection sensitivity decreases at those pixels having wide permissible ranges. If we can narrow the ranges by analyzing input images, the detection sensitivity can be improved. For this narrowing, we employ the property that image variations at neighboring image blocks have strong correlation, also known as "cooccurrence". This approach is essentially different from chronological background image updating or morphological postprocessing. Experimental results for real images demonstrate the effectiveness of our method. --- paper_title: Foreground detection via robust low rank matrix decomposition including spatio-temporal constraint paper_content: Foreground detection is the first step in video surveillance system to detect moving objects. Robust Principal Components Analysis (RPCA) shows a nice framework to separate moving objects from the background. 
The background sequence is then modeled by a low rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. In this paper, we propose to use a low-rank matrix factorization with IRLS scheme (Iteratively reweighted least squares) and to address in the minimization process the spatial connexity and the temporal sparseness of moving objects (e.g. outliers). Experimental results on the BMC 2012 datasets show the pertinence of the proposed approach. --- paper_title: A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications paper_content: Detection of moving objects in video streams is the first relevant step of information extraction in many computer vision applications. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving objects provides a focus of attention for recognition, classification, and activity analysis, making these later steps more efficient. We propose an approach based on self organization through artificial neural networks, widely applied in human image processing systems and more generally in cognitive science. The proposed approach can handle scenes containing moving backgrounds, gradual illumination variations and camouflage, has no bootstrapping limitations, can include into the background model shadows cast by moving objects, and achieves robust detection for different types of videos taken with stationary cameras. We compare our method with other modeling techniques and report experimental results, both in terms of detection accuracy and in terms of processing speed, for color video sequences that represent typical situations critical for video surveillance systems. --- paper_title: A texture-based method for modeling the background and detecting moving objects paper_content: This paper presents a novel and efficient texture-based method for modeling the background and detecting moving objects from a video sequence. Each pixel is modeled as a group of adaptive local binary pattern histograms that are calculated over a circular region around the pixel. The approach provides us with many advantages compared to the state-of-the-art. Experimental results clearly justify our model. --- paper_title: A Bayesian computer vision system for modeling human interactions paper_content: We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system deals in particularly with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely, HMMs and CHMMs for modeling behaviors and interactions. Finally, a synthetic "Alife-style" training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training. 
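Several of the low-rank-plus-sparse entries in this list (the IRLS-based robust low-rank factorization above, and the principal component pursuit, total-variation-regularized, and structured-sparse variants cited elsewhere in this section) start from the same decomposition of the data matrix. In its standard principal component pursuit form, with M the matrix whose columns are the vectorized video frames:

\[
\min_{L,\,S}\ \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad M = L + S ,
\]

where the nuclear norm \(\|L\|_{*}\) promotes a low-rank background and the \(\ell_{1}\) norm promotes a sparse foreground, with \(\lambda\) a trade-off weight. The cited methods differ mainly in the regularizer placed on S (structured sparsity, total variation, or spatial-connectivity terms) and in the optimization scheme used to solve the resulting problem.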
--- paper_title: Improving background subtraction using Local Binary Similarity Patterns paper_content: Most of the recently published background subtraction methods can still be classified as pixel-based, as most of their analysis is still only done using pixel-by-pixel comparisons. Few others might be regarded as spatial-based (or even spatiotemporal-based) methods, as they take into account the neighborhood of each analyzed pixel. Although the latter types can be viewed as improvements in many cases, most of the methods that have been proposed so far suffer in complexity, processing speed, and/or versatility when compared to their simpler pixel-based counterparts. In this paper, we present an adaptive background subtraction method, derived from the low-cost and highly efficient ViBe method, which uses a spatiotemporal binary similarity descriptor instead of simply relying on pixel intensities as its core component. We then test this method on multiple video sequences and show that by only replacing the core component of a pixel-based method it is possible to dramatically improve its overall performance while keeping memory usage, complexity and speed at acceptable levels for online applications. --- paper_title: Fuzzy statistical modeling of dynamic backgrounds for moving object detection in infrared videos paper_content: Mixture of Gaussians (MOG) is the most popular technique for background modeling and presents some limitations when dynamic changes occur in the scene like camera jitter and movement in the background. Furthermore, the MOG is initialized using a training sequence which may be noisy and/or insufficient to model correctly the background. All these critical situations generate false classification in the foreground detection mask due to the related uncertainty. In this context, we present a background modeling algorithm based on Type-2 Fuzzy Mixture of Gaussians which is particularly suitable for infrared videos. The use of the Type-2 Fuzzy Set Theory allows to take into account the uncertainty. The results using the OTCBVS benchmark/test dataset videos show the robustness of the proposed method in presence of dynamic backgrounds. --- paper_title: Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures paper_content: A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time- series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes. 
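The dynamic-texture entries above (and the layered and mixture variants cited below) model a video clip as the output of a linear dynamical system, x_{t+1} = A x_t + v_t, y_t = C x_t + w_t. A minimal sketch of the standard closed-form fit for a single dynamic texture (PCA of the frames for the observation matrix, least squares for the state transition), assuming a short grayscale clip already loaded as a NumPy array; the noise covariances and the EM machinery of the mixture model are omitted:

```python
import numpy as np

def fit_dynamic_texture(frames, n_states=10):
    """Fit x_{t+1} = A x_t, y_t = C x_t + mean to a clip of shape (T, H, W)."""
    T = frames.shape[0]
    Y = frames.reshape(T, -1).T.astype(np.float64)   # (pixels, T) data matrix
    mean = Y.mean(axis=1, keepdims=True)
    Y = Y - mean
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                              # observation matrix (appearance basis)
    X = np.diag(s[:n_states]) @ Vt[:n_states]        # state trajectory, shape (n_states, T)
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])         # state transition via least squares
    return A, C, X, mean

def synthesize(A, C, x0, mean, n_frames, frame_shape):
    """Roll the learned system forward to hallucinate new frames."""
    x, out = x0.copy(), []
    for _ in range(n_frames):
        out.append((C @ x + mean[:, 0]).reshape(frame_shape))
        x = A @ x
    return np.stack(out)

# Example use (hypothetical clip): A, C, X, mean = fit_dynamic_texture(clip)
# then synthesize(A, C, X[:, 0], mean, 50, clip.shape[1:]) extrapolates the texture.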
--- paper_title: Motion Competition: A Variational Approach to Piecewise Parametric Motion Segmentation paper_content: We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatio-temporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. ::: ::: Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. ::: ::: We propose two different representations of this motion boundary: an explicit spline-based implementation which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation which allows for the segmentation of an arbitrary number of multiply connected moving objects. ::: ::: Numerical results both for simulated ground truth experiments and for real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion. --- paper_title: Background Subtraction Based on Low-Rank and Structured Sparse Decomposition paper_content: Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos. --- paper_title: Layered Dynamic Textures paper_content: A novel video representation, the layered dynamic texture (LDT), is proposed. The LDT is a generative model, which represents a video as a collection of stochastic layers of different appearance and dynamics. Each layer is modeled as a temporal texture sampled from a different linear dynamical system. 
The LDT model includes these systems, a collection of hidden layer assignment variables (which control the assignment of pixels to layers), and a Markov random field prior on these variables (which encourages smooth segmentations). An EM algorithm is derived for maximum-likelihood estimation of the model parameters from a training video. It is shown that exact inference is intractable, a problem which is addressed by the introduction of two approximate inference procedures: a Gibbs sampler and a computationally efficient variational approximation. The trade-off between the quality of the two approximations and their complexity is studied experimentally. The ability of the LDT to segment videos into layers of coherent appearance and dynamics is also evaluated, on both synthetic and natural videos. These experiments show that the model possesses an ability to group regions of globally homogeneous, but locally heterogeneous, stochastic dynamics currently unparalleled in the literature. --- paper_title: Robust principal component analysis? paper_content: This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces. --- paper_title: Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank Representation paper_content: Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be solved by an alternating algorithm efficiently. We explain the relations between DECOLOR and other sparsity-based methods. 
Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and it can work effectively on a wide range of complex scenarios. --- paper_title: Morphing Active Contours paper_content: A method for deforming curves in a given image to a desired position in a second image is introduced in this paper. The algorithm is based on deforming the first image toward the second one via a partial Differential equation, while tracking the deformation of the curves of interest in the first image with an additional, coupled, partial Differential equation. The tracking is performed by projecting the velocities of the first equation into the second one. In contrast with previous PDE based approaches, both the images and the curves on the frames/slices of interest are used for tracking. The technique can be applied to object tracking and sequential segmentation. The topology of the deforming curve can change, without any special topology handling procedures added to the scheme. This permits for example the automatic tracking of scenes where, due to occlusions, the topology of the objects of interest changes from frame to frame. In addition, this work introduces the concept of projecting velocities to obtain systems of coupled partial Differential equations for image analysis applications. We show examples for object tracking and segmentation of electronic microscopy. We also briefly discuss possible uses of this framework iifor three dimensional morphing. --- paper_title: NLEBS: automatic target detection using a unique nonlinear-enhancement-based system in IR images paper_content: A new automatic target detection method for IR images that only requires information about the size of the targets is described. The proposed nonlinear-enhancement-based system (NLEBS) algorithm is based on a nonlinear enhancement paradigm that increases the contrast of the targets with minimal change in the clutter's contrast. The NLEBS employs several stages of processing, each with a different operational purpose. First, the nonlinear enhancement operation is performed by using an iterative procedure. After binarization, segmentation merging causes each local image region to grow by filling in holes. Then segmentation pruning is applied to remove spurious segments. Finally, a heuristic-based metric is employed to validate the possible targets. The performance of the NLEBS was tested with a large set of IR images. The results of these experiments showed a probability of detection greater than 90% and a false-alarm rate of about 1 false alarm per image. --- paper_title: Segmenting foreground objects from a dynamic textured background via a robust Kalman filter paper_content: The algorithm presented aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the nonstationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an autoregressive moving average model (ARMA). A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results. 
--- paper_title: Joint Motion Segmentation and Background Estimation in Dynamic Scenes paper_content: We propose a joint foreground-background mixture model (FBM) that simultaneously performs background estimation and motion segmentation in complex dynamic scenes. Our FBM consists of a set of location-specific dynamic texture (DT) components, for modeling local background motion, and a set of global DT components, for modeling consistent foreground motion. We derive an EM algorithm for estimating the parameters of the FBM. We also apply spatial constraints to the FBM using a Markov random field grid, and derive a corresponding variational approximation for inference. Unlike existing approaches to background subtraction, our FBM does not require a manually selected threshold or a separate training video. Unlike existing motion segmentation techniques, our FBM can segment foreground motions over complex backgrounds with mixed motions, and detect stopped objects. Since most dynamic scene datasets only contain videos with a single foreground object over a simple background, we develop a new challenging dataset with multiple foreground objects over complex dynamic backgrounds. In experiments, we show that jointly modeling the background and foreground segments with FBM yields significant improvements in accuracy on both background estimation and motion segmentation, compared to state-of-the-art methods. --- paper_title: Bayesian modeling of dynamic scenes for object detection paper_content: Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged and it is asserted that useful correlation exists in intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are directly modeled. We propose a model of the background as a single probability density. Second, temporal persistence is proposed as a detection criterion. Unlike previous approaches to object detection which detect objects by building adaptive models of the background, the foreground is modeled to augment the detection of objects (without explicit tracking), since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework, stressing spatial context as a condition for detecting interesting objects, and the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes. --- paper_title: Using adaptive tracking to classify and monitor activities in a site paper_content: We describe a vision system that monitors activity in a site over extended periods of time. The system uses a distributed set of sensors to cover the site, and an adaptive tracker detects multiple moving objects in the sensors. Our hypothesis is that motion tracking is sufficient to support a range of computations about site activities.
We demonstrate using the tracked motion data to calibrate the distributed sensors, to construct rough site models, to classify detected objects, to learn common patterns of activity for different object classes, and to detect unusual activities. --- paper_title: An unsupervised, online learning framework for moving object detection paper_content: Object detection with a learned classifier has been applied successfully to difficult tasks such as detecting faces and pedestrians. Systems using this approach usually learn the classifier offline with manually labeled training data. We present a framework that learns the classifier online with automatically labeled data for the specific case of detecting moving objects from video. Motion information is used to automatically label training examples collected directly from the live detection task video. An online learner based on the Winnow algorithm incrementally trains a task-specific classifier with these examples. Since learning occurs online and without manual help, it can continue in parallel with detection and adapt the classifier over time. The framework is demonstrated on a person detection task for an office corridor scene. In this task, we use background subtraction to automatically label training examples. After the initial manual effort of implementing the labeling method, the framework runs by itself on the scene video stream to gradually train an accurate detector. --- paper_title: Semi-Supervised On-line Boosting for Robust Tracking paper_content: Recently, on-line adaptation of binary classifiers for tracking have been investigated. On-line learning allows for simple classifiers since only the current view of the object from its surrounding background needs to be discriminiated. However, on-line adaption faces one key problem: Each update of the tracker may introduce an error which, finally, can lead to tracking failure (drifting). The contribution of this paper is a novel on-line semi-supervised boosting method which significantly alleviates the drifting problem in tracking applications. This allows to limit the drifting problem while still staying adaptive to appearance changes. The main idea is to formulate the update process in a semi-supervised fashion as combined decision of a given prior and an on-line classifier. This comes without any parameter tuning. In the experiments, we demonstrate real-time tracking of our SemiBoost tracker on several challenging test sequences where our tracker outperforms other on-line tracking methods. --- paper_title: Incremental Learning for Robust Visual Tracking paper_content: Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. 
The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination. --- paper_title: A Boosted Particle Filter: Multitarget Detection and Tracking paper_content: The problem of tracking a varying number of non-rigid objects has two major difficulties. First, the observation models and target distributions can be highly non-linear and non-Gaussian. Second, the presence of a large, varying number of objects creates complex interactions with overlap and ambiguities. To surmount these difficulties, we introduce a vision system that is capable of learning, detecting and tracking the objects of interest. The system is demonstrated in the context of tracking hockey players using video sequences. Our approach combines the strengths of two successful algorithms: mixture particle filters and Adaboost. The mixture particle filter [17] is ideally suited to multi-target tracking as it assigns a mixture component to each player. The crucial design issues in mixture particle filters are the choice of the proposal distribution and the treatment of objects leaving and entering the scene. Here, we construct the proposal distribution using a mixture model that incorporates information from the dynamic models of each player and the detection hypotheses generated by Adaboost. The learned Adaboost proposal distribution allows us to quickly detect players entering the scene, while the filtering process enables us to keep track of the individual players. The result of interleaving Adaboost with mixture particle filters is a simple, yet powerful and fully automatic multiple object tracking system. --- paper_title: EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation paper_content: This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems and we reformulate the reconstruction problem as one of robust estimation. Second we define a “subspace constancy assumption” that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this “EigenTracking” technique to track and recognize the gestures of a moving hand. 
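The subspace-tracking entries above (EigenTracking and the incremental subspace learner) score candidate image windows by how well a learned eigen-appearance basis reconstructs them. A minimal sketch of that scoring step, assuming equally sized, vectorized grayscale patches; the robust error norms, affine warps, incremental mean update, and forgetting factor that the cited methods add are omitted:

```python
import numpy as np

def learn_appearance_basis(patches, n_components=16):
    """patches: (N, D) array of vectorized training views of the target."""
    mean = patches.mean(axis=0)
    U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:n_components]           # rows of Vt are the eigen-appearance vectors

def reconstruction_error(candidate, mean, basis):
    """Distance of a vectorized candidate patch from the learned appearance subspace."""
    centered = candidate - mean
    coeffs = basis @ centered                 # project onto the subspace
    residual = centered - basis.T @ coeffs    # component lying outside the subspace
    return float(np.sum(residual ** 2))

# Tracking then reduces to evaluating reconstruction_error over candidate windows
# sampled around the previous target location and keeping the minimizer.
```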
--- paper_title: Motion Competition: A Variational Approach to Piecewise Parametric Motion Segmentation paper_content: We present a novel variational approach for segmenting the image plane into a set of regions of parametric motion on the basis of two consecutive frames from an image sequence. Our model is based on a conditional probability for the spatio-temporal image gradient, given a particular velocity model, and on a geometric prior on the estimated motion field favoring motion boundaries of minimal length. ::: ::: Exploiting the Bayesian framework, we derive a cost functional which depends on parametric motion models for each of a set of regions and on the boundary separating these regions. The resulting functional can be interpreted as an extension of the Mumford-Shah functional from intensity segmentation to motion segmentation. In contrast to most alternative approaches, the problems of segmentation and motion estimation are jointly solved by continuous minimization of a single functional. Minimizing this functional with respect to its dynamic variables results in an eigenvalue problem for the motion parameters and in a gradient descent evolution for the motion discontinuity set. ::: ::: We propose two different representations of this motion boundary: an explicit spline-based implementation which can be applied to the motion-based tracking of a single moving object, and an implicit multiphase level set implementation which allows for the segmentation of an arbitrary number of multiply connected moving objects. ::: ::: Numerical results both for simulated ground truth experiments and for real-world sequences demonstrate the capacity of our approach to segment objects based exclusively on their relative motion. --- paper_title: Motion-based background subtraction using adaptive kernel density estimation paper_content: Background modeling is an important component of many vision systems. Existing work in the area has mostly addressed scenes that consist of static or quasi-static structures. When the scene exhibits a persistent dynamic behavior in time, such an assumption is violated and detection performance deteriorates. In this paper, we propose a new method for the modeling and subtraction of such scenes. Towards the modeling of the dynamic characteristics, optical flow is computed and utilized as a feature in a higher dimensional space. Inherent ambiguities in the computation of features are addressed by using a data-dependent bandwidth for density estimation using kernels. Extensive experiments demonstrate the utility and performance of the proposed approach. --- paper_title: A Unified Algebraic Approach to 2-D and 3-D Motion Segmentation paper_content: We present an analytic solution to the problem of estimating multiple 2-D and 3-D motion models from two-view correspondences or optical flow. The key to our approach is to view the estimation of multiple motion models as the estimation of a single multibody motion model. This is possible thanks to two important algebraic facts. First, we show that all the image measurements, regardless of their associated motion model, can be fit with a real or complex polynomial. Second, we show that the parameters of the motion model associated with an image measurement can be obtained from the derivatives of the polynomial at the measurement. This leads to a novel motion segmentation algorithm that applies to most of the two-view motion models adopted in computer vision. 
Our experiments show that the proposed algorithm outperforms existing algebraic methods in terms of efficiency and robustness, and provides a good initialization for iterative techniques, such as EM, which is strongly dependent on correct initialization. --- paper_title: Piecewise-Smooth Dense Optical Flow via Level Sets paper_content: We propose a new algorithm for dense optical flow computation. Dense optical flow schemes are challenged by the presence of motion discontinuities. In state of the art optical flow methods, over-smoothing of flow discontinuities accounts for most of the error. A breakthrough in the performance of optical flow computation has recently been achieved by Brox et~al. Our algorithm embeds their functional within a two phase active contour segmentation framework. Piecewise-smooth flow fields are accommodated and flow boundaries are crisp. Experimental results show the superiority of our algorithm with respect to alternative techniques. We also study a special case of optical flow computation, in which the camera is static. In this case we utilize a known background image to separate the moving elements in the sequence from the static elements. Tests with challenging real world sequences demonstrate the performance gains made possible by incorporating the static camera assumption in our algorithm. --- paper_title: Detecting salient motion by accumulating directionally-consistent flow paper_content: Motion detection can play an important role in many vision tasks. Yet image motion can arise from "uninteresting" events as well as interesting ones. In this paper, salient motion is defined as motion that is likely to result from a typical surveillance target (e.g., a person or vehicle traveling with a sense of direction through a scene) as opposed to other distracting motions (e.g., the scintillation of specularities on water, the oscillation of vegetation in the wind). We propose an algorithm for detecting this salient motion that is based on intermediate-stage vision integration of optical flow. Empirical results are presented that illustrate the applicability of the proposed methods to real-world video. Unlike many motion detection schemes, no knowledge about expected object size or shape is necessary for rejecting the distracting motion. --- paper_title: The Robust Estimation of Multiple Motions: Parametric and Piecewise-Smooth Flow Fields paper_content: Most approaches for estimating optical flow assume that, within a finite image region, only a single motion is present. Thissingle motion assumptionis violated in common situations involving transparency, depth discontinuities, independently moving objects, shadows, and specular reflections. To robustly estimate optical flow, the single motion assumption must be relaxed. This paper presents a framework based onrobust estimationthat addresses violations of the brightness constancy and spatial smoothness assumptions caused by multiple motions. We show how therobust estimation frameworkcan be applied to standard formulations of the optical flow problem thus reducing their sensitivity to violations of their underlying assumptions. The approach has been applied to three standard techniques for recovering optical flow: area-based regression, correlation, and regularization with motion discontinuities. This paper focuses on the recovery of multiple parametric motion models within a region, as well as the recovery of piecewise-smooth flow fields, and provides examples with natural and synthetic image sequences. 
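Several of the flow-based entries above detect salient motion by requiring the flow at a pixel to remain directionally consistent over time rather than oscillate, as wind-blown vegetation or rippling water does. A minimal sketch of that accumulation idea on top of a standard dense-flow routine (OpenCV's Farneback estimator is used here only as a stand-in for the cited robust and piecewise-smooth estimators); the threshold is an arbitrary illustrative value:

```python
import cv2
import numpy as np

def salient_motion_mask(gray_frames, mag_thresh=5.0):
    """gray_frames: sequence of single-channel uint8 frames. Flow vectors that keep
    pointing the same way build up a large resultant; oscillating ones cancel out."""
    acc = np.zeros(gray_frames[0].shape + (2,), dtype=np.float32)
    for prev, curr in zip(gray_frames[:-1], gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += flow                                   # directionally consistent flow adds up
    magnitude = np.linalg.norm(acc, axis=2)
    return magnitude > mag_thresh                     # boolean salient-motion mask
```

Normalizing each frame's flow vectors before accumulation, so that only direction rather than speed is integrated, is closer in spirit to the salient-motion entry above; the version shown keeps the sketch as short as possible.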
--- paper_title: Background models for tracking objects in water paper_content: This paper presents a novel background analysis technique to enable robust tracking of objects in water-based scenarios. Current pixel-wise statistical background models support automatic change detection in many outdoor situations, but are limited to background changes which can be modeled via a set of per-pixel spatially uncorrelated processes. In water-based scenarios, waves caused by wind or by moving vessels (wakes) form highly correlated moving patterns that confuse traditional background analysis models. In this work we introduce a framework that explicitly models this type of background variation. The framework combines the output of a statistical background model with localized optical flow analysis to produce two motion maps. In the final stage we apply object-level fusion to filter out moving regions that are most likely caused by wave clutter. A tracking algorithm can now handle the resulting set of objects. --- paper_title: A Benchmark for the Comparison of 3-D Motion Segmentation Algorithms paper_content: Over the past few years, several methods for segmenting a scene containing multiple rigidly moving objects have been proposed. However, most existing methods have been tested on a handful of sequences only, and each method has been often tested on a different set of sequences. Therefore, the comparison of different methods has been fairly limited. In this paper, we compare four 3D motion segmentation algorithms for affine cameras on a benchmark of 155 motion sequences of checkerboard, traffic, and articulated scenes. --- paper_title: Object segmentation in video: A hierarchical variational approach for turning point trajectories into dense regions paper_content: Point trajectories have emerged as a powerful means to obtain high quality and fully unsupervised segmentation of objects in video shots. They can exploit the long term motion difference between objects, but they tend to be sparse due to computational reasons and the difficulty in estimating motion in homogeneous areas. In this paper we introduce a variational method to obtain dense segmentations from such sparse trajectory clusters. Information is propagated with a hierarchical, nonlinear diffusion process that runs in the continuous domain but takes superpixels into account.
We show that this process raises the density from 3% to 100% and even increases the average precision of labels. --- paper_title: Object segmentation by long term analysis of point trajectories paper_content: Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting. --- paper_title: Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank Representation paper_content: Object detection is a fundamental step for automated video analysis in many vision applications. Object detection in a video is usually performed by object detectors or background subtraction techniques. Often, an object detector requires manually labeled examples to train a binary classifier, while background subtraction needs a training sequence that contains no objects to build a background model. To automate the analysis, object detection without a separate training phase becomes a critical task. People have tried to tackle this task by using motion information. But existing motion-based methods are usually limited when coping with complex scenarios such as nonrigid motion and dynamic background. In this paper, we show that the above challenges can be addressed in a unified framework named DEtecting Contiguous Outliers in the LOw-rank Representation (DECOLOR). This formulation integrates object detection and background learning into a single process of optimization, which can be solved by an alternating algorithm efficiently. We explain the relations between DECOLOR and other sparsity-based methods. Experiments on both simulated data and real sequences demonstrate that DECOLOR outperforms the state-of-the-art approaches and it can work effectively on a wide range of complex scenarios. --- paper_title: Joint Motion Segmentation and Background Estimation in Dynamic Scenes paper_content: We propose a joint foreground-background mixture model (FBM) that simultaneously performs background estimation and motion segmentation in complex dynamic scenes. Our FBM consist of a set of location-specific dynamic texture (DT) components, for modeling local background motion, and set of global DT components, for modeling consistent foreground motion. We derive an EM algorithm for estimating the parameters of the FBM. We also apply spatial constraints to the FBM using an Markov random field grid, and derive a corresponding variational approximation for inference. Unlike existing approaches to background subtraction, our FBM does not require a manually selected threshold or a separate training video. Unlike existing motion segmentation techniques, our FBM can segment foreground motions over complex background with mixed motions, and detect stopped objects. 
Since most dynamic scene datasets only contain videos with a single foreground object over a simple background, we develop a new challenging dataset with multiple foreground objects over complex dynamic backgrounds. In experiments, we show that jointly modeling the background and foreground segments with FBM yields significant improvements in accuracy on both background estimation and motion segmentation, compared to state-of-the-art methods. --- paper_title: Forward-Backward Error: Automatic Detection of Tracking Failures paper_content: This paper proposes a novel method for tracking failure detection. The detection is based on the Forward-Backward error, i.e. the tracking is performed forward and backward in time and the discrepancies between these two trajectories are measured. We demonstrate that the proposed error enables reliable detection of tracking failures and selection of reliable trajectories in video sequences. We demonstrate that the approach is complementary to commonly used normalized cross-correlation (NCC). Based on the error, we propose a novel object tracker called Median Flow. State-of-the-art performance is achieved on challenging benchmark video sequences which include non-rigid objects. --- paper_title: Tracking-Learning-Detection paper_content: This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of “experts”: (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches. --- paper_title: Adaptive Color Attributes for Real-Time Visual Tracking paper_content: Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features when combined with luminance have shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provides superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. 
Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24 % in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second. --- paper_title: Automatic classification of ships from infrared (FLIR) images paper_content: The aim of the research presented in this paper is to find out whether automatic classification of ships from Forward Looking InfraRed (FLIR) images is feasible in maritime patrol aircraft. An image processing system has been developed for this task. It includes iterative shading correction and a top hat filter for the detection of the ship. It uses a segmentation algorithm based on the gray value distribution of the waves and the Hough transform to locate the waterline of the ship. A model has been developed to relate the size of the ship and the angle between waterline and horizon in image coordinates, to the real-life size and aspect angle of the ship. The model uses the camera elevation and distance to the ship. A data set was used consisting of two civil ships and four different frigates under different aspect angles and distances. From each of these ship images, 32 features were calculated, among which are the apparent size, the location of the hot spot and of the superstructures of the ship, and moment invariant functions. All features were used in feature selection processing using both the Mahalanobis and nearest neighbor (NN) criteria in forward, backward, and branch & bound feature selection procedures, to find the most significant features. Classification has been performed using a k-NN, a linear, and a quadratic classifier. In particular, using the 1-NN classifier, good results were achieved using a two-step classification algorithm. --- paper_title: Detection and tracking of moving objects in a maritime environment using level set with shape priors paper_content: Over the years, maritime surveillance has become increasingly important due to the recurrence of piracy. While surveillance has traditionally been a manual task using crew members in lookout positions on parts of the ship, much work is being done to automate this task using digital cameras coupled with a computer that uses image processing techniques that intelligently track object in the maritime environment. One such technique is level set segmentation which evolves a contour to objects of interest in a given image. This method works well but gives incorrect segmentation results when a target object is corrupted in the image. This paper explores the possibility of factoring in prior knowledge of a ship’s shape into level set segmentation to improve results, a concept that is unaddressed in maritime surveillance problem. It is shown that the developed video tracking system outperforms level set-based systems that do not use prior shape knowledge, working well even where these systems fail. --- paper_title: Argos - a Video Surveillance System for boat Traffic Monitoring in Venice paper_content: Visual surveillance in dynamic scenes is currently one of the most active research topics in computer vision, many existing applications are available. However, difficulties in realizing effective video surveillance systems that are robust to the many different conditions that arise in real environments, make the actual deployment of such systems very challenging. 
In this article, we present a real, unique and pioneer video surveillance system for boat traffic monitoring, ARGOS. The system runs continuously 24 hours a day, 7 days a week, day and night in the city of Venice (Italy) since 2007 and it is able to build a reliable background model of the water channel and to track the boats navigating the channel with good accuracy in real-time. A significant experimental evaluation, reported in this article, has been performed in order to assess the real performance of the system. --- paper_title: A novel framework for making dominant point detection methods non-parametric paper_content: Most dominant point detection methods require heuristically chosen control parameters. One of the commonly used control parameter is maximum deviation. This paper uses a theoretical bound of the maximum deviation of pixels obtained by digitization of a line segment for constructing a general framework to make most dominant point detection methods non-parametric. The derived analytical bound of the maximum deviation can be used as a natural bench mark for the line fitting algorithms and thus dominant point detection methods can be made parameter-independent and non-heuristic. Most methods can easily incorporate the bound. This is demonstrated using three categorically different dominant point detection methods. Such non-parametric approach retains the characteristics of the digital curve while providing good fitting performance and compression ratio for all the three methods using a variety of digital, non-digital, and noisy curves. --- paper_title: A Novel Hierarchical Method of Ship Detection from Spaceborne Optical Image Based on Shape and Texture Features paper_content: Ship detection from remote sensing imagery is very important, with a wide array of applications in areas such as fishery management, vessel traffic services, and naval warfare. This paper focuses on the issue of ship detection from spaceborne optical images (SDSOI). Although advantages of synthetic-aperture radar (SAR) result in that most of current ship detection approaches are based on SAR images, disadvantages of SAR still exist, such as the limited number of SAR sensors, the relatively long revisit cycle, and the relatively lower resolution. With the increasing number of and the resulting improvement in continuous coverage of the optical sensors, SDSOI can partly overcome the shortcomings of SAR-based approaches and should be investigated to help satisfy the requirements of real-time ship monitoring. In SDSOI, several factors such as clouds, ocean waves, and small islands affect the performance of ship detection. This paper proposes a novel hierarchical complete and operational SDSOI approach based on shape and texture features, which is considered a sequential coarse-to-fine elimination process of false alarms. First, simple shape analysis is adopted to eliminate evident false candidates generated by image segmentation with global and local information and to extract ship candidates with missing alarms as low as possible. Second, a novel semisupervised hierarchical classification approach based on various features is presented to distinguish between ships and nonships to remove most false alarms. 
Besides a complete and operational SDSOI approach, the other contributions of our approach include the following three aspects: 1) it classifies ship candidates by using their class probability distributions rather than the direct extracted features; 2) the relevant classes are automatically built by the samples' appearances and their feature attribute in a semisupervised mode; and 3) besides commonly used shape and texture features, a new texture operator, i.e., local multiple patterns, is introduced to enhance the representation ability of the feature set in feature extraction. Experimental results of SDSOI on a large image set captured by optical sensors from multiple satellites show that our approach is effective in distinguishing between ships and nonships, and obtains a satisfactory ship detection performance. --- paper_title: Ship detection by different data selection templates and multilayer perceptrons from incoherent maritime radar data paper_content: This study presents a novel way for detecting ships in sea clutter. For this purpose, the information contained in the Radar images obtained by an incoherent X-band maritime Radar is used. The ship detection is solved by feedforward artificial neural networks, such as the multilayer perceptrons (MLPs). In a first approach, the MLP processes the information extracted from the Radar images using the commonly used horizontal and vertical selection templates. But, if a suitable combination of these selection templates is done, better detection performances are achieved. So, two improved selection templates are proposed, which are based on cross and plus shapes. All these templates are also applied in a commonly used detector taken as reference, the CA-CFAR detector. Its performance is compared with the one achieved by the proposed detector. This comparison shows how the MLP-based detector outperforms the CA-CFAR detector in all the cases under study. The results are presented in terms of objective (probabilities of false alarm and detection) and subjective estimations of their performances. The improved MLP-based detector also presents low computational cost and high robustness in its performance against changes in the sea conditions and ship properties. --- paper_title: Persistent maritime surveillance using multi-sensor feature association and classification paper_content: In maritime operational scenarios, such as smuggling, piracy, or terrorist threats, it is not only relevant who or what an observed object is, but also where it is now and in the past in relation to other (geographical) objects. In situation and impact assessment, this information is used to determine whether an object is a threat. Single platform (ship, harbor) or single sensor information will not provide all this information. The work presented in this paper focuses on the sensor and object levels that provide a description of currently observed objects to situation assessment. For use of information of objects at higher information levels, it is necessary to have not only a good description of observed objects at this moment, but also from its past. Therefore, currently observed objects have to be linked to previous occurrences. Kinematic features, as used in tracking, are of limited use, as uncertainties over longer time intervals are so large that no unique associations can be made. Features extracted from different sensors (e.g., ESM, EO/IR) can be used for both association and classification. 
Features and classifications are used to associate current objects to previous object descriptions, allowing objects to be described better, and provide position history. In this paper a description of a high level architecture in which such a multi-sensor association is used is described. Results of an assessment of the usability of several features from ESM (from spectrum), EO and IR (shape, contour, keypoints) data for association and classification are shown. © 2012 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE). --- paper_title: Maritime Surveillance: Tracking Ships inside a Dynamic Background Using a Fast Level-Set paper_content: Research highlights: Existing vision-based tracking methods are not suitable for the maritime domain. We derive a suitable tracking method by combining and modifying existing methods. Our method can track tiny targets. Our method is validated on several test sequences and two live field trials. Surveillance in a maritime environment is indispensable in the fight against a wide range of criminal activities, including pirate attacks, unlicensed fishing trailers and human trafficking. Computer vision systems can be a useful aid in the law enforcement process, by for example tracking and identifying moving vessels on the ocean. However, the maritime domain poses many challenges for the design of an effective maritime surveillance system. One such challenge is the tracking of moving vessels in the presence of a moving dynamic background (the ocean). We present techniques that address this particular problem. We use a background subtraction method and employ a real-time approximation of level-set-based curve evolution to demarcate the outline of moving vessels in the ocean. We report promising results on both small and large vessels, based on two field trials. --- paper_title: AUTOMATIC MARITIME SURVEILLANCE WITH VISUAL TARGET DETECTION paper_content: In this paper an automatic maritime surveillance system is presented. Boat detection is performed by means of a Haar-like classifier in order to obtain robustness with respect to targets having very different size, reflections and wakes on the water surface, and apparently motionless boats anchored off the coast. Detection results are filtered over the time in order to reduce the false alarm rate. Experimental results show the effectiveness of the approach with different light conditions and camera positions. The system is able to provide the user a global view adding a visual dimension to AIS data. --- paper_title: Adaptive maritime video surveillance paper_content: Maritime assets such as ports, harbors, and vessels are vulnerable to a variety of near-shore threats such as small-boat attacks. Currently, such vulnerabilities are addressed predominantly by watchstanders and manual video surveillance, which is manpower intensive. Automatic maritime video surveillance techniques are being introduced to reduce manpower costs, but they have limited functionality and performance. For example, they only detect simple events such as perimeter breaches and cannot predict emerging threats. They also generate too many false alerts and cannot explain their reasoning. To overcome these limitations, we are developing the Maritime Activity Analysis Workbench (MAAW), which will be a mixed-initiative real-time maritime video surveillance tool that uses an integrated supervised machine learning approach to label independent and coordinated maritime activities.
It uses the same information to predict anomalous behavior and explain its reasoning; this is an important capability for watchstander training and for collecting performance feedback. In this paper, we describe MAAW's functional architecture, which includes the following pipeline of components: (1) a video acquisition and preprocessing component that detects and tracks vessels in video images, (2) a vessel categorization and activity labeling component that uses standard and relational supervised machine learning methods to label maritime activities, and (3) an ontology-guided vessel and maritime activity annotator to enable subject matter experts (e.g., watchstanders) to provide feedback and supervision to the system. We report our findings from a preliminary system evaluation on river traffic video. --- paper_title: Ship recognition for improved persistent tracking with descriptor localization and compact representations paper_content: For maritime situational awareness, it is important to identify currently observed ships as earlier encounters. For example, past location and behavior analysis are useful to determine whether a ship is of interest in case of piracy and smuggling. It is beneficial to verify this with cameras at a distance, to avoid the costs of bringing an own asset closer to the ship. The focus of this paper is on ship recognition from electro-optical imagery. The main contribution is an analysis of the effect of using the combination of descriptor localization and compact representations. An evaluation is performed to assess the usefulness in persistent tracking, especially for larger intervals (i.e. re-identification of ships). From the evaluation on recordings of imagery, it is estimated how well the system discriminates between different ships. --- paper_title: Recognition of ships for long-term tracking paper_content: Long-term tracking is important for maritime situational awareness to identify currently observed ships as earlier encounters. In cases of, for example, piracy and smuggling, past location and behavior analysis are useful to determine whether a ship is of interest. Furthermore, it is beneficial to make this assessment with sensors (such as cameras) at a distance, to avoid costs of bringing an own asset closer to the ship for verification. The emphasis of the research presented in this paper, is on the use of several feature extraction and matching methods for recognizing ships from electro-optical imagery within different categories of vessels. We compared central moments, SIFT with localization and SIFT with Fisher Vectors. From the evaluation on imagery of ships, an indication of discriminative power is obtained between and within different categories of ships. This is used to assess the usefulness in persistent tracking, from short intervals (track improvement) to larger intervals (re-identifying ships). The result of this assessment on real data is used in a simulation environment to determine how track continuity is improved. The simulations showed that even limited recognition will improve tracking, connecting both tracks at short intervals as well as over several days. --- paper_title: Discriminating small extended targets at sea from clutter and other classes of boats in infrared and visual light imagery paper_content: Operating in a coastal environment, with a multitude of boats of different sizes, detection of small extended targets is only one problem. 
A further difficulty is in discriminating detections of possible threats from alarms due to sea and coastal clutter, and from boats that are neutral for a specific operational task. Adding target features to detections allows filtering out clutter before tracking. Features can also be used to add labels resulting from a classification step. Both will help tracking by facilitating association. Labeling and information from features can be an aid to an operator, or can reduce the number of false alarms for more automatic systems. In this paper we present work on clutter reduction and classification of small extended targets from infrared and visual light imagery. Several methods for discriminating between classes of objects were examined, with an emphasis on less complex techniques, such as rules and decision trees. Similar techniques can be used to discriminate between targets and clutter, and between different classes of boats. Different features are examined that possibly allow discrimination between several classes. Data recordings are used, in infrared and visual light, with a range of targets including rhibs, cabin boats and jet-skis. --- paper_title: 3-D object modeling from 2-D occluding contour correspondences by opti-acoustic stereo imaging paper_content: 3-D object modeling from occluding contours from multi-modal optical/sonar stereo. Imaging same contour by zero-baseline stereo, circumventing dense feature matching. Improve reconstruction accuracy from 3-D contour positions and orientations. Identifying degenerate configurations and simple misalignment rectification. Flexibility to use navigation data or a few feature tracks for 3-D trajectory estimation. Utilizing in situ measurements to build 3-D volumetric object models under variety of turbidity conditions is highly desirable for marine sciences. To address the ineffectiveness of feature-based structure from motion and stereo methods under poor visibility, we explore a multi-modal stereo imaging technique that utilizes coincident optical and forward-scan sonar cameras, a so-called opti-acoustic stereo imaging system. The challenges of establishing dense feature correspondences in either opti-acoustic or low-contrast optical stereo images are avoided, by employing 2-D occluding contour correspondences, namely, the images of 3-D object occluding rims. Collecting opti-acoustic stereo pairs while circling an object, matching 2-D apparent contours in optical and sonar views to construct the 3-D occluding rim, and computing the stereo rig trajectory by opti-acoustic bundle adjustment, we generate registered samples of 3-D surface in a reference coordinate system. A surface interpolation gives the 3-D object model. In addition to the key advantage of utilizing range measurements from sonar, the proposed paradigm requires no assumption about local surface curvature as traditionally made in 3-D shape reconstruction from occluding contours. The reconstruction accuracy is improved by computing both the 3-D positions and local surface normals of sampled contours. We also present (1) a simple calibration method to estimate and correct for small discrepancy from the desired relative stereo pose; (2) an analytical analysis of the degenerate configuration that enables special treatment in mapping (tall) elongated objects with dominantly vertical edges. We demonstrate the performance of our method based on the 3-D surface rendering of certain objects, imaged by an underwater opti-acoustic stereo system.
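Several of the abstracts above rely on shape features (moment invariants, contours, simple descriptors) to classify or re-identify vessels. The sketch below is a generic illustration of that idea, not the pipeline of any cited paper: it computes a Hu-moment signature of a segmented silhouette and matches it to a small gallery by nearest neighbour. It assumes OpenCV 4.x (two return values from findContours) and NumPy; the gallery labels in the usage comment are hypothetical.

```python
# Illustrative sketch only: a simple shape signature for a segmented vessel
# silhouette using Hu moment invariants, matched to a small gallery by
# nearest neighbour.
import cv2
import numpy as np

def hu_signature(binary_mask):
    """Log-scaled Hu moments of the largest contour in a binary mask."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    # Log scaling tames the large dynamic range of the raw invariants.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def nearest_label(query_mask, gallery):
    """Return the gallery label whose stored signature is closest to the query."""
    query = hu_signature(query_mask)
    if query is None:
        return None
    return min(gallery, key=lambda label: np.linalg.norm(query - gallery[label]))

# Hypothetical usage: gallery maps class labels to signatures of known examples.
# gallery = {"rhib": hu_signature(rhib_mask), "cabin_boat": hu_signature(cabin_mask)}
# print(nearest_label(new_detection_mask, gallery))
```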
--- paper_title: EO system concepts in the littoral paper_content: In recent years, operations executed by naval forces have taken place at many different locations. At present, operations against international terrorism and asymmetric warfare in coastal environments are of major concern. In these scenarios, the threat caused by pirates on-board of small surface targets, such as jetskis and fast inshore attack crafts, is increasing. In the littoral environment, the understanding of its complexity and the efficient use of the limited reaction time, are essential for successful operations. Present-day electro-optical sensor suites, also incorporating Infrared Search and Track systems, can be used for varying tasks as detection, classification and identification. By means of passive electrooptical systems, infrared and visible light sensors, improved situational awareness can be achieved. For long range capability, elevated sensor masts and flying platforms are ideally suited for the surveillance task and improve situational awareness. A primary issue is how to incorporate new electro-optical technology and signal processing into the new sensor concepts, to improve system performance. It is essential to derive accurate information from the high spatial resolution imagery created by the EO sensors. As electro-optical sensors do not have all-weather capability, the performance degradation in adverse scenarios must be understood, in order to support the operational use of adaptive sensor management techniques. In this paper we discuss the approach taken at TNO in the design and assessment of system concepts for future IRST development. An overview of our maritime programme in future IRST and EO system concepts including signal processing is presented. --- paper_title: Multispectral Target Detection and Tracking for Seaport Video Surveillance paper_content: In this paper, a video surveillance process is presented including target detection and tracking of ships at the entrance of a seaport in order to improve security and to prevent terrorist attacks. This process is helpful in the automatic analysis of movements inside the seaport. Steps of detection and tracking are completed using IR data whereas the pattern recognition stage is achieved on color data. A comparison of results of detection and tracking is presented on both IR and color data in order to justify the choice of IR images for these two steps. A draft description of the pattern recognition stage is finally drawn up as development prospect. --- paper_title: The Autonomous Maritime Navigation (AMN) project: Field tests, autonomous and cooperative behaviors, data fusion, sensors, and vehicles paper_content: Many small boat operations can be considered in the categories of “dull, dirty, or dangerous” jobs that are appropriate for automation. Dull jobs include long-range missions or surveillance tasks that can cause physical or mental fatigue in the crew. In addition, operational considerations may limit a human operator's time, such as exposure to heat or cold or union rules on maximum hours per day or job. The marine environment can be a dirty one as well, with wind-driven salt spray damaging unprotected eyes. And military operations such as riverine patrol or interception of potentially hostile craft can be dangerous for a human crew. 
Robotic systems allowing autonomous small boat operations can be a good match by providing rugged systems that keep sailors out of harm's way and also display “digital acuity” in that their sensors will be as efficient in their first hour of patrolling as in their last. This paper describes work done to build and test an autonomy system allowing several different boats to perform significant missions both by themselves and in cooperative modes. © 2010 Wiley Periodicals, Inc. --- paper_title: Spatio-temporal alignment of sequences paper_content: This paper studies the problem of sequence-to-sequence alignment, namely, establishing correspondences in time and in space between two different video sequences of the same dynamic scene. The sequences are recorded by uncalibrated video cameras which are either stationary or jointly moving, with fixed (but unknown) internal parameters and relative intercamera external parameters. Temporal variations between image frames (such as moving objects or changes in scene illumination) are powerful cues for alignment, which cannot be exploited by standard image-to-image alignment techniques. We show that, by folding spatial and temporal cues into a single alignment framework, situations which are inherently ambiguous for traditional image-to-image alignment methods, are often uniquely resolved by sequence-to-sequence alignment. Furthermore, the ability to align and integrate information across multiple video sequences both in time and in space gives rise to new video applications that are not possible when only image-to-image alignment is used.
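To make the sequence-alignment idea above more tangible, the toy sketch below estimates only the temporal offset between two videos of the same scene by correlating per-frame global motion-energy signals. This is not the cited method, which additionally recovers the spatial inter-camera transformation; the function and variable names are hypothetical and only NumPy is assumed.

```python
# Toy illustration of the *temporal* half of sequence-to-sequence alignment:
# correlate per-frame global motion-energy signals from two cameras and pick
# the lag with the highest normalised correlation.
import numpy as np

def motion_energy(frames):
    """One scalar per frame transition: sum of absolute frame differences."""
    frames = np.asarray(frames, dtype=np.float32)
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))

def temporal_offset(sig_a, sig_b, max_lag=100):
    """Return the lag L maximising the correlation between sig_a[t + L] and sig_b[t]."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b
        else:
            x, y = a, b[-lag:]
        n = min(len(x), len(y))
        if n < 2:
            continue
        score = float(np.dot(x[:n], y[:n])) / n
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Hypothetical usage with two lists of greyscale frames (2-D NumPy arrays):
# offset = temporal_offset(motion_energy(frames_cam_a), motion_energy(frames_cam_b))
```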
--- paper_title: A visual sensing platform for creating a smarter multi-modal marine monitoring network paper_content: Demands from various scientific and management communities along with legislative requirements at national and international levels have led to a need for innovative research into large-scale, low-cost, reliable monitoring of our marine and freshwater environments. In this paper we demonstrate the benefits of a multi-modal approach to monitoring and how an in-situ sensor network can be enhanced with the use of contextual image data. We provide an outline of the deployment of a visual sensing system at a busy port and the need for monitoring shipping traffic at the port. Subsequently we present an approach for detecting ships in a challenging image dataset and discuss how this can help to create an intelligent marine monitoring network. --- paper_title: Integrated visual information for maritime surveillance paper_content: The main contribution of this chapter is to provide a data fusion (DF) scheme for combining in a unique view, radar and visual data. The experimental evaluation of the performance for the modules included in the framework has been carried out using publicly available data from the VOC dataset and the MarDT - Maritime Detection and Tracking (MarDT) data set, containing data coming from different real VTS systems, with ground truth information. Moreover, an operative scenario where traditional VTS systems can benefit from the proposed approach is presented. --- paper_title: Multisensory data exploitation using advanced image fusion and adaptive colorization paper_content: Multisensory data usually present complementary information such as visual-band imagery and infrared imagery. There is strong evidence that the fused multisensor imagery increases the reliability of interpretation, and the colorized multisensor imagery improves observer performance and reaction times. In this paper, we propose an optimized joint approach of image fusion and colorization in order to synthesize and enhance multisensor imagery such that the resulting imagery can be automatically analyzed by computers (for target recognition) and easily interpreted by human users (for visual analysis). The proposed joint approach provides two sets of synthesized images, a fused image in grayscale and a colorized image in color using a fusion procedure and a colorization procedure, respectively. The proposed image fusion procedure is based on the advanced discrete wavelet (aDWT) transform. The fused image quality (IQ) can be further optimized with respect to an IQ metric by implementing an iterative aDWT procedure.
On the other hand, the daylight coloring technique renders the multisensor imagery with natural colors, which human users are used to observing in everyday life. We hereby propose to locally colorize the multisensor imagery segment by mapping the color statistics of the multisensor imagery to that of the daylight images, with which the colorized images resemble daylight pictures. This local coloring procedure also involves histogram analysis, image segmentation, and pattern recognition. The joint fusion and colorization approach can be performed automatically and adaptively regardless of the image contents. Experimental results with multisensor imagery showed that the fused image is informative and clear, and the colored image appears realistic and natural. We anticipate that this optimized joint approach for multisensor imagery will help improve target recognition and visual analysis. ---
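The colorization idea above rests on mapping the color statistics of sensor imagery to those of a daylight reference. The sketch below shows a global version of that statistics-matching step in the Lab color space; the cited approach works locally per segment, so this is only an illustrative simplification. It assumes OpenCV and NumPy, and the file names in the usage comment are placeholders.

```python
# Toy global colour-transfer sketch in the spirit of the statistics-mapping idea
# above: match per-channel mean and standard deviation of a source image to
# those of a daylight reference in the Lab colour space.
import cv2
import numpy as np

def transfer_color_stats(source_bgr, reference_bgr):
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean
    out = np.clip(src, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# Hypothetical usage:
# fused = cv2.imread("fused_multisensor.png")      # placeholder path
# daylight = cv2.imread("daylight_reference.png")  # placeholder path
# cv2.imwrite("colorized.png", transfer_color_stats(fused, daylight))
```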
Title: Video Processing From Electro-Optical Sensors for Object Detection and Tracking in a Maritime Environment: A Survey
Section 1: INTRODUCTION
Description 1: Write about the importance of maritime surveillance, constraints of traditional methods, and the potential of EO sensors in supporting maritime navigation.
Section 2: MARITIME DATASET FOR COMPARATIVE EVALUATION
Description 2: Discuss the scarcity of publicly available maritime datasets, and introduce the Singapore Maritime Dataset and its features.
Section 3: OBJECT DETECTION
Description 3: Cover the steps involved in object detection, including horizon detection, static background subtraction, and foreground segmentation, along with comparisons of different methods.
Section 4: OBJECT TRACKING
Description 4: Explain how object tracking in maritime environments extends object detection by incorporating temporal data and dynamic background subtraction for more robust modeling.
Section 5: COMPUTER VISION APPROACHES BEYOND MARITIME
Description 5: Explore advanced computer vision techniques from other domains that could potentially be adapted for maritime object detection and tracking.
Section 6: CONCLUDING REMARKS
Description 6: Summarize the findings of the survey, reflect on the current state of maritime EO data processing, and mention future directions for research.
Section 7: Postprocessing of Maritime EO Object Tracking Results
Description 7: Detail methods of post-processing to refine and interpret tracking results in useful physical units and vessel classification.
Section 8: Multisensor Approaches
Description 8: Review how EO sensor data can be effectively combined with other sensor data types like radar and sonar to enhance object detection and tracking.
Section 9: Commercial Maritime Systems
Description 9: Discuss existing commercial maritime systems that integrate EO sensors for applications such as anti-collision and vessel identification.
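The object-detection section outlined above lists horizon detection as a first processing step. As a hedged, minimal illustration of that step (not taken from the survey itself), the sketch below picks a near-horizontal line from a Hough transform of the edge map; published maritime horizon detectors are considerably more robust, and the thresholds here are placeholders. OpenCV and NumPy are assumed.

```python
# Hedged sketch of a horizon-detection step: take a dominant near-horizontal
# straight line from a Hough transform of the Canny edge map.
import cv2
import numpy as np

def detect_horizon(gray, angle_tol_deg=10.0):
    """Return (rho, theta) of a near-horizontal Hough line, or None."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        # In the (rho, theta) parameterisation, theta near 90 degrees means the
        # line itself is roughly horizontal in the image.
        if abs(np.degrees(theta) - 90.0) < angle_tol_deg:
            return float(rho), float(theta)
    return None

# Hypothetical usage:
# gray = cv2.imread("maritime_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
# print(detect_horizon(gray))
```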
Modeling and Simulation of Energy Systems: A Review
5
--- paper_title: Opportunities and challenges for a sustainable energy future paper_content: Access to clean, affordable and reliable energy has been a cornerstone of the world's increasing prosperity and economic growth since the beginning of the industrial revolution. Our use of energy in the twenty–first century must also be sustainable. Solar and water–based energy generation, and engineering of microbes to produce biofuels are a few examples of the alternatives. This Perspective puts these opportunities into a larger context by relating them to a number of aspects in the transportation and electricity generation sectors. It also provides a snapshot of the current energy landscape and discusses several research and development opportunities and pathways that could lead to a prosperous, sustainable and secure energy future for the world. --- paper_title: Bio-fuels from thermochemical conversion of renewable resources: A review paper_content: Demand for energy and its resources, is increasing every day due to the rapid outgrowth of population and urbanization. As the major conventional energy resources like coal, petroleum and natural gas are at the verge of getting extinct, biomass can be considered as one of the promising environment friendly renewable energy options. Different thermo-chemical conversion processes that include combustion, gasification, liquefaction, hydrogenation and pyrolysis, have been used to convert the biomass into various energy products. Although pyrolysis is still under developing stage but during current energy scenario, pyrolysis has received special attention as it can convert biomass directly into solid, liquid and gaseous products by thermal decomposition of biomass in absence of oxygen. In this review article, the focus has been made on pyrolysis while other conventional processes have been discussed in brief. For having better insight, various types of pyrolysis processes have been discussed in detail including slow, fast, flash and catalytic pyrolysis processes. Besides biomass resources and constituents, the composition and uses of pyrolysis products have been discussed in detail. This review article aim to focus on various operational parameters, viz. temperature and particle size of biomass and product yields using various types of biomasses. --- paper_title: A review of energy models paper_content: Energy is a vital input for social and economic development of any nation. With increasing agricultural and industrial activities in the country, the demand for energy is also increasing. Formulation of an energy model will help in the proper allocation of widely available renewable energy sources such as solar, wind, bioenergy and small hydropower in meeting the future energy demand in India. During the last decade several new concepts of energy planning and management such as decentralized planning, energy conservation through improved technologies, waste recycling, integrated energy planning, introduction of renewable energy sources and energy forecasting have emerged. In this paper an attempt has been made to understand and review the various emerging issues related to the energy modeling. The different types of models such as energy planning models, energy supply-demand models, forecasting models, renewable energy models, emission reduction models, optimization models have been reviewed and presented. Also, models based on neural network and fuzzy theory have been reviewed and discussed. 
The review paper on energy modeling will help the energy planners, researchers and policy makers widely. --- paper_title: A review of computer tools for analysing the integration of renewable energy into various energy systems paper_content: This paper includes a review of the different computer tools that can be used to analyse the integration of renewable energy. Initially 68 tools were considered, but 37 were included in the final analysis which was carried out in collaboration with the tool developers or recommended points of contact. The results in this paper provide the information necessary to identify a suitable energy tool for analysing the integration of renewable energy into various energy-systems under different objectives. It is evident from this paper that there is no energy tool that addresses all issues related to integrating renewable energy, but instead the 'ideal' energy tool is highly dependent on the specific objectives that must be fulfilled. The typical applications for the 37 tools reviewed (from analysing single-building systems to national energy-systems), combined with numerous other factors such as the energy-sectors considered, technologies accounted for, time parameters used, tool availability, and previous studies, will alter the perception of the 'ideal' energy tool. In conclusion, this paper provides the information necessary to direct the decision-maker towards a suitable energy tool for an analysis that must be completed. --- paper_title: A review of current challenges and trends in energy systems modeling paper_content: The requirements made on energy system models have changed during the last few decades. New challenges have arisen with the implementation of high shares of Renewable Energies. Along with the climate goals of the Paris Agreement, the national greenhouse gas strategies of industrialized countries involve the total restructuring of their energy systems. In order to achieve these climate goals, fitted and customized models are required. For that reason, this paper focuses on national energy system models that incorporate all energy sectors and can support governmental decision making processes. The reviewed models are evaluated in terms of their characteristics, like their underlying methodology, analytical approach, time horizon and transformation path analysis, spatial and temporal resolution, licensing and modeling language. These attributes are set in the context of the region and time in which they were developed in order to identify trends in modeling. Furthermore, the revealed trends are set in the context of current challenges in energy systems modeling. Combining specified research questions and specific greenhouse gas reduction strategies, this paper will help researchers and decision makers find appropriate energy system models. --- paper_title: Modeling of hybrid renewable energy systems paper_content: Hybrid renewable energy systems (HRES) are becoming popular for remote area power generation applications due to advances in renewable energy technologies and subsequent rise in prices of petroleum products. Economic aspects of these technologies are sufficiently promising to include them in developing power generation capacity for developing countries.
Research and development efforts in solar, wind, and other renewable energy technologies are required to continue for, improving their performance, establishing techniques for accurately predicting their output and reliably integrating them with other conventional generating sources. The paper describes methodologies to model HRES components, HRES designs and their evaluation. The trends in HRES design show that the hybrid PV/wind energy systems are becoming gaining popular. The issues related to penetration of these energy systems in the present distribution network are highlighted. --- paper_title: Introduction to Energy Systems Modelling paper_content: The energy demand and supply projections of the Swiss government funded by the Swiss Federal Office of Energy and carried out by a consortium of institutes and consulting companies are based on two types of energy models: macroeconomic general equilibrium models and bottom-up models for each sector. While the macroeconomic models are used to deliver the economic, demographic and policy framework conditions as well as the macroeconomic impacts of particular scenarios, the bottom-up models simulate the technical developments in the final energy sectors and try to optimise electricity generation under the given boundary conditions of a particular scenario. This introductory article gives an overview of some of the energy models used in Switzerland and — more importantly — some insights into current advanced energy system modelling practice pointing to the characteristics of the two modelling types and their advantages and limitations. --- paper_title: Energy overview for globalized world economy: Source, supply chain and sink paper_content: Energy use of the globalized world economy is comprehensively overviewed by means of a systems input-output analysis based on statistics of 2010. Emphases are put on the sources of primary energy exploitation, inter-regional trade imbalance of energy use via global supply chains, and sinks of energy use in final demand. The largest final user turns out to be the United States, compared with China as the leading energy exploiter. The global trade volume of energy use is shown in magnitude up to about 90% of the global primary energy exploited. The United States is recognized as the world’s biggest energy use importer, in contrast to Russia as the biggest exporter. Approximately one third of global primary energy exploited is shown to be embodied in inter-regional net trade. Japan and Russia are respectively illustrated to be the world’s leading net importer and leading net exporter of energy use. For China as the leading energy exploiter, about 30% of its exploited energy is for foreign regions’ final use, and 70% for its own final use. For the European Union as the largest sink region, nearly 80% of the energy required in its final use is from foreign regions, led by Russia. As reflected in the results, the conventional perspective based only on the direct energy consumption by region inevitably leads to inter-regional “energy grabbing” and “carbon leakage”, which raises a serious concern of “regional decrease at the expense of global increase”. In current context of energy shortage and climate change, this global energy overview can provide essential strategic implications at the international, national and regional scales for sustainable energy policy making. 
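The abstracts above repeatedly refer to optimization-based energy planning and supply-demand models. As a concrete but deliberately tiny example of that class of model, the sketch below solves a single-period least-cost generation-mix problem as a linear program with SciPy. All costs, capacities, emission factors and the demand figure are invented for illustration and do not come from any cited study.

```python
# Toy single-period generation-mix optimisation in the spirit of the planning
# and optimisation models surveyed above. All numbers are hypothetical.
from scipy.optimize import linprog

techs = ["coal", "gas", "wind", "solar"]
cost = [30.0, 50.0, 10.0, 12.0]        # $/MWh (hypothetical)
emis = [0.9, 0.4, 0.0, 0.0]            # tCO2/MWh (hypothetical)
cap = [400.0, 300.0, 250.0, 150.0]     # MWh available this period
demand = 600.0                          # MWh to be served
co2_budget = 250.0                      # tCO2 allowed

# Decision variables x_i = energy supplied by technology i (MWh).
# minimise cost @ x
# s.t.  sum(x) >= demand   ->   -sum(x) <= -demand
#       emis @ x <= co2_budget
#       0 <= x_i <= cap_i
res = linprog(c=cost,
              A_ub=[[-1.0] * len(techs), emis],
              b_ub=[-demand, co2_budget],
              bounds=list(zip([0.0] * len(techs), cap)),
              method="highs")

if res.success:
    for name, x in zip(techs, res.x):
        print(f"{name}: {x:.1f} MWh")
    print(f"total cost: {res.fun:.0f} $")
```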
--- paper_title: A Review on Optimization Modeling of Energy Systems Planning and GHG Emission Mitigation under Uncertainty paper_content: Energy is crucial in supporting people’s daily lives and the continual quest for human development. Due to the associated complexities and uncertainties, decision makers and planners are facing increased pressure to respond more effectively to a number of energy-related issues and conflicts, as well as GHG emission mitigation within the multiple scales of energy management systems (EMSs). This quandary requires a focused effort to resolve a wide range of issues related to EMSs, as well as the associated economic and environmental implications. Effective systems analysis approaches under uncertainty to successfully address interactions, complexities, uncertainties, and changing conditions associated with EMSs is desired, which require a systematic investigation of the current studies on energy systems. Systems analysis and optimization modeling for low-carbon energy systems planning with the consideration of GHG emission reduction under uncertainty is thus comprehensively reviewed in this paper. A number of related methodologies and applications related to: (a) optimization modeling of GHG emission mitigation; (b) optimization modeling of energy systems planning under uncertainty; and (c) model-based decision support tools are examined. Perspectives of effective management schemes are investigated, demonstrating many demanding areas for enhanced research efforts, which include issues of data availability and reliability, concerns in uncertainty, necessity of post-modeling analysis, and usefulness of development of simulation techniques. --- paper_title: Review of models and actors in energy mix optimization – can leader visions and decisions align with optimum model strategies for our future energy systems? paper_content: Organizational behavior and stakeholder processes continually influence energy strategy choices and decisions. Although theoretical optimizations can provide guidance for energy mix decisions from a pure physical systems engineering point of view, these solutions might not be optimal from a political or social perspective. Improving the transparency of our vision sharing and strategy making processes in a systematic way is therefore as important as the actual systems engineering solutions proposed by the modeling tools. Energy trend forecasting and back-casting, scenarios and system analysis have matured into powerful modeling tools for providing advice on optimizing our future energy solutions. The integrated use and iterative improvement of all these approaches can result in energy systems that become better optimized. Such an integrated approach is particularly important to those who have decision-making power over our future energy direction. Some of the challenges and opportunities for energy strategists that strive to promote optimal decisions on our future energy solutions are highlighted in this state-of-the-art review. --- paper_title: Application of energy system models for designing a low-carbon society paper_content: Abstract Rising concern about the effect of greenhouse gas (GHG) emissions on climate change is pushing national governments and the international community to achieve sustainable development in an economy that is less dependent on carbon emitting activities – a vision that is usually termed a “low-carbon society” (LCS). 
Since the utilization of energy resources is the main source of GHG emissions, restructuring current energy systems in order to incorporate low-carbon energy technologies is essential for the realization of the LCS vision. Energy policies promoting the penetration of these technologies must view the role of energy in society as a system, composed of several energy resources, conversion technologies and energy demand sectors. The feasibility of the LCS in the future can be better understood by means of energy models. Energy models are valuable mathematical tools based on the systems approach. They have been applied to aid decision-making in energy planning, to analyze energy policies and to analyze the implications arising from the introduction of technologies. The design of the LCS requires innovative energy systems considering a trans-disciplinary approach that integrates multi-dimensional elements, related to social, economic, and environmental aspects. This paper reviews the application of energy models considering scenarios towards an LCS under the energy systems approach. The models reviewed consider the utilization of waste for energy, the penetration of clean coal technologies, transportation sector models as a sample of sectoral approaches, and models related to energy-for-development issues in rural areas of developing countries. --- paper_title: Design of Sustainable Biofuel Processes and Supply Chains: Challenges and Opportunities paper_content: The current methodological approach for developing sustainable biofuel processes and supply chains is flawed. Life cycle principles are often retrospectively incorporated in the design phase resulting in incremental environmental improvement rather than selection of fuel pathways that minimize environmental impacts across the life cycle. Further, designing sustainable biofuel supply chains requires joint consideration of economic, environmental, and social factors that span multiple spatial and temporal scales. However, traditional life cycle assessment (LCA) ignores economic aspects and the role of ecological goods and services in supply chains, and hence is limited in its ability for guiding decision-making among alternatives—often resulting in sub-optimal solutions. Simultaneously incorporating economic and environment objectives in the design and optimization of emerging biofuel supply chains requires a radical new paradigm. This work discusses key research opportunities and challenges in the design of emerging biofuel supply chains and provides a high-level overview of the current “state of the art” in environmental sustainability assessment of biofuel production. Additionally, a bibliometric analysis of over 20,000 biofuel research articles from 2000-to-present is performed to identify active topical areas of research in the biofuel literature, quantify the relative strength of connections between various biofuels research domains, and determine any potential research gaps. --- paper_title: Optimization of IGCC processes with reduced order CFD models paper_content: Abstract Integrated gasification combined cycle (IGCC) plants have significant advantages for efficient power generation with carbon capture. Moreover, with the development of accurate CFD models for gasification and combined cycle combustion, key units of these processes can now be modeled more accurately. 
However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges. This study describes the development and demonstration of a reduced order modeling (ROM) framework for these tasks. The approach builds on the concepts of co-simulation and ROM development for process units described in earlier studies. Here we show how the ROMs derived from both gasification and combustion units can be integrated within an equation-oriented simulation environment for the overall optimization of an IGCC process. In addition to a systematic approach to ROM development, the approach includes validation tasks for the CFD model as well as closed-loop tests for the integrated flowsheet. This approach allows the application of equation-based nonlinear programming algorithms and leads to fast optimization of CFD-based process flowsheets. The approach is illustrated on two flowsheets based on IGCC technology. --- paper_title: Future directions in process and product synthesis and design paper_content: Abstract The incorporation of renewable energy sources to the market has brought a new golden era for process synthesis along with new challenges and opportunities. In this work, we give a brief review of the state of the art and then present challenges and future directions in this exciting area. The biggest driver is the rapid improvement in computer technology which greatly increases the number of factors that can be considered during the design process. Thus, some of the key future directions lie in integrating the design process with other aspects of process systems engineering, such as scheduling, planning, control, and supply chain management. In addition, sustainability is now a major consideration in which the design process must consider not only economic sustainability, but social sustainability and environmental sustainability when making design decisions. The tools available to address these challenges are limited but we are in a position to develop them based on strong chemical engineering principles following a multidisciplinary approach with contributions from other disciplines such as biology and biochemistry, computer science, materials, and chemistry. --- paper_title: A review of modelling approaches and tools for the simulation of district-scale energy systems paper_content: We present a comprehensive review of modelling approaches and associated software tools that address district-level energy systems. Buildings play an important role in urban energy systems regarding both the demand and supply of energy. It is no longer sufficient to simulate building energy use assuming isolation from the microclimate and energy system in which they operate, or to model an urban energy system without consideration of the buildings that it serves. This review complements previous studies by focussing on models that address district-level interactions in energy systems, and by assessing the capabilities of the software tools available alongside the theory of the modelling approaches used. --- paper_title: Hybrid and single feedstock energy processes for liquid transportation fuels: A critical review paper_content: Abstract This review provides a detailed account of the key contributions within the energy communities with specific emphasis on thermochemically based hybrid energy systems for liquid transportation fuels. 
Specifically, the advances in the indirect liquefaction of coal to liquid (CTL), natural gas to liquid (GTL), biomass to liquid (BTL), coal and natural gas to liquid (CGTL), coal and biomass to liquid (CBTL), natural gas and biomass to liquid (BGTL), and coal, biomass, and natural gas to liquid (CBGTL) are presented. This review is the first work that provides a comprehensive description of the contributions for the single-feedstock energy systems and the hybrid feedstock energy systems, for single stand-alone processes and energy supply chain networks. The focus is on contributions in (a) conceptual design, (b) process simulation, (c) economic analysis, (d) heat integration, (e) power integration, (f) water integration, (g) process synthesis, (h) life cycle analysis, (i) sensitivity analysis, (j) uncertainty issues, and (k) supply chain. A classification of the contributions based on the products, as well as different research groups is also provided. --- paper_title: Future opportunities and challenges in the design of new energy conversion systems paper_content: Abstract In this perspective, an overview of the key challenges and opportunities in the design of new energy systems is presented. Recent shifts in the prices of natural energy resources combined with growing environmental concerns are creating a new set of challenges for process design engineers. Due to the massive scale and impact of energy conversion processes, some of the best solutions to the energy crisis lie in the design of new process systems which address these new problems. In particular, many of the most promising solutions take a big-picture approach by integrating many different processes together to take advantage of synergies between seemingly unrelated processes. This paper is an extended version of a paper published as part of the proceedings of the 8th International Conference on the Foundations of Computer-Aided Process Design (FOCAPD 2014) Adams (2014) . --- paper_title: A review of modelling tools for energy and electricity systems with large shares of variable renewables paper_content: Abstract This paper presents a thorough review of 75 modelling tools currently used for analysing energy and electricity systems. Increased activity within model development in recent years has led to several new models and modelling capabilities, partly motivated by the need to better represent the integration of variable renewables. The purpose of this paper is to give an updated overview of currently available modelling tools, their capabilities and to serve as an aid for modellers in their process of identifying and choosing an appropriate model. A broad spectrum of modelling tools, ranging from small-scale power system analysis tools to global long-term energy models, has been assessed. Key information regarding the general logic, spatiotemporal resolution as well as the technological and economic features of the models is presented in three comprehensive tables. This information has been validated and updated by model developers or affiliated contact persons, and is state-of-the-art as of the submission date. With the available suite of modelling tools, most challenges of today's electricity system can be assessed. 
For a future with an increasing share of variable renewables and increasing electrification of the energy system, there are some challenges such as how to represent short-term variability in long-term studies, incorporate the effect of climate change and ensure openness and transparency in modelling studies. --- paper_title: Energy systems modeling for twenty-first century energy challenges paper_content: Energy systems models are important methods used to generate a range of insight and analysis on the supply and demand of energy. Developed over the second half of the twentieth century, they are now seeing increased relevance in the face of stringent climate policy, energy security and economic development concerns, and increasing challenges due to the changing nature of the twenty-first century energy system. In this paper, we look particularly at models relevant to national and international energy policy, grouping them into four categories: energy systems optimization models, energy systems simulation models, power systems and electricity market models, and qualitative and mixed-methods scenarios. We examine four challenges they face and the efforts being taken to address them: (1) resolving time and space, (2) balancing uncertainty and transparency, (3) addressing the growing complexity of the energy system, and (4) integrating human behavior and social risks and opportunities. In discussing these challenges, we present possible avenues for future research and make recommendations to ensure the continued relevance for energy systems models as important sources of information for policy-making. --- paper_title: A review of urban energy system models: Approaches, challenges and opportunities paper_content: Energy use in cities has attracted significant research in recent years. However such a broad topic inevitably results in number of alternative interpretations of the problem domain and the modelling tools used in its study. This paper seeks to pull together these strands by proposing a theoretical definition of an urban energy system model and then evaluating the state of current practice. Drawing on a review of 219 papers, five key areas of practice were identified – technology design, building design, urban climate, systems design, and policy assessment – each with distinct and incomplete interpretations of the problem domain. We also highlight a sixth field, land use and transportation modelling, which has direct relevance to the use of energy in cities but has been somewhat overlooked by the literature to date. Despite their diversity, these approaches to urban energy system modelling share four common challenges in understanding model complexity, data quality and uncertainty, model integration, and policy relevance. We then examine the opportunities for improving current practice in urban energy systems modelling, focusing on the potential of sensitivity analysis and cloud computing, data collection and integration techniques, and the use of activity-based modelling as an integrating framework. The results indicate that there is significant potential for urban energy systems modelling to move beyond single disciplinary approaches towards a sophisticated integrated perspective that more fully captures the theoretical intricacy of urban energy systems. 
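Several of the reviews above categorize bottom-up energy systems optimization models. As a deliberately tiny, hypothetical illustration of that model class (not taken from any of the cited tools), the sketch below formulates a single-period least-cost dispatch problem as a linear program, with a simple availability factor standing in for the treatment of variable renewables; all technology names, costs, capacities and demand values are invented.

```python
# Tiny single-period least-cost dispatch example of the "energy systems
# optimization model" class discussed above. All numbers are invented.
from scipy.optimize import linprog

demand = 100.0                      # MW to be served in this period
var_cost = [60.0, 15.0]             # $/MWh for [gas, wind]
capacity = [120.0, 70.0]            # MW installed
availability = [1.0, 0.35]          # crude stand-in for variable-renewable output

# Decision variables: generation [g_gas, g_wind] in MW
c = var_cost                                          # minimize variable cost
A_eq = [[1.0, 1.0]]                                   # supply must equal demand
b_eq = [demand]
bounds = [(0.0, cap * av) for cap, av in zip(capacity, availability)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("dispatch [gas, wind] =", res.x, "MW; cost =", res.fun, "$/h")
```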
--- paper_title: A review of energy systems models in the UK: Prevalent usage and categorisation paper_content: In this paper, a systematic review of academic literature and policy papers since 2008 is undertaken with an aim of identifying the prevalent energy systems models and tools in the UK. A list of all referenced models is presented and the literature is analysed with regards sectoral coverage and technological inclusion, as well as mathematical structure of models. --- paper_title: Energy-economic models and the environment paper_content: Abstract The current century—an era of environmental awareness—requires energy resources to satisfy the world's future energy demands. We can use current energy use scenarios to help us to understand how energy systems could change. Such scenarios are not an exercise in prophecy; rather they are designed to challenge our thinking in order to make better decisions today. The conventional modeling approach tends to extrapolate changes in energy consumption from historical trends; however, technology innovation sometimes causes drastic reforms in energy systems in the industrial, commercial, residential and transportation sectors. The economic aspects are another key issue to be considered in order to understand future changes in energy systems. The quantity of the energy supply is set to meet the price of the energy demand of end users. This occurs on the condition that the price of the energy supply equates with the price on the demand side under the market mechanism. This paper reviews the various issues associated with the energy-economic model and its application to national energy policies, renewable energy systems, and the global environment. --- paper_title: A review of energy system models. paper_content: Purpose – The purpose of this paper is to provide a comparative overview of existing energy system models to see whether they are suitable for analysing energy, environment and climate change policies of developing countries.Design/methodology/approach – The paper reviews the available literature and follows a systematic comparative approach to achieve its purpose.Findings – The paper finds that the existing energy system models inadequately capture the developing country features and the problem is more pronounced with econometric and optimisation models than with accounting models.Originality/value – Inaccurate representation of energy systems in the models can lead to inaccurate decisions and poor policy prescriptions. Thus, the paper helps policy makers and users to be aware of the possible pitfalls of various energy system models. --- paper_title: Optimization of single mixed-refrigerant natural gas liquefaction processes described by nondifferentiable models paper_content: Abstract A new strategy for the optimization of natural gas liquefaction processes is presented, in which flowsheets formulated using nondifferentiable process models are efficiently and robustly optimized using an interior-point algorithm. The constraints in the optimization formulation lead to solutions that ensure optimal usage of the area of multistream heat exchangers in the processes in order to minimize irreversibilities. The process optimization problems are solved reliably without the need for a complex initialization procedure even when highly accurate descriptions of the process stream cooling curves are required. 
In addition to the well-studied PRICO liquefaction process, two significantly more complex single mixed-refrigerant processes are successfully optimized and results are reported for each process subject to constraints imposed by several different operating scenarios. --- paper_title: Agent-based modeling on technological innovation as an evolutionary process paper_content: Abstract This paper describes a multi-agent model built to simulate the process of technological innovation, based on the widely accepted theory that technological innovation can be seen as an evolutionary process. The actors in the simulation include producers and a large number of consumers. Every producer will produce several types of products at each step. Each product is composed of several design parameters and several performance parameters (fitness components). Kauffman’s famous NK model is used to deal with the mapping from a design parameter space (DPS) to a performance parameter space (PPS). In addition to the constructional selection, which can be illustrated by the NK model, we added environmental selection into the simulation and explored technological innovation as the result of the interaction between these two kinds of selection. --- paper_title: Modeling technology adoptions for sustainable development under increasing returns, uncertainty, and heterogeneous agents paper_content: This paper presents a stylized model of technology adoptions for sustainable development under the three potentially most important "stylized facts": increasing returns to adoption, uncertainty, and heterogeneous agents following diverse technology development and adoption strategies. The stylized model deals with three technologies and two heterogeneous agents: a risk-taking one and a risk-averse one. Interactions between the two agents include trade in resources and goods, and technological spillover (free riding and technology trade). With the two heterogeneous agents, we run optimizations to minimize their aggregated costs in order to find out what rational behaviors are under different assumptions if the two agents are somehow cooperative. By considering uncertain carbon taxes, the model also addresses environmental issues as potential driving forces for technology adoptions. --- paper_title: The Use and Effects of Knowledge-Based System Explanations: Theoretical Foundations and a Framework for Empirical Evaluation paper_content: Ever since MYCIN introduced the idea of computer-based explanations to the artificial intelligence community, it has come to be taken for granted that all knowledge-based systems (KBS) need to provide explanations. While this widely-held belief has led to much research on the generation and implementation of various kinds of explanations, there has been no theoretical basis to justify the use of explanations by KBS users. This paper discusses the role of KBS explanations to provide an understanding of both the specific factors that influence explanation use and the consequences of such use. The first part of the paper proposes a model based on cognitive learning theories to identify the reasons for the provision of KBS explanations from the perspective of facilitating user learning. Using the feedforward and feedback operators of cognitive learning the paper develops strategies for providing KBS explanations and classifies the various types of explanations found in current KBS applications.
This second part of the paper presents a two-part framework to investigate empirically the use of KBS explanations. The first part of the framework focuses on the potential factors that influence the explanation seeking behavior of KBS users, including user expertise, the types of explanations provided and the level of user agreement with the KBS. The second part of the framework explores the potential effects of the use of KBS explanations and specifically considers four distinct categories of potential effects: explanation use behavior, learning, perceptions, and judgmental decision making. --- paper_title: Engineering Design via Surrogate Modelling: A Practical Guide paper_content: Preface. About the Authors. Foreword. Prologue. Part I: Fundamentals. 1. Sampling Plans. 1.1 The 'Curse of Dimensionality' and How to Avoid It. 1.2 Physical versus Computational Experiments. 1.3 Designing Preliminary Experiments (Screening). 1.3.1 Estimating the Distribution of Elementary Effects. 1.4 Designing a Sampling Plan. 1.4.1 Stratification. 1.4.2 Latin Squares and Random Latin Hypercubes. 1.4.3 Space-filling Latin Hypercubes. 1.4.4 Space-filling Subsets. 1.5 A Note on Harmonic Responses. 1.6 Some Pointers for Further Reading. References. 2. Constructing a Surrogate. 2.1 The Modelling Process. 2.1.1 Stage One: Preparing the Data and Choosing a Modelling Approach. 2.1.2 Stage Two: Parameter Estimation and Training. 2.1.3 Stage Three: Model Testing. 2.2 Polynomial Models. 2.2.1 Example One: Aerofoil Drag. 2.2.2 Example Two: a Multimodal Testcase. 2.2.3 What About the k-variable Case? 2.3 Radial Basis Function Models. 2.3.1 Fitting Noise-Free Data. 2.3.2 Radial Basis Function Models of Noisy Data. 2.4 Kriging. 2.4.1 Building the Kriging Model. 2.4.2 Kriging Prediction. 2.5 Support Vector Regression. 2.5.1 The Support Vector Predictor. 2.5.2 The Kernel Trick. 2.5.3 Finding the Support Vectors. 2.5.4 Finding . 2.5.5 Choosing C and epsilon. 2.5.6 Computing epsilon: ν-SVR. 2.6 The Big(ger) Picture. References. 3. Exploring and Exploiting a Surrogate. 3.1 Searching the Surrogate. 3.2 Infill Criteria. 3.2.1 Prediction Based Exploitation. 3.2.2 Error Based Exploration. 3.2.3 Balanced Exploitation and Exploration. 3.2.4 Conditional Likelihood Approaches. 3.2.5 Other Methods. 3.3 Managing a Surrogate Based Optimization Process. 3.3.1 Which Surrogate for What Use? 3.3.2 How Many Sample Plan and Infill Points? 3.3.3 Convergence Criteria. 3.3.4 Search of the Vibration Isolator Geometry Feasibility Using Kriging Goal Seeking. References. Part II: Advanced Concepts. 4. Visualization. 4.1 Matrices of Contour Plots. 4.2 Nested Dimensions. Reference. 5. Constraints. 5.1 Satisfaction of Constraints by Construction. 5.2 Penalty Functions. 5.3 Example Constrained Problem. 5.3.1 Using a Kriging Model of the Constraint Function. 5.3.2 Using a Kriging Model of the Objective Function. 5.4 Expected Improvement Based Approaches. 5.4.1 Expected Improvement With Simple Penalty Function. 5.4.2 Constrained Expected Improvement. 5.5 Missing Data. 5.5.1 Imputing Data for Infeasible Designs. 5.6 Design of a Helical Compression Spring Using Constrained Expected Improvement. 5.7 Summary. References. 6. Infill Criteria With Noisy Data. 6.1 Regressing Kriging. 6.2 Searching the Regression Model. 6.2.1 Re-Interpolation. 6.2.2 Re-Interpolation With Conditional Likelihood Approaches. 6.3 A Note on Matrix Ill-Conditioning. 6.4 Summary. References. 7. Exploiting Gradient Information. 7.1 Obtaining Gradients.
7.1.1 Finite Differencing. 7.1.2 Complex Step Approximation. 7.1.3 Adjoint Methods and Algorithmic Differentiation. 7.2 Gradient-enhanced Modelling. 7.3 Hessian-enhanced Modelling. 7.4 Summary. References. 8. Multi-fidelity Analysis. 8.1 Co-Kriging. 8.2 One-variable Demonstration. 8.3 Choosing X c and X e . 8.4 Summary. References. 9. Multiple Design Objectives. 9.1 Pareto Optimization. 9.2 Multi-objective Expected Improvement. 9.3 Design of the Nowacki Cantilever Beam Using Multi-objective, Constrained Expected Improvement. 9.4 Design of a Helical Compression Spring Using Multi-objective, Constrained Expected Improvement. 9.5 Summary. References. Appendix: Example Problems. A.1 One-Variable Test Function. A.2 Branin Test Function. A.3 Aerofoil Design. A.4 The Nowacki Beam. A.5 Multi-objective, Constrained Optimal Design of a Helical Compression Spring. A.6 Novel Passive Vibration Isolator Feasibility. References. Index. --- paper_title: Expert system methodologies and applications—a decade review from 1995 to 2004 paper_content: Abstract This paper surveys expert systems (ES) development using a literature review and classification of articles from 1995 to 2004 with a keyword index and article abstract in order to explore how ES methodologies and applications have developed during this period. Based on the scope of 166 articles from 78 academic journals (retrieved from five online database) of ES applications, this paper surveys and classifies ES methodologies using the following eleven categories: rule-based systems, knowledge-based systems, neural networks, fuzzy ESs, object-oriented methodology, case-based reasoning, system architecture, intelligent agent systems, database methodology, modeling, and ontology together with their applications for different research and problem domains. Discussion is presented, indicating the followings future development directions for ES methodologies and applications: (1) ES methodologies are tending to develop towards expertise orientation and ES applications development is a problem-oriented domain. (2) It is suggested that different social science methodologies, such as psychology, cognitive science, and human behavior could implement ES as another kind of methodology. (3) The ability to continually change and obtain new understanding is the driving power of ES methodologies, and should be the ES application of future works. --- paper_title: A hybrid neural network-first principles approach to process modeling paper_content: A hybrid neural network-first principles modeling scheme is developed and used to model a fedbatch bioreactor. The hybrid model combines a partial first principles model, which incorporates the available prior knowledge about the process being modeled, with a neural network which serves as an estimator of unmeasuredprocess parameters that are difficult to model from first principles. This hybrid model has better properties than standard “black-box” neural network models in that it is able to interpolate and extrapolate much more accurately, is easier to analyze and interpret, and requires significantly fewer training examples. Two alternative state and parameter estimation strategies, extended Kalman filtering and NLP optimization, are also considered. When no a priori known model of the unobserved process parameters is available, the hybrid network model gives better estimates of the parameters, when compared to these methods. 
By providing a model of these unmeasured parameters, the hybrid network can also make predictions and hence can be used for process optimization. These results apply both when full and partial state measurements are available, but in the latter case a state reconstruction method must be used for the first principles component of the hybrid model. --- paper_title: A combined first-principles and data-driven approach to model building paper_content: Abstract We address a central theme of empirical model building: the incorporation of first-principles information in a data-driven model-building process. By enabling modelers to leverage all available information, regression models can be constructed using measured data along with theory-driven knowledge of response variable bounds, thermodynamic limitations, boundary conditions, and other aspects of system knowledge. We expand the inclusion of regression constraints beyond intra-parameter relationships to relationships between combinations of predictors and response variables. Since the functional form of these constraints is more intuitive, they can be used to reveal hidden relationships between regression parameters that are not directly available to the modeler. First, we describe classes of a priori modeling constraints. Next, we propose a semi-infinite programming approach for the incorporation of these novel constraints. Finally, we detail several application areas and provide extensive computational results. --- paper_title: A Versatile Simulation Method for Complex Single Mixed Refrigerant Natural Gas Liquefaction Processes paper_content: Natural gas liquefaction is an energy intensive process with very small driving forces particularly in the low temperature region. Small temperature differences in the heat exchangers and high operating and capital costs require the use of an accurate and robust simulation tool for analysis. Unfortunately, state-of-the-art process simulators such as Aspen Plus and Aspen HYSYS have significant limitations in their ability to model multistream heat exchangers, which are critical unit operations in liquefaction processes. In particular, there exist no rigorous checks to prevent temperature crossovers from occurring in the heat exchangers, and the parameters must therefore be determined through a manual iterative approach to establish feasible operating conditions for the process. A multistream heat exchanger model that performs these checks, as well as area calculations for economic analysis, has previously been developed using a nonsmooth modeling approach. In addition, the model was used to successfully si... --- paper_title: The Role of Solid Oxide Fuel Cells in Advanced Hybrid Power Systems of the Future paper_content: In pursuing the implementation of highly efficient, emission-free power, the U.S. 
Department of Energy (DOE) is looking to the development of hybrid power systems that make use of the coupling of an electrochemical device with a heat engine, or more specifically, a solid oxide fuel cell (SOFC) and a gas turbine.1-4 The synergies of coupling these systems in a hybrid configuration provide the potential for reaching the highest possible electric conversion efficiency ever realized.5 As such, advanced hybrid power systems that incorporate a fuel cell and a gas turbine represent fossil or renewable energy production technology that provide the opportunity for a significant improvement in generation efficiency.6 An example of a simplified process diagram of the power cycle in a hybrid fuel cell gas turbine is shown in Fig. 1. While much of the DOE-sponsored research focuses on improving the performance of solid oxide fuel cells, a hardware simulation facility has been built by the Office of Research and Development at the National Energy Technology Laboratory (NETL) to explore both synergies and technical issues associated with integrated hybrid systems. The facility is part of the Hybrid Performance (Hyper) project, and is made available for public research collaboration with universities, industry, and other research institutions. The Hyper facility is capable of simulating high temperature fuel cell systems from 300 kW to 700 kW coupled with a 120 kW turbine. The purpose of the Hyper project is to specifically address this higher risk research by combining the flexibility of numerical simulation with the accuracy of experimental hardware.7 An illustration of the Hyper facility is shown in Fig. 2. The Hyper facility makes use of pressure vessels and piping to simulate the volume and flow impedance of the cathode and a burner controlled by a real-time fuel cell model running on a dSpace hardware-in-the-loop simulation platform to simulate the fuel cell thermal effluent. The hardware used to simulate the fuel cell is integrated with a 120 kW Garrett Series 85 auxiliary power unit (APU) for turbine and compressor system. The APU consists of single shaft, direct coupled turbine operating at a nominal 40,500 rpm, a two-stage radial compressor, and gear driven synchronous generator. The electrical generator is loaded by an isolated, continuously variable 120 kW resistor Fig. 2. Illustration of the Hybrid Performance (Hyper) simulation facility at NETL. Fig. 1. Simplified flow diagram of a representative direct fired, recuperated fuel cell gas turbine hybrid system. --- paper_title: Modeling chemical processes using prior knowledge and neural networks paper_content: We present a method for synthesizing chemical process models that combines prior knowledge and artificial neural networks. The inclusion of prior knowledge is investigated as a means of improving the neural network predictions when trained on sparse and noisy process data. Prior knowledge enters the hybrid model as a simple process model and first principle equations. The simple model controls the extrapolation of the hybrid in the regions of input space that lack training data. The first principle equations, such as mass and component balances, enforce equality constraints. The neural network compensates for inaccuracy in the prior model. In addition, inequality constraints are imposed during parameter estimation. For illustration, the approach is applied in predicting cell biomass and secondary metabolite in a fed-batch fermentation. 
The results show that prior knowledge enhances the generalization capabilities of a pure neural network model. The approach is shown to require less data for parameter estimation, produce more accurate and consistent predictions, and provide more reliable extrapolation --- paper_title: Surrogate model generation using self-optimizing variables paper_content: Abstract This paper presents the application of self-optimizing concepts for more efficient generation of steady-state surrogate models. Surrogate model generation generally has problems with a large number of independent variables resulting in a large sampling space. If the surrogate model is to be used for optimization, utilizing self-optimizing variables allows to map a close-to-optimal response surface, which reduces the model complexity. In particular, the mapped surface becomes much “flatter”, allowing for a simpler representation, for example, a linear map or neglecting the dependency of certain variables completely. The proposed method is studied using an ammonia reactor which for some disturbances shows limit-cycle behaviour and/or reactor extinction. Using self-optimizing variables, it is possible to reduce the number of manipulated variables by three and map a response surface close to the optimal response surface. With the original variables, the response surface would include also regions in which the reactor is extinct. --- paper_title: Intelligent systems in process engineering: a review paper_content: Abstract The purpose of this review is three-fold. First, sketch the directions that research and industrial applications of “intelligent systems” have taken in several areas of process engineering. Second, identify the emerging trends in each area, as well as the common threads that cut across several domains of inquiry. Third, stipulate research and development themes of significant importance for the future evolution of “intelligent systems” in process engineering. The paper covers the following seven areas: diagnosis of process operations; monitoring and analysis of process trends; intelligent control; heuristics and logic in planning and scheduling of process operations; modeling languages, simulation, and reasoning; intelligence in scientific computing; knowledge-based engineering design. Certain trends seem to be common and will (in all likelihood) characterize the nature of the future deployment of “intelligent systems”. These trends are: (1) Specialization to narrowly defined classes of problems. (2) Integration of multiple knowledge representations , so that all of relevant knowledge is captured and utilized. (3) Integration of processing methodologies , which tends to blur the past sharp distinctions between AI-based techniques and those from operations research, systems and control theory, probability and statistics. (4) Rapidly expanding range of industrial applications with significant increase in the scope of engineering tasks and size of problems. --- paper_title: Fuel Cell Gas Turbine Hybrid Simulation Facility Design paper_content: Fuel cell hybrid power systems have potential for the highest electrical power generation efficiency. Fuel cell gas turbine hybrid systems are currently under development as the first step in commercializing this technology. The dynamic interdependencies resulting from the integration of these two power generation technologies is not well understood. Unexpected complications can arise in the operation of an integrated system, especially during startup and transient events. 
Fuel cell gas turbine systems designed to operate under steady state conditions have limitations in studying the dynamics of a transient event without risk to the more fragile components of the system. A 250kW experimental fuel cell gas turbine system test facility has been designed at the National Energy Technology Laboratory (NETL), U.S. Department of Energy to examine the effects of transient events on the dynamics of these systems. The test facility will be used to evaluate control strategies for improving system response to transient events and load following. A fuel cell simulator, consisting of a natural gas burner controlled by a real time fuel cell model, will be integrated into the system in place of a real solid oxide fuel cell. The use of a fuel cell simulator in the initial phases allows for the exploration of transient events without risk of destroying an actual fuel cell. Fuel cell models and hybrid system models developed at NETL have played an important role in guiding the design of facility equipment and experimental research planning. Results of certain case studies using these models are discussed. Test scenarios were analyzed for potential thermal and mechanical impact on fuel cell, heat exchanger and gas turbine components. Temperature and pressure drop calculations were performed to determine the maximum impact on system components and design. Required turbine modifications were designed and tested for functionality. The resulting facility design will allow for examination of startup, shut down, loss of load to the fuel cell during steady state operations, loss of load to the turbine during steady state operations and load following.Copyright © 2002 by ASME --- paper_title: Simulation of Dual Mixed Refrigerant Natural Gas Liquefaction Processes Using a Nonsmooth Framework paper_content: Natural gas liquefaction is an energy intensive process where the feed is cooled from ambient temperature down to cryogenic temperatures. Different liquefaction cycles exist depending on the application, with dual mixed refrigerant processes normally considered for the large-scale production of Liquefied Natural Gas (LNG). Large temperature spans and small temperature differences in the heat exchangers make the liquefaction processes difficult to analyze. Exergetic losses from irreversible heat transfer increase exponentially with a decreasing temperature at subambient conditions. Consequently, an accurate and robust simulation tool is paramount to allow designers to make correct design decisions. However, conventional process simulators, such as Aspen Plus, suffer from significant drawbacks when modeling multistream heat exchangers. In particular, no rigorous checks exist to prevent temperature crossovers. Limited degrees of freedom and the inability to solve for stream variables other than outlet temperatures also makes such tools inflexible to use, often requiring the user to resort to a manual iterative procedure to obtain a feasible solution. In this article, a nonsmooth, multistream heat exchanger model is used to develop a simulation tool for two different dual mixed refrigerant processes. Case studies are presented for which Aspen Plus fails to obtain thermodynamically feasible solutions. --- paper_title: Artificial neural network models for biomass gasification in fluidized bed gasifiers paper_content: Abstract Artificial neural networks (ANNs) have been applied for modeling biomass gasification process in fluidized bed reactors. 
Two ANN model architectures are presented; one for circulating fluidized bed gasifiers (CFB) and the other for bubbling fluidized bed gasifiers (BFB). Both models determine the producer gas composition (CO, CO2, H2, CH4) and gas yield. Published experimental data from other authors has been used to train the ANNs. The obtained results show that the percentage composition of the main four gas species in producer gas (CO, CO2, H2, CH4) and the producer gas yield for a biomass fluidized bed gasifier can be successfully predicted by applying neural networks. The ANN models use the biomass composition and a few operating parameters in the input layer, two neurons in the hidden layer and the backpropagation algorithm. The results obtained by these ANNs show high agreement with the published experimental data used (R2 > 0.98). Furthermore, a sensitivity analysis has been applied in each ANN model showing that all studied input variables are important. --- paper_title: Agent-based supply chain management—1: framework paper_content: Abstract In the face of highly competitive markets and constant pressure to reduce lead times, enterprises today consider supply chain management to be the key area where improvements can significantly impact the bottom line. More enterprises now consider the entire supply chain structure while taking business decisions. They try to identify and manage all critical relationships both upstream and downstream in their supply chains. Some impediments to this are that the necessary information usually resides across a multitude of resources, is ever changing, and is present in multiple formats. Most supply chain decision support systems (DSSs) are specific to an enterprise and its supply chain, and cannot be easily modified to assist other similar enterprises and industries. In this two-part paper, we propose a unified framework for modeling, monitoring and management of supply chains. The first part of the paper describes the framework while the second part illustrates its application to a refinery supply chain. The framework integrates the various elements of the supply chain such as enterprises, their production processes, the associated business data and knowledge and represents them in a unified, intelligent and object-oriented fashion. Supply chain elements are classified as entities, flows and relationships. Software agents are used to emulate the entities, i.e. various enterprises and their internal departments. Flows—material and information—are modeled as objects. The framework helps to analyze the business policies with respect to different situations arising in the supply chain. We illustrate the framework by means of two case studies. A DSS for petrochemical cluster management is described together with a prototype DSS for crude procurement in a refinery. --- paper_title: Robust simulation and optimization methods for natural gas liquefaction processes paper_content: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018. --- paper_title: Agent-based modeling: Methods and techniques for simulating human systems paper_content: Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems.
After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. --- paper_title: A Critical Survey of Agent-Based Wholesale Electricity Market Models paper_content: The complexity of electricity markets calls for rich and flexible modeling techniques that help to understand market dynamics and to derive advice for the design of appropriate regulatory frameworks. Agent-Based Computational Economics (ACE) is a fairly young research paradigm that offers methods for realistic electricity market modeling. A growing number of researchers have developed agent-based models for simulating electricity markets. The diversity of approaches makes it difficult to overview the field of ACE electricity research; this literature survey should guide the way through and describe the state-of-the-art of this research area. In a conclusive summary, shortcomings of existing approaches and open issues that should be addressed by ACE electricity researchers are critically discussed. --- paper_title: Reliable Flash Calculations: Part 2. Process Flowsheeting with Nonsmooth Models and Generalized Derivatives paper_content: This article presents new methods for robustly simulating process flowsheets containing nondifferentiable models, using recent advances in exact sensitivity analysis for nonsmooth functions. Among other benefits, this allows flowsheeting problems to be equipped with newly developed nonsmooth inside-out algorithms for nonideal vapor–liquid equilibrium calculations that converge reliability, even when the phase regime at the results of these calculations is unknown a priori. Furthermore, process models for inherently nonsmooth unit operations may be seamlessly integrated into process flowsheets, so long as computationally relevant generalized derivative information is computed correctly and communicated to the flowsheet convergence algorithm. These techniques may be used in either sequential-modular simulations or simulations in which the most challenging modules are solved using tailored external procedures, while the remaining flowsheet equations are solved simultaneously. This new nonsmooth flowsheeting ... --- paper_title: Modeling technological change in energy systems – From optimization to agent-based modeling paper_content: Operational optimization models are one of the main streams in modeling energy systems. Agent-based modeling and simulation seem to be another approach getting popular in this field. In either optimization or agent-based modeling practices, technological change in energy systems is a very important and inevitable factor that researchers need to deal with. By introducing three stylized models, namely, a traditional optimization model, an optimization model with endogenous technological change, and an agent-based model, all of which were developed based on the same deliberately simplified energy system, this paper compares how technological change is treated differently in different modeling practices for energy systems, the different philosophies underlying them, and the advantages/disadvantages of each modeling practice. Finally, this paper identifies the different contexts suitable for applying optimization models and agent-based models in decision support regarding energy systems. 
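The agent-based modelling entries above describe simulations built from simple interacting decision rules rather than a single system-wide optimization. The following sketch is a minimal, generic technology-adoption model in that style; the network, thresholds and cost trajectory are invented for illustration and are not drawn from any of the cited models.

```python
# Minimal, generic agent-based technology-adoption model: heterogeneous
# consumer agents adopt once peer influence plus a falling technology cost
# exceeds their individual threshold. All rules and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, n_neighbors = 200, 30, 8

threshold = rng.uniform(0.3, 0.9, n_agents)              # heterogeneous reluctance
neighbors = np.array([rng.choice(n_agents, n_neighbors, replace=False)
                      for _ in range(n_agents)])          # random social network
adopted = np.zeros(n_agents, dtype=bool)
adopted[rng.choice(n_agents, 5, replace=False)] = True    # seed a few early adopters

for t in range(n_steps):
    cost_penalty = 0.4 * (1.0 - t / n_steps)              # technology gets cheaper
    peer_share = adopted[neighbors].mean(axis=1)          # fraction of peers who adopted
    utility = 0.7 * peer_share + 0.4 - cost_penalty
    adopted |= utility > threshold                        # adoption is irreversible here
    if t % 5 == 0 or t == n_steps - 1:
        print(f"step {t:2d}: {adopted.mean():5.1%} adopted")
```

Even this toy rule set reproduces the S-shaped diffusion behaviour that the surveyed agent-based studies examine in far richer settings.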
--- paper_title: Simulation of biomass gasification with a hybrid neural network model. paper_content: Gasification of several types of biomass has been conducted in a fluidized bed gasifier at atmospheric pressure with steam as the fluidizing medium. In order to obtain the gasification profiles for each type of biomass, an artificial neural network model has been developed to simulate this gasification processes. Model-predicted gas production rates in this biomass gasification processes were consistent with the experimental data. Therefore, the gasification profiles generated by neural networks are considered to have properly reflected the real gasification process of a biomass. Gasification profiles identified by neural network suggest that gasification behavior of arboreal types of biomass is significantly different from that of herbaceous ones. --- paper_title: Computationally relevant generalized derivatives: theory, evaluation and applications paper_content: A new method for evaluating generalized derivatives in nonsmooth problems is reviewed. Lexicographic directional (LD-)derivatives are a recently developed tool in nonsmooth analysis for evaluating generalized derivative elements in a tractable and robust way. Applicable to problems in both steady-state and dynamic settings, LD-derivatives exhibit a number of advantages over current theory and algorithms. As highlighted in this article, the LD-derivative approach now admits a suitable theory for inverse and implicit functions, nonsmooth dynamical systems and optimization problems, among others. Moreover, this technique includes an extension of the standard vector forward mode of automatic differentiation (AD) and acts as the natural extension of classical calculus results to the nonsmooth case in many ways. The theory of LD-derivatives is placed in the context of state-of-the-art methods in nonsmooth analysis, with an application in multistream heat exchanger modelling and design used to illustrate the use... --- paper_title: An agent-based approach for supply chain retrofitting under uncertainty paper_content: In this work, decisions that have a long lasting effect on the supply chain (SC) such as the design and retrofit of a production/distribution network are considered. The retrofitting tasks are accomplished by using a SC agent-oriented simulation system, which model each entity belonging to the SC as an independent agent. The starting point is a set of possible design options for the existing SC. For each design alternative a performance index is obtained through the agent-based framework by looking for the best value of the operational variables associated to the resulting network. The proposed methodology allows to address the design of complex SCs which are hard to be modelled otherwise, for example by means of standard mathematical programming tools. Specifically, the multi-agent system is suitable for SCs that are either driven by pull strategies or operate under uncertain environments, in which the mathematical programming approaches are likely to be inferior due to the high computational effort required. The advantages of our approach are highlighted through a case study comprising several plants, warehouses and retailers. --- paper_title: Neural networks for control systems: a survey paper_content: Abstract This paper focuses on the promise of artificial neural networks in the realm of modelling, identification and control of nonlinear systems. 
The basic ideas and techniques of artificial neural networks are presented in language and notation familiar to control engineers. Applications of a variety of neural network architectures in control are surveyed. We explore the links between the fields of control science and neural networks in a unified presentation and identify key areas for future research. --- paper_title: Executable cell biology paper_content: Computational modeling of biological systems is becoming increasingly important in efforts to better understand complex biological behaviors. In this review, we distinguish between two types of biological models—mathematical and computational—which differ in their representations of biological phenomena. We call the approach of constructing computational models of biological systems 'executable biology', as it focuses on the design of executable computer algorithms that mimic biological phenomena. We survey the main modeling efforts in this direction, emphasize the applicability and benefits of executable models in biological research and highlight some of the challenges that executable biology poses for biology and computer science. We claim that for executable biology to reach its full potential as a mainstream biological technique, formal and algorithmic approaches must be integrated into biological research. This will drive biology toward a more precise engineering discipline. --- paper_title: Modeling, simulation, sensitivity analysis, and optimization of hybrid systems paper_content: Hybrid (discrete/continuous) systems exhibit both discrete state and continuous state dynamics which interact to such a significant extent that they cannot be decoupled and must be analyzed simultaneously. We present an overview of the work that has been done in the modeling, simulation, sensitivity analysis, and optimization of hybrid systems, paying particular attention to the interaction between discrete and continuous dynamics. A concise intuitive framework for hybrid system modeling is presented, together with discussions on robust state event location, transfer functions of the continuous state at discontinuities, parametric sensitivity analysis of hybrid systems, and challenges in optimization. --- paper_title: Simulation of a Dual Mixed Refrigerant LNG Process using a Nonsmooth Framework paper_content: Abstract Natural gas liquefaction is an energy intensive process with very small driving forces at cryogenic temperatures. Small temperature differences arise from the excessive exergy destruction that occurs from irreversible heat transfer at low temperatures. As a result, even a small change in driving forces in the low temperature region can propagate into large exergy losses that must be compensated by additional compression power. Along with the significant investments and operating costs associated with these processes, this demands a robust and accurate simulation tool. Nonsmooth simulation models for single mixed refrigerant processes already exist in the literature. However, these processes are relatively simple, and are normally only considered for small-scale production or floating operations. Other processes, such as dual mixed refrigerant processes, are therefore normally considered for large-scale production of LNG. It is necessary to investigate whether the nonsmooth flowsheeting strategy is capable of also handling these more complex liquefaction processes. This article describes a simulation model for a dual mixed refrigerant process. 
The model is solved for two cases using the Peng-Robinson equation of state, each solving for a different set of unknown variables. Both cases converged within a few iterations, showing nearly identical results to simulations run in Aspen Plus. --- paper_title: Multistream heat exchanger modeling and design paper_content: A new model formulation and solution strategy for the design and simulation of processes involving multistream heat exchangers (MHEXs) is presented. The approach combines an extension of pinch analysis with an explicit dependence on the heat exchange area in a nonsmooth equation system to create a model which solves for up to three unknown variables in an MHEX. Recent advances in automatic generation of derivative-like information for nonsmooth equations make the method tractable, and the use of nonsmooth equation solving methods make the method very precise. Several illustrative examples and a case study featuring an offshore liquefied natural gas production concept are presented which highlight the flexibility and strengths of the formulation. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3390–3403, 2015 --- paper_title: Disjunctive modeling for optimal control of hybrid systems paper_content: Abstract In this contribution, a novel approach for the modeling and numerical optimal control of hybrid (discrete–continuous dynamic) systems based on a disjunctive problem formulation is proposed. It is shown that a disjunctive model representation, which constitutes an alternative to mixed-integer model formulations, provides a very flexible, intuitive and effective way to formulate hybrid (discrete–continuous dynamic) optimization problems. The structure and properties of the disjunctive process models can be exploited for an efficient and robust numerical solution by applying generalized disjunctive programming techniques. The proposed modeling and optimization approach will be illustrated by means of optimal control of hybrid systems embedding linear discrete–continuous dynamic models. --- paper_title: Modeling of combined discrete/continuous processes paper_content: The dynamic behavior of processing systems exhibits both continuous and significant discrete aspects. Process simulation is therefore a combined discrete/continuous simulation problem. In addition, there is a critical need for a declarative process modeling environment to encompass the entire range of processing system operation, from purely continuous to batch. These issues are addressed by this article. ::: ::: A new formal mathematical description of the combined discrete/continuous simulation problem is introduced to enhance the understanding of the fundamental discrete changes required to model processing systems. The modeling task is decomposed into two distinct activities: modeling fundamental physical behavior, and modeling the external actions imposed on this physical system. Both require significant discrete components. Important contributions include a powerful representation for discontinuities in physical behavior, and the first detailed consideration of how complex sequences of control actions may be modeled in a general manner. --- paper_title: On Hybrid Petri Nets paper_content: Petri nets (PNs) are widely used to model discrete event dynamic systems (computer systems, manufacturing systems, communication systems, etc). 
Continuous Petri nets (in which the markings are real numbers and the transition firings are continuous) were defined more recently; such a PN may model a continuous system or approximate a discrete system. A hybrid Petri net can be obtained if one part is discrete and another part is continuous. This paper is basically a survey of the work of the authors' team on hybrid PNs (definition, properties, modeling). In addition, it contains new material such as the definition of extended hybrid PNs and several applications, explanations and comments about the timings in Petri nets, more on the conflict resolution in hybrid PNs, and connection between hybrid PNs and hybrid automata. The paper is illustrated by many examples. --- paper_title: Use of Latent Variables to Reduce the Dimension of Surrogate Models paper_content: In this paper, we propose extensions for surrogate model fitting based on partial least square (PLS) regression. The method itself consists of a three step procedure, in which first, linear mass balances are established, then PLS regression is used to reduce the number of independent variables, and finally non-linear surrogate models are fitted to the latent variables defined via the PLS regression. As PLS regression looks for relationships between independent and dependent variables, preprocessing of the sampled data was investigated. Preprocessing improves the fit of the surrogate model by a factor of two in a case study given by the ammonia synthesis reactor section. The additional application of process knowledge allows a new grid definition with incorporated dependencies between independent variables resulting in a further improvement of the fit. --- paper_title: A Review of Multiscale Analysis: Examples from Systems Biology, Materials Engineering, and Other Fluid–Surface Interacting Systems paper_content: Abstract Multiscale simulation is an emerging scientific field that spans many disciplines, including physics, chemistry, mathematics, statistics, chemical engineering, mechanical engineering, and materials science. This review paper first defines this new scientific field and outlines its objectives. An overview of deterministic, continuum models and discrete, particle models is then given. Among discrete, particle models, emphasis is placed on Monte Carlo stochastic simulation methods in well-mixed and spatially distributed systems. Next, a classification of multiscale methods is carried out based on separation of length and time scales and the computational and mathematical approach taken. Broadly speaking, hybrid simulation and coarse graining or mesoscopic modeling are identified as two general and complementary approaches of multiscale modeling. The former is further classified into onion- and multigrid-type simulation depending on length scales and the presence or not of gradients. Several approaches, such as the net event, the probability weighted, the Poisson and binomial τ -leap, and the hybrid, are discussed for acceleration of stochastic simulation. In order to demonstrate the unifying principles of multiscale simulation, examples from different areas are discussed, including systems biology, materials growth and other reacting systems, fluids, and statistical mechanics. While the classification is general and examples from other scales and tools are touched upon, in this review emphasis is placed on stochastic models, their coarse graining, and their integration with continuum deterministic models, i.e., on the coupling of mesoscopic and macroscopic scales. 
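The latent-variable surrogate idea described in the PLS-regression paper above can be sketched in a few lines: project correlated inputs onto a small number of PLS components, then fit a simple nonlinear surrogate in the reduced space. The synthetic data and the quadratic surrogate form below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
# synthetic samples standing in for flowsheet simulations: 6 inputs, 1 output
X = rng.normal(size=(200, 6))
y = 3.0*X[:, 0] - 2.0*X[:, 1] + 0.5*X[:, 0]*X[:, 1] + 0.1*rng.normal(size=200)

# step 1: PLS projects the inputs onto a few latent variables
pls = PLSRegression(n_components=2).fit(X, y)
T = pls.transform(X)                      # latent scores: 200 x 2 instead of 200 x 6

# step 2: fit a small nonlinear (here quadratic) surrogate in the latent space
basis = lambda T: np.column_stack([np.ones(len(T)), T, T**2, T[:, [0]]*T[:, [1]]])
coef, *_ = np.linalg.lstsq(basis(T), y, rcond=None)
pred = basis(T) @ coef
print("R^2 of the latent-variable surrogate:", round(1 - np.var(y - pred)/np.var(y), 3))
```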
The concept of hierarchical multiscale modeling is discussed at some length. Finally, the importance of systems-level tools such as sensitivity analysis, parameter estimation, optimization, control, model reduction, and bifurcation in multiscale analysis is underscored. --- paper_title: The nature and role of process systems engineering paper_content: Abstract In this article the author attempts to establish Process Systems Engineering as an important core field in Chemical Engineering. Therefore, the definition of Process Systems Engineering, its necessity in the field of Chemical Engineering, its philosophical backbone and an opinion on the general structure of Process Systems Engineering are given first. Later, the important roles which Process Systems Engineering has played in the past and will have to play in the future are mentioned with concrete examples of research done in the author's laboratory. The need for continuous research and development in this field is stressed in conclusion. --- paper_title: Towards a Process Modeling Methodology paper_content: Recent advances in nonlinear process control call for more detailed first principles based mathematical models to adequately represent the nonlinear process behavior. Despite the increasing demand for rigorous dynamic models, there has been little fundamental work in recent years to contribute to a better understanding of model development. This is because modeling has either been viewed as a solved problem or as an art rather than a science. Systematic methods of modeling are therefore lacking. In order to satisfy the needs of nonlinear model-based control, this contribution suggests a process modeling methodology based on systems theoretical concepts. The focus is on the derivation of the process model equations prior to their implementation by means of some of the available simulation tools. --- paper_title: Challenges in the new millennium: Product discovery and design, enterprise and supply chain optimization, global life cycle assessment paper_content: This paper first provides an overview of the financial state of the process industry, major issues it currently faces, and job placement of chemical engineers in the U.S. These facts, combined with an expanded role of Process Systems Engineering, are used to argue that to support the “value preservation” and “value growth” industry, three major future research challenges need to be addressed: Product Discovery and Design, Enterprise and Supply Chain Optimization, and Global Life Cycle Assessment. We provide a brief review of the progress that has been made in these areas, as well as the supporting methods and tools for tackling these problems. Finally, we provide some concluding remarks. --- paper_title: ENVIRONMENTALLY CONSCIOUS CHEMICAL PROCESS DESIGN paper_content: Abstract The environment has emerged as an important determinant of the performance of the modern chemical industry. This paper reviews approaches for incorporating environmental issues into the design of new processes and manufacturing facilities. The organizational framework is the design process itself, which includes framing the problem and generating, analyzing, and evaluating alternatives. A historical perspective on the chemical process synthesis problem illustrates how both performance objectives and the context of the design have evolved to the point where environmental issues must be considered throughout the production chain.
In particular, the review illustrates the need to view environmental issues as part of the design objectives rather than as constraints on operations. A concluding section identifies gaps in the literature and opportunities for additional research. --- paper_title: Multi-scale optimization for process systems engineering paper_content: Abstract Efficient nonlinear programming (NLP) algorithms and modeling platforms have led to powerful process optimization strategies. Nevertheless, these algorithms are challenged by recent evolution and deployment of multi-scale models (such as molecular dynamics and complex fluid flow) that apply over broad time and length scales. Integrated optimization of these models requires accurate and efficient reduced models (RMs). This study develops a rigorous multi-scale optimization framework that substitutes RMs for complex original detailed models (ODMs) and guarantees convergence to the original optimization problem. Based on trust region concepts this framework leads to three related NLP algorithms for RM-based optimization. The first follows the classical gradient-based trust-region method, the second avoids gradient calculations from the ODM, and the third avoids frequent recourse to ODM evaluations, using the concept of ϵ -exact RMs. We illustrate these algorithms with small examples and discuss RM-based optimization case studies that demonstrate their performance and effectiveness. --- paper_title: TRENDS IN COMPUTER-AIDED PROCESS MODELING paper_content: Process modeling is an important task during many process engineering activities such as steady-state and dynamic process simulation, process synthesis or control system design and implementation. The demand for models of varying detail is expected to steadily increase in the future due to advances in model-based process engineering methodologies. Computer assistance to support the development and implementation of adequate and transparent models is indispensable to minimize the engineering effort. The state of the art and current trends in computer-aided modeling are presented in this contribution which is intended to serve as a survey and a tutorial at the same time. --- paper_title: Optimization of single mixed-refrigerant natural gas liquefaction processes described by nondifferentiable models paper_content: Abstract A new strategy for the optimization of natural gas liquefaction processes is presented, in which flowsheets formulated using nondifferentiable process models are efficiently and robustly optimized using an interior-point algorithm. The constraints in the optimization formulation lead to solutions that ensure optimal usage of the area of multistream heat exchangers in the processes in order to minimize irreversibilities. The process optimization problems are solved reliably without the need for a complex initialization procedure even when highly accurate descriptions of the process stream cooling curves are required. In addition to the well-studied PRICO liquefaction process, two significantly more complex single mixed-refrigerant processes are successfully optimized and results are reported for each process subject to constraints imposed by several different operating scenarios. --- paper_title: A hybrid neural network-first principles approach to process modeling paper_content: A hybrid neural network-first principles modeling scheme is developed and used to model a fedbatch bioreactor. 
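The reduced-model optimization strategy surveyed above can be caricatured by the classical trust-region loop below: sample an "expensive" model locally, fit a quadratic surrogate, step on the surrogate, and accept or shrink based on the ratio of actual to predicted reduction. The test function, sampling rule, and update constants are placeholders; the cited algorithms add the safeguards needed for convergence guarantees.

```python
import numpy as np
from scipy.optimize import minimize

def detailed_model(x):
    # stand-in for an expensive detailed model (e.g., a CFD-based unit response)
    return (x[0] - 1.2)**2 + 2.0*(x[1] + 0.5)**2 + 0.3*np.sin(3.0*x[0])

def fit_quadratic(center, radius, n=20, seed=0):
    rng = np.random.default_rng(seed)
    X = center + radius*rng.uniform(-1.0, 1.0, size=(n, 2))
    y = np.array([detailed_model(xi) for xi in X])
    A = np.column_stack([np.ones(n), X, X**2, X[:, [0]]*X[:, [1]]])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: c @ np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

x, radius = np.array([0.0, 0.0]), 1.0
for it in range(25):
    surrogate = fit_quadratic(x, radius, seed=it)
    trial = minimize(surrogate, x, bounds=[(xi - radius, xi + radius) for xi in x]).x
    predicted = surrogate(x) - surrogate(trial)
    actual = detailed_model(x) - detailed_model(trial)
    rho = actual/predicted if abs(predicted) > 1e-12 else 0.0
    if rho > 0.1:
        x = trial                                   # accept the reduced-model step
    radius = 2.0*radius if rho > 0.75 else (0.5*radius if rho < 0.1 else radius)
    if radius < 1e-4:
        break
print("approximate optimum:", x.round(3), " objective:", round(detailed_model(x), 4))
```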
The hybrid model combines a partial first principles model, which incorporates the available prior knowledge about the process being modeled, with a neural network which serves as an estimator of unmeasuredprocess parameters that are difficult to model from first principles. This hybrid model has better properties than standard “black-box” neural network models in that it is able to interpolate and extrapolate much more accurately, is easier to analyze and interpret, and requires significantly fewer training examples. Two alternative state and parameter estimation strategies, extended Kalman filtering and NLP optimization, are also considered. When no a priori known model of the unobserved process parameters is available, the hybrid network model gives better estimates of the parameters, when compared to these methods. By providing a model of these unmeasured parameters, the hybrid network can also make predictions and hence can be used for process optimization. These results apply both when full and partial state measurements are available, but in the latter case a state reconstruction method must be used for the first principles component of the hybrid model. --- paper_title: A combined first-principles and data-driven approach to model building paper_content: Abstract We address a central theme of empirical model building: the incorporation of first-principles information in a data-driven model-building process. By enabling modelers to leverage all available information, regression models can be constructed using measured data along with theory-driven knowledge of response variable bounds, thermodynamic limitations, boundary conditions, and other aspects of system knowledge. We expand the inclusion of regression constraints beyond intra-parameter relationships to relationships between combinations of predictors and response variables. Since the functional form of these constraints is more intuitive, they can be used to reveal hidden relationships between regression parameters that are not directly available to the modeler. First, we describe classes of a priori modeling constraints. Next, we propose a semi-infinite programming approach for the incorporation of these novel constraints. Finally, we detail several application areas and provide extensive computational results. --- paper_title: A Versatile Simulation Method for Complex Single Mixed Refrigerant Natural Gas Liquefaction Processes paper_content: Natural gas liquefaction is an energy intensive process with very small driving forces particularly in the low temperature region. Small temperature differences in the heat exchangers and high operating and capital costs require the use of an accurate and robust simulation tool for analysis. Unfortunately, state-of-the-art process simulators such as Aspen Plus and Aspen HYSYS have significant limitations in their ability to model multistream heat exchangers, which are critical unit operations in liquefaction processes. In particular, there exist no rigorous checks to prevent temperature crossovers from occurring in the heat exchangers, and the parameters must therefore be determined through a manual iterative approach to establish feasible operating conditions for the process. A multistream heat exchanger model that performs these checks, as well as area calculations for economic analysis, has previously been developed using a nonsmooth modeling approach. In addition, the model was used to successfully si... 
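A minimal version of the serial hybrid structure described above, with first-principles balances wrapped around a data-driven kinetic term, is sketched below for a fed-batch bioreactor: a small neural network is fitted to synthetic growth-rate data and then embedded in the mass balances. All parameters and the synthetic data are assumptions for illustration, not the original fed-batch case study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

# data-driven part: learn the specific growth rate mu(S) from noisy synthetic data
rng = np.random.default_rng(0)
S_data = rng.uniform(0.0, 25.0, 400)
mu_data = 0.4*S_data/(1.5 + S_data) + 0.01*rng.normal(size=400)   # "measurements"
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(S_data.reshape(-1, 1), mu_data)

# first-principles part: fed-batch balances that call the learned kinetics
def hybrid_rhs(t, z, F=0.05, S_feed=20.0, Y_xs=0.5):
    X, S, V = z
    mu = net.predict([[S]])[0]            # hybrid coupling: the network supplies mu
    dX = mu*X - (F/V)*X
    dS = -(mu/Y_xs)*X + (F/V)*(S_feed - S)
    dV = F
    return [dX, dS, dV]

sol = solve_ivp(hybrid_rhs, (0.0, 20.0), [0.1, 5.0, 1.0], max_step=0.1)
print("final biomass concentration:", round(sol.y[0, -1], 3))
```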
--- paper_title: Modeling chemical processes using prior knowledge and neural networks paper_content: We present a method for synthesizing chemical process models that combines prior knowledge and artificial neural networks. The inclusion of prior knowledge is investigated as a means of improving the neural network predictions when trained on sparse and noisy process data. Prior knowledge enters the hybrid model as a simple process model and first principle equations. The simple model controls the extrapolation of the hybrid in the regions of input space that lack training data. The first principle equations, such as mass and component balances, enforce equality constraints. The neural network compensates for inaccuracy in the prior model. In addition, inequality constraints are imposed during parameter estimation. For illustration, the approach is applied in predicting cell biomass and secondary metabolite in a fed-batch fermentation. The results show that prior knowledge enhances the generalization capabilities of a pure neural network model. The approach is shown to require less data for parameter estimation, produce more accurate and consistent predictions, and provide more reliable extrapolation --- paper_title: Simulation of Dual Mixed Refrigerant Natural Gas Liquefaction Processes Using a Nonsmooth Framework paper_content: Natural gas liquefaction is an energy intensive process where the feed is cooled from ambient temperature down to cryogenic temperatures. Different liquefaction cycles exist depending on the application, with dual mixed refrigerant processes normally considered for the large-scale production of Liquefied Natural Gas (LNG). Large temperature spans and small temperature differences in the heat exchangers make the liquefaction processes difficult to analyze. Exergetic losses from irreversible heat transfer increase exponentially with a decreasing temperature at subambient conditions. Consequently, an accurate and robust simulation tool is paramount to allow designers to make correct design decisions. However, conventional process simulators, such as Aspen Plus, suffer from significant drawbacks when modeling multistream heat exchangers. In particular, no rigorous checks exist to prevent temperature crossovers. Limited degrees of freedom and the inability to solve for stream variables other than outlet temperatures also makes such tools inflexible to use, often requiring the user to resort to a manual iterative procedure to obtain a feasible solution. In this article, a nonsmooth, multistream heat exchanger model is used to develop a simulation tool for two different dual mixed refrigerant processes. Case studies are presented for which Aspen Plus fails to obtain thermodynamically feasible solutions. --- paper_title: Reliable Flash Calculations: Part 2. Process Flowsheeting with Nonsmooth Models and Generalized Derivatives paper_content: This article presents new methods for robustly simulating process flowsheets containing nondifferentiable models, using recent advances in exact sensitivity analysis for nonsmooth functions. Among other benefits, this allows flowsheeting problems to be equipped with newly developed nonsmooth inside-out algorithms for nonideal vapor–liquid equilibrium calculations that converge reliability, even when the phase regime at the results of these calculations is unknown a priori. 
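In the spirit of the flash and flowsheeting papers above, which stress handling an unknown phase regime, the sketch below solves the Rachford-Rice equation for a constant-K flash and falls back to the single-phase limits when no interior root exists. Constant K-values are a simplifying assumption; the cited nonsmooth inside-out algorithms work with full equation-of-state thermodynamics.

```python
import numpy as np
from scipy.optimize import brentq

def flash_split(z, K):
    """Constant-K isothermal flash with an explicit phase-regime check."""
    z, K = np.asarray(z, float), np.asarray(K, float)
    rr = lambda beta: np.sum(z*(K - 1.0)/(1.0 + beta*(K - 1.0)))  # Rachford-Rice
    # rr is monotonically decreasing in beta, so its sign at 0 and 1 reveals the regime
    if rr(0.0) <= 0.0:
        beta = 0.0                              # all liquid
    elif rr(1.0) >= 0.0:
        beta = 1.0                              # all vapor
    else:
        beta = brentq(rr, 1e-12, 1.0 - 1e-12)   # two-phase root on (0, 1)
    x = z/(1.0 + beta*(K - 1.0))                # liquid composition
    y = K*x                                     # vapor composition (meaningful if beta > 0)
    return beta, x, y

# three-component feed with illustrative (assumed) K-values
beta, x, y = flash_split(z=[0.3, 0.4, 0.3], K=[2.5, 1.1, 0.3])
print("vapor fraction:", round(beta, 4), " x:", x.round(4), " y:", y.round(4))
```

Clamping the vapor fraction to the interval [0, 1] in this way plays a role loosely analogous to the nonsmooth formulations discussed above, which resolve the phase regime as part of the equation system rather than by an explicit branch.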
Furthermore, process models for inherently nonsmooth unit operations may be seamlessly integrated into process flowsheets, so long as computationally relevant generalized derivative information is computed correctly and communicated to the flowsheet convergence algorithm. These techniques may be used in either sequential-modular simulations or simulations in which the most challenging modules are solved using tailored external procedures, while the remaining flowsheet equations are solved simultaneously. This new nonsmooth flowsheeting ... --- paper_title: Simulation of biomass gasification with a hybrid neural network model. paper_content: Gasification of several types of biomass has been conducted in a fluidized bed gasifier at atmospheric pressure with steam as the fluidizing medium. In order to obtain the gasification profiles for each type of biomass, an artificial neural network model has been developed to simulate this gasification processes. Model-predicted gas production rates in this biomass gasification processes were consistent with the experimental data. Therefore, the gasification profiles generated by neural networks are considered to have properly reflected the real gasification process of a biomass. Gasification profiles identified by neural network suggest that gasification behavior of arboreal types of biomass is significantly different from that of herbaceous ones. --- paper_title: Computationally relevant generalized derivatives: theory, evaluation and applications paper_content: A new method for evaluating generalized derivatives in nonsmooth problems is reviewed. Lexicographic directional (LD-)derivatives are a recently developed tool in nonsmooth analysis for evaluating generalized derivative elements in a tractable and robust way. Applicable to problems in both steady-state and dynamic settings, LD-derivatives exhibit a number of advantages over current theory and algorithms. As highlighted in this article, the LD-derivative approach now admits a suitable theory for inverse and implicit functions, nonsmooth dynamical systems and optimization problems, among others. Moreover, this technique includes an extension of the standard vector forward mode of automatic differentiation (AD) and acts as the natural extension of classical calculus results to the nonsmooth case in many ways. The theory of LD-derivatives is placed in the context of state-of-the-art methods in nonsmooth analysis, with an application in multistream heat exchanger modelling and design used to illustrate the use... --- paper_title: Simulation of a Dual Mixed Refrigerant LNG Process using a Nonsmooth Framework paper_content: Abstract Natural gas liquefaction is an energy intensive process with very small driving forces at cryogenic temperatures. Small temperature differences arise from the excessive exergy destruction that occurs from irreversible heat transfer at low temperatures. As a result, even a small change in driving forces in the low temperature region can propagate into large exergy losses that must be compensated by additional compression power. Along with the significant investments and operating costs associated with these processes, this demands a robust and accurate simulation tool. Nonsmooth simulation models for single mixed refrigerant processes already exist in the literature. However, these processes are relatively simple, and are normally only considered for small-scale production or floating operations. 
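To illustrate why the generalized derivatives discussed above are useful, the toy example below applies a semismooth Newton iteration to a two-equation system containing a max term, using one valid generalized-Jacobian element at each iterate. The system itself is invented purely for illustration and is far simpler than the flowsheet models in the cited work.

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([max(x, y) - 1.0, x + y - 1.5])

def generalized_jacobian(v):
    """One valid generalized-derivative element: pick the active branch of max."""
    x, y = v
    row1 = np.array([1.0, 0.0]) if x >= y else np.array([0.0, 1.0])
    row2 = np.array([1.0, 1.0])
    return np.vstack([row1, row2])

v = np.array([0.0, 0.0])
for it in range(20):
    Fv = F(v)
    if np.linalg.norm(Fv) < 1e-10:
        break
    v = v - np.linalg.solve(generalized_jacobian(v), Fv)   # semismooth Newton step
print("solution:", v, " iterations:", it, " residual norm:", np.linalg.norm(F(v)))
```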
Other processes, such as dual mixed refrigerant processes, are therefore normally considered for large-scale production of LNG. It is necessary to investigate whether the nonsmooth flowsheeting strategy is capable of also handling these more complex liquefaction processes. This article describes a simulation model for a dual mixed refrigerant process. The model is solved for two cases using the Peng-Robinson equation of state, each solving for a different set of unknown variables. Both cases converged within a few iterations, showing nearly identical results to simulations run in Aspen Plus. --- paper_title: Multistream heat exchanger modeling and design paper_content: A new model formulation and solution strategy for the design and simulation of processes involving multistream heat exchangers (MHEXs) is presented. The approach combines an extension of pinch analysis with an explicit dependence on the heat exchange area in a nonsmooth equation system to create a model which solves for up to three unknown variables in an MHEX. Recent advances in automatic generation of derivative-like information for nonsmooth equations make the method tractable, and the use of nonsmooth equation solving methods make the method very precise. Several illustrative examples and a case study featuring an offshore liquefied natural gas production concept are presented which highlight the flexibility and strengths of the formulation. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3390–3403, 2015 --- paper_title: Use of Latent Variables to Reduce the Dimension of Surrogate Models paper_content: In this paper, we propose extensions for surrogate model fitting based on partial least square (PLS) regression. The method itself consists of a three step procedure, in which first, linear mass balances are established, then PLS regression is used to reduce the number of independent variables, and finally non-linear surrogate models are fitted to the latent variables defined via the PLS regression. As PLS regression looks for relationships between independent and dependent variables, preprocessing of the sampled data was investigated. Preprocessing improves the fit of the surrogate model by a factor of two in a case study given by the ammonia synthesis reactor section. The additional application of process knowledge allows a new grid definition with incorporated dependencies between independent variables resulting in a further improvement of the fit. --- paper_title: A Review of Multiscale Analysis: Examples from Systems Biology, Materials Engineering, and Other Fluid–Surface Interacting Systems paper_content: Abstract Multiscale simulation is an emerging scientific field that spans many disciplines, including physics, chemistry, mathematics, statistics, chemical engineering, mechanical engineering, and materials science. This review paper first defines this new scientific field and outlines its objectives. An overview of deterministic, continuum models and discrete, particle models is then given. Among discrete, particle models, emphasis is placed on Monte Carlo stochastic simulation methods in well-mixed and spatially distributed systems. Next, a classification of multiscale methods is carried out based on separation of length and time scales and the computational and mathematical approach taken. Broadly speaking, hybrid simulation and coarse graining or mesoscopic modeling are identified as two general and complementary approaches of multiscale modeling. 
The former is further classified into onion- and multigrid-type simulation depending on length scales and the presence or not of gradients. Several approaches, such as the net event, the probability weighted, the Poisson and binomial τ-leap, and the hybrid, are discussed for acceleration of stochastic simulation. In order to demonstrate the unifying principles of multiscale simulation, examples from different areas are discussed, including systems biology, materials growth and other reacting systems, fluids, and statistical mechanics. While the classification is general and examples from other scales and tools are touched upon, in this review emphasis is placed on stochastic models, their coarse graining, and their integration with continuum deterministic models, i.e., on the coupling of mesoscopic and macroscopic scales. The concept of hierarchical multiscale modeling is discussed at some length. Finally, the importance of systems-level tools such as sensitivity analysis, parameter estimation, optimization, control, model reduction, and bifurcation in multiscale analysis is underscored. --- paper_title: Bio-fuels from thermochemical conversion of renewable resources: A review paper_content: Demand for energy and its resources is increasing every day due to the rapid growth of population and urbanization. As the major conventional energy resources like coal, petroleum and natural gas are on the verge of depletion, biomass can be considered as one of the promising environmentally friendly renewable energy options. Different thermo-chemical conversion processes that include combustion, gasification, liquefaction, hydrogenation and pyrolysis have been used to convert biomass into various energy products. Although pyrolysis is still at a developing stage, it has received special attention in the current energy scenario because it can convert biomass directly into solid, liquid and gaseous products by thermal decomposition in the absence of oxygen. In this review article the focus is on pyrolysis, while other conventional processes are discussed in brief. For better insight, various types of pyrolysis processes are discussed in detail, including slow, fast, flash and catalytic pyrolysis processes. Besides biomass resources and constituents, the composition and uses of pyrolysis products are also discussed in detail. This review article aims to focus on various operational parameters, viz. temperature and particle size of biomass, and on product yields for various types of biomasses. --- paper_title: Modeling of biomass gasification: A review paper_content: Biomass is being considered seriously as a source of energy generation worldwide. Among the various routes available for biomass based energy generation, biomass gasification is one of the most important routes that are being studied extensively. Biomass gasification is a thermo-chemical conversion process of biomass materials within a reactor. A number of inter-related parameters concerning the type of fuel, the reactor design and the operating conditions affect the functioning of the gasifier. Understanding of this working principle is essential for the end user. The end user may be an individual who is interested in the output of the gasifier, a reactor manufacturer who is interested in developing the optimal design, or a planner who requires a gasifier that will give the best performance for a specific fuel type.
Research and development in both the experimental and computational aspects of gasification have been extensive. Computational modeling tools are advantageous in many situations because they allow the user to find the optimum conditions for a given reactor without resorting to actual experimentation, which is both time consuming and expensive. Modeling work on the gasification process requires a systematic, logical analysis in order to efficiently disseminate the embedded information. An attempt has been made in this study to categorise the recent modeling works based on certain specific criteria such as type of gasifier, feedstock, modeling considerations and evaluated parameters. Comparative assessments are made of the modeling techniques and output for each category of the models. The information is anticipated to be useful for researchers, end users, and planners. --- paper_title: Combining coal gasification and natural gas reforming for efficient polygeneration paper_content: Abstract A techno-economic analysis of several process systems to convert coal and natural gas to electricity, methanol, diesel, and gasoline is presented. For these polygeneration systems, a wide range of product portfolios and market conditions are considered, including the implementation of a CO2 emissions tax policy and optional carbon capture and sequestration technology. A new strategy is proposed in which natural gas reforming is used to cool the gasifier, rather than steam generation. Simulations along with economic analyses show that this strategy provides increased energy efficiency and can be the optimal design choice in many market scenarios. --- paper_title: Top ten fundamental challenges of biomass pyrolysis for biofuels paper_content: Pyrolytic biofuels have technical advantages over conventional biological conversion processes since the entire plant can be used as the feedstock (rather than only simple sugars) and the conversion process occurs in only a few seconds (rather than hours or days). Despite decades of study, the fundamental science of biomass pyrolysis is still lacking, and detailed models capable of describing the chemistry and transport in real-world reactors are unavailable. Developing these descriptions is a challenge because of the complexity of feedstocks and the multiphase nature of the conversion process. Here, we identify ten fundamental research challenges that, if overcome, would facilitate commercialization of pyrolytic biofuels. In particular, fundamental descriptions of condensed-phase pyrolysis chemistry (i.e., elementary reaction mechanisms) are needed since they would allow for accurate process optimization as well as feedstock flexibility, both of which are critical to any modern high-throughput process. Despite the benefits to pyrolysis commercialization, detailed chemical mechanisms are not available today, even for major products such as levoglucosan and hydroxymethylfurfural (HMF). Additionally, accurate estimates for heat and mass transfer parameters (e.g., thermal conductivity, diffusivity) are lacking despite the fact that biomass conversion in commercial pyrolysis reactors is controlled by transport. Finally, we examine methods for improving pyrolysis particle models, which connect fundamental chemical and transport descriptions to real-world pyrolysis reactors.
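A minimal quantitative counterpart to the pyrolysis modeling challenges described above is the one-step global kinetic scheme below (biomass to volatiles plus char, with Arrhenius kinetics at an assumed, uniform particle temperature). The rate parameters and char yield are illustrative placeholders, not values from the cited reviews, and real particle models add the heat- and mass-transfer resistances discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# one-step global pyrolysis kinetics: biomass -> volatiles + char
A, Ea, R = 1.0e8, 1.2e5, 8.314        # 1/s, J/mol, J/(mol K)  (assumed values)
nu_char = 0.2                          # assumed char yield on reacted biomass

def rhs(t, m, T):
    biomass, volatiles, char = m
    k = A*np.exp(-Ea/(R*T))            # Arrhenius rate constant
    r = k*biomass
    return [-r, (1.0 - nu_char)*r, nu_char*r]

T = 773.0                              # particle temperature, K (assumed isothermal)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], args=(T,), max_step=0.05)
print("biomass conversion after 10 s:", round(1.0 - sol.y[0, -1], 4))
```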
Each of the ten challenges is presented with a brief review of relevant literature followed by future directions which can ultimately lead to technological breakthroughs that would facilitate commercialization of pyrolytic biofuels. --- paper_title: Dynamic simulation and control of an integrated gasifier/reformer system. Part II: Discrete and model predictive control paper_content: Abstract Part I of this series presented an analysis of a multi-loop proportional-integral (PI) control system for an integrated coal gasifier/steam methane reformer system, operating in both counter-current and co-current configurations, for syngas production in a flexible polygeneration plant. In this work, a discrete-PI control system and an offset-free linear model predictive controller (MPC) are presented for the co-current configuration to address process interactions and sampling delay. The MPC model was identified from ‘data’ derived from simulations of the rigorous plant model, with a Luenberger observer augmented to the MPC, to estimate and eliminate plant-model mismatch. MPC offered superior set point tracking relative to discrete-PI control, especially in cases where discrete-PI destabilized the system. The offset-free MPC was developed to solve in less than a second to facilitate online deployment. --- paper_title: Integrated Process Simulation and CFD for Improved Process Engineering paper_content: Abstract This paper describes recent efforts to seamlessly integrate process simulation and computational fluid dynamics (CFD) using open standard interfaces for computer-aided process engineering. A reaction-separation-recycle flowsheet coupled with a CFD stirred tank reactor model is presented as an example to demonstrate the applicability of the integration approach and its potential to improve process engineering. The results show that the combined simulation offers new opportunities to analyze and optimize overall plant performance with respect to mixing and fluid flow behavior. --- paper_title: Simultaneous process synthesis, heat, power, and water integration of thermochemical hybrid biomass, coal, and natural gas facilities paper_content: Abstract A comprehensive wastewater network is introduced into a thermochemical based process superstructure that will convert biomass, coal, and natural gas to liquid (CBGTL) transportation fuels. The mixed-integer nonlinear optimization (MINLP) model includes simultaneous heat, power, and water integration that utilizes heat engines to recover electricity from waste heat and several treatment units to process and recycle wastewater. A total of 108 case studies are analyzed which consist of combinations of six coal feedstocks, three biomass feedstocks, three plant capacities, and two process superstructures. This study discusses important process topological differences between the case studies and illustrates each component of the process synthesis framework using the two medium-sized capacity case studies that have low-volatile bituminous coal and biomass feedstocks. --- paper_title: Combining coal gasification, natural gas reforming, and solid oxide fuel cells for efficient polygeneration with CO2 capture and sequestration paper_content: Several polygeneration process systems are presented which convert natural gas and coal to gasoline, diesel, methanol, and electricity. 
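The discrete-PI layer discussed in the gasifier/reformer control study above can be illustrated with the velocity-form controller below acting on a first-order process sampled with a zero-order hold. The process and tuning parameters are assumptions chosen only to show the update law, not the controllers designed in the cited work.

```python
import numpy as np

# first-order process  tau*dy/dt = -y + Kp_proc*u, sampled with period dt
Kp_proc, tau, dt = 2.0, 5.0, 0.5
a = np.exp(-dt/tau)                      # exact zero-order-hold discretization
b = Kp_proc*(1.0 - a)

# velocity-form discrete PI:  du_k = Kc*(e_k - e_{k-1}) + (Kc*dt/tau_I)*e_k
Kc, tau_I = 0.8, 4.0
y, u, e_prev, setpoint = 0.0, 0.0, 0.0, 1.0
for k in range(60):
    e = setpoint - y
    u += Kc*(e - e_prev) + (Kc*dt/tau_I)*e
    e_prev = e
    y = a*y + b*u                        # process response over one sample interval
print("output after 30 time units:", round(y, 4))
```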
By using solid oxide fuel cells as the primary electricity generator, the presented systems improve upon a recently introduced concept by which natural gas is reformed inside the radiant cooler of a gasifier. Simulations and techno-economic analyses performed for a wide range of process configurations and market conditions show that this strategy results in significant efficiency and profitability improvements when CO2 capture and sequestration are employed. Market considerations for this analysis include variations in purchase prices of the coal and natural gas, sale prices of the products, and CO2 emission tax rates. --- paper_title: Toward Novel Hybrid Biomass, Coal, and Natural Gas Processes for Satisfying Current Transportation Fuel Demands, 1: Process Alternatives, Gasification Modeling, Process Simulation, and Economic Analysis paper_content: This paper, which is the first part of a series of papers, introduces a hybrid coal, biomass, and natural gas to liquids (CBGTL) process that can produce transportation fuels in ratios consistent with current U.S. transportation fuel demands. Using the principles of the H2Car process, an almost-100% feedstock carbon conversion is attained using hydrogen produced from a carbon or noncarbon source and the reverse water-gas-shift reaction. Seven novel process alternatives that illustrate the effect of feedstock, hydrogen source, and light gas treatment on the process are considered. A complete process description is presented for each section of the CBGTL process including syngas generation, syngas treatment, hydrocarbon generation, hydrocarbon upgrading, and hydrogen generation. Novel mathematical models for biomass and coal gasification are developed to model the nonequilibrium effluent conditions using a stoichiometry-based method. Input−output relationships are derived for all vapor-phase components, cha... --- paper_title: Multi-scale Optimization for Advanced Energy Processes paper_content: Abstract Advanced energy systems demand powerful and systematic optimization strategies for analysis, high performance design and efficient operation. Such processes are modeled through a heterogeneous collection of device-scale and process scale models, which contain distributed and lumped parameter models of varying complexity. This work addresses the integration and optimization of advanced energy models through multi-scale optimization strategies. In particular, we consider the optimal design of advanced energy processes by merging device-scale (e.g., CFD) models with flowsheet simulation models through sophisticated model reduction strategies. Recent developments in surrogate-based optimization have led to a general decomposition framework with multiple scales and convergence guarantees to the overall multi-scale optimum. Here, we sketch two trust region-based algorithms, one requiring gradients from the detailed model and one that is derivative-free; both demonstrate multi-scale optimization of advanced energy processes. Motivated by an advanced Integrated Gasification Combined Cycle (IGCC) process, we present two case studies that include PSA models for carbon capture and CFD models for gasification and combustion. 
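The break-even prices quoted in the techno-economic studies above come from calculations of the following general form: annualize the capital with a capital recovery factor, add operating costs, and divide by annual product output. Every number below is made up for illustration; only the structure of the calculation mirrors such analyses.

```python
# illustrative (made-up) numbers, not taken from the cited studies
capex = 1.8e9            # total capital investment, $
annual_opex = 2.4e8      # operating cost, $/yr
fuel_output = 1.2e7      # barrels of liquid product per year
discount_rate, lifetime = 0.08, 30

# capital recovery factor converts CAPEX into an equivalent annual charge
crf = discount_rate*(1 + discount_rate)**lifetime / ((1 + discount_rate)**lifetime - 1)
annualized_capex = crf*capex

break_even_price = (annualized_capex + annual_opex)/fuel_output   # $/bbl
print(f"break-even product price: {break_even_price:.2f} $/bbl")
```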
--- paper_title: Metabolic engineering of microorganisms for biofuels production: from bugs to synthetic biology to fuels paper_content: Metabolic engineering of microorganisms for biofuels production: from bugs to synthetic biology to fuels Sung Kuk Lee, Howard Chou, Timothy S Ham, Taek Soon Lee, and Jay D Keasling ABSTRACT The ability to generate microorganisms that can produce biofuels similar to petroleum-based transportation fuels would allow the use of existing engines and infrastructure and would save an enormous amount of capital required for replacing the current infrastructure to accommodate biofuels that have properties significantly different from petroleum-based fuels. Several groups have demonstrated the feasibility of manipulating microbes to produce molecules similar to petroleum-derived products, albeit at relatively low productivity (e.g. maximum butanol production is around 20 g/L). For cost-effective production of biofuels, the fuel-producing hosts and pathways must be engineered and optimized. Advances in metabolic engineering and synthetic biology will provide new tools for metabolic engineers to better understand how to rewire the cell in order to create the desired phenotypes for the production of economically viable biofuels. INTRODUCTION Alternative transportation fuels are in high demand owing to concerns about climate change, the global petroleum supply, and energy security [ 1,2]. Currently, the most widely used biofuels are ethanol generated from starch (corn) or sugar cane and biodiesel produced from vegetable oil or animal fats [3 ]. However, ethanol is not an ideal fuel molecule in that it is not compatible with the existing fuel infrastructure for distribution and storage owing to its corrosivity and high hygroscopicity [1,4 ]. Also, it contains only about 70% of the energy content of gasoline. Biodiesel has similar problems (URL: http:// www.bdpedia.com/biodiesel/alt/alt.html): it cannot be transported in pipelines because its cloud and pour points are higher than those for petroleum diesel (petrodiesel), and its energy content is approximately 11% lower than that of petrodiesel. Furthermore, both ethanol and bio-diesel are currently produced from limited agricultural resources, even though there is a large, untapped resource of plant biomass (lignocellulose) that could be utilized as a renewable source of carbon-neutral, liquid fuels [ 5]. Microbial production of transportation fuels from renew-able lignocellulose has several advantages. First, the production is not reliant on agricultural resources commonly used for food, such as corn, sugar cane, soybean, and palm oil. Second, lignocellulose is the most abundant biopolymer on earth. Third, new biosynthetic pathways can be engineered to produce fossil-fuel replacements, including short- chain, branched-chain, and cyclic alcohols, alkanes, alkenes, esters and aromatics. The development of cost-effective and energy-efficient processes to convert lignocellulose into fuels is hampered by significant roadblocks, including the lack of genetic engineering tools for native producer organisms (non-model organ-isms), and difficulties in optimizing metabolic pathways and balancing the redox state in the engineered microbes [ 6]. Furthermore, production potentials are limited by the low activity of pathway enzymes and the inhibitory effect of fuels and byproducts from the upstream biomass processing steps on microorganisms responsible for producing fuels. 
Recent advances in synthetic biology and metabolic engineering will make it possible to overcome these hurdles and engineer microorganisms for the cost- effective production of biofuels from cellulosic biomass. In this review, we examine the range of choices available as potential biofuel candidates and production hosts, review the recent methods used to produce biofuels, and discuss how tools from the fields of metabolic engineering and synthetic biology can be applied to produce transportation fuels using genetically engineered micro-organisms. Liquid fuels and alternative biofuel molecules An understanding of what makes a good fuel is important in order to retool microorganisms to produce more useful alternative biofuels. The best fuel targets for the near term will be molecules that are already found in or similar to components of fossil-based fuel in order to be compatible with existing engines (spark ignition engine for gasoline, compression ignition engine for diesel fuel, and gas turbine for jet fuel). There are several relevant factors to consider when designing biofuel candidates ( Table 1). Energy contents, the combustion quality described by octane or cetane number, volatility, freezing point, --- paper_title: Oxy-fuel combustion of pulverized coal: Characterization, fundamentals, stabilization and CFD modeling paper_content: Oxy-fuel combustion has generated significant interest since it was proposed as a carbon capture technology for newly built and retrofitted coal-fired power plants. Research, development and demonstration of oxy-fuel combustion technologies has been advancing in recent years; however, there are still fundamental issues and technological challenges that must be addressed before this technology can reach its full potential, especially in the areas of combustion in oxygen-carbon dioxide environments and potentially at elevated pressures. This paper presents a technical review of oxy-coal combustion covering the most recent experimental and simulation studies, and numerical models for sub-processes are also used to examine the differences between combustion in an oxidizing stream diluted by nitrogen and carbon dioxide. The evolution of this technology from its original inception for high temperature processes to its current form for carbon capture is introduced, followed by a discussion of various oxy-fuel systems proposed for carbon capture. Of all these oxy-fuel systems, recent research has primarily focused on atmospheric air-like oxy-fuel combustion in a CO2-rich environment. Distinct heat and mass transfer, as well as reaction kinetics, have been reported in this environment because of the difference between the physical and chemical properties of CO2 and N2, which in turn changes the flame characteristics. By tracing the physical and chemical processes that coal particles experience during combustion, the characteristics of oxy-fuel combustion are reviewed in the context of heat and mass transfer, fuel delivery and injection, coal particle heating and moisture evaporation, devolatilization and ignition, char oxidation and gasification, as well as pollutants formation. Operation under elevated pressures has also been proposed for oxy-coal combustion systems in order to improve the overall energy efficiency. The potential impact of elevated pressures on oxy-fuel combustion is discussed when applicable. 
Narrower flammable regimes and lower laminar burning velocity under oxy-fuel combustion conditions may lead to new stability challenges in operating oxy-coal burners. Recent research on stabilization of oxy-fuel combustion is reviewed, and some guiding principles for retrofit are summarized. Distinct characteristics in oxy-coal combustion necessitate modifications of CFD sub-models because the approximations and assumptions for air-fuel combustion may no longer be valid. Advances in sub-models for turbulent flow, heat transfer and reactions in oxy-coal combustion simulations, and the results obtained using CFD are reviewed. Based on the review, research needs in this combustion technology are suggested. --- paper_title: Polygeneration of fuels and chemicals paper_content: Research advances in the rapidly growing field of polygeneration are highlighted. Although ‘polygeneration’ has had many meanings, the chemical engineering community has overwhelmingly settled on a meaning which describes a process that co-produces at least two products: electricity, and at least one chemical or fuel via a thermochemical route that does not rely on petroleum. The production of syngas is almost always the primary intermediate for energy conversion, but the feeds, products, technologies, and pathways vary widely. However, the choice of the most optimal polygeneration system is highly dependent on circumstance, and often results in systems with only one fuel or chemical co-produced with electricity. Conversely, the synergistic use of multiple types of feedstocks can have important profitability benefits. --- paper_title: Synthetic biology and biomass conversion: a match made in heaven? paper_content: To move our economy onto a sustainable basis, it is essential that we find a replacement for fossil carbon as a source of liquid fuels and chemical industry feedstocks. Lignocellulosic biomass, available in enormous quantities, is the only feasible replacement. Many micro-organisms are capable of rapid and efficient degradation of biomass, employing a battery of specialized enzymes, but do not produce useful products. Attempts to transfer biomass-degrading capability to industrially useful organisms by heterologous expression of one or a few biomass-degrading enzymes have met with limited success. It seems probable that an effective biomass-degradation system requires the synergistic action of a large number of enzymes, the individual and collective actions of which are poorly understood. By offering the ability to combine any number of transgenes in a modular, combinatorial way, synthetic biology offers a new approach to elucidating the synergistic action of combinations of biomass-degrading enzymes in vivo and may ultimately lead to a transferable biomass-degradation system. Also, synthetic biology offers the potential for assembly of novel product-formation pathways, as well as mechanisms for increased solvent tolerance. Thus, synthetic biology may finally lead to cheap and effective processes for conversion of biomass to useful products. --- paper_title: CFD modeling to study fluidized bed combustion and gasification paper_content: Abstract The increase in application of fluidized bed combustion and gasification devices throughout world means that more consideration will be given to improve design and reduce emissions of these. Due to excellent thermal and mixing properties fluidized beds are generally preferred over the fixed bed combustors and gasifiers. 
Computational Fluid Dynamic (CFD) is a technique which helps to optimize the design and operation of fluidized bed combustor and gasifiers. Recent progression in numerical techniques and computing efficacy has advanced CFD as a widely used practice to provide efficient design solutions in fluidized bed industry. In this paper an extensive review of CFD modeling to study combustion and gasification in fluidized beds has been done. This paper introduces the fundamentals involved in developing a CFD solution for fluidized bed combustion and gasification. Mathematical equations governing the fluid flow, heat and mass transfer and chemical reactions in fluidized bed combustion and gasifiers systems are described and main CFD models are presented. The aim is to illustrate what can be done and also to identify trends and those areas where further work is needed. --- paper_title: Optimization of IGCC processes with reduced order CFD models paper_content: Abstract Integrated gasification combined cycle (IGCC) plants have significant advantages for efficient power generation with carbon capture. Moreover, with the development of accurate CFD models for gasification and combined cycle combustion, key units of these processes can now be modeled more accurately. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges. This study describes the development and demonstration of a reduced order modeling (ROM) framework for these tasks. The approach builds on the concepts of co-simulation and ROM development for process units described in earlier studies. Here we show how the ROMs derived from both gasification and combustion units can be integrated within an equation-oriented simulation environment for the overall optimization of an IGCC process. In addition to a systematic approach to ROM development, the approach includes validation tasks for the CFD model as well as closed-loop tests for the integrated flowsheet. This approach allows the application of equation-based nonlinear programming algorithms and leads to fast optimization of CFD-based process flowsheets. The approach is illustrated on two flowsheets based on IGCC technology. --- paper_title: Process/equipment co-simulation for design and analysis of advanced energy systems paper_content: Abstract The grand challenge facing the power and energy industries is the development of efficient, environmentally friendly, and affordable technologies for next-generation energy systems. To provide solutions for energy and the environment, the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL) and its research partners in industry and academia are relying increasingly on the use of sophisticated computer-aided process design and optimization tools. In this paper, we describe recent progress toward developing an Advanced Process Engineering Co-Simulator (APECS) for the high-fidelity design, analysis, and optimization of energy plants. The APECS software system combines steady-state process simulation with multiphysics-based equipment simulations, such as those based on computational fluid dynamics (CFD). These co-simulation capabilities enable design engineers to optimize overall process performance with respect to complex thermal and fluid flow phenomena arising in key plant equipment items, such as combustors, gasifiers, turbines, and carbon capture devices. 
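The CFD-based reduced-order models referenced above are often built by proper orthogonal decomposition of simulation snapshots; the sketch below applies the same SVD machinery to synthetic one-dimensional "snapshots" standing in for detailed CFD fields. The snapshot-generating function and truncation tolerance are arbitrary illustrative choices.

```python
import numpy as np

# synthetic "CFD" snapshots: 1-D profiles for several operating conditions
x = np.linspace(0.0, 1.0, 200)
conditions = np.linspace(0.5, 2.0, 25)
snapshots = np.array([np.tanh(5*c*(x - 0.4)) + 0.1*np.sin(8*np.pi*x)/c
                      for c in conditions]).T          # grid points x conditions

# proper orthogonal decomposition: SVD of the mean-subtracted snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2)/np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1            # modes capturing 99.9% energy
print("reduced basis size:", r, "of", snapshots.shape[1])

# project a new (unseen) snapshot onto the reduced basis and reconstruct it
new = np.tanh(5*1.3*(x - 0.4)) + 0.1*np.sin(8*np.pi*x)/1.3
coeffs = U[:, :r].T @ (new[:, None] - mean)
recon = (mean + U[:, :r] @ coeffs).ravel()
print("max reconstruction error:", np.abs(recon - new).max())
```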
In this paper we review several applications of the APECS co-simulation technology to advanced energy systems, including coal-fired energy plants with carbon capture. This paper also discusses ongoing co-simulation R&D activities and challenges in areas such as CFD-based reduced-order modeling, knowledge management, advanced analysis and optimization, and virtual plant co-simulation. Continued progress in co-simulation technology – through improved integration, solution, and deployment – will have profound positive impacts on the design and optimization of high-efficiency, near-zero emission fossil energy systems. --- paper_title: Baseline Flowsheet Model for IGCC with Carbon Capture paper_content: Integrated gasification combined cycle (IGCC) processes have the potential for high thermal efficiency with a low energy penalty for carbon capture. Many researchers have proposed various innovations to improve upon the efficiency of the IGCC process. However, the analysis methods of most publications are generally not transparent and these published results are exceedingly difficult to reproduce. The National Energy Technology Laboratory (NETL) report [Cost and Performance Baseline for Fossil Energy Plants: Bituminous Coal and Natural Gas to Electricity Final Report; U.S. Department of Energy, Office of Fossil Energy, NETL, DOE/NETL-2010/1397, 2010] is widely used as a reference by researchers and by industry. To enable researchers to have a consistent and transparent framework for analyzing IGCC flowsheets and potential innovations, a baseline model derived from the NETL report is described herein and the corresponding flowsheet model and its full documentation are available online from this journal for... --- paper_title: High-efficiency power production from coal with carbon capture paper_content: A zero-emissions power plant with high efficiency is presented. Syngas, produced by the gasification of coal, is shifted to produce H2 which in turn fuels stacks of solid oxide fuel cells. Because the fuel cells maintain separate anode and cathode streams, air can be used as the oxygen source without diluting the fuel exhaust with nitrogen. This enables recovery of CO2 from the exhaust with a very small energy penalty. As a result, an absorption-based CO2 recovery process is avoided, as well as the production of large quantities of high-purity O2, allowing a high overall thermal efficiency and essentially eliminating the energy penalty for carbon capture. © 2010 American Institute of Chemical Engineers AIChE J, 2010 --- paper_title: Municipal solid waste to liquid transportation fuels – Part I: Mathematical modeling of a municipal solid waste gasifier paper_content: Abstract This paper presents a generic gasifier model towards the production of liquid fuels using municipal solid waste (MSW) as a feedstock. The MSW gasification has been divided into three zones: pyrolysis, oxidation, and reduction. The pyrolysis zone has been mathematically modeled with an optimization based monomer model. Then, the pyrolysis, oxidation, and reduction zones are defined with different chemical reactions and equations in which some extents of these reactions are not known a priori. Using a nonlinear parameter estimation approach, the unknown gasification parameters are obtained to match the experimental gasification results in the best possible way. The results suggest that a generic MSW gasifier mathematical model can be obtained in which the average error is 8.75%. 
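The nonlinear parameter estimation step described for the MSW gasifier model above can be imitated with a toy least-squares fit: unknown lumped reaction extents are adjusted so that a simple composition model matches "measured" syngas fractions. Both the response model and the data below are invented; only the estimation pattern (bounded nonlinear least squares) reflects the cited approach.

```python
import numpy as np
from scipy.optimize import least_squares

# "measured" dry syngas composition (mole fractions) -- illustrative values only
y_meas = np.array([0.22, 0.38, 0.10, 0.30])      # CO, H2, CH4, CO2

def model(params):
    """Toy gasifier response: syngas composition as a smooth function of two
    lumped reaction-extent parameters (a stand-in for the zone model)."""
    e1, e2 = params                               # extents of gasification / shift
    co  = 0.30*e1 - 0.10*e2
    h2  = 0.25*e1 + 0.15*e2
    ch4 = 0.15*(1.0 - e1)
    co2 = 0.10*e1 + 0.10*e2
    y = np.array([co, h2, ch4, co2])
    return y/np.sum(y)                            # normalize to mole fractions

residual = lambda p: model(p) - y_meas
fit = least_squares(residual, x0=[0.5, 0.5], bounds=([0.0, 0.0], [1.0, 1.0]))
print("estimated extents:", fit.x.round(3))
print("mean absolute error:", np.abs(fit.fun).mean().round(4))
```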
The mathematical model of the MSW gasifier is of major importance since it can be a part of a process superstructure towards the production of liquid transportation fuels. --- paper_title: Bioconversion of lignocellulosic biomass: biochemical and molecular perspectives paper_content: In view of rising prices of crude oil due to increasing fuel demands, the need for alternative sources of bioenergy is expected to increase sharply in the coming years. Among potential alternative bioenergy resources, lignocellulosics have been identified as the prime source of biofuels and other value-added products. Lignocelluloses as agricultural, industrial and forest residuals account for the majority of the total biomass present in the world. To initiate the production of industrially important products from cellulosic biomass, bioconversion of the cellulosic components into fermentable sugars is necessary. A variety of microorganisms including bacteria and fungi may have the ability to degrade the cellulosic biomass to glucose monomers. Bacterial cellulases exist as discrete multi-enzyme complexes, called cellulosomes that consist of multiple subunits. Cellulolytic enzyme systems from the filamentous fungi, especially Trichoderma reesei, contain two exoglucanases or cellobiohydrolases (CBH1 and CBH2), at least four endoglucanases (EG1, EG2, EG3, EG5), and one β-glucosidase. These enzymes act synergistically to catalyse the hydrolysis of cellulose. Different physical parameters such as pH, temperature, adsorption, chemical factors like nitrogen, phosphorus, presence of phenolic compounds and other inhibitors can critically influence the bioconversion of lignocellulose. The production of cellulases by microbial cells is governed by genetic and biochemical controls including induction, catabolite repression, or end product inhibition. Several efforts have been made to increase the production of cellulases through strain improvement by mutagenesis. Various physical and chemical methods have been used to develop bacterial and fungal strains producing higher amounts of cellulase, all with limited success. Cellulosic bioconversion is a complex process and requires the synergistic action of the three enzymatic components consisting of endoglucanases, exoglucanases and β-glucosidases. The co-cultivation of microbes in fermentation can increase the quantity of the desirable components of the cellulase complex. An understanding of the molecular mechanism leading to biodegradation of lignocelluloses and the development of the bioprocessing potential of cellulolytic microorganisms might effectively be accomplished with recombinant DNA technology. For instance, cloning and sequencing of the various cellulolytic genes could economize the cellulase production process. Apart from that, metabolic engineering and genomics approaches have great potential for enhancing our understanding of the molecular mechanism of bioconversion of lignocelluloses to value added economically significant products in the future. --- paper_title: Municipal solid waste to liquid transportation fuels – Part II: Process synthesis and global optimization strategies paper_content: Abstract This paper investigates the production of liquid transportation fuels from municipal solid waste (MSW). A comprehensive process synthesis superstructure is utilized that incorporates a novel mathematical model for MSW gasification. 
The production of liquid products proceeds through a synthesis gas intermediate that can be converted into Fischer–Tropsch hydrocarbons or methanol. The methanol can be converted into either gasoline or olefins, and the olefins may subsequently be converted into gasoline and distillate. Simultaneous heat, power, and water integration is included within the process synthesis framework to minimize utilities costs. A rigorous deterministic global optimization branch-and-bound strategy is utilized to minimize the overall cost of the waste-to-liquids (WTL) refinery and determine the optimal process topology. Several case studies are presented to illustrate the process synthesis framework and the nonconvex mixed-integer nonlinear optimization model presented in this paper. This is the first study that explores the possibility of liquid fuels production from municipal solid waste utilizing a process synthesis approach within a global optimization framework. The results suggest that the production of liquid fuels from MSW is competitive with petroleum-based processes. The effect that the delivered cost of municipal solid waste has on the overall cost of liquids production is also investigated parametrically. --- paper_title: Optimization framework for the simultaneous process synthesis, heat and power integration of a thermochemical hybrid biomass, coal, and natural gas facility paper_content: Abstract A thermochemical based process superstructure and its mixed-integer nonlinear optimization (MINLP) model are introduced to convert biomass (switchgrass), coal (Illinois #6), and natural gas to liquid (CBGTL) transportation fuels. The MINLP model includes simultaneous heat and power integration utilizing heat engines to recover electricity from the process waste heat. Four case studies are presented to investigate the effect of CO 2 sequestration (CCS) and greenhouse gas (GHG) reduction targets on the process topology along with detailed parametric analysis on the role of biomass and electricity prices. Topological similarities for the case studies include selection of solid/vapor-fueled gasifiers and iron-catalyzed Fischer-Tropsch units that facilitate the reverse water–gas-shift reaction. The break-even oil price was found to be $57.16/bbl for CCS with a 50% GHG reduction, $62.65/bbl for CCS with a 100% GHG reduction, $82.68/bbl for no CCS with a 50% GHG reduction, and $91.71 for no CCS with a 100% GHG reduction. --- paper_title: Dynamic simulation and control of an integrated gasifier/reformer system. Part I: Agile case design and control paper_content: Abstract This two-part series investigates the feasibility of the operation and control of a novel gasifier cooling system which integrates steam methane reformer tubes into a gasifier radiant syngas cooler. This approach capitalizes on available exergy by producing valuable H 2 -rich synthesis gas (syngas) for liquid fuel production. In Part I (this work), an ‘agile’ device design was developed for both counter-current and co-current flow configurations, wherein a PI control structure was designed to achieve performance objectives. Key trade-offs were found between the configurations: the counter-current design was more robust and effective in rejecting moderate and severe gasifier disturbances, while providing greater cooling duty and natural gas throughput, but at the expense of higher tube wall temperatures, which can greatly reduce tube lifetime. 
The co-current design operates in a safer temperature range and satisfactorily rejects moderate disturbances, but requires feedforward control to handle extreme gasifier upsets. Using the co-current design, the flexibility of the device to adjust natural gas throughput based on variations in downstream syngas demand was demonstrated. --- paper_title: Modelling, Comparison and Operation Experiences of Entrained Flow Gasifier paper_content: Abstract In this paper a generic entrained flow gasifier is modelled in Aspen Plus using different designs, e.g. wet/dry feed as well as wet and dry quench. The models are verified against the corresponding data from existing plants or against reference data from the literature. All of them are found to reproduce the raw gas composition as well as the synthesis gas yield with acceptable deviation. The comparison of the selected designs revealed the poor performance of the wet feed compared with the dry design: the corresponding cold gas efficiency of 72.1% is much lower than the 83% achieved in the dry feed cases. Furthermore, the specific synthesis gas production is 12% lower at a 12% higher oxygen demand. On the other hand, the power demand for the gasification island is found to be 60–70% lower than in the dry feed case. Therefore the wet feed design is recommended only in the case of high-pressure gasification. During an exergy analysis of the different raw gas cooling concepts, the disadvantage of the direct quench became obvious: combinations of gas quench and heat recovery result in exergy losses of 52.4%, whereas the heat recovery design showed a much better exergy efficiency of 63.8%. The wet quench design in particular showed high losses and is therefore restricted to applications incorporating a downstream shift unit. --- paper_title: Polygeneration as a future sustainable energy solution – A comprehensive review paper_content: Integrating multiple utility outputs to obtain a more efficient system has proven to be a good option. After cogeneration and trigeneration, polygeneration emerges as a possible sustainable solution with optimum resource utilization, better efficiency and environmental friendliness. Several possible polygeneration schemes have been conceptualized and their performance assessed theoretically in the literature. Both the inputs and the outputs vary across these reported works, and a few prototype developments and experimental analyses are also reported. Several optimization tools based on objective functions are used to develop efficient polygeneration systems. Assessment criteria for polygeneration are also multidimensional and may be defined on a case-by-case basis with a definite objective. In this paper a comprehensive review of the available literature is carried out to assess the status of polygeneration as a possible sustainable energy solution. Possible future research directions in this field are also identified at the end of the review. --- paper_title: Modelling coal gasification with CFD and discrete phase method paper_content: In the present paper the authors describe a computational fluid dynamics model of a two-stage, oxygen-blown, entrained flow, coal slurry gasifier for use in advanced power plant simulations. The discrete phase method is used to simulate the coal slurry flow. The physical and chemical processing of coal slurry gasification is implemented by calculating the discrete phase trajectory using a Lagrangian formulation.
The particle tracking is coupled with specific physical processes, in which the coal particles sequentially undergo moisture release, vaporisation, devolatilisation, char oxidation and char gasification. Using specified plant boundary conditions, the gasification model predicts a synthesis gas composition that is very close to the values calculated by a restricted equilibrium reactor model tuned to represent typical experimental data. The char conversions are 100% and 86% for the first and second stages, respectively. --- paper_title: Modelling, simulation and design of an integrated radiant syngas cooler and steam methane reformer for use with coal gasification paper_content: Abstract In this work, a novel process intensification design is proposed to integrate the Radiant Syngas Cooler (RSC) utilised to cool the coal-derived synthesis gas in entrained-bed gasifiers with a steam methane reformer (SMR). The feasibility of the proposed integrated system is analyzed by developing a rigorous, dynamic, multi-dimensional model and establishing design heuristics for the integrated system. Two different flow configurations are explored: co-current and counter-current. The simulation results show that the proposed concept is feasible, allowing methane conversions as high as 80% in co-current mode and 88% in counter-current mode. The results also demonstrate that the counter-current design, although it provides higher conversion and cooling duty than the co-current design, is constrained by the tube wall material limits. Our analysis shows that the total avoided CO2 emissions are 13.3 tonnes/h when the proposed integrated configuration is used in place of an external reformer for the natural gas feed rates considered in this study. In addition, a sensitivity analysis is performed on key model assumptions and the resulting effect on performance is assessed. The sensitivity results have helped identify key factors to consider prior to pilot-scale implementation and further improvement of agile designs: a one-third reduction in tube length reduces pressure drop by as much as 50% but also reduces methane conversion by 15 percentage points, neglecting slag deposition on the tubes over-predicts performance by only 3%, and a 10% change in the gas emissivity calculations affects the model's prediction of performance by less than 1%. --- paper_title: INTEGRATION OF PRODUCTION PLANNING AND SCHEDULING: OVERVIEW CHALLENGES AND OPPORTUNITIES paper_content: We review the integration of medium-term production planning and short-term scheduling. We begin with an overview of supply chain management and the associated planning problems. Next, we formally define the production planning problem and explain why integration with scheduling leads to better solutions. We present the major modeling approaches for the integration of scheduling and planning decisions, and discuss the major solution strategies. We close with an account of the challenges and opportunities in this area. --- paper_title: Model-size reduction techniques for large-scale biomass production and supply networks paper_content: This paper is concerned with developing several model-size reduction techniques for the analysis of large-scale renewable production and supply networks. These are: (i) reducing the connectivity in a biomass supply chain network, (ii) eliminating unnecessary variables and constraints, and (iii) merging the collection centres.
The proposed model-size reduction techniques brought computational time improvements of several orders of magnitude compared with high-performance linear system solution techniques, with only a small loss in accuracy. When the methods are combined, the time reductions are even more significant. The proposed procedure for combining the methods can be applied to any supply chain model with a large number of components. --- paper_title: Prospective and perspective review in integrated supply chain modelling for the chemical process industry paper_content: There is a large body of work on modelling and optimisation of the supply chain in the chemical process industry. This review summarises the most recent concepts and structural components constituting the supply chain (SC). It describes the enlarged scope presently attributed to supply chain management, which departs from classical approaches focused on operations to a more integrated conception that jointly considers necessary decisions from other business functional areas (e.g. corporate finances, new product development, environmental management), as well as captures the complex dynamics characterizing the supply chain management (SCM) problem. Moreover, new perspectives come into focus for enterprise-wide decision-making through the use of ontologies that provide a general model representation for different decision-support levels at different time and space scales. --- paper_title: Strategic planning, design, and development of the shale gas supply chain network paper_content: The long-term planning of the shale gas supply chain is a relevant problem that has not been addressed before in the literature. This article presents a mixed-integer nonlinear programming (MINLP) model to optimally determine the number of wells to drill at every location, the size of gas processing plants, the section and length of pipelines for gathering raw gas and delivering processed gas and by-products, the power of gas compressors, and the amount of freshwater required from reservoirs for drilling and hydraulic fracturing so as to maximize the net present value of the project. Because the proposed model is a large-scale nonconvex MINLP, we develop a decomposition approach based on successively refining a piecewise linear approximation of the objective function. Results on realistic instances show the importance of heavier hydrocarbons to the economics of the project, as well as the optimal usage of the infrastructure by properly planning the drilling strategy. © 2014 American Institute of Chemical Engineers AIChE J, 60: 2122–2142, 2014 --- paper_title: Waste biomass-to-energy supply chain management: a critical synthesis. paper_content: The development of renewable energy sources has clearly emerged as a promising policy towards enhancing the fragile global energy system with its limited fossil fuel resources, as well as for reducing the related environmental problems. In this context, waste biomass utilization has emerged as a viable alternative for energy production, encompassing a wide range of potential thermochemical, physicochemical and bio-chemical processes. Two significant bottlenecks that hinder increased biomass utilization for energy production are the cost and complexity of its logistics operations. In this manuscript, we present a critical synthesis of the relevant state-of-the-art literature as it applies to all stakeholders involved in the design and management of waste biomass supply chains (WBSCs).
We begin by presenting the generic system components and then the unique characteristics of WBSCs that differentiate them from traditional supply chains. We proceed by discussing state-of-the-art energy conversion technologies along with the resulting classification of all relevant literature. We then recognize the natural hierarchy of the decision-making process for the design and planning of WBSCs and provide a taxonomy of all research efforts as these are mapped on the relevant strategic, tactical and operational levels of the hierarchy. Our critical synthesis demonstrates that biomass-to-energy production is a rapidly evolving research field focusing mainly on biomass-to-energy production technologies. However, very few studies address the critical supply chain management issues, and the ones that do that, focus mainly on (i) the assessment of the potential biomass and (ii) the allocation of biomass collection sites and energy production facilities. Our analysis further allows for the identification of gaps and overlaps in the existing literature, as well as of critical future research areas. --- paper_title: Process industry supply chains: Advances and challenges paper_content: Abstract A large body of work exists in process industry supply chain optimisation. We describe the state of the art of research in infrastructure design, modelling and analysis and planning and scheduling, together with some industrial examples. We draw some conclusions about the degree to which different classes of problem have been solved, and discuss challenges for the future. --- paper_title: Optimal design of sustainable chemical processes and supply chains: A review paper_content: Abstract The importance of balancing social, environmental and economic objectives in companies’ development has cultivated a growing awareness on the sustainable optimal design and planning of supply chains. In the past years, significant research effort has been devoted to extend current approaches to capture these objectives in order to guarantee long term sustainability. Among the various approaches developed, inventory management, product design, production planning and control for remanufacturing, product recovery, reverse logistics and closed-loop supply chains have gained more attention in the literature. In this paper, we review some of the relevant research on sustainable chemical processes and supply chain design focusing on three main areas: (i) sustainable supply chains with respect to energy efficiency and waste management, (ii) environmentally sustainable supply chains and (iii) sustainable water management. The emerging challenges in this area are summarized, and future opportunities are highlighted. --- paper_title: Biomass supply chain design and analysis: Basis, overview, modeling, challenges, and future paper_content: Biofuels are identified as the potential solution for depleting fossil fuel reserves, increasing oil prices, and providing a clean, renewable energy source. The major barrier preventing the commercialization of lignocellulosic biorefineries is the complex conversion process and their respective supply chain. Efficient supply chain management of a lignocellulosic biomass is crucial for success of second generation biofuels. This paper systematically describes energy needs, energy targets, biofuel feedstocks, conversion processes, and finally provides a comprehensive review of Biomass Supply Chain (BSC) design and modeling. 
Specifically, the paper presents a detailed review of mathematical programming models developed for BSC and identifies key challenges and potential future work. This review will provide readers with a starting point for understanding biomass feedstocks and biofuel production as well as detailed analysis of the BSC modeling and design. --- paper_title: Nationwide energy supply chain analysis for hybrid feedstock processes with significant CO2 emissions reduction paper_content: Integrating diverse energy sources to produce cost-competitive fuels requires efficient resource management. An optimization framework is proposed for a nationwide energy supply chain network using hybrid coal, biomass, and natural gas to liquids (CBGTL) facilities, which are individually optimized with simultaneous heat, power, and water integration using 162 distinct combinations of feedstock types, capacities, and carbon conversion levels. The model integrates the upstream and downstream operations of the facilities, incorporating the delivery of feedstocks, fuel products, electricity supply, water, and CO2 sequestration, with their geographical distributions. Quantitative economic trade-offs are established between supply chain configurations that (a) replace petroleum-based fuels by 100%, 75%, and 50% and (b) utilize the current energy infrastructures. Results suggest that cost-competitive fuels for the US transportation sector can be produced using domestically available coal, natural gas, and sustainably harvested biomass via an optimal network of CBGTL plants with significant GHG emissions reduction from petroleum-based processes. © 2012 American Institute of Chemical Engineers AIChE J, 2012 --- paper_title: Supply chain design and optimization: Challenges and opportunities paper_content: Abstract Optimal supply chain design is vital to the success of industrial concerns now more than ever before. This paper reviews some principal research opportunities and challenges in the field of supply chain design. The growing area of enterprise-wide optimization and the increasing importance of energy and sustainability issues provide plentiful opportunities for supply chain design research. However, modeling, algorithmic, and computational challenges arise from these research opportunities. There are three major technical challenge areas where knowledge gaps can be addressed in supply chain design, namely multi-scale challenges, multi-objective and sustainability challenges, and multi-player challenges. This paper provides an overview of opportunity areas, a description of relevant technical challenges, and a perspective on how these challenges might be addressed in supply chain design. Illustrative examples are presented to illuminate avenues for future research. --- paper_title: District heating and cooling: Review of technology and potential enhancements paper_content: District energy systems are reviewed and possible future enhancements involving expanded thermal networks are considered. Various definitions, classifications and applications of district cooling and heating are discussed and elements of a district energy system are described. Also, the integration of combined heat and power (CHP) with district energy, permitting the cogeneration of electricity and heat, is examined from several points of view and for various locations and applications. One of the main advantages of district heating and cooling systems is their environmental benefits, which are explained in detail. 
The economics of a thermal network system, as a major factor in the justification for any project, is elaborated upon from industrial, governmental and societal perspectives. Furthermore, related regulations at government levels are suggested based on various investigations. The efficiency of district energy is discussed and exergy analysis, as an effective method for calculating the efficiency of a thermal network, is explained. Finally, other advantages of the district energy technology for communities are pointed out. This review of district heating and cooling considers technical, economic and environmental aspects and helps identify possibilities for future study on district energy systems. --- paper_title: Biomass-to-bioenergy and biofuel supply chain optimization: Overview, key issues and challenges paper_content: Abstract This article describes the key challenges and opportunities in modeling and optimization of biomass-to-bioenergy supply chains. It reviews the major energy pathways from terrestrial and aquatic biomass to bioenergy/biofuel products as well as power and heat with an emphasis on “drop-in” liquid hydrocarbon fuels. Key components of the bioenergy supply chains are then presented, along with a comprehensive overview and classification of the existing contributions on biofuel/bioenergy supply chain optimization. This paper identifies fertile avenues for future research that focuses on multi-scale modeling and optimization, which allows the integration across spatial scales from unit operations to biorefinery processes and to biofuel value chains, as well as across temporal scales from operational level to strategic level. Perspectives on future biofuel supply chains that integrate with petroleum refinery supply chains and/or carbon capture and sequestration systems are presented. Issues on modeling of sustainability and the treatment of uncertainties in bioenergy supply chain optimization are also discussed. --- paper_title: Multi-scale optimization for process systems engineering paper_content: Abstract Efficient nonlinear programming (NLP) algorithms and modeling platforms have led to powerful process optimization strategies. Nevertheless, these algorithms are challenged by recent evolution and deployment of multi-scale models (such as molecular dynamics and complex fluid flow) that apply over broad time and length scales. Integrated optimization of these models requires accurate and efficient reduced models (RMs). This study develops a rigorous multi-scale optimization framework that substitutes RMs for complex original detailed models (ODMs) and guarantees convergence to the original optimization problem. Based on trust region concepts this framework leads to three related NLP algorithms for RM-based optimization. The first follows the classical gradient-based trust-region method, the second avoids gradient calculations from the ODM, and the third avoids frequent recourse to ODM evaluations, using the concept of ϵ -exact RMs. We illustrate these algorithms with small examples and discuss RM-based optimization case studies that demonstrate their performance and effectiveness. --- paper_title: Advanced process engineering co-simulation using CFD-based reduced order models paper_content: The process and energy industries face the challenge of designing the next generation of plants to operate with unprecedented efficiency and near-zero emissions, while performing profitably amid fluctuations in costs for raw materials, finished products, and energy. 
To achieve these targets, the designers of future plants are increasingly relying upon modeling and simulation to create virtual plants that allow them to evaluate design concepts without the expense of pilot-scale and demonstration facilities. Two of the more commonly used simulation tools include process simulators for describing the entire plant as a network of simplified equipment models and computational fluid dynamic (CFD) packages for modeling an isolated equipment item in great detail by accounting for complex thermal and fluid flow phenomena. The Advanced Process Engineering Co-Simulator (APECS) sponsored by the U.S. Department of Energy’s (DOE) National Energy Technology Laboratory (NETL) has been developed to combine process simulation software with CFD-based equipment simulation software so that design engineers can analyze and optimize the coupled fluid flow, heat and mass transfer, and chemical reactions that drive overall plant performance (Zitney et al., 2006). The process/CFD software integration was accomplished using the process-industry standard CAPE-OPEN interfaces. --- paper_title: Supply chain optimisation for the process industries: Advances and opportunities paper_content: Supply chain management and optimisation is a critical aspect of modern enterprises and a flourishing research area. This paper presents a critical review of methodologies for enhancing the decision-making for process industry supply chains towards the development of optimal infrastructures (assets and network) and planning. The presence of uncertainty within supply chains is discussed as an important issue for efficient capacity utilisation and robust infrastructure decisions. The incorporation of business/financial and sustainability aspects is also considered and future challenges are identified. --- paper_title: Process/equipment co-simulation for design and analysis of advanced energy systems paper_content: Abstract The grand challenge facing the power and energy industries is the development of efficient, environmentally friendly, and affordable technologies for next-generation energy systems. To provide solutions for energy and the environment, the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL) and its research partners in industry and academia are relying increasingly on the use of sophisticated computer-aided process design and optimization tools. In this paper, we describe recent progress toward developing an Advanced Process Engineering Co-Simulator (APECS) for the high-fidelity design, analysis, and optimization of energy plants. The APECS software system combines steady-state process simulation with multiphysics-based equipment simulations, such as those based on computational fluid dynamics (CFD). These co-simulation capabilities enable design engineers to optimize overall process performance with respect to complex thermal and fluid flow phenomena arising in key plant equipment items, such as combustors, gasifiers, turbines, and carbon capture devices. In this paper we review several applications of the APECS co-simulation technology to advanced energy systems, including coal-fired energy plants with carbon capture. This paper also discusses ongoing co-simulation R&D activities and challenges in areas such as CFD-based reduced-order modeling, knowledge management, advanced analysis and optimization, and virtual plant co-simulation. 
Continued progress in co-simulation technology – through improved integration, solution, and deployment – will have profound positive impacts on the design and optimization of high-efficiency, near-zero emission fossil energy systems. --- paper_title: Supply chain modeling: past, present and future paper_content: Over the years, most firms have focused their attention on the effectiveness and efficiency of separate business functions. As a new way of doing business, however, a growing number of firms have begun to realize the strategic importance of planning, controlling, and designing a supply chain as a whole. In an effort to help firms capture the synergy of inter-functional and inter-organizational integration and coordination across the supply chain and to subsequently make better supply chain decisions, this paper synthesizes past supply chain modeling efforts and identifies key challenges and opportunities associated with supply chain modeling. We also provide various guidelines for the successful development and implementation of supply chain models. --- paper_title: Progresses and challenges in process industry supply chains optimization paper_content: Process industry supply chains (SCs) involve challenging and complex problems that have been addressed by the process systems engineering community in recent years. Shah in 2005 (Shah N: Process industry supply chains: advances and challenges. Comput Chem Eng 2005, 29:1225–1235. The paper provides a comprehensive review of process industry supply chains, discusses major achievements, explores industrial examples, and identifies challenges, recognizing the evidence that supply chains of the future will be quite different from those of the past) stated in his review that process industry SCs were still striving to improve efficiency and responsiveness and were facing new challenges that needed further research. Optimization was pointed out as a possible path to follow, aiming at building tools that can help the decision makers involved. From that time onwards several works have explored this pathway, but there is still room for improvement, especially in view of newly emerging problems. The present paper provides a brief review of the progress that has been made on process industry SCs, focusing on recent years (2008 onwards). Conclusions on the work done are drawn, and the tendencies and future challenges in the area are identified. --- paper_title: Reduced Order Model Based on Principal Component Analysis for Process Simulation and Optimization paper_content: It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA).
Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment model... --- paper_title: Enterprise-wide modeling & optimization—An overview of emerging research challenges and opportunities paper_content: The process systems engineering (PSE) as well as the operations research and management science (ORMS) literature has hitherto focused on disparate processes and functions within the enterprise. These themes have included upstream R&D pipeline management, planning and scheduling in batch and continuous manufacturing systems and more recently supply chain optimization under uncertainty. In reality, the modern process enterprise functions as a cohesive entity involving several degrees of cross-functional co-ordination across enterprise planning and process functions. The complex organizational structures underlying horizontally and vertically integrated process enterprises challenge our understanding of cross-functional co-ordination and its business impact. This article looks at the impact of enterprise-wide cross-functional coordination on enterprise performance, sustainability and growth prospects. Cross-functional coordination is defined as the integration of strategic and tactical decision-making processes involving the control of financial and inventory flows (both internal and external) as well as resource deployments. Initially, we demonstrate the existence of cross-functional decision-making dependencies using an enterprise network model. Subsequently, we discuss interactions between enterprise planning decisions involving project financing, debt-equity balancing, R&D portfolio selection, risk hedging with real derivative instruments, supply chain asset creation and marketing contracts which influence decision-making at the activity/process level. Several case studies are included to re-enforce the point that planning and process decisions need to be integrated. --- paper_title: Optimization framework for the simultaneous process synthesis, heat and power integration of a thermochemical hybrid biomass, coal, and natural gas facility paper_content: Abstract A thermochemical based process superstructure and its mixed-integer nonlinear optimization (MINLP) model are introduced to convert biomass (switchgrass), coal (Illinois #6), and natural gas to liquid (CBGTL) transportation fuels. The MINLP model includes simultaneous heat and power integration utilizing heat engines to recover electricity from the process waste heat. Four case studies are presented to investigate the effect of CO 2 sequestration (CCS) and greenhouse gas (GHG) reduction targets on the process topology along with detailed parametric analysis on the role of biomass and electricity prices. Topological similarities for the case studies include selection of solid/vapor-fueled gasifiers and iron-catalyzed Fischer-Tropsch units that facilitate the reverse water–gas-shift reaction. The break-even oil price was found to be $57.16/bbl for CCS with a 50% GHG reduction, $62.65/bbl for CCS with a 100% GHG reduction, $82.68/bbl for no CCS with a 50% GHG reduction, and $91.71 for no CCS with a 100% GHG reduction. 
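The superstructure-based synthesis models summarized in the abstract above (and in the related CBGTL and waste-to-liquids entries earlier in this section) combine discrete technology-selection decisions with continuous production variables in a single MINLP. As a purely illustrative, much-simplified sketch of that modeling pattern, the following Pyomo snippet selects among candidate conversion routes to meet a fuel demand at minimum annualized cost. All technology names, costs, and capacities are hypothetical, the model is a small fixed-charge MILP rather than the cited nonconvex MINLP with simultaneous heat, power, and water integration, and a MILP solver such as GLPK or CBC is assumed to be installed.

```python
# Illustrative sketch only (not the cited model): a toy fixed-charge
# technology-selection problem in the spirit of superstructure-based
# process synthesis. All data below are hypothetical.
from pyomo.environ import (ConcreteModel, Var, Binary, NonNegativeReals,
                           Objective, Constraint, minimize, SolverFactory)

TECH = ["gasifier_FT", "reformer_MeOH", "hybrid"]
FIXED_COST = {"gasifier_FT": 120.0, "reformer_MeOH": 90.0, "hybrid": 150.0}  # M$/yr if built
VAR_COST = {"gasifier_FT": 14.0, "reformer_MeOH": 17.0, "hybrid": 12.0}      # M$ per PJ produced
CAPACITY = {"gasifier_FT": 60.0, "reformer_MeOH": 45.0, "hybrid": 80.0}      # PJ/yr
DEMAND = 70.0                                                                # PJ/yr of liquid fuel

m = ConcreteModel()
m.produce = Var(TECH, domain=NonNegativeReals)  # fuel produced by each route
m.build = Var(TECH, domain=Binary)              # 1 if the route is selected

# Annualized cost: fixed charge for selected routes plus variable production cost.
m.cost = Objective(
    expr=sum(FIXED_COST[t] * m.build[t] + VAR_COST[t] * m.produce[t] for t in TECH),
    sense=minimize)

# Total production must meet the fuel demand.
m.demand = Constraint(expr=sum(m.produce[t] for t in TECH) >= DEMAND)

def capacity_rule(m, t):
    # A route can only produce if it is built, and only up to its capacity.
    return m.produce[t] <= CAPACITY[t] * m.build[t]
m.capacity = Constraint(TECH, rule=capacity_rule)

SolverFactory("glpk").solve(m)  # assumes GLPK (or swap in "cbc") is available
print({t: m.produce[t].value for t in TECH})
```

The cited studies embed this basic selection structure within far larger superstructures, add heat, power, and water integration constraints, and rely on global optimization algorithms because their models are nonconvex.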
--- paper_title: Enterprise-wide optimization for industrial demand side management: Fundamentals, advances, and perspectives paper_content: Abstract The active management of electricity demand, also referred to as demand side management (DSM), has been recognized as an effective approach to improving power grid performance and consumer benefits. Being large electricity consumers, the power-intensive process industries play a key role in DSM. In particular, enterprise-wide optimization (EWO) for industrial DSM has emerged as a major area of interest for both researchers and practitioners. In this work, we introduce the reader to the fundamentals of power system economics, provide a definition of DSM that reflects more strongly the consumer's perspective, and present a comprehensive review of existing works on EWO for industrial DSM. The review is organized into four parts, which correspond to the four main challenges that we identify as: (1) accurate modeling of operational flexibility, (2) integration of production and energy management, (3) decision-making across multiple time and space scales, and (4) optimization under uncertainty. Finally, we highlight research gaps and future opportunities in this area. --- paper_title: Municipal solid waste to liquid transportation fuels – Part III: An optimization-based nationwide supply chain management framework paper_content: An optimization-based supply chain management framework for municipal solid waste (MSW) to liquid transportation fuels (WTL) processes is presented. First, a thorough analysis of landfill operations and annual amounts of MSW that are deposited across the contiguous United States is conducted and compared with similar studies. A quantitative supply chain framework that simultaneously accounts for the upstream and downstream WTL value chain operations is then presented. A large-scale mixed-integer linear optimization model that captures the interactions among MSW feedstock availabilities and locations, WTL refinery locations, and product delivery locations and demand capacities is described. The model is solved for both the nationwide and statewide WTL supply chains across numerous case studies. The results of the framework yield insights into the strategic placement of WTL refineries in the United States as well as topological information on the feedstock and product flows. The results suggest that large-scale WTL supply chains can be competitive, with breakeven oil prices ranging between $64-$77 per barrel. --- paper_title: 5 Scope for the Application of Mathematical Programming Techniques in the Synthesis and Planning of Sustainable Processes paper_content: Sustainability has recently emerged as a key issue in process systems engineering (PSE). Mathematical programming techniques offer a general modeling framework for including environmental concerns in the synthesis and planning of chemical processes. In this paper, we review major contributions in process synthesis and supply chain management, highlighting the main optimization approaches that are available, including the handling of uncertainty and the multi-objective optimization of economic and environmental objectives. Finally, we discuss challenges and opportunities identified in the area. --- paper_title: Energy from gasification of solid wastes. paper_content: Gasification technology is by no means new: in the 1850s, most of the city of London was illuminated by "town gas" produced from the gasification of coal. 
Nowadays, gasification is the main technology for biomass conversion to energy and an attractive alternative for the thermal treatment of solid waste. The wide range of possible uses for the product gas demonstrates the flexibility of gasification and allows it to be integrated with several industrial processes, as well as with power generation systems. The use of waste-biomass energy production systems in rural communities is also of particular interest. This paper describes the current state of gasification technology, energy recovery systems, pre-treatments and prospects for syngas use, with particular attention to the different process cycles and the environmental impacts of solid waste gasification. --- paper_title: Using GREENSCOPE indicators for sustainable computer-aided process evaluation and design paper_content: Abstract Manufacturing sustainability can be increased by educating those who design, construct, and operate facilities, and by using appropriate tools for process evaluation and design. The U.S. Environmental Protection Agency's GREENSCOPE methodology and tool, for evaluation and design of chemical processes, suits these purposes. This work describes example calculations of GREENSCOPE indicators for the oxidation of toluene and puts them into context with best- and worst-case limits. Data available from the process are transformed by GREENSCOPE into understandable information which describes sustainability. An optimization is performed for various process conversions, with results indicating a maximum utility at intermediate conversions. Lower conversions release too much toluene through a purge stream; higher conversions lead to the formation of too many byproducts. Detailed results are elucidated through the context of best- and worst-case limits and graphs of total utility and GREENSCOPE indicator values, which are calculated within an optimization framework for the first time. --- paper_title: Waste-to-energy: A review of the status and benefits in USA. paper_content: The USA has significant experience in the field of municipal solid waste management. The hierarchy of methodologies for dealing with municipal solid wastes consists of recycling and composting, combustion with energy recovery (commonly called waste-to-energy) and landfilling. This paper focuses on waste-to-energy and especially its current status and benefits, with regard to GHG, dioxin and mercury emissions, energy production and land saving, on the basis of experience with operating facilities in the USA. --- paper_title: Methods and tools for sustainable process design paper_content: After overcoming its earlier denial of negative environmental impacts, the chemical industry has been working toward enhancing its sustainability. Early efforts focused on reducing pollution from individual processes, while today the focus is on reducing impacts throughout the life cycle. Current methods for sustainable process design solve large multiobjective optimization problems, and attempt to consider economic, environmental and social aspects. This paper provides an overview of recent developments in sustainable process design and its applications. These methods use the latest advances in process systems engineering, but are lagging in their use of advances in Sustainable Engineering. More work is needed to consider impacts over the full life cycle boundary, and to ensure that sustainable designs do not exceed nature's capacity to provide the needed ecosystem goods and services.
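The GREENSCOPE entry above evaluates indicators against best- and worst-case limits. One common convention, which the sketch below assumes rather than quotes from the tool, is to report each indicator as a percentage score obtained by linearly scaling the actual value between its worst-case (0%) and best-case (100%) limits; the numbers used are made up for illustration.

```python
# Hypothetical illustration of best/worst-case indicator scaling; the limits
# and the example indicator value are invented, not taken from GREENSCOPE.
def indicator_score(actual, best, worst):
    """Linearly scale an indicator to a 0-100% score between its limits."""
    return 100.0 * (actual - worst) / (best - worst)

# Example: a waste-per-product indicator where lower is better, so the
# best-case limit is 0 kg/kg and an assumed worst-case limit is 10 kg/kg.
print(indicator_score(actual=2.5, best=0.0, worst=10.0))  # -> 75.0
```

Scores defined this way are dimensionless, which is one reason such indicators can be compared and aggregated across very different quantities, as in the total-utility curves described in the abstract.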
--- paper_title: Combining petroleum coke and natural gas for efficient liquid fuels production paper_content: Abstract This work explores the technical feasibility and economic profitability of converting petroleum coke (petcoke) and natural gas to liquid fuels via Fischer-Tropsch synthesis. Different petcoke conversion strategies were examined to determine the conversion pathway which can be competitive with current market prices with little or no adverse environmental impacts. Three main design approaches were considered: petcoke gasification only, combined petcoke gasification and natural gas reforming through traditional processing steps, and combined petcoke gasification and natural gas reforming by directly integrating the gasifier's radiant cooler with the gas reformer. The designs investigated included scenarios with and without carbon capture and sequestration, and with and without CO2 emission tax penalties. The performance metrics considered included net present value, life cycle greenhouse gas emissions, and the cost of CO2 avoided. The design configuration that integrated natural gas reforming with the gasification step directly showed to be the more promising design for the wide range of analyses performed. The Aspen Plus simulation files have been made freely available to the public. --- paper_title: Work and Heat Integration: An emerging research area paper_content: Abstract The extension from Heat Integration (HI) and design of Heat Exchanger Networks (HENs) to including heating and cooling effects from pressure changing equipment has been referred to as Work and Heat Integration and design of Work and Heat Exchange Networks (WHENs). This is an emerging research area of Process Synthesis, however, WHENs is a considerably more complex design task than HENs. A key challenge is the fact that temperature changes (related to heat) and pressure changes (related to work) of process streams are interacting. Changes in inlet temperatures to compressors and expanders resulting from heat integration will influence work consumption and production. Likewise, pressure changes by compression and expansion will change the temperatures of process streams, thus affecting heat integration. As a result, Composite and Grand Composite Curves will change shape due to pressure changes in the process. The thermodynamic path of process streams from supply (pressure, temperature) to target state is not known and depends on the sequence of heating, cooling, compression and expansion. This paper introduces a definition and describes the development of WHENs. Future research challenges related to methodology development and industrial applications will be addressed. The potential of WHENs will be indicated through examples in literature. --- paper_title: Global optimization for the synthesis of integrated water systems in chemical processes paper_content: In this paper, we address the problem of optimal synthesis of an integrated water system, where water using processes and water treatment operations are combined into a single network such that the total cost of obtaining freshwater for use in the water using operations, and treating wastewater is globally minimized. A superstructure that incorporates all feasible design alternatives for water treatment, reuse and recycle, is proposed. We formulate the optimization of this structure as a non-convex Non-Linear Programming (NLP) problem, which is solved to global optimality. 
The problem takes the form of a non-convex Generalized Disjunctive Program (GDP) if there is a flexibility of choosing different treatment technologies for the removal of the various contaminants in the wastewater streams. A new deterministic spatial branch and contract algorithm is proposed for optimizing such systems, in which piecewise under- and over-estimators are used to approximate the non-convex terms in the original model to obtain a convex relaxation whose solution gives a lower bound on the global optimum. These lower bounds are made to converge to the solution within a branch and bound procedure. Several examples are presented to illustrate the optimization of the integrated networks using the proposed algorithm. --- paper_title: Indicators of Sustainable Development for Industry paper_content: Despite numerous actions worldwide which call for adoption of more sustainable strategies, relatively little has been done on a practical level so far on the pretext that the issue is too complex and not fully understood. This paper follows the argument that it is important that today's decision-makers address the issue of sustainability, however imperfectly, as ignoring it may only exacerbate the problem for future generations. In particular, the paper concentrates on measuring the level of sustainability of industry with he aim of further informing the debate in this area. It proposes a general framework with a relatively simple, yet comprehensive set of indicators for identification of more sustainable practices for industry. The indicators cover the three aspects of sustainability—environmental, economic and social—and among others, include environmental impacts, financial and ethical indicators. The framework is applicable across industry; however, more specific indicators for different sectors have to be defined separately, on a case-by-case basis. It allows a modular approach for gradual incorporation of the framework into the organizational structure. The life cycle approach ensures that the most important stages in the life cycle and their impacts are identified and targeted for improvements. The framework also provides a link between micro and macro-aspects of sustainable development through appropriate indicators. Thus, it serves as a tool which can assist companies in assessing their performance with regard to goals and objectives embedded in the idea of sustainable development. --- paper_title: A Modular Approach to Sustainability Assessment and Decision Support in Chemical Process Design paper_content: In chemical and allied industries, process design sustainability has gained public concern in academia, industry, government agencies, and social groups. Over the past decade, a variety of sustainability indicators have been introduced, but with various challenges in application. It becomes clear that the industries need urgently practical tools for conducting systematic sustainability assessment on existing processes and/or new designs and, further, for helping derive the most desirable design decisions. This paper presents a systematic, general approach for sustainability assessment and design selection through integrating hard (quantitative) economic and environmental indicators along with soft (qualitative) indicators for social criteria into design activities. The approach contains four modules: a process simulator module, an equipment and inventory acquisition module, a sustainability assessment module, and a decision support module. 
The modules fully utilize and extend the capabilities of the process simulator Aspen Plus, Aspen Simulation Workbook, and a spreadsheet, where case model development, data acquisition and analysis, team contribution assessment, and decision support are effectively integrated. The efficacy of the introduced approach is illustrated by the example of biodiesel process design, where insightful sustainability analysis and persuasive decision support show its superiority over commonly practiced techno-economic evaluation approaches. --- paper_title: ENVIRONMENTALLY CONSCIOUS CHEMICAL PROCESS DESIGN paper_content: Abstract The environment has emerged as an important determinant of the performance of the modern chemical industry. This paper reviews approaches for incorporating environmental issues into the design of new processes and manufacturing facilities. The organizational framework is the design process itself, which includes framing the problem and generating, analyzing, and evaluating alternatives. A historical perspective on the chemical process synthesis problem illustrates how both performance objectives and the context of the design have evolved to the point where environmental issues must be considered throughout the production chain. In particular, the review illustrates the need to view environmental issues as part of the design objectives rather than as constraints on operations. A concluding section identifies gaps in the literature and opportunities for additional research. --- paper_title: Multi-objective optimization of process cogeneration systems with economic, environmental, and social tradeoffs paper_content: Process cogeneration is an effective strategy for exploiting the positive aspects of combined heat and power in the process industry. Traditionally, decisions for process cogeneration have been based mostly on economic criteria. With the growing interest in sustainability issues, there is a need to consider the economic, environmental, and social aspects of cogeneration. The objective of this article is to develop an optimization framework for the design of process cogeneration systems with economic, environmental, and social aspects. Process integration is used as the coordinating framework for the optimization formulation. First, heat integration is carried out to identify the heating utility requirements. Then, a multi-header steam system is designed and optimized for inlet steam characteristics and their impact on power, fixed and operating costs, greenhouse gas emissions, and jobs. A genetic algorithm is developed to solve the optimization problem. Multi-objective tradeoffs between the economic, environmental, and social aspects are studied through Pareto tradeoffs. A case study is solved to illustrate the applicability of the proposed procedure. --- paper_title: Environmentally conscious long-range planning and design of supply chain networks paper_content: Abstract In this paper, a mathematical programming-based methodology is presented for the explicit inclusion of life cycle assessment (LCA) criteria as part of the strategic investment decisions related to the design and planning of supply chain networks. By considering the multiple environmental concerns together with the traditional economic criteria, the planning task is formulated as a multi-objective optimization problem.
Over a long-range planning horizon, the methodology utilizes mixed integer modelling techniques to address strategic decisions involving the selection, allocation and capacity expansion of processing technologies and assignment of transportation links required to satisfy the demands at the markets. At the operational level, optimal production profiles and flows of material between various components within the supply chain are determined. As such, the formulation presented here combines the elements of the classical plant location and capacity expansion problems with the principles of LCA to develop a quantitative decision-support tool for environmentally conscious strategic investment planning. --- paper_title: Sustainable development of primary steelmaking under novel blast furnace operation and injection of different reducing agents paper_content: Abstract This paper presents a numerical study of economics and environmental impact of an integrated steelmaking plant, using surrogate, empirical and shortcut models based on mass and energy balance equations for the unit operations. In addition to the steelmaking processes, chemical processes such as pressure/temperature swing adsorption, membrane, chemical absorption technologies are included for gas treatment. A methanol plant integrated with a combined heat and power plant forms a polygeneration system that utilizes energy and gases of the site. The overall model has been applied using mathematical programming to find an optimal design and operation of the integrated plant for an economic objective under several development stages of the technology. New concepts studied are blast furnace operation with different degrees of top gas recycling and oxygen enrichment of the blast to full oxygen blast furnace. Coke in the process may be partially replaced with other carbon carriers. The system is optimized by maximizing the net present value, which includes investment costs for the new unit processes as well as costs of feed materials, CO2 emission and sequestration, operation costs and credit for products produced. The effect of using different fuels such as oil, natural gas, pulverized coal, coke oven gas, charcoal and biomass is studied, particularly focusing on biomass torrefaction and the effect of integration on arising reductant in steelmaking to reduce emissions from the system. The effects of steel plant capacity on the optimal choice of carbon carriers are also studied. It is demonstrated that it is possible to decrease the specific CO2 emissions of primary steelmaking from fossil fuels from 1.6 t of CO2 to a level of 0.75–1.0 t and further by more than 50% through the integration of biofuels in considered scenarios. --- paper_title: Optimization of Coke Oven Gas Desulphurization and Combined Cycle Power Plant Electricity Generation paper_content: Many steel refineries generate significant quantities of coke oven gas (COG), which is in some cases used only to generate low pressure steam and small amounts of electric power. In order to improve energy efficiency and reduce net greenhouse gas emissions, a combined cycle power plant (CCPP) where COG is used as fuel is proposed. However, desulfurization is necessary before the COG can be used as a fuel input for a CCPP. Using a local steel refinery as a case study, a proposed desulfurization process is designed to limit the H2S content in COG to less than 1 ppmv, and simulated using ProMax.
In addition, the proposed CCPP is simulated in Aspen Plus and optimized using GAMS to global optimality with the net present value as the objective function. Furthermore, a carbon tax is considered in this study. The optimized CCPP was observed to achieve more than twice the electrical efficiency of the status quo for the existing steel refinery. Thus, by generating more electricity within... --- paper_title: Cannibals with Forks: The Triple Bottom Line of 21st Century Business paper_content: Introduction - is capitalism sustainable? Seven revolutions for sustainable capitalism: revolution 1 - competition - going for the triple win; revolution 2 - values - from me to we; revolution 3 - information and transparency - no hiding place; revolution 4 - lifecycles - from conception to resurrection; revolution 5 - partnerships - after the honeymoon; revolution 6 - time - three scenarios; revolution 7 - corporate governance - stake in the future; the sustainability transition - value shifts, value migrations; the worlds of money and power; the sustainability audit - how are you placed? --- paper_title: Selection of the Economic Objective Function for the Optimization of Process Flow Sheets paper_content: This paper highlights the problem of selecting the most suitable economic optimization criteria for mathematical programming approaches to the synthesis, design, and optimization of chemical process flow sheets or their subsystems. Minimization of costs and maximization of profit are the most frequently used economic criteria in technical papers. However, there are many other financial measures which can lead to different optimal solutions if applied in the objective function. This paper describes the characteristics of the optimal solutions obtained with various optimization criteria such as the total annual cost, the profit, the payback time, the equivalent annual cost, the net present worth, and the internal rate of return. It was concluded that the maximization of the net present worth (NPW) with a discount rate equal to the minimum acceptable rate of return (MARR) is probably the most appropriate method for the optimization of process flow sheets or their subsystems. Similar or equal solutions can be ob... --- paper_title: Product and Process Design Principles: Synthesis, Analysis, and Evaluation paper_content: 1. Introduction to Chemical Product Design 1S Supplement to Chapter 1 2. Product-Development Process PART 1 BASIC CHEMICALS PRODUCT DESIGN 3. Materials Technology for Basic Chemicals: Molecular-Structure Design 3S Supplement to Chapter 3 4. Process Creation for Basic Chemicals 5. Simulation to Assist in Process Creation 6. Heuristics for Process Synthesis 7. Reactor Design and Synthesis of Networks Containing Reactors 7S Supplement to Chapter 7 8. Synthesis of Separation Trains 9. Heat and Power Integration 9S Supplement to Chapter 9 Second Law Analysis 10. Mass Integration 11. Optimal Design and Scheduling of Batch Processes 12. Plantwide Controllability Assessment 12S Supplement to Chapter 12 Flowsheet Controllability Analysis 13. Basic Chemicals Product Design Case Studies PART 2 INDUSTRIAL CHEMICALS PRODUCT DESIGN 14. Materials and Process/Manufacturing Technologies for Industrial Chemical Products 15. Industrial Chemicals Product Design Case Studies PART 3 CONFIGURED CONSUMER PRODUCT DESIGN 16. Materials, Process/Manufacturing, and Product Technologies for Configured Consumer Products 16S Supplement to Chapter 16 17.
Configured Consumer Product Design Case Studies PART 4 DETAILED DESIGN, EQUIPMENT SIZING, OPTIMIZATION, AND PRODUCT-QUALITY ANALYSIS 18. Heat Exchanger Design 19. Separation Tower Design 20. Pumps, Compressors, and Expanders 21. Polymer Compounding 22. Cost Accounting and Capital Cost Estimation 22S Supplement to Chapter 22 23. Annual Costs, Earnings, and Profitability Analysis 23S Supplement to Chapter 23 24. Design Optimization 25. Six-Sigma Design Strategies PART 5 DESIGN REPORT 26. Written Reports and Oral Presentations APPENDIXES INDICES --- paper_title: Introduction to Stochastic Programming paper_content: The aim of stochastic programming is to find optimal decisions in problems which involve uncertain data. This field is currently developing rapidly with contributions from many disciplines including operations research, mathematics, and probability. At the same time, it is now being applied in a wide variety of subjects ranging from agriculture to financial planning and from industrial engineering to computer networks. This textbook provides a first course in stochastic programming suitable for students with a basic knowledge of linear programming, elementary analysis, and probability. The authors aim to present a broad overview of the main themes and methods of the subject. Its prime goal is to help students develop an intuition on how to model uncertainty into mathematical problems, what uncertainty changes bring to the decision process, and what techniques help to manage uncertainty in solving the problems. In this extensively updated new edition there is more material on methods and examples including several new approaches for discrete variables, new results on risk measures in modeling and Monte Carlo sampling methods, and a new chapter on relationships to other methods including approximate dynamic programming, robust optimization and online methods. The book is highly illustrated with chapter summaries and many examples and exercises. Students, researchers and practitioners in operations research and the optimization area will find it particularly of interest. Review of First Edition: "The discussion on modeling issues, the large number of examples used to illustrate the material, and the breadth of the coverage make 'Introduction to Stochastic Programming' an ideal textbook for the area." (Interfaces, 1998) --- paper_title: Process synthesis optimization and flexibility evaluation of air separation cycles paper_content: The solution of the process synthesis problem for cryogenic air separation is discussed using techniques from process synthesis optimization. The mass and energy balances for the process are represented by a simplified algebraic model in which material stream flows and pressures appear as continuous variables and the various equipment choices for components of the air separation cycle are represented by binary variables. The resulting model corresponds to a mixed-integer nonlinear programming (MINLP) formulation and enables numerical optimization of the cycle. The two problems addressed are the selection of the optimal equipment set for a given product slate and the flexibility determination of the chosen cycle. The flexibility analysis is complemented with unique visual techniques that depict the feasible region and use of the recently proposed convex-hull approach. The calculations are illustrated for low-purity (95%) oxygen plants using the Lachmann cycle.
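The air separation study above formulates equipment choices as binary variables inside an MINLP. A linear toy version of that idea, fixed-cost technology selection with continuous capacity use, can be written in a few lines; the costs, capacities and demand below are invented for illustration, and a real superstructure model would be nonlinear and far larger.

```python
# Toy technology-selection MILP in the spirit of superstructure process synthesis:
# binary build decisions with fixed costs, continuous production, one demand.
# All cost and capacity numbers are invented for illustration (needs scipy >= 1.9).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variable order: [x1, x2, y1, y2] = production of tech 1/2, build decision 1/2
var_cost, fix_cost = [2.0, 3.0], [50.0, 10.0]
cap, demand = [80.0, 120.0], 100.0

c = np.array(var_cost + fix_cost)          # minimize variable + fixed cost
A = np.array([[1, 1,        0,        0],  # x1 + x2 >= demand
              [1, 0, -cap[0],        0],   # x1 <= cap1 * y1
              [0, 1,        0, -cap[1]]])  # x2 <= cap2 * y2
cons = LinearConstraint(A, lb=[demand, -np.inf, -np.inf], ub=[np.inf, 0, 0])

res = milp(c,
           constraints=cons,
           integrality=np.array([0, 0, 1, 1]),          # y1, y2 are integer (binary)
           bounds=Bounds([0, 0, 0, 0], [np.inf, np.inf, 1, 1]))

x1, x2, y1, y2 = res.x
print(f"build tech 1: {y1:.0f}, build tech 2: {y2:.0f}")
print(f"production  : {x1:.0f} + {x2:.0f}, total cost = {res.fun:.0f}")
```

With these numbers the solver builds both technologies (80 + 20 units, cost 280), which is the kind of fixed-cost versus operating-cost trade-off the cited synthesis formulations resolve at much larger scale.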
--- paper_title: Production of FT transportation fuels from biomass; technical options, process analysis and optimisation, and development potential paper_content: Fischer–Tropsch (FT) diesel derived from biomass via gasification is an attractive clean and carbon neutral transportation fuel, directly usable in the present transport sector. System components necessary for FT diesel production from biomass are analysed and combined to a limited set of promising conversion concepts. The main variations are in gasification pressure, the oxygen or air medium, and in optimisation towards liquid fuels only, or towards the product mix of liquid fuels and electricity. The technical and economic performance is analysed. For this purpose, a dynamic model was built in Aspen Plus®, allowing for direct evaluation of the influence of each parameter or device, on investment costs, FT and electricity efficiency and resulting FT diesel costs. FT diesel produced by conventional systems on the short term and at moderate scale would probably cost 16 €/GJ. In the longer term (large scale, technological learning, and selective catalyst), this could decrease to 9 €/GJ. Biomass integrated gasification FT plants can only become economically viable when crude oil price levels rise substantially, or when the environmental benefits of green FT diesel are valued. Green FT diesel also seems 40–50% more expensive than biomass derived methanol or hydrogen, but has clear advantages with respect to applicability to the existing infrastructure and car technology. --- paper_title: Recent Advances in Mathematical Programming Techniques for the Optimization of Process Systems under Uncertainty paper_content: Abstract Optimization under uncertainty has been an active area of research for many years. However, its application in Process Synthesis has faced a number of important barriers that have prevented its effective application. Barriers include availability of information on the uncertainty of the data (ad-hoc or historical), determination of the nature of the uncertainties (exogenous vs. endogenous), selection of an appropriate strategy for hedging against uncertainty (robust optimization vs. stochastic programming), large computational expense (often orders of magnitude larger than deterministic models), and difficulty in the interpretation of the results by non-expert users. In this paper, we describe recent advances that have addressed some of these barriers. --- paper_title: Analysis Synthesis And Design Of Chemical Processes --- paper_title: Evolution of concepts and models for quantifying resiliency and flexibility of chemical processes paper_content: Abstract This paper provides a historical perspective and an overview of the pioneering work that Manfred Morari developed in the area of resiliency for chemical processes. Motivated by unique counter-intuitive examples, we present a review of the early mathematical formulations and solution methods developed by Grossmann and co-workers for quantifying Static Resiliency (Flexibility).
We also give a brief overview of some of the seminal ideas by Manfred Morari and co-workers in the area of Dynamic Resiliency. Finally, we provide a review of some of the recent developments that have taken place since that early work. --- paper_title: Large-scale gasification-based coproduction of fuels and electricity from switchgrass paper_content: Large-scale gasification-based systems for producing Fischer-Tropsch (F-T) fuels (diesel and gasoline blendstocks), dimethyl ether (DME), or hydrogen from switchgrass – with electricity as a coproduct in each case are assessed using a self-consistent design, simulation, and cost analysis framework. We provide an overview of alternative process designs for coproducing these fuels and power assuming commercially mature technology performance and discuss the commercial status of key component technologies. Overall efficiencies (lower-heating-value basis) of producing fuels plus electricity in these designs ranges from 57% for F-T fuels, 55–61% for DME, and 58–64% for hydrogen. Detailed capital cost estimates for each design are developed, on the basis of which prospective commercial economics of future large-scale facilities that coproduce fuels and power are evaluated. © 2009 Society of Chemical Industry and John Wiley & Sons, Ltd --- paper_title: Flexibility analysis of process supply chain networks paper_content: Abstract One of the key fundamentals for organizations to remain competitive in the present economic climate is to effectively manage their supply chains under uncertainty. The notion of supply chain flexibility attempts to characterize the ability of a supply chain to perform satisfactorily in the face of uncertainty. However, limited quantitative analysis is available. In this work, we utilize a flexibility analysis framework developed within the context of process operations and design to characterize supply chain flexibility. This framework also provides a quantitative mapping to various types of flexibility discussed in the operations research and management science literature. Two case studies are included to illustrate the application of this framework for analyzing the flexibility of existing supply chain processes, as well as utilizing it in supply chain design. --- paper_title: Robust optimization -- methodology and applications paper_content: Robust Optimization (RO) is a modeling methodology, combined with computational tools, to process optimization problems in which the data are uncertain and is only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real world LPs can be severely affected by small perturbations of the data and that the RO methodology can be successfully used to overcome this phenomenon. 
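Several of the abstracts above contrast stochastic programming with robust optimization as hedging strategies. As a minimal illustration of the robust-counterpart idea, the toy linear program below immunizes a single resource constraint against box uncertainty in its coefficients; the numbers are invented for illustration and do not come from the cited papers.

```python
# Toy robust counterpart of a 2-product planning LP (illustrative numbers only).
# Nominal:  max 3*x1 + 2*x2  s.t.  a1*x1 + a2*x2 <= 10,  x >= 0,
# where the resource coefficients a = a_nom +/- a_dev lie in a box.
# For a <= constraint with x >= 0, the worst case is simply a_nom + a_dev.
from scipy.optimize import linprog

c = [-3.0, -2.0]                 # linprog minimizes, so negate the profit
a_nom, a_dev = [2.0, 1.0], [0.5, 0.3]

nominal = linprog(c, A_ub=[a_nom], b_ub=[10.0], bounds=[(0, None)] * 2)
worst   = [n + d for n, d in zip(a_nom, a_dev)]
robust  = linprog(c, A_ub=[worst], b_ub=[10.0], bounds=[(0, None)] * 2)

print("nominal plan:", nominal.x, "profit:", -nominal.fun)
print("robust  plan:", robust.x,  "profit:", -robust.fun)
# The robust plan gives up some nominal profit but stays feasible for every
# realization of a inside the box, which is the trade-off robust optimization makes;
# stochastic programming instead optimizes an expectation over scenarios.
```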
--- paper_title: Comparative life cycle environmental assessment of CCS technologies paper_content: Abstract Hybrid life cycle assessment is used to assess and compare the life cycle environmental impacts of electricity generation from coal and natural gas with various carbon capture and storage (CCS) technologies consisting of post-combustion, pre-combustion or oxyfuel capture; pipeline CO 2 transport and geological storage. The systems with a capture efficiency of 85–96% decrease net greenhouse gas emission by 64–78% depending on the technology used. Calculation of other life cycle impacts shows significant trade-offs with fresh-water eutrophication and toxicity potentials. Human toxicity impact increases by 40–75%, terrestrial ecotoxicity by 60–120%, and freshwater eutrophication by 60–200% for the different technologies. There is a two- to four-fold increase in freshwater ecotoxicity potential in the post-combustion approach. The increase in toxicity for pre-combustion systems is 40–80% for the coal and 50–90% for the gas power plant. The increase in impacts for the oxyfuel approach mainly depends on energy demand for the air separation unit, giving an increase in various toxicity potentials of 35–70% for coal and 60–105% for natural gas system. Most of the increase in impacts with CCS systems is due to the energy penalty and the infrastructure development chain. --- paper_title: Life Cycle Assessment: Principles, Practice and Prospects paper_content: Life Cycle Assessment (LCA) has developed in Australia over the last 20 years into a technique for systematically identifying the resource flows and environmental impacts associated with the provision of products and services. Interest in LCA has accelerated alongside growing demand to assess and reduce greenhouse gas emissions across different manufacturing and service sectors. ::: ::: Life Cycle Assessment focuses on the reflective practice of LCA, and provides critical insight into the technique and how it can be used as a problem-solving tool. It describes the distinctive strengths and limitations of LCA, with an emphasis on practice in Australia, as well as the application of LCA in waste management, the built environment, water and agriculture. Supported by examples and case studies, each chapter investigates contemporary challenges for environmental assessment and performance improvement in these key sectors. ::: ::: LCA methodologies are compared to the emerging climate change mitigation policy and practice techniques, and the uptake of ‘quick’ LCA and management tools are considered in the light of current and changing environmental agendas. The authors also debate the future prospects for LCA technique and applications. --- paper_title: Recent developments in Life Cycle Assessment paper_content: Life Cycle Assessment is a tool to assess the environmental impacts and resources used throughout a product's life cycle, i.e., from raw material acquisition, via production and use phases, to waste management. The methodological development in LCA has been strong, and LCA is broadly applied in practice. The aim of this paper is to provide a review of recent developments of LCA methods. The focus is on some areas where there has been an intense methodological development during the last years. We also highlight some of the emerging issues. In relation to the Goal and Scope definition we especially discuss the distinction between attributional and consequential LCA. 
For the Inventory Analysis, this distinction is relevant when discussing system boundaries, data collection, and allocation. Also highlighted are developments concerning databases and Input-Output and hybrid LCA. In the sections on Life Cycle Impact Assessment we discuss the characteristics of the modelling as well as some recent developments for specific impact categories and weighting. In relation to the Interpretation the focus is on uncertainty analysis. Finally, we discuss recent developments in relation to some of the strengths and weaknesses of LCA. --- paper_title: System boundary selection in life-cycle inventories using hybrid approaches. paper_content: Life-cycle assessment (LCA) is a method for evaluating the environmental impacts of products holistically, including direct and supply chain impacts. The current LCA methodologies and the standards by the International Organization for Standardization (ISO) impose practical difficulties for drawing system boundaries; decisions on inclusion or exclusion of processes in an analysis (the cutoff criteria) are typically not made on a scientific basis. In particular, the requirement of deciding which processes could be excluded from the inventory can be rather difficult to meet because many excluded processes have often never been assessed by the practitioner, and therefore, their negligibility cannot be guaranteed. LCA studies utilizing economic input-output analysis have shown that, in practice, excluded processes can contribute as much to the product system under study as included processes; thus, the subjective determination of the system boundary may lead to invalid results. System boundaries in LCA are discussed herein with particular attention to outlining hybrid approaches as methods for resolving the boundary selection problem in LCA. An input-output model can be used to describe at least a part of a product system, and an ISO-compatible system boundary selection procedure can be designed by applying hybrid input-output-assisted approaches. There are several hybrid input-output analysis-based LCA methods that can be implemented in practice for broadening system boundary and also for ISO compliance. --- paper_title: Life cycle analyses of bulk-scale solid oxide fuel cell power plants and comparisons to the natural gas combined cycle paper_content: In this work, detailed cradle-to-grave life cycle analyses are performed for a current state-of-the art natural gas combined cycle and a bulk-scale solid fuel cell power plant fuelled by natural gas. Life cycle inventories are performed for multiple configurations of each plant, including designs with carbon capture capability. Consistent boundaries (including all supply chain and upstream processes) and unit bases for each process are defined for each process. The ReCiPe 2008 life cycle assessment method is used to quantify the impacts of each plant at both mid- and end-point levels. Three impact assessment perspectives (individualist, hierarchist, and egalitarian) are considered. The results of these life cycle analyses are compared in order to determine the environmental trade-offs between potential power generation pathways. Results indicate that power generation using solid oxide fuel cells has a smaller life cycle impact than the natural gas combined cycle when the entire life cycle of each option is considered. 
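The input-output and hybrid LCA abstracts above all rest on the same Leontief calculation: total (direct plus upstream) output is x = (I - A)^-1 y, and sectoral emission intensities are then applied to x. A minimal sketch with a made-up three-sector economy is given below; the coefficients are hypothetical and are not taken from USEEIO or any cited database.

```python
# Minimal environmentally extended input-output (EEIO) calculation.
# A three-sector technical-coefficient matrix A and emission intensities f
# are invented for illustration; real applications use tables such as USEEIO.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],   # inter-industry requirements per unit output
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.02]])
f = np.array([0.9, 0.3, 0.1])       # kg CO2-eq per unit of sectoral output

y = np.array([1.0, 0.0, 0.0])       # final demand: one unit from sector 1

x = np.linalg.solve(np.eye(3) - A, y)   # total output, x = (I - A)^-1 y
footprint = f @ x                       # life cycle emissions for this demand

print("total sectoral output:", np.round(x, 3))
print("direct emissions only:", f[0] * y[0])
print("life cycle emissions :", round(footprint, 3))
# The gap between the last two numbers is the upstream contribution that a
# truncated process inventory would miss, which motivates the hybrid approaches.
```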
--- paper_title: Environmental Life-Cycle Assessment paper_content: Concept/Methodology History of Life Cycle inventory Systems Comparison of Methodologies Applications Industrial Application Data Bases/Data Quality Streamlining Eco-labeling Product Certification Policy Public Use Relationship to P2 Substainability Global Competitiveness Design for the Environment DFE in Europe Industrial DRE Applications Future Impacts Computer Tools European Perspective Total Cost Assessment Federal Activities Curriculum Development Future Trends. --- paper_title: Handbook on life cycle assessment operational guide to the ISO standards paper_content: Preface. Foreword. Part 1: LCA in Perspective. 1. Why a new Guide to LCA? 2. Main characteristics of LCA. 3. International developments. 4. Guiding principles for the present Guide. 5. Reading guide. Part 2a: Guide. Reading guidance. 1. Management of LCA projects: procedures. 2. Goal and scope definition. 3. Inventory analysis. 4. Impact assessment. 5. Interpretation. Appendix A: Terms, definitions and abbreviations. Part 2b: Operational annex. List of tables. Reading guidance. 1. Management of LCA projects: procedures. 2. Goal and scope definition. 3. Inventory analysis. 4. Impact assessment. 5. Interpretation. 6. References. Part 3: Scientific background. Reading guidance. 1. General introduction. 2. Goal and scope definition. 3. Inventory analysis. 4. Impact assessment. 5. Interpretation. 6. References. Annex A: Contributors. Appendix B: Areas of application of LCA. Appendix C: Partitioning economic inputs and outputs to product systems. --- paper_title: Methodologies for social life cycle assessment paper_content: Goal, Scope and Background. In recent years several different approaches towards Social Life Cycle Assessment (SLCA) have been developed. The purpose of this review is to compare these approaches in order to highlight methodological differences and general shortcomings. SLCA has several similarities with other social assessment tools, although, in order to limit the expanse of the review, only claims to address social impacts from an LCA-like framework are considered. Main Features. The review is to a large extent based on conference proceedings and reports, which are not all easily accessible, since very little has been published on SLCA in the open literature. The review follows the methodological steps of the environmental LCA (ELCA) known from the ISO 14044 standard. Results. The review reveals a broad variety in how the approaches address the steps of the ELCA methodology, particularly in the choice and formulation of indicators. The indicators address a wide variety of issues; some approaches focus on impacts created in the very close proximity of the processes included in the product system, whereas others focus on the more remote societal consequences. Only very little focus has been given to the use stage in the product life cycle. Another very important difference among the proposals is their position towards the use of generic data. Several of the proposals argue that social impacts are connected to the conduct of the company leading to the conclusion that each individual company in the product chain has to be assessed, whereas others claim that generic data can give a sufficiently accurate picture of the associated social impacts. Discussion. The SLCA approaches show that the perception of social impacts is very variable. 
An assessment focusing on social impacts created in the close proximity of the processes included in the product system will not necessarily point in the same direction as an assessment that focuses on the more societal consequences. This points toward the need to agree on the most relevant impacts to include in the SLCA in order to include the bulk of the situation. Regarding the use of generic data as a basis for the assessment, this obviously has an advantage over using site specific data in relation to practicality, although many authors behind the SLCA... --- paper_title: A Modular Approach to Sustainability Assessment and Decision Support in Chemical Process Design paper_content: In chemical and allied industries, process design sustainability has gained public concern in academia, industry, government agencies, and social groups. Over the past decade, a variety of sustainability indicators have been introduced, but with various challenges in application. It becomes clear that the industries need urgently practical tools for conducting systematic sustainability assessment on existing processes and/or new designs and, further, for helping derive the most desirable design decisions. This paper presents a systematic, general approach for sustainability assessment and design selection through integrating hard (quantitative) economic and environmental indicators along with soft (qualitative) indicators for social criteria into design activities. The approach contains four modules: a process simulator module, an equipment and inventory acquisition module, a sustainability assessment module, and a decision support module. The modules fully utilize and extend the capabilities of the process simulator Aspen Plus, Aspen Simulation Workbook, and a spreadsheet, where case model development, data acquisition and analysis, team contribution assessment, and decision support are effectively integrated. The efficacy of the introduced approach is illustrated by the example of biodiesel process design, where insightful sustainability analysis and persuasive decision support show its superiority over commonly practiced technoeconomy evaluation approaches. --- paper_title: 2.1. Econometric techniques: Theory versus practice paper_content: This paper introduces the basic concepts used in econometric modeling, and describes five prescriptions to avoid common real-world pitfalls in that style of modeling. The paper begins by comparing econometric modeling with other forms of modeling used in energy modeling and engineering. It describes what an econometric model is, and how to build one. It then gives a detailed explanation of many facets of the five prescriptions: pay attention to uncertainty; don't expect a free lunch when devising specifications; pay attention to prior information; don't expect to draw conclusions without adequate data; and check the historical track record of your model. The issues of generalization and robustness over time receive special attention; they are important in practice, and subtle in theory. Finally, the paper discusses model development in practice, building upon experience with PURHAPS, a model I developed for the Energy Information Administration (EIA). --- paper_title: Energy models for demand forecasting—A review paper_content: Energy is vital for sustainable development of any nation – be it social, economic or environment. In the past decade energy consumption has increased exponentially globally.
Energy management is crucial for the future economic prosperity and environmental security. Energy is linked to industrial production, agricultural output, health, access to water, population, education, quality of life, etc. Energy demand management is required for proper allocation of the available resources. During the last decade several new techniques are being used for energy demand management to accurately predict the future energy needs. In this paper an attempt is made to review the various energy demand forecasting models. Traditional methods such as time series, regression, econometric, ARIMA as well as soft computing techniques such as fuzzy logic, genetic algorithm, and neural networks are being extensively used for demand side management. Support vector regression, ant colony and particle swarm optimization are new techniques being adopted for energy demand forecasting. Bottom up models such as MARKAL and LEAP are also being used at the national and regional level for energy demand management. --- paper_title: OSeMOSYS: The Open Source Energy Modeling System: An introduction to its ethos, structure and development paper_content: This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, algebraic formulation, implementation in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models that do not have this emphasis on compactness and openness make the barrier to entry by new users much higher, as well as making the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts including adding functionality to the LEAP model. --- paper_title: Inside the integrated assessment models: Four issues in climate economics paper_content: Good climate policy requires the best possible understanding of how climatic change will impact on human lives and livelihoods in both industrialized and developing countries. Our review of recent contributions to the climate-economics literature assesses 30 existing integrated assessment models in four key areas: the connection between model structure and the type of results produced; uncertainty in climate outcomes and projection of future damages; equity across time and space; and abatement costs and the endogeneity of technological change. Differences in treatment of these issues are substantial and directly affect model results and their implied policy prescriptions. Much can be learned about climate economics and modelling technique from the best practices in these areas; there is unfortunately no existing model that incorporates the best practices on all or most of the questions we examine. --- paper_title: Progress in integrated assessment and modelling paper_content: Environmental processes have been modelled for decades. However,
the need for integrated assessment and modeling (IAM) has grown as the extent and severity of environmental problems in the 21st Century worsen. The scale of IAM is not restricted to the global level as in climate change models, but includes local and regional models of environmental problems. This paper discusses various definitions of IAM and identifies five different types of integration that are needed for the effective solution of environmental problems. The future is then depicted in the form of two brief scenarios: one optimistic and one pessimistic. The current state of IAM is then briefly reviewed. The issues of complexity and validation in IAM are recognised as more complex than in traditional disciplinary approaches. Communication is identified as a central issue both internally among team members and externally with decision-makers, stakeholders and other scientists. Finally it is concluded that the process of integrated assessment and modelling is considered as important as the product for any particular project. By learning to work together and recognise the contribution of all team members and participants, it is believed that we will have a strong scientific and social basis to address the environmental problems of the 21st Century. --- paper_title: Designing a Model for the Global Energy System—GENeSYS-MOD: An Application of the Open-Source Energy Modeling System (OSeMOSYS) paper_content: This paper develops a path for the global energy system up to 2050, presenting a new application of the open-source energy modeling system (OSeMOSYS) to the community. It allows quite disaggregate energy and emission analysis: the Global Energy System Model (GENeSYS-MOD) uses a system of linear equations of the energy system to search for lowest-cost solutions for a secure energy supply, given externally defined constraints, mainly in terms of CO2 emissions. The general algebraic modeling system (GAMS) version of OSeMOSYS is updated to the newest version and, in addition, extended and enhanced to include e.g., a modal split for transport, an improved trading system, and changes to storages. The model can be scaled from small-scale applications, e.g., a company, to cover the global energy system. The paper also includes an application of GENeSYS-MOD to analyze decarbonization scenarios at the global level, broken down into 10 regions. Its main focus is on interdependencies between traditionally segregated sectors: electricity, transportation, and heating, which are all included in the model. Model calculations suggest that in order to achieve the 1.5–2 °C target, a combination of renewable energy sources provides the lowest-cost solution, solar photovoltaic being the dominant source. Average costs of electricity generation in 2050 are about 4 €cents/kWh (excluding infrastructure and transportation costs). --- paper_title: Opening the black box of energy modelling: strategies and lessons learned paper_content: The global energy system is undergoing a major transition, and in energy planning and decision-making across governments, industry and academia, models play a crucial role. Because of their policy relevance and contested nature, the transparency and open availability of energy models and data are of particular importance. Here we provide a practical how-to guide based on the collective experience of members of the Open Energy Modelling Initiative (Openmod).
We discuss key steps to consider when opening code and data, including determining intellectual property ownership, choosing a licence and appropriate modelling languages, distributing code and data, and providing support and building communities. After illustrating these decisions with examples and lessons learned from the community, we conclude that even though individual researchers' choices are important, institutional changes are still also necessary for more openness and transparency in energy research. --- paper_title: Methods for IA: The challenges and opportunities ahead paper_content: There is increasing recognition and credibility for the rapidly evolving field of Integrated Assessment (IA). Within the setting of the political arena it is accepted that IA can be supportive in the long-term policy planning process, while in the scientific arena more and more scientists do realise the complementary value of IA research. One of the best indicators for this increased recognition is the establishment of the European Forum on Integrated Environmental Assessment (EFIEA) by the European Commission DGXII. In spite of this growing appreciation for IA, the methodological basis of IA is still narrow, and lags behind the high expectations from the outside world. Broadening the basis of the methodologies underlying IA should therefore be one of the top priorities of the IA community. This paper deals with some ideas which could form a basis for an IA research agenda for the next 5–10 years. One of the problems of IA is still the many definitions and interpretations that circulate (Weyant et al. [118], Rotmans and Dowlatabadi [100], Parson [85–87], Ravetz [91], Jaeger et al. [49]). Notwithstanding this diversity, these definitions have two elements in common, i.e., interdisciplinarity and decision support. These two common elements make Integrated Assessment difficult to plan and even harder to conduct. Instead of coming up with another definition of IA, we simply focus on the above commonalities as points of departure for exploring challenges for the future. Thus irrespective of whatever definition is taken, IA can be described as --- paper_title: MESAP/TIMES- Advanced Decision Support for Energy and Environmental Planning paper_content: In view of the requirements for climate and environmental protection and the increased international competition within a deregulated energy market, authorities and energy providers want to improve their competence of strategic planning in order to identify robust decisions for the upcoming threats and opportunities. This paper presents the planning tool MESAP/TIMES which integrates the TIMES optimization model and the MESAP software environment for energy and environmental planning. Questions that may be studied with MESAP/TIMES range e.g. from the planning of a local energy system, over the evaluation of energy technologies on a national scale up to the analysis of carbon permit trading strategies in the international Kyoto protocol discussion. --- paper_title: Energy scientists must show their workings paper_content: Public trust demands greater openness from those whose research is used to set policy, argues Stefan Pfenninger. --- paper_title: A review of energy systems models in the UK: Prevalent usage and categorisation paper_content: In this paper, a systematic review of academic literature and policy papers since 2008 is undertaken with an aim of identifying the prevalent energy systems models and tools in the UK. 
A list of all referenced models is presented and the literature is analysed with regards sectoral coverage and technological inclusion, as well as mathematical structure of models. --- paper_title: What Does Europe Pay for Clean Energy? – Review of Macroeconomic Simulation Studies paper_content: This paper analyses the macroeconomic costs of environmental regulation in European energy markets on the basis of existing macroeconomic simulation studies. The analysis comprises the European emssion trading scheme, energy taxes, measures in the transport sector, and the promotion of renewable energy sources. We find that these instruments affect the European economy, in particular the energy intensive industries and the industries that produce internationally tradeable goods. From a macroeconomic point of view, however, the costs of environmental regulation appear to be modest. The underlying environmental targets and the efficient design of regulation are key determinants for the cost burden. --- paper_title: Impacts of new energy technology using generalized input-output analysis paper_content: Abstract The economic and environmental impacts of several new energy technologies (high and low BTU coal gasification and the gas turbine topping cycle) were examined for the 1980–1985 time period. A projected 1980–1985 U.S. input-output matrix was augmented to include environmental and resource usage variables. Engineering studies were used to modify the input-output matrix to represent the introduction of new energy technologies. The results illustrate the high sensitivity of capital investment to the rate of growth of energy consumption. The results also illustrate several economic mechanisms that will help to hold total capital investment within its historical bounds as a percentage of GNP. The methodology and many possible applications and extensions of generalized input-output model are included. A short critique of the methodology is also presented. --- paper_title: Energy impact of consumption decisions paper_content: The energy cost of goods and services is computed, and applications are discussed. The method utilizes the data base of input-output economics, but entails additional analysis. Applications range over consumption options for individuals, business, industry, and government; from the total energy cost of bus versus auto travel to the national import-export balance. --- paper_title: USEEIO: A new and transparent United States environmentally-extended input-output model paper_content: National-scope environmental life cycle models of goods and services may be used for many purposes, not limited to quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hotspots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally-extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. 
In comparison with existing US models, USEEIO is more current with most data representing year 2013, more extensive in its coverage of resources and emissions, more deliberate and detailed in its interpretation and combination of data sources, and includes formal data quality evaluation and description. USEEIO is assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO files are publicly available bringing a new level of transparency for environmentally-extended input-output models. --- paper_title: Process to planet: A multiscale modeling framework toward sustainable engineering paper_content: To prevent the chance of unintended environmental harm, engineering decisions need to consider an expanded boundary that captures all relevant connected systems. Comprehensive models for sustainable engineering may be developed by combining models at multiple scales. Models at the finest “equipment” scale are engineering models based on fundamental knowledge. At the intermediate “value chain” scale, empirical models represent average production technologies, and at the coarsest “economy” scale, models represent monetary and environmental exchanges for industrial sectors in a national or global economy. However, existing methods for sustainable engineering design ignore the economy scale, while existing methods for life cycle assessment do not consider the equipment scale. This work proposes an integrated, multiscale modeling framework for connecting models from process to planet and using them for sustainable engineering applications. The proposed framework is demonstrated with a toy problem, and potential applications of the framework including current and future work are discussed. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3332–3352, 2015 --- paper_title: The National Energy Modeling System: A Large-Scale Energy-Economic Equilibrium Model paper_content: The National Energy Modeling System (NEMS) is a large-scale mathematical model that computes equilibrium fuel prices and quantities in the U.S. energy sector and is currently in use at the U.S. Department of Energy (DOE). At present, to generate these equilibrium values, NEMS iteratively solves a sequence of linear programs and nonlinear equations. This is a nonlinear Gauss-Seidel approach to arrive at estimates of market equilibrium fuel prices and quantities. In this paper, we present existence and uniqueness results for NEMS-type models based on a nonlinear complementarity/variational inequality problem format. Also, we document mathematically, for the first time, how the inputs and the outputs for each NEMS module link together. --- paper_title: Application of rolling horizon optimization to an integrated solid-oxide fuel cell and compressed air energy storage plant for zero-emissions peaking power under uncertainty paper_content: Abstract In this study, the application of a rolling horizon optimization strategy to an integrated solid-oxide fuel cell/compressed air energy storage plant for load-following is investigated. A reduced-order model of the integrated plant is used to simulate and optimize each optimization interval as a mixed integer non-linear program. 
Forecasting uncertainties are considered through the addition of measurement noise and use of stochastic Monte Carlo simulations. The addition of rolling horizon optimization gives significant reductions to the sum-of-squared-errors between the demand and supply profiles. A sensitivity analysis is used to show that increasing the forecasting and optimization horizon improves load tracking with diminishing returns. Incorporating white Gaussian noise to demand forecasts has a marginal impact on error, even when a relatively high noise power of is used. Consistently over- or under-predicting demand has a greater impact on the plant's load-tracking error. However, even under worst-case forecasting scenarios, using a rolling horizon optimization scheme provides a more than 50% reduction in error when compared to the original system. An economic objective function is formulated to improve gross revenue using electricity spot-prices, but results in a trade-off with load-following performance. The results indicate that the rolling horizon optimization approach could potentially be applied to future municipal-scale fuel cell/compressed air storage systems to achieve power levels which closely follow real grid power cycles using existing prediction models. --- paper_title: Combined heat-and-power plants and district heating in a deregulated electricity market paper_content: In this paper, a municipality with district heating supplied via boilers and combined heat-and-power (CHP) plants is studied. The electricity load in the municipality is provided for by the CHP plant and electricity bought from the Nordic electricity market. It is therefore desirable to produce as much electricity as possible during periods when the price of electricity is high. The variations in the price of electricity over a 24-h period are significant. The idea presented in this paper is that heat storage can be used to maximise the amount of electricity produced in the CHP plants during peak-price periods. It can also be used to minimise the use of plants with higher operational costs. For storing heat, both a hot-water accumulator at the CHP plant and storage in the building stock are suggested. The situation is analysed using a mixed integer linear-programming model and a case study is presented for the city of Linkoping, a City of approximately 130,000 inhabitants, situated 200 km south of Stockholm in Sweden. A simple model for forecasting the electricity price on the Nordic electricity market is also presented. --- paper_title: Optimal Multi-scale Capacity Planning for Power-Intensive Continuous Processes under Time-sensitive Electricity Prices and Demand Uncertainty, Part I: Modeling paper_content: Abstract Time-sensitive electricity prices (as part of so-called demand-side management in the smart grid) offer economical incentives for large industrial customers. In part I of this paper, we propose an MILP formulation that integrates the operational and strategic decision-making for continuous power-intensive processes under time-sensitive electricity prices. We demonstrate the trade-off between capital and operating expenditures with an industrial case study for an air separation plant. Furthermore, we compare the insights obtained from a model that assumes deterministic demand with those obtained from a stochastic demand model. 
The value of the stochastic solution (VSS) is discussed, which can be significant in cases with an unclear setup, such as medium baseline product demand and growth rate, large variance or skewed demand distributions. While the resulting optimization models are large-scale, they can be solved within three days of computational time. A decomposition algorithm for speeding-up the solution time is described in part II. --- paper_title: Product Environmental Life‐Cycle Assessment Using Input‐Output Techniques paper_content: Summary ::: Life-cycle assessment (LCA) facilitates a systems view in environmental evaluation of products, materials, and processes. Life-cycle assessment attempts to quantify environmental burdens over the entire life-cycle of a product from raw material extraction, manufacturing, and use to ultimate disposal. However, current methods for LCA suffer from problems of subjective boundary definition, inflexibility, high cost, data confidentiality, and aggregation. ::: ::: This paper proposes alternative models to conduct quick, cost effective, and yet comprehensive life-cycle assessments. The core of the analytical model consists of the 498 sector economic input-output tables for the U.S. economy augmented with various sector-level environmental impact vectors. The environmental impacts covered include global warming, acidification, energy use, non-renewable ores consumption, eutrophication, conventional pollutant emissions and toxic releases to the environment. Alternative models are proposed for environmental assessment of individual products, processes, and life-cycle stages by selective disaggregation of aggregate input-output data or by creation of hypothetical new commodity sectors. To demonstrate the method, a case study comparing the life-cycle environmental performance of steel and plastic automobile fuel tank systems is presented. --- paper_title: Comparing two life cycle assessment approaches: a process model vs. economic input-output-based assessment paper_content: We compare two tools for Life Cycle Assessment (LCA). The software GaBi (Ganzheitliche Bilanzierung Integrated Assessment) from Germany is based on a process model approach, as recommended by the Society of Environmental Toxicology and Chemistry (SETAC). These results are contrasted to those from the method developed by Carnegie Mellon University's Green Design Initiative, Economic Input-Output Life Cycle Analysis (EIO-LCA). The EIO-LCA model uses economic input-output matrices, and industry sector level environmental and nonrenewable resource consumption data to assesses the economy-wide environmental impacts of products and processes. The results from the alternative approaches are compared in terms of toxic chemical releases, conventional pollutant emissions, energy use by fuel type, and use of ores. We find that most of the values from the two tools are within the same order of magnitude, despite the fundamental differences in the models. We contrast the two approaches to identify their relative strengths and weaknesses. --- paper_title: Expanding exergy analysis to account for ecosystem products and services paper_content: Exergy analysis is a thermodynamic approach used for analyzing and improving the efficiency of chemical and thermal processes. It has also been extended for life cycle assessment and sustainability evaluation of industrial products and processes. 
Although these extensions recognize the importance of capital and labor inputs and environmental impact, most of them ignore the crucial role that ecosystems play in sustaining all industrial activity. Decisions based on approaches that take nature for granted continue to cause significant deterioration in the ability of ecosystems to provide goods and services that are essential for every human activity. Accounting for nature's contribution is also important for determining the impact and sustainablility of industrial activity. In contrast, emergy analysis, a thermodynamic method from systems ecology, does account for ecosystems, but has encountered a lot of resistance and criticism, particularly from economists, physicists, and engineers. This paper expands the... --- paper_title: Errors in Conventional and Input-Output—based Life—Cycle Inventories paper_content: Conventional process‐analysis‐type techniques for compiling life‐cycle inventories suffer from a truncation error, which is caused by the omission of resource requirements or pollutant releases of higher‐order upstream stages of the production process. The magnitude of this truncation error varies with the type of product or process considered, but can be on the order of 50%. One way to avoid such significant errors is to incorporate input‐output analysis into the assessment framework, resulting in a hybrid life‐cycle inventory method. Using Monte‐Carlo simulations, it can be shown that uncertainties of input‐output– based life‐cycle assessments are often lower than truncation errors in even extensive, third‐order process analyses. --- paper_title: Sustainable process design by the process to planet framework paper_content: Sustainable process design (SPD) problems combine a process design problem with life cycle assessment (LCA) to optimize process economics and life cycle environmental impacts. While SPD makes use of recent advances in process systems engineering and optimization, its use of LCA has stagnated. Currently, only process LCA is utilized in SPD, resulting in designs based on incomplete and potentially inaccurate life cycle information. To address these shortcomings, the multiscale process to planet (P2P) modeling framework is applied to formulate and solve the SPD problem. The P2P framework offers a more comprehensive analysis boundary than conventional SPD and greater modeling detail than advanced LCA methodologies. Benefits of applying this framework to SPD are demonstrated with an ethanol process design case study. Results show that current methods shift emissions outside the analysis boundary, while applying the P2P modeling framework results in environmentally superior process designs. Future extensions of the P2P framework are discussed. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3320–3331, 2015 --- paper_title: High-efficiency power production from natural gas with carbon capture paper_content: Abstract A unique electricity generation process uses natural gas and solid oxide fuel cells at high electrical efficiency (74%HHV) and zero atmospheric emissions. The process contains a steam reformer heat-integrated with the fuel cells to provide the heat necessary for reforming. The fuel cells are powered with H2 and avoid carbon deposition issues. 100% CO2 capture is achieved downstream of the fuel cells with very little energy penalty using a multi-stage flash cascade process, where high-purity water is produced as a side product. 
Alternative reforming techniques such as CO2 reforming, autothermal reforming, and partial oxidation are considered. The capital and energy costs of the proposed process are considered to determine the levelized cost of electricity, which is low when compared to other similar carbon capture-enabled processes. --- paper_title: The Exergy Method of Thermal Plant Analysis paper_content: The subject of this book, the Exergy Method also known as the Availability Analysis, is a method of ::: thermodynamic analysis in which the basis of evaluation of thermodynamic losses follows from ::: the Second Law rather than the First Law of Thermodynamics. As a result of the recent developments in this technique combined with the increasing need to conserve fuel, the Exergy Method has gained in the last few years many new followers, both among practising engineers and academics. Its advantages, in relation to the traditional techniques which rely mainly on the First Law are now generally recognised. Although the Exergy Method has featured as the subject of many published papers in scientific and engineering journals and at conferences, very few comprehensive English language books on this subject have been published so far. This book is particularly intended for engineers ::: and students specialising in thermal and chemical plant design and operation as well as for applied scientists concerned with various aspects of conservation of energy. It introduces the subject in a manner that can be understood by anyone who is familiar with the fundamentals of Applied hermodynamics.Numerous examples are used in the book to aid the reader in assimilating the basic concepts and in mastering the techniques. The book contains a number of tables and charts which will be found of great assistance in calculations concerning topics such as thermoeconomics, refrigeration, ::: cryogenic processes, combustion, power generation and various aspects of chemical and process engineering. --- paper_title: Thermodynamic Input-Output Analysis of Economic and Ecological Systems paper_content: Ecological resources constitute the basic support system for all activity on earth. These resources include products such as air, water, minerals and crude oil and services such as carbon sequestration and pollution dissipation (Tilman et al. 2002; Daily 1997; Costanza et al. 1997; Odum 1996). However, traditional methods in engineering and economics often fail to account for the contribution of ecosystems despite their obvious importance. The focus of these methods tends to be on short-term economic objectives, while long-term sustainability issues get shortchanged. Such ignorance of ecosystems is widely believed to be one of the primary causes behind a significant and alarming deterioration of global ecological resources (WRI 2000; WWF 2000; UNEP 2002). To overcome the shortcomings of existing methods, and to make them ecologically more conscious, various techniques have been developed in recent years (Holliday et al. 2002). These techniques can be broadly divided into two categories, namely preference-based and biophysical methods. The preference-based methods use human valuation to account for ecosystem resources (AIChE 2004; Balmford et al. 2002; Bockstael et al. 2000; Costanza et al. 1997). These methods either use a single monetary unit to readily compare economic and ecological contributions, or use multi-criteria decision making to address trade-offs between indicators in completely different units. 
However, preference-based methods do not necessitate --- paper_title: Process to planet: A multiscale modeling framework toward sustainable engineering paper_content: To prevent the chance of unintended environmental harm, engineering decisions need to consider an expanded boundary that captures all relevant connected systems. Comprehensive models for sustainable engineering may be developed by combining models at multiple scales. Models at the finest “equipment” scale are engineering models based on fundamental knowledge. At the intermediate “value chain” scale, empirical models represent average production technologies, and at the coarsest “economy” scale, models represent monetary and environmental exchanges for industrial sectors in a national or global economy. However, existing methods for sustainable engineering design ignore the economy scale, while existing methods for life cycle assessment do not consider the equipment scale. This work proposes an integrated, multiscale modeling framework for connecting models from process to planet and using them for sustainable engineering applications. The proposed framework is demonstrated with a toy problem, and potential applications of the framework including current and future work are discussed. © 2015 American Institute of Chemical Engineers AIChE J, 61: 3332–3352, 2015 --- paper_title: Integrating Hybrid Life Cycle Assessment with Multiobjective Optimization: A Modeling Framework paper_content: By combining life cycle assessment (LCA) with multiobjective optimization (MOO), the life cycle optimization (LCO) framework holds the promise not only to evaluate the environmental impacts for a given product but also to compare different alternatives and identify both ecologically and economically better decisions. Despite the recent methodological developments in LCA, most LCO applications are developed upon process-based LCA, which results in system boundary truncation and underestimation of the true impact. In this study, we propose a comprehensive LCO framework that seamlessly integrates MOO with integrated hybrid LCA. It quantifies both direct and indirect environmental impacts and incorporates them into the decision making process in addition to the more traditional economic criteria. The proposed LCO framework is demonstrated through an application on sustainable design of a potential bioethanol supply chain in the UK. Results indicate that the proposed hybrid LCO framework identifies a considera... --- paper_title: Application of the exergetic cost theory to the CGAM problem paper_content: An optimization strategy for complex thermal systems is presented. The strategy is based on conventional techniques and incorporates assumptions and consequences of the exergetic cost theory (ECT) and of symbolic exergoeconomics. In addition to the results obtained by consventional techniques, this method provides valuable information about the interaction of components. --- paper_title: Exergy, waste accounting, and life-cycle analysis paper_content: The authors argue that thermodynamics offers a means of accounting for both resource inputs and waste outputs in a systematic and uniform way. The new feature of the present work is to extend the applications of exergy analysis to resource and waste accounting and to present the results in an integrated analytical framework, namely, life-cycle analysis (LCA). 
We conclude that exergy is appropriate for general statistical use, both as a measure of resource stocks and flows and as a measure of waste emissions and potential for causing environmental harm. --- paper_title: System boundary selection in life-cycle inventories using hybrid approaches. paper_content: Life-cycle assessment (LCA) is a method for evaluating the environmental impacts of products holistically, including direct and supply chain impacts. The current LCA methodologies and the standards by the International Organization for Standardization (ISO) impose practical difficulties for drawing system boundaries; decisions on inclusion or exclusion of processes in an analysis (the cutoff criteria) are typically not made on a scientific basis. In particular, the requirement of deciding which processes could be excluded from the inventory can be rather difficult to meet because many excluded processes have often never been assessed by the practitioner, and therefore, their negligibility cannot be guaranteed. LCA studies utilizing economic input-output analysis have shown that, in practice, excluded processes can contribute as much to the product system under study as included processes; thus, the subjective determination of the system boundary may lead to invalid results. System boundaries in LCA are discussed herein with particular attention to outlining hybrid approaches as methods for resolving the boundary selection problem in LCA. An input-output model can be used to describe at least a part of a product system, and an ISO-compatible system boundary selection procedure can be designed by applying hybrid input-output-assisted approaches. There are several hybrid input-output analysis-based LCA methods that can be implemented in practice for broadening system boundary and also for ISO compliance. --- paper_title: A brief Commented History of Exergy From the Beginnings to 2004 paper_content: This paper presents a brief critical and analytical account of the development of the concept of exergy and of its applications. It is based on a careful and extended (in time) consultation of a very large body of published references taken from archival journals, textbooks and other monographic works, conference proceedings, technical reports and lecture series. We have tried to identify the common thread that runs through all of the references, to put different issues into perspective, to clarify dubious points, to suggest logical and scientific connections and priorities. It was impossible to eliminate our respective biases that still affect the “style” of the present paper: luckily, some of our individual biases “cancelled out” at the time of writing, and some were corrected by our Reviewers (to whom we owe sincere thanks for the numerous and very relevant corrections and suggestions). The article is organized chronologically and epistemologically: it turns out that the two criteria allow for a quite clear systematization of the subject matter, because the development of the exergy concept was rather “linear”. This work is addressed to our Colleagues who are involved in theoretical research, industrial development, and societal applications of exergy concepts: if they extract from this article the idea of an extraordinary epistemological uniformity in the development of the concept of exergy, our goal will be achieved. 
The other addressees of this paper are Graduate Students taking their first steps in this field: in their case, we hope that consultation of our paper will prompt them to adopt and maintain throughout their career a scholarly valid method of research, which implies studying and respecting our scientific roots (the sources) but venturing freely and creatively into unknown territory. In the Conclusions we try to forecast future developments: this is the only part of the paper that is an intentional expression of our own views: the previous historical-scientific exposition is instead based on verifiable facts and accepted opinions. --- paper_title: Cumulative exergy consumption and cumulative degree of perfection of chemical processes paper_content: The paper presents methods of calculating the cumulative exergy consumption and introduces the notion of the cumulative degree of perfection of the entire chain of production processes leading to the material under consideration. Tables of values calculated on the basis of cumulative energy consumption indices have been included. --- paper_title: Net Energy Analysis and the Energy Requirements of Energy Systems paper_content: Foreword by Alvin M. Weinberg Part I: Energy Systems Introduction Man's Use of Energy Locations and Concentrations of Energy Energy--When and at What Rate? The Energy Services Part II: Valuations of Energy National and International Energy Statistics The Physicist's Approach Valuation of Energy in Engineering The Role of Energy in Economics The View of the Conservationist Part III: Methods of Cumulative Energy Accounting A Short History of Energy Accounting Useful Questions in Energy Accounting Problems in Applying Net-Energy Analysis Process Analysis and the Input-Output Method Schools of Net-Energy Analysis Indexes of Systems' Performance and Sankey Diagrams Part IV: Energy Accounting of Energy Carriers Electricity-Producing Systems Fuel-Producing Systems Part V: Energy Accounting of Energy Services Direct and Indirect Energy Requirements of Transport Systems Process Heat and Mechanical Drive Warm Water and Space Heat Part VI: Concluding Remarks Purpose and Method Answers and Partial Answers to Some Policy Questions --- paper_title: Exergy Analysis of Thermal, Chemical, and Metallurgical Processes paper_content: In addition to the exergy analysis of thermal processes, e.g. heat engines and commercial power stations, for which the methods described have been long established, the book considers the chemical and metallurgical process industries. Charts and tables are provided for the determination of the exergy of many typical substances. Examples are drawn from the fields of thermal, chemical and metallurgical engineering and the exergetic efficiency of typical processes is calculated. The book also discusses the application of the exergy concept to the problem of the economical optimization of complex plants and the implications to the environment of pollution due to external exergy losses. An Instructor's Manual is available which contains outline solutions to the problems listed at the end of each chapter. 
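The exergy-accounting entries above all rest on a small set of textbook relations: the physical exergy of a stream relative to a dead state, and the cumulative degree of perfection (CDP) of a production chain, i.e. the exergy of the useful product divided by the cumulative exergy consumed to make it. The short Python sketch below illustrates only these two standard formulas; the stream data, chain values, and function names are assumptions made for illustration and are not taken from any of the cited works.

```python
# Illustrative sketch of two standard exergy-accounting relations:
#   physical (thermomechanical) exergy of a stream: ex_ph = (h - h0) - T0 * (s - s0)
#   cumulative degree of perfection of a chain:     CDP = product exergy / cumulative exergy input
# All numerical inputs below are assumed example values.

T0 = 298.15  # dead-state (reference) temperature, K


def physical_exergy(h: float, s: float, h0: float, s0: float) -> float:
    """Specific physical exergy (kJ/kg) from enthalpy/entropy relative to the dead state."""
    return (h - h0) - T0 * (s - s0)


def cumulative_degree_of_perfection(product_exergy: float,
                                    cumulative_exergy_inputs: list[float]) -> float:
    """Ratio of product exergy to the cumulative exergy consumed along the whole chain."""
    return product_exergy / sum(cumulative_exergy_inputs)


if __name__ == "__main__":
    # Hypothetical superheated-steam stream (h, s) against the dead state (h0, s0).
    ex = physical_exergy(h=3230.0, s=6.92, h0=104.9, s0=0.367)
    # Hypothetical chain: product carries 500 MJ of exergy, made from 1200 MJ of fuel
    # exergy plus 150 MJ of exergy embodied in other inputs.
    cdp = cumulative_degree_of_perfection(500.0, [1200.0, 150.0])
    print(f"specific physical exergy ~ {ex:.1f} kJ/kg, CDP ~ {cdp:.2f}")
```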
--- paper_title: Practical approaches for applying thermoeconomic analysis to energy conversion systems : Benchmarking and comparative application paper_content: Abstract In the last decades, thermoeconomic analysis emerged as a combination of exergy analysis and cost accounting principles, widely used for multiple purposes: to account for the exergy and economic costs of energy systems products, to derive the structures of such costs for the design optimization purpose, and to perform system diagnosis quantifying the source and the impact of malfunctions and dysfunctions within the analyzed process. Traditionally, thermoeconomic analysis is referred to as Exergy Cost Analysis or Exergoeconomic Cost Analysis. The former is based on the so-called Exergy Cost Theory, focused on the evaluation of exergy cost of the system products, while the latter is focused on the evaluation of monetary cost following the same theory. Currently, many practical approaches are available in the literature for the application of thermoeconomic analysis and Exergy Cost Theory to energy conversion systems, while a comprehensive classification, benchmarking and comparison of such approaches is missing. This paper aims to fill this gap through the following activities: first of all, a brief but comprehensive literature review related to the theoretical developments and applications of thermoeconomic analysis method is performed. Secondly and for the purpose of benchmarking, the main practical approaches identified for the application of Exergy Cost Theory are presented and formalized, including the fundamental aspects related to the definition of auxiliary relations and the reallocation of the exergy cost of the residues. Finally, the identified approaches are comparatively applied to the standard CGAM problem, and the advantages and drawbacks of each approach are discussed. It is found that the definition of the functional diagram and the numerical solution of the system through input-output analysis seem to be more straightforward with respect to the other approaches, leading also to the formalization of an unambiguous method to reallocate the exergy cost of the residual flows. --- paper_title: Off-Design Modeling of Natural Gas Combined Cycle Power Plants: An Order Reduction by Means of Thermoeconomic Input–Output Analysis paper_content: In a European context characterized by growing need for operational flexibility across the electricity sector, the combined cycle power plants are increasingly subjected to cyclic operation. These new operation profiles cause an increase of production costs and decrease of revenues, which undermines the competitiveness of the combined cycles. Power plant operators need tools to predict the effect of off-design operation and control mechanisms on the performance of the power plant. Traditional Thermodynamic or Thermoeconomic models may be unpractical for the operators, due to their complexity and the computational effort they require. This study proposes a Thermoeconomic Input–Output Analysis model for the on- and off-design performance prediction of energy systems, and applies it to La Casella Natural Gas Combined Cycle (NGCC) power plant, in Italy. It represents a stand-alone, reduced order model, where the cost structure of the plant products and the Thermoeconomic performance indicators are derived for on- and off-design conditions as functions of the load and of different control mechanisms, independently from the Thermodynamic model. 
The results of the application show that the Thermoeconomic Input–Output Analysis model is a suitable tool for power plant operators, able to derive the same information coming from traditional Thermoeconomic Analysis with reduced complexity and computational effort. --- paper_title: The underestimated potential of solar energy to mitigate climate change paper_content: Despite being currently under-represented in IPCC reports, PV generation represents a growing share of power generation. This Perspective argues that underestimating PV potential led to suboptimal integration measures and that specific deployment strategies for emerging economies should be developed. --- paper_title: Technological learning in energy–environment–economy modelling: A survey paper_content: Abstract This paper aims at providing an overview and a critical analysis of the technological learning concept and its incorporation in energy–environment–economy models. A special emphasis is put on surveying and discussing, through the so-called learning curve, both studies estimating learning rates in the energy field and studies incorporating endogenous technological learning in bottom-up and top-down models. The survey of learning rate estimations gives special attention to interpreting and explaining the sources of variability of estimated rates, which is shown to be mainly inherent in R&D expenditures, the problem of omitted variable bias, the endogeneity relationship and the role of spillovers. Large-scale models survey show that, despite some methodological and computational complexity related to the non-linearity and the non-convexity associated with the learning curve incorporation, results of the numerous modelling experiments give several new insights with regard to the analysis of the prospects of specific technological options and their cost decrease potential (bottom-up models), and with regard to the analysis of strategic considerations, especially inherent in the innovation and energy diffusion process, in particular the energy sector's endogenous responses to environment policy instruments (top-down models). --- paper_title: The role of technology diffusion in a decarbonizing world to limit global warming to well below 2 °C: An assessment with application of Global TIMES model paper_content: Abstract Low-carbon power generation technologies such as wind, solar and carbon capture and storage are expected to play major roles in a decarbonized world. However, currently high cost may weaken the competitiveness of these technologies. One important cost reduction mechanism is the “learning by doing”, through which cumulative deployment results in technology costs decline. In this paper, a 14-region global energy system model (Global TIMES model) is applied to assess the impacts of technology diffusion on power generation portfolio and CO2 emission paths out to the year 2050. This analysis introduces three different technology learning approaches, namely standard endogenous learning, multiregional learning and multi-cluster learning. Four types of low-carbon power generation technologies (wind, solar, coal-fired and gas-fired CCS) undergo endogenous technology learning. 
The modelling results show that: (1) technology diffusion can effectively reduce the long-term abatement costs and the welfare losses caused by carbon emission mitigation; (2) from the perspective of global optimization, developed countries should take the lead in low-carbon technologies’ deployment; and (3) the establishment of an effective mechanism for technology diffusion across boundaries can enhance the capability and willingness of developing countries to cut down their CO2 emission. --- paper_title: High-efficiency power production from natural gas with carbon capture paper_content: Abstract A unique electricity generation process uses natural gas and solid oxide fuel cells at high electrical efficiency (74%HHV) and zero atmospheric emissions. The process contains a steam reformer heat-integrated with the fuel cells to provide the heat necessary for reforming. The fuel cells are powered with H2 and avoid carbon deposition issues. 100% CO2 capture is achieved downstream of the fuel cells with very little energy penalty using a multi-stage flash cascade process, where high-purity water is produced as a side product. Alternative reforming techniques such as CO2 reforming, autothermal reforming, and partial oxidation are considered. The capital and energy costs of the proposed process are considered to determine the levelized cost of electricity, which is low when compared to other similar carbon capture-enabled processes. --- paper_title: Alternative energy technologies paper_content: Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising. --- paper_title: Hydrogen supply chain architecture for bottom-up energy systems models. Part 2: Techno-economic inputs for hydrogen production pathways paper_content: Abstract This article is the second paper of a serial study on hydrogen energy system modelling. In the first study, we proposed a stylized hydrogen supply chain architecture and its pathways for the representation of hydrogen systems in bottom-up energy system models. In this current paper, we aim to present and assess techno-economic inputs and bandwidths for a hydrogen production module in bottom-up energy system models. After briefly summarizing the current technological status for each production method, we introduce the parameters and associated input data that are required for the representation of hydrogen production technologies in energy system modelling activities. This input data is described both as numeric values and trend line modes that can be employed in large or small energy system models. Hydrogen production technologies should be complemented with hydrogen storage and delivery pathways to fully understand the system integration. In this context, we will propose techno-economic inputs and technological background information for hydrogen delivery pathways in later work, as the final paper of this serial study. --- paper_title: Integration of market dynamics into the design of biofuel processes paper_content: Abstract Wood is considered one of the main future feedstocks in the production of second generation biofuels in Europe. 
While feedstock cost has a major impact on production cost, wood prices are expected to change significantly once biorefineries enter the raw material market as new high volume consumers. Therefore, wood market dynamics and approximate price forecasting should be integrated into decision-making during the design phase. In this contribution we show how such an integration could be realized by combining a preliminary process model to determine the raw material demand and a spatial partial equilibrium model of the wood market to predict the price development. This approach is illustrated on a case study examining the production of 2-methyltetrahydrofuran (MTHF) as a novel biofuel component, which is converted from deciduous wood grown in Germany. --- paper_title: How far away is hydrogen? Its role in the medium and long-term decarbonisation of the European energy system ☆ paper_content: Hydrogen is a promising avenue for decarbonising energy systems and providing flexibility. In this paper, the JRC-EU-TIMES model – a bottom-up, technology-rich model of the EU28 energy system – is used to assess the role of hydrogen in a future decarbonised Europe under two climate scenarios, current policy initiative (CPI) and long-term decarbonisation (CAP). Our results indicate that hydrogen could become a viable option already in 2030 – however, a long-term CO2 cap is needed to sustain the transition. In the CAP scenario, the share of hydrogen in the final energy consumption of the transport and industry sectors reaches 5% and 6% by 2050. Low-carbon hydrogen production technologies dominate, and electrolysers provide flexibility by absorbing electricity at times of high availability of intermittent sources. Hydrogen could also play a significant role in the industrial and transport sectors, while the emergence of stationary hydrogen fuel cells for hydrogen-to-power would require significant cost improvements, over and above those projected by the experts. --- paper_title: Hydrogen supply chain architecture for bottom-up energy systems models. Part 1: Developing pathways paper_content: Abstract The integration of hydrogen energy systems in the overall energy system is an important and complex subject for hydrogen supply chain management. The efficiency of the integration depends on finding optimum pathways for hydrogen supply. Accordingly, energy systems modelling methods and tools have been implemented to obtain the best configuration of hydrogen processes for a defined system. The appropriate representation of hydrogen technologies becomes an important stage for energy system modelling activities. This study, split in consecutive parts, has been conducted to analyse how representative hydrogen supply pathways can be integrated in energy systems modelling. The current paper, the first part of a larger study, presents stylised pathways of hydrogen supply chain options, derived on the basis of a detailed literature review. It aims at establishing a reference hydrogen energy system architecture for energy modelling tools. The subsequent papers of the study will discuss the techno-economic assumptions of the hydrogen supply chain components for energy modelling purposes. 
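The hydrogen-pathway entries above describe the kind of techno-economic inputs (capital cost, efficiency, lifetime, energy prices) that bottom-up energy system models consume. As a concrete illustration of how such inputs collapse into a single levelized cost figure, the sketch below computes a levelized cost of hydrogen from an annualized capital charge plus fixed O&M and electricity costs. It is a generic, textbook-style calculation under assumed example parameters; it does not reproduce the data or methodology of the cited papers.

```python
# Minimal levelized-cost-of-hydrogen (LCOH) sketch for an electrolyser, using the
# standard capital recovery factor. All parameter values are illustrative assumptions.

def capital_recovery_factor(rate: float, lifetime_years: int) -> float:
    """Annualization factor: CRF = r(1+r)^n / ((1+r)^n - 1)."""
    return rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)


def lcoh(capex_per_kw: float, fixed_om_frac: float, rate: float, lifetime: int,
         elec_price_per_kwh: float, kwh_per_kg_h2: float, full_load_hours: float) -> float:
    """Levelized cost of hydrogen per kg of H2."""
    crf = capital_recovery_factor(rate, lifetime)
    annual_cost_per_kw = capex_per_kw * (crf + fixed_om_frac)   # capital charge + fixed O&M
    kg_h2_per_kw_year = full_load_hours / kwh_per_kg_h2          # annual output per kW of capacity
    capital_part = annual_cost_per_kw / kg_h2_per_kw_year
    energy_part = elec_price_per_kwh * kwh_per_kg_h2
    return capital_part + energy_part


if __name__ == "__main__":
    # Hypothetical electrolyser: 1000 €/kW, 2% fixed O&M, 7% discount rate, 20-year life,
    # 50 kWh of electricity per kg H2, 4000 full-load hours, electricity at 0.05 €/kWh.
    print(f"LCOH ~ {lcoh(1000, 0.02, 0.07, 20, 0.05, 50, 4000):.2f} €/kg")
```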
--- paper_title: Comparison of CO2 Capture Approaches for Fossil-Based Power Generation: Review and Meta-Study paper_content: This work is a meta-study of CO2 capture processes for coal and natural gas power generation, including technologies such as post-combustion solvent-based carbon capture, the integrated gasification combined cycle process, oxyfuel combustion, membrane-based carbon capture processes, and solid oxide fuel cells. A literature survey of recent techno-economic studies was conducted, compiling relevant data on costs, efficiencies, and other performance metrics. The data were then converted in a consistent fashion to a common standard (such as a consistent net power output, country of construction, currency, base year of operation, and captured CO2 pressure) such that a meaningful and direct comparison of technologies can be made. The processes were compared against a standard status quo power plant without carbon capture to compute metrics such as cost of CO2 emissions avoided to identify the most promising designs and technologies to use for CO2 emissions abatement. --- paper_title: A review of learning rates for electricity supply technologies paper_content: A variety of mathematical models have been proposed to characterize and quantify the dependency of electricity supply technology costs on various drivers of technological change. The most prevalent model form, called a learning curve, or experience curve, is a log-linear equation relating the unit cost of a technology to its cumulative installed capacity or electricity generated. This one-factor model is also the most common method used to represent endogenous technical change in large-scale energy-economic models that inform energy planning and policy analysis. A characteristic parameter is the “learning rate,” defined as the fractional reduction in cost for each doubling of cumulative production or capacity. In this paper, a literature review of the learning rates reported for 11 power generation technologies employing an array of fossil fuels, nuclear, and renewable energy sources is presented. The review also includes multi-factor models proposed for some energy technologies, especially two-factor models relating cost to cumulative expenditures for research and development (R&D) as well as the cumulative installed capacity or electricity production of a technology. For all technologies studied, we found substantial variability (as much as an order of magnitude) in reported learning rates across different studies. Such variability is not readily explained by systematic differences in the time intervals, geographic regions, choice of independent variable, or other parameters of each study. This uncertainty in learning rates, together with other limitations of current learning curve formulations, suggests the need for much more careful and systematic examination of the influence of how different factors and assumptions affect policy-relevant outcomes related to the future choice and cost of electricity supply and other energy technologies. --- paper_title: Effects of technological learning on future cost and performance of power plants with CO2 capture paper_content: This paper demonstrates the concept of applying learning curves in a consistent manner to performance as well as cost variables in order to assess the future development of power plants with CO2 capture. 
An existing model developed at Carnegie Mellon University, which had provided insight into the potential learning of cost variables in power plants with CO2 capture, is extended with learning curves for several key performance variables, including the overall energy loss in power plants, the energy required for CO2 capture, the CO2 capture ratio (removal efficiency), and the power plant availability. Next, learning rates for both performance and cost parameters were combined with global capacity projections for fossil-fired power plants to estimate future cost and performance of these power plants with and without CO2 capture. The results of global learning are explicitly reported, so that they can be used for other purposes such as in regional bottom-up models. Results of this study show that IGCC with CO2 capture has the largest learning potential, with significant improvements in efficiency and reductions in cost between 2001 and 2050 under the condition that around 3100 GW of combined cycle capacity is installed worldwide. Furthermore, in a scenario with a strict climate policy, mitigation costs in 2030 are 26, 11, 19 €/t (excluding CO2 transport and storage costs) for NGCC, IGCC, and PC power plants with CO2 capture, respectively, compared to 42, 13, and 32 €/t in a scenario with a limited climate policy. Additional results are presented for IGCC, PC, and NGCC plants with and without CO2 capture, and a sensitivity analysis is employed to show the impacts of alternative assumptions on projected learning rates of different systems. --- paper_title: Learning through a portfolio of carbon capture and storage demonstration projects paper_content: Carbon capture and storage is considered an important element to meet our climate mitigation targets. This Perspective explores the history of the first wave of projects and what challenges must be faced if widespread deployment is to be successful. ---
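Several of the preceding entries rely on the one-factor experience curve, in which unit cost falls by a fixed learning rate with every doubling of cumulative installed capacity. The snippet below is a minimal illustration of that log-linear relation; the 10% learning rate and the capacity and cost figures are assumed example numbers, not estimates from the cited studies.

```python
import math

# One-factor experience (learning) curve:
#   C(x) = C0 * (x / x0) ** (-b), with learning rate LR = 1 - 2 ** (-b),
# i.e. every doubling of cumulative capacity x reduces unit cost by LR.
# All numbers below are illustrative assumptions.

def learning_exponent(learning_rate: float) -> float:
    return -math.log2(1.0 - learning_rate)


def projected_unit_cost(c0: float, x0: float, x: float, learning_rate: float) -> float:
    b = learning_exponent(learning_rate)
    return c0 * (x / x0) ** (-b)


if __name__ == "__main__":
    # Hypothetical capture-equipped plant: 2000 $/kW today at 10 GW installed;
    # projected cost at 100 GW of cumulative capacity with a 10% learning rate.
    print(f"{projected_unit_cost(2000.0, 10.0, 100.0, 0.10):.0f} $/kW")
```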
Title: Modeling and Simulation of Energy Systems: A Review
Section 1: Introduction
Description 1: Write about the importance of energy in the modern economy, the expected increase in energy consumption, and the necessity of transitioning to a sustainable energy future through the engagement of various stakeholders.
Section 2: Categorization According to Modeling Approach
Description 2: Present an overview of the different classes of energy system models, categorized into computational, mathematical, and physical models.
Section 3: The PSE Approach to Energy System Modeling and Simulation
Description 3: Discuss the Process Systems Engineering (PSE) approach, focusing on multi-scale systems engineering, sustainable process engineering, and emerging trends within the PSE community.
Section 4: The EE Approach to Energy System Modeling and Simulation
Description 4: Explain the Energy Economics (EE) approach, including demand and supply forecasting models, bottom-up models, and top-down models.
Section 5: Combining PSE and EE Approaches
Description 5: Make the case for combining PSE and EE approaches, highlighting opportunities such as optimal design and operation processes using demand and price forecasts, sustainability analysis using hybrid methods, and accounting for feedback effects of breakthrough technologies.
Section 6: Concluding Remarks
Description 6: Summarize the key points of the paper, emphasizing the importance of bridging the PSE and EE fields to get a holistic picture of the long-term performance of energy systems in a wider economic and policy context.
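Section 4 of the outline above refers to bottom-up energy-economic models, which at their core choose technology activity levels that minimize total system cost subject to demand and capacity constraints. The sketch below shows that idea at its smallest scale as a two-technology linear program solved with scipy; the cost, capacity, and demand numbers are invented for illustration and do not come from the surveyed models.

```python
# Toy bottom-up dispatch model: meet a fixed electricity demand at minimum cost
# from two technologies with different variable costs and capacity limits.
# Costs, capacities, and demand are assumed example values.
from scipy.optimize import linprog

variable_cost = [60.0, 90.0]   # $/MWh for technology A (e.g. baseload) and B (e.g. peaker)
capacity = [80.0, 100.0]       # MW available from each technology
demand = 120.0                 # MW to be served

# Decision variables: generation from each technology (MW).
# minimize 60*gA + 90*gB  subject to  gA + gB = demand,  0 <= g <= capacity
res = linprog(
    c=variable_cost,
    A_eq=[[1.0, 1.0]],
    b_eq=[demand],
    bounds=[(0.0, capacity[0]), (0.0, capacity[1])],
    method="highs",
)

if res.success:
    print("dispatch (MW):", res.x, "system cost ($/h):", res.fun)
```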
A survey of strategies for communication networks to protect against large-scale natural disasters
5
--- paper_title: Disaster survivability in optical communication networks paper_content: With the frequent occurrences of natural disasters damaging large portions of communication networks and the rising risk of intentional attacks, network vulnerability to multiple cascading, correlated, and collocated failures has become a major concern. Optical backbone networks provide highly-scalable connectivity across large distances. These networks exploit optical technology to carry huge aggregated data and can support ''higher-layer'' networks, such as SONET, Ethernet, IP, MPLS, ATM, etc. Given the high complexity and scale of backbone networks, multiple correlated failures can have a devastating impact on topological connectivity, which in turn can cause widespread ''end-to-end'' connection-level disruptions. These outages may affect many applications/services supported by the optical layer, irrespective of the importance of the service and/or sensitivity of the carried data. Hence, it is crucial to understand the vulnerability of optical backbone networks to disasters and design appropriate countermeasures. In this paper, we present a general classification of the existing research works on disaster survivability in optical networks and a survey on relevant works based on that classification. We also classify disasters based on their characteristics and impact on communication networks and discuss different ways to combat them. We conclude the paper with open issues and challenges. --- paper_title: Analysing GeoPath diversity and improving routing performance in optical networks paper_content: With the increasing frequency of natural disasters and intentional attacks that challenge telecommunication networks, vulnerability to cascading and regional-correlated challenges is escalating. Given the high complexity and large traffic load of optical networks, these correlated challenges cause substantial damage to reliable network communication. In this paper, we propose a network vulnerability identification mechanism and study different vulnerability scales using real-world optical network data. We further propose geographical diversity and incorporate it into a new graph resilience metric cTGGD (compensated Total Geographical Graph Diversity), which is capable of characterising and differentiating resiliency levels among different optical fibre networks. It is shown to be an effective resilience level indicator under regional network challenges or attacks. We further propose two heuristics for solving the path geodiverse problem (PGD) in which the calculation of a number of geographically separated paths is required. Geodiverse paths can be used to circumvent physical challenges such as large-scale disasters in telecommunication networks. We present the GeoDivRP routing protocol with two new routing heuristics implemented, which provides the end nodes with multiple geographically diverse paths. Our protocol demonstrates better performance compared to OSPF when the network is subject to area-based challenges. We have analysed the mechanism by which the attackers could use to maximise the attack impact with a limited budget and demonstrate the effectiveness of restoration plans. --- paper_title: An Overview of Algorithms for Network Survivability paper_content: Network survivability--the ability to maintain operation when one or a few network components fail--is indispensable for present-day networks. 
In this paper, we characterize three main components in establishing network survivability for an existing network, namely, (1) determining network connectivity, (2) augmenting the network, and (3) finding disjoint paths. We present a concise overview of network survivability algorithms, where we focus on presenting a few polynomial-time algorithms that could be implemented by practitioners and give references to more involved algorithms. --- paper_title: Resilience and survivability in communication networks : Strategies , principles , and survey of disciplines paper_content: The Internet has become essential to all aspects of modern life, and thus the consequences of network disruption have become increasingly severe. It is widely recognised that the Internet is not sufficiently resilient, survivable, and dependable, and that significant research, development, and engineering is necessary to improve the situation. This paper provides an architectural framework for resilience and survivability in communication networks and provides a survey of the disciplines that resilience encompasses, along with significant past failures of the network infrastructure. A resilience strategy is presented to defend against, detect, and remediate challenges, a set of principles for designing resilient networks is presented, and techniques are described to analyse network resilience. --- paper_title: Resilient Routing in Communication Networks paper_content: Introduction Principles of Communication Networks Resilience Resilience of Future Internet Communications Resilience of Wireless Mesh Networks Disruption-tolerant Routing in Vehicular Ad-hoc Networks --- paper_title: A survey on rapidly deployable solutions for post-disaster networks paper_content: In post-disaster scenarios, for example, after earthquakes or floods, the traditional communication infrastructure may be unavailable or seriously disrupted and overloaded. Therefore, rapidly deployable network solutions are needed to restore connectivity and provide assistance to users and first responders in the incident area. This work surveys the solutions proposed to address the deployment of a network without any a priori knowledge about the communication environment for critical communications. The design of such a network should also allow for quick, flexible, scalable, and resilient deployment with minimal human intervention. --- paper_title: Analyzing the Internet Stability in Presence of Disasters paper_content: The Internet is now a critical infrastructure for the modern, information-based, e-Society. Stability and survivability of the Internet are thus important, especially in presence of catastrophic events which carry heavy societal and financial impacts. In this work, we analyze the stability of the inter-domain routing system during several large-scale catastrophic events that affected the connectivity of massive parts of the address space, with the objective of acquiring information about degradation of service and recovery capabilities. --- paper_title: Evaluation of network resilience, survivability, and disruption tolerance: analysis, topology generation, simulation, and experimentation paper_content: As the Internet becomes increasingly important to all aspects of society, the consequences of disruption become increasingly severe. Thus it is critical to increase the resilience and survivability of future networks. 
We define resilience as the ability of the network to provide desired service even when challenged by attacks, large-scale disasters, and other failures. This paper describes a comprehensive methodology to evaluate network resilience using a combination of topology generation, analytical, simulation, and experimental emulation techniques with the goal of improving the resilience and survivability of the Future Internet. --- paper_title: Critical nodes for distance-based connectivity and related problems in graphs paper_content: This study considers a class of critical node detection problems that involves minimization of a distance-based connectivity measure of a given unweighted graph via the removal of a subset of nodes, referred to as critical nodes, subject to a budgetary constraint. The distance-based connectivity measure of a graph is assumed to be a function of the actual pairwise distances between nodes in the remaining graph (e.g., graph efficiency, Harary index, characteristic path length, residual closeness) rather than simply whether nodes are connected or not, a typical assumption in the literature. We derive linear integer programming (IP) formulations, along with additional enhancements, aimed at improving the performance of standard solvers. For handling larger instances, we develop an effective exact algorithm that iteratively solves a series of simpler IPs to obtain an optimal solution for the original problem. The edge-weighted generalization is also considered, which results in some interesting implications for distance-based clique relaxations, namely, s-clubs. Finally, we conduct extensive computational experiments with real-world and randomly generated network instances under various settings that reveal interesting insights and demonstrate the advantages and limitations of the proposed approach. In particular, one important conclusion of our work is that vulnerability of real-world networks to targeted attacks can be significantly more pronounced than what can be estimated by centrality-based heuristic methods commonly used in the literature. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 66(3), 170–195, 2015 --- paper_title: Network under joint node and link attacks: vulnerability assessment methods and analysis paper_content: Critical infrastructures such as communication networks, electrical grids, and transportation systems are highly vulnerable to natural disasters and malicious attacks. Even failures of a few nodes or links may have a profound impact on large parts of the system. 
Traditionally, network vulnerability assessment methods separate the studies of node vulnerability and link vulnerability, and thus ignore joint node and link attack schemes that may cause grave damage to the network. To this end, we introduce a new assessment method, called $\beta$-disruptor, that unifies both link and node vulnerability assessment. The new assessment method is formulated as an optimization problem in which we aim to identify a minimum-cost set of mixed links and nodes whose removal would severely disrupt the network connectivity. We prove the NP-completeness of the problem and propose an $O(\sqrt{\log n})$ bicriteria approximation algorithm for the $\beta$-disruptor problem. This new theoretical guarantee improves the best approximation results for both link and node vulnerability assessment in literature. We further enhance the proposed algorithm by embedding it into a special combination of simulated annealing and variable neighborhood search method. The results of our extensive simulation-based experiments on synthetic and real networks show the feasibility and efficiency of our proposed vulnerability assessment methods. --- paper_title: On New Approaches of Assessing Network Vulnerability: Hardness and Approximation paper_content: Society relies heavily on its networked physical infrastructure and information systems. Accurately assessing the vulnerability of these systems against disruptive events is vital for planning and risk management. Existing approaches to vulnerability assessments of large-scale systems mainly focus on investigating inhomogeneous properties of the underlying graph elements. These measures and the associated heuristic solutions are limited in evaluating the vulnerability of large-scale network topologies. Furthermore, these approaches often fail to provide performance guarantees of the proposed solutions. In this paper, we propose a vulnerability measure, pairwise connectivity, and use it to formulate network vulnerability assessment as a graph-theoretical optimization problem, referred to as β-disruptor. The objective is to identify the minimum set of critical network elements, namely nodes and edges, whose removal results in a specific degradation of the network global pairwise connectivity. We prove the NP-completeness and inapproximability of this problem and propose an O(log n log log n) pseudo-approximation algorithm for computing the set of critical nodes and an O(log^1.5 n) pseudo-approximation algorithm for computing the set of critical edges. The results of an extensive simulation-based experiment show the feasibility of our proposed vulnerability assessment framework and the efficiency of the proposed approximation algorithms in comparison to other approaches. --- paper_title: Exact identification of critical nodes in sparse networks via new compact formulations paper_content: Critical node detection problems aim to optimally delete a subset of nodes in order to optimize or restrict a certain metric of network fragmentation. In this paper, we consider two network disruption metrics which have recently received substantial attention in the literature: the size of the remaining connected components and the total number of node pairs connected by a path. Exact solution methods known to date are based on linear 0–1 formulations with at least \(\varTheta (n^3)\) entities and allow one to solve these problems to optimality only in small sparse networks with up to 150 nodes. 
In this work, we develop more compact linear 0–1 formulations for the considered types of problems with \(\varTheta (n^2)\) entities. We also provide reformulations and valid inequalities that improve the performance of the developed models. Computational experiments show that the proposed formulations allow finding exact solutions to the considered problems for real-world sparse networks up to 10 times larger and with CPU time up to 1,000 times faster compared to previous studies. --- paper_title: Finding critical regions and region-disjoint paths in a network paper_content: Due to their importance to society, communication networks should be built and operated to withstand failures. However, cost considerations make network providers less inclined to take robustness measures against failures that are unlikely to manifest, like several failures coinciding simultaneously in different geographic regions of their network. Considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region—a part of the network that can be enclosed by a given elementary figure of predetermined size—whose destruction would lead to the highest network disruption. We determine that only a polynomial, in the input, number of nontrivial positions for such a figure needs to be considered and propose a corresponding polynomial-time algorithm. In addition, we consider region-aware network augmentation to decrease the impact of a regional failure. We subsequently address the region-disjoint paths problem, which asks for two paths with minimum total weight between a source $(s)$ and a destination $(d)$ that cannot both be cut by a single regional failure of diameter $D$ (unless that failure includes $s$ or $d$ ). We prove that deciding whether region-disjoint paths exist is NP-hard and propose a heuristic region-disjoint paths algorithm. --- paper_title: Measuring the survivability of networks to geographic correlated failures paper_content: Wide area backbone communication networks are subject to a variety of hazards that can result in network component failures. Hazards such as power failures and storms can lead to geographical correlated failures. Recently there has been increasing interest in determining the ability of networks to survive geographic correlated failures and a number of measures to quantify the effects of failures have appeared in the literature. This paper proposes the use of weighted spectrum to evaluate network survivability regarding geographic correlated failures. Further we conduct a comparative analysis by finding the most vulnerable geographic cuts or nodes in the network though solving an optimization problem to determine the cut with the largest impact for a number of measures in the literature as well as weighted spectrum. Numerical results on several sample network topologies show that the worst-case geographic cuts depend on the measure used in an unweighted or a weighted graph. The proposed weighted spectrum measure is shown to be more versatile than other measures in both unweighted and weighted graphs. --- paper_title: The resilience of WDM networks to probabilistic geographical failures paper_content: Telecommunications networks, and in particular optical WDM networks, are vulnerable to large-scale failures of their physical infrastructure, resulting from physical attacks (such as an Electromagnetic Pulse attack) or natural disasters (such as solar flares, earthquakes, and floods). 
Such events happen at specific geographical locations and disrupt specific parts of the network but their effects are not deterministic. Therefore, we provide a unified framework to model the network vulnerability when the event has a probabilistic nature, defined by an arbitrary probability density function. Our framework captures scenarios with a number of simultaneous attacks, in which network components consist of several dependent subcomponents, and in which either a 1+1 or a 1∶1 protection plan is in place. We use computational geometric tools to provide efficient algorithms to identify vulnerable points within the network under various metrics. Then, we obtain numerical results for specific backbone networks, thereby demonstrating the applicability of our algorithms to real-world scenarios. Our novel approach allows for identifying locations which require additional protection efforts (e.g., equipment shielding). Overall, the paper demonstrates that using computational geometric techniques can significantly contribute to our understanding of network resilience. --- paper_title: Detection of spatially-close fiber segments in optical networks paper_content: Spatially-close network fibers have a significant chance of failing simultaneously in the event of man-made or natural disasters within their geographic area. Network operators are interested in the proper detection and grouping of any existing spatially-close fiber segments, to avoid service disruptions due to simultaneous fiber failures. Moreover, spatially-close fibers can further be differentiated by computing the intervals over which they are spatially close. In this paper, we propose (1) polynomial-time algorithms for detecting all the spatially-close fiber segments of different fibers, (2) a polynomial-time algorithm for finding the spatially-close intervals of a fiber to a set of other fibers, and (3) a fast exact algorithm for grouping spatially-close fibers using the minimum number of distinct risk groups. All of our algorithms have a fast running time when simulated on three real-world network topologies. --- paper_title: Finding geographic vulnerabilities in multilayer networks using reduced network state enumeration paper_content: Despite advancements in the analysis of networks with respect to geographic vulnerabilities, very few approaches exist that can be applied to large networks with varied applications and network measures. Natural and man-made disasters as well as major political events (like riots) have kept the challenges of geographic failures in networks in the forefront. With the increasing interest in multilayer and virtual networks, methods to analyze these networks for geographic vulnerabilities are important. In this paper, we present a state space analysis method that analyzes multilayer networks for geographic vulnerabilities. It uses either the inability to provision an upper layer service and/or increased costs to provision as the criteria for network failure. Mapping techniques for multilayer network states are presented. Simplifying geographic state mapping techniques to reduce enumeration costs are also presented and tested. Finally, these techniques are tested on small and extremely large networks. --- paper_title: Assessing the impact of geographically correlated network failures paper_content: Communication networks are vulnerable to natural disasters, such as earthquakes or floods, as well as to human attacks, such as an electromagnetic pulse (EMP) attack. 
Such real-world events have geographical locations, and therefore, the geographical structure of the network graph affects the impact of these events. In this paper we focus on assessing the vulnerability of (geographical) networks to such disasters. In particular, we aim to identify the location of a disaster that would have the maximum effect on network capacity. We consider a geometric graph model in which nodes and links are geographically located on a plane. Specifically, we model the physical network as a bipartite graph (in the topological and geographical sense) and consider the set of all vertical line segment cuts. For that model, we develop a polynomial time algorithm for finding a worst possible cut. Our approach has the potential to be extended to general graphs and provides a promising new direction for network design to avert geographical disasters or attacks. --- paper_title: Determining Geographic Vulnerabilities Using a Novel Impact Based Resilience Metric paper_content: Various natural and man-made disasters as well as major political events (like riots) have increased the importance of understanding geographic failures and how correlated failures impact networks. Since mission critical networks are overlaid as virtual networks over a physical network infrastructure forming multilayer networks, there is an increasing need for methods to analyze multilayer networks for geographic vulnerabilities. In this paper, we present a novel impact-based resilience metric. Our new metric uses ideas borrowed from performability to combine network impact with state probability to calculate a new metric called Network Impact Resilience. The idea is that the highest impact to the mission of a network should drive its resilience metric. Furthermore, we present a state space analysis method that analyzes multilayer networks for geographic vulnerabilities. To demonstrate the methods, the inability to provision a given number of upper layer services is used as the criteria for network failure. Mapping techniques for multilayer network states are presented. Simplifying geographic state mapping techniques to reduce enumeration costs are also presented and tested. Finally, these techniques are tested on networks of varying sizes. --- paper_title: Assessing the Vulnerability of the Fiber Infrastructure to Disasters paper_content: Communication networks are vulnerable to natural disasters, such as earthquakes or floods, as well as to physical attacks, such as an Electromagnetic Pulse (EMP) attack. Such real- world events happen in specific geographical locations and disrupt specific parts of the network. Therefore, the geographical layout of the network determines the impact of such events on the network's connectivity. In this paper, we focus on assessing the vulnerability of (geographical) networks to such disasters. In particular, we aim to identify the most vulnerable parts of the network. That is, the locations of disasters that would have the maximum disruptive effect on the network in terms of capacity and connectivity. We consider graph models in which nodes and links are geographically located on a plane, and model the disaster event as a line segment or a circular cut. We develop algorithms that find a worst- case line segment cut and a worst-case circular cut. Then, we obtain numerical results for a specific backbone network, thereby demonstrating the applicability of our algorithms to real-world networks. 
Our novel approach provides a promising new direction for network design to avert geographical disasters or attacks. --- paper_title: Spatiotemporal risk-averse routing paper_content: A cyber-physical system is often designed as a network in which critical information is transmitted. However, network links may fail, possibly as the result of a disaster. Disasters tend to display spatiotemporal characteristics, and consequently link availabilities may vary in time. Yet, the requested connection availability of traffic must be satisfied at all times, even under disasters. In this paper, we argue that often the spatiotemporal impact of disasters can be predicted, such that suitable actions can be taken, before the disaster manifests, to ensure the availability of connections. Our main contributions are three-fold: (1) we propose a generic grid-based model to represent the risk profile of a network area and relate the risk profile to the availability of links and connections, (2) we propose a polynomial-time algorithm to identify connections that are vulnerable to an emerging disaster risk, and (3) we consider the predicted spatiotemporal disaster impact, and propose a polynomial-time algorithm based on an auxiliary graph to find the most risk-averse path under a time constraint. --- paper_title: Gust buffeting and aeroelastic behaviour of poles and monotubular towers paper_content: Abstract The evolution in the constructional field and the realization of ever more slender and light structures have emphasized the increasing difficulty of properly evaluating the actions and effects of wind on poles and monotubular towers. Faced with this situation the Italian constructors, united in a consortium coordinated by ACS ACAI Servizi, entrusted the Department of Structural and Geotechnical Engineering of Genova University with the task of formulating an ad hoc calculation procedure for this type of structure. This gave rise to a wide-ranging research project in which theoretical models, experimental evaluations and engineering methods were developed in parallel through an effective and quite a unique co-operation between researchers, designers and builders. This paper illustrates the physical aspects, the general principles and the basic formulation of the method proposed, with special emphasis on gust buffeting and aeroelastic phenomena. Preliminary results of full-scale measurements of the structural damping are also presented. The conclusions highlight the scientific and technical perspectives of this research. --- paper_title: Telecommunications Power Plant Damage Assessment for Hurricane Katrina– Site Survey and Follow-Up Results paper_content: This paper extends knowledge of disaster impact on the telecommunications power infrastructure by discussing the effects of Hurricane Katrina based on an on-site survey conducted in October 2005 and on public sources. It includes observations about power infrastructure damage in wire-line and wireless networks. In general, the impact on centralized network elements was more severe than on the distributed portion of the grids. The main cause of outage was lack of power due to fuel supply disruptions, flooding and security issues. 
This work also describes the means used to restore telecommunications services and proposes ways to improve logistics, such as coordinating portable generator set deployment among different network operators and reducing genset fuel consumption by installing permanent photovoltaic systems at sites where long electric outages are likely. One long term solution is to use of distributed generation. It also discusses the consequences on telecom power technology and practices since the storm. --- paper_title: Masts and Towers paper_content: The analysis and design of masts and towers requires special knowledge and experience, especially when it concerns guyed masts. The special problems related to these structures are underlined by the many collapses during the years. The basis of design for such antenna supporting structures are sometimes many and often mutual contradictory, and the overall structural layout may have a dramatically effect on the loading on the structure. The loads are mainly meteorological from wind and ice and combination of these, and the dynamic nature of the wind has to be taken into account as masts and towers are more or less sensitive to dynamic loads. This paper gives a brief introduction to the problems related to the design, as well as several practical examples are mentioned. The aesthetic elements are becoming more and more important for antenna supporting structures and are also mentioned. The IASS Working Group No. 4 Masts and Towers is the only international forum for the exchange of knowledge and experience within the field of masts and towers, and this Working group is briefly mentioned in the paper. --- paper_title: Dynamic monitoring and numerical modelling of communication towers with FBG based accelerometers paper_content: Abstract This study presents the dynamic monitoring of two telecommunication tall slender steel towers with an optical FBG accelerometer. Numerical simulation for both towers was used recurring to finite elements modelling in order to demonstrate the feasibility of using optical technology in this type of structural monitoring. The results show a good agreement between experimental and simulated data, demonstrating that the optical accelerometer can be a very useful tool in the monitoring of tall slender structures. --- paper_title: The past 20 years of telecommunication structures in Portugal paper_content: Abstract This paper reviews the analysis and design of telecommunication structures and presents the main problems observed for various types of structures. The nation of Portugal is selected for a case study, and more specifically, the evolution of Portuguese structural design standards for telecommunications systems is summarised using comparative analyses that cover a subset of the most relevant topics for design, including wind profiles, drag coefficients, dynamic effects and reliability classes. These analyses focus on characterisation of the effects of wind action, which plays a fundamental role in the behaviour and design of these structures. Following the comparative analyses of standards, the more common problems observed over the past 20 years in guyed masts and towers located in Portugal are presented and discussed. --- paper_title: Telecom power planning for natural and man-made disasters paper_content: This paper discusses a planning framework to reduce telecommunication network power supply vulnerability during natural and man-made disasters. The analysis proposes a three-part structure for a plan. 
The first part is an assessment of risk. Risk assessment involves identifying potential disasters at each site as well as evaluating the probability of occurrence and its impact. Impact evaluation focuses on electric grid, roads, and natural gas distribution infrastructure. The second part is an evaluation of resources and logistics. Use of alternative technologies to diversify energy supply, such as distributed generation power units, is discussed. The last part of the plan is its execution. The importance of record keeping and control of the plan outcome during this phase is emphasized, as well as conducting periodic drills to test and improve the plan. --- paper_title: STEM-NET: How to Deploy a Self-Organizing Network of Mobile End-User Devices for Emergency CommunicationComputer Communications paper_content: Spontaneous wireless networks constructed out of mobile end-user devices (e.g. smartphones or tablets) are currently receiving considerable interest as they enable a wide range of novel, highly pervasive and user-centric network services and applications. In this paper, we focus on emergency-related scenarios, and we investigate the potential of spontaneous networks for providing Internet connectivity over the emergency area through the sharing of resources owned by the end-user devices. Novel and extremely flexible network deployment strategies are required in order to cope with the user mobility, the limited communication capabilities of wireless devices, and the intrinsic dynamics of traffic loads and QoS requirements. To this purpose, we propose here a novel approach toward the deployment of spontaneous networks composed by a new generation of wireless devices - called Stem Nodes (SNs) - to emphasize their ability to cover multiple network roles (e.g. gateway, router). The self-organization of the spontaneous network is then achieved through the local reconfiguration of each SN. Two complementary research contributions are provided. First, we describe the software architecture of a SN (which can be implemented on top of existing end-user devices), and we detail how a SN can manage its role set, eventually extending it through cooperation with other SNs. Second, we propose distributed algorithms, based on swarm intelligence principles, through which each SN can autonomously select its role, and self-elect to gateway or router, so that end-to-end performance are maximized while the lifetime of the spontaneous emergency network is prolonged. The ability of the proposed algorithm to guarantee adaptive and self-organizing network behaviors is demonstrated through extensive Omnet++ simulations, and through a prototype implementation of the SN architecture on a real testbed. --- paper_title: Network adaptability from disaster disruptions and cascading failures paper_content: Disasters can cause severe service disruptions due to large-scale correlated cascading failures in telecom networks. Major network disruptions due to disasters — both natural (e.g., Hurricane Sandy, 2011 Japan Tsunami) and human-made (e.g., 9/11 terrorist attack) — deprive the affected population of essential network services for weeks and severely hamper rescue operations. Many techniques exist to provide fast network protection, but they are optimized for limited faults without addressing the extent of disaster failures. Thus, there is a pressing need for novel robust survivability methods to mitigate the effects of disasters on telecom networks. 
While researchers in climatology, geology, and environmental science have been studying how to predict disasters and assess disaster risks for certain regions, networking research can exploit this information to develop novel methods to prepare networks to handle disasters with the knowledge of risky regions and to better prepare them for a predicted disaster. The events during the aftermath of a disaster should also be considered. For instance, methods to re-arrange network resources and services on a partially damaged network, which is the property of a self-organizing network, should be developed, and new algorithms to manage the post-disaster traffic deluge and to relieve the rescue operations after a disaster, with the knowledge of the post-disaster failures, should be investigated. Since cloud services today are an integral part of our society and massive amounts of content/services have been created and shared over the cloud, loss/disruption of critical content/services caused by disasters can significantly affect the security and economic well-being of our society. As the network is becoming increasingly an end-to-content (vs. end-to-end) connection provider, we have to ensure reachability of content from any point of a network, which we call content connectivity (in contrast to network connectivity), after disaster failures. This article presents the nature of possible disruptions in telecom networks caused by disaster events, and sheds light on how to prepare the network and cloud services against disasters, and adapt them for disaster disruptions and cascading failures. --- paper_title: LTE-advanced based handover mechanism for natural disaster situations paper_content: Telecommunication networks often face power outage problems in natural disaster-affected areas. Also, owing to a sudden substantial increase in network traffic load, the battery backup power of the base stations runs out quickly, thereby hampering telecommunication services. To overcome these system performance issues, we propose a Long Term Evolution (LTE)-Advanced (LTE-A)-based user equipment (UE)-controlled and base station (Evolved Node B or eNB)-assisted handover scheme. The idea is to limit the arrival of new traffic to an already overloaded eNB by diverting handovers to lightly loaded nearby eNBs. The novelty of this work is the ability of a UE to self-detect the occurrence of a natural disaster and to self-select the most suitable target eNB (TeNB) to hand over to in the disaster-affected areas. The handover is performed by obtaining the weighted average score (WAS) of the direction of motion (DoM) and the leftover battery backup power of the different neighboring eNBs (NeNB). The UE also predicts its DoM and dynamically adjusts the weights of the two parameters if it is in a disaster situation. Preliminary simulation results show that the scheme can offer up to a 65% handover success rate in disaster situations. --- paper_title: Proposal of disaster avoidance control paper_content: This paper proposes the concept of disaster avoidance control for forecasted disasters. This control is used to relocate objects to avoid or minimize the damage from forecasted disasters. This paper also illustrates a system for implementing the control and discusses the metrics used in the control. An experimental system for executing this control is also described.
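As an illustration of the WAS-based target selection described in the LTE-advanced handover entry above, the sketch below combines a direction-of-motion alignment term with the leftover battery backup of each neighboring eNB. This is a hedged reconstruction, not the authors' implementation: the normalization of the alignment term, the weight values, and the example numbers are all assumptions made for illustration.

```python
# Illustrative sketch of a weighted-average-score (WAS) target-eNB selection.
# Assumptions (not from the paper): alignment is normalized to [0, 1], battery is a
# fraction in [0, 1], and the weights shift toward battery when a disaster is detected.

def was_score(ue_heading_deg, enb_bearing_deg, enb_battery_frac, w_dom, w_batt):
    """Weighted average of direction-of-motion alignment and leftover battery backup."""
    diff = abs((ue_heading_deg - enb_bearing_deg + 180.0) % 360.0 - 180.0)  # 0..180 degrees
    alignment = 1.0 - diff / 180.0          # 1.0 = eNB lies directly along the UE's heading
    return w_dom * alignment + w_batt * enb_battery_frac

def select_target_enb(ue_heading_deg, neighbours, disaster=False):
    """neighbours: name -> (bearing_deg, battery_frac). Returns the highest-scoring eNB."""
    w_dom, w_batt = (0.3, 0.7) if disaster else (0.6, 0.4)   # assumed weight policy
    return max(neighbours, key=lambda n: was_score(ue_heading_deg,
                                                   neighbours[n][0], neighbours[n][1],
                                                   w_dom, w_batt))

if __name__ == "__main__":
    neighbours = {"eNB-A": (10.0, 0.15), "eNB-B": (95.0, 0.90), "eNB-C": (200.0, 0.60)}
    print(select_target_enb(ue_heading_deg=20.0, neighbours=neighbours, disaster=True))
```

Shifting the weights toward the battery term when a disaster is detected mirrors the paper's intent of steering new traffic away from energy-starved cells.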
--- paper_title: Improving the Resilience of Transport Networks to Large-scale Failures paper_content: Telecommunication networks have to deal with fiber cuts, hardware malfunctioning and other failures on a daily basis, events which are usually treated as isolated and unrelated. Efficient methods have been developed for coping with such common failures and hence users rarely notice them. Although less frequently, there also arise cases of multiple failures with catastrophic consequences. Multiple failures can occur for many reasons, for example, natural disasters, epidemic outbreaks affecting software components, or intentional attacks. This article investigates new methods for lessening the impact of such failures in terms of the number of connections affected. Two heuristic-based link prioritization strategies for improving network resilience are proposed. One strategy is built upon the concept of betweenness centrality, while the second is based on what we call the observed link criticality. Both strategies are evaluated through simulation on a large synthetic topology that represents a GMPLS-based transport network. The provisioning of connections in a dynamic traffic scenario as well as the occurrence of large-scale failures are simulated for the evaluation. --- paper_title: Enhancing Network Robustness via Shielding paper_content: We consider shielding critical links to enhance the robustness of a network, in which shielded links are resilient to failures. We first study the problem of increasing network connectivity by shielding links that belong to small cuts of a network, which improves the network reliability under random link failures. We then focus on the problem of shielding links to guarantee network connectivity under geographical and general failure models. We develop a mixed integer linear program (MILP) to obtain the minimum cost shielding to guarantee the connectivity of a single source–destination pair under a general failure model, and exploit geometric properties to decompose the shielding problem under a geographical failure model. We extend our MILP formulation to guarantee the connectivity of the entire network, and use Benders decomposition to significantly reduce the running time. We also apply simulated annealing to obtain near-optimal solutions in much shorter time. Finally, we extend the algorithms to guarantee partial network connectivity, and observe significant reduction in the shielding cost, especially when the geographical failure region is small. --- paper_title: Survivable virtual network mapping to provide content connectivity against double-link failures paper_content: In recent years, large scale natural disasters (such as earthquakes, or tsunami) have caused multiple Internet outages in different parts of the world, resulting in high infrastructures damages and capacity losses. Content providers are currently investigating novel disaster-resiliency mechanisms to maintain the service continuity in their Content Delivery Networks (CDNs). In case of such widespread failures, the content providers might not be able to guarantee Network Connectivity (i.e., the reachability of all nodes from any node in the network) and researchers have started investigating the concept of Content Connectivity (i.e., the reachability of the content from any point of the network), that can be achieved even when the network is disconnected, as long as a replica of the content can be retrieved in all the disconnected components of the network. 
In this paper we focus on double-link failures and consider different combinations of content connectivity and network connectivity. As guaranteeing network connectivity against double-link failures may result in very high network-resource consumption, in this work we present an Integer Linear Programming (ILP) formulation for survivable virtual network mapping to guarantee the network connectivity after single-link failures and maintain the content connectivity after double-link failures. We show that maintaining content connectivity against double-link failures costs almost the same, in terms of network resources, as providing network connectivity against single-link failures. We also investigate the trade-off between datacenter placement and the amount of resources needed to provide content connectivity in case of double-link failures. --- paper_title: Fault-tolerant virtual network mapping to provide Content Connectivity in optical networks paper_content: We define Content Connectivity as the reachability of every content from any point of an optical network. We propose a scheme for virtual network mapping and content placement to ensure content connectivity after failures. --- paper_title: New Options and Insights for Survivable Transport Networks paper_content: This article is devoted to a selection of recent topics in survivable networking. New ideas in capacity design and ring-to-mesh evolution are given, as well as a systematic comparison of the capacity requirements of several mesh-based schemes showing how they perform over a range of network graph connectivity. The work provides new options and insights to address the following questions. How does one evolve from an existing ring-based network to a future mesh network? If the facilities graph is very sparse, how can mesh efficiency be much better than rings? How do the options for mesh protection or restoration rank in capacity requirements? How much is efficiency increased if we enrich our network connectivity? We also outline p-cycles, showing this new concept can realize ring-like speed with mesh-like efficiency. The scope is limited to conveying basic ideas with an understanding that they could be further adapted for use in IP or DWDM layers with GMPLS-type protocols or a centralized control plane. --- paper_title: Analysis of and proposal for a disaster information network from experience of the Great East Japan Earthquake paper_content: Recently, serious natural disasters such as earthquakes, tsunamis, typhoons, and hurricanes have occurred in many places around the world. The Great East Japan Earthquake on March 11, 2011 claimed more than 19,000 victims and destroyed a huge number of houses, buildings, roads, and seaports over a wide area of northern Japan. Information networks and systems and electric power lines were also severely damaged by the great tsunami. The functions of a highly developed information society, as well as residents' sense of safety and trust, were completely lost. Thus, through the lessons of this great earthquake, building a more robust and resilient information network has become a significant subject. In this article, our information network recovery activity in the aftermath of the Great East Japan Earthquake is described. Then, the problems of current information network systems, observed during that recovery activity, are analyzed in order to improve our disaster information network and systems. Finally, we suggest the systems and functions required for future large-scale disasters.
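The content connectivity requirement discussed in the two virtual-network-mapping entries above can be checked directly by exhaustive enumeration on small topologies. The sketch below is an illustrative verification, not the papers' ILP: it simply confirms that, after removing any pair of links, every node can still reach at least one content replica. The toy topology and replica placement are assumptions.

```python
# Illustrative brute-force check of content connectivity under double-link failures:
# after every pair of link failures, each surviving node must still reach some replica.

from itertools import combinations

def reachable_from(nodes, edges, sources):
    """Set of nodes reachable from any node in `sources` over undirected `edges`."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(sources), list(sources)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def content_connected_after_double_failures(nodes, edges, replicas):
    """(True, None) if content connectivity survives all double-link failures,
    otherwise (False, offending_link_pair)."""
    for f1, f2 in combinations(edges, 2):
        surviving = [e for e in edges if e not in (f1, f2)]
        if reachable_from(nodes, surviving, replicas) != set(nodes):
            return False, (f1, f2)
    return True, None

if __name__ == "__main__":
    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
    # With replicas at a and c, node b is cut off if both of its links fail:
    print(content_connected_after_double_failures(nodes, edges, replicas={"a", "c"}))
```

The failing link pair returned here isolates node b from both replicas; augmenting the topology or the replica placement until the check passes is the kind of trade-off the first entry quantifies with its ILP.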
--- paper_title: A survey on rapidly deployable solutions for post-disaster networks paper_content: In post-disaster scenarios, for example, after earthquakes or floods, the traditional communication infrastructure may be unavailable or seriously disrupted and overloaded. Therefore, rapidly deployable network solutions are needed to restore connectivity and provide assistance to users and first responders in the incident area. This work surveys the solutions proposed to address the deployment of a network without any a priori knowledge about the communication environment for critical communications. The design of such a network should also allow for quick, flexible, scalable, and resilient deployment with minimal human intervention. --- paper_title: Bringing movable and deployable networks to disaster areas: development and field test of MDRU paper_content: Communication demand is paramount for disaster-affected people to confirm safety, seek help, and gather evacuation information. However, the communication infrastructure is likely to be crippled due to a natural disaster, which makes disaster response excruciatingly difficult. Although traditional approaches can partially fulfill the most important requirements from the user perspective, including prompt deployment, high capacity, large coverage, useful disaster-time application, and carrier-free usability, a complete solution that provides all those features is still required. Our collaborative research and development group has developed the Movable and Deployable Resource Unit, which is referred to as the MDRU and has been proven to have all those required features. Via extensive field tests using a compact version of an MDRU (i.e., the van-type MDRU), we verify the effectiveness of the MDRU-based disaster recovery network. Moreover, we demonstrate the further improvement of the MDRU’s performance when it is complemented by other technologies such as relay-by-smartphone or satellites. --- paper_title: Experimental emergency communication systems using USRP and GNU radio platform paper_content: One lesson learned from recent large-scale disasters is that the destroyed communication infrastructure severely burdens the relief operation and the recovery mission. How to establish a temporary emergency communication system (ECS) providing reliable communication channels upon deployment becomes a must. An Universal Software Radio Peripheral platform with GNU Radio is employed for experimential purpose with basic voice communications and short message services. With the aid of pre-installed APP in victims' smartphone, valuable information about the victim, such as the identity, location, or physical condition, can be automatically delivered to the developed BS, which significantly facilitates the rescue work. Furthermore, BS acting as a relay could provide direct voice connectivity between victims and relief workers, thereby avoiding considerable fatalities. --- paper_title: Emergenet: robust, rapidly deployable cellular networks paper_content: Cellular phone networks are often paralyzed after a disaster, as damage to fixed infrastructure, loss of power, and increased demand degrade coverage and quality of service. To ensure disaster victims and first responders have access to reliable local and global communication, we propose EmergeNet, a portable, rapidly deployable, small-scale cellular network. In this article, we describe EmergeNet, which addresses the challenges of emergency and disaster areas. 
EmergeNet provides free voice calling and text messaging within a disaster area, and enables users of unmodified GSM handsets to communicate with the outside world using the Skype VoIP network. We evaluate EmergeNet's ability to provide robust service despite high load, limited bandwidth, and software or hardware failures. EmergeNet is uniquely well suited to providing reliable, fairly allocated voice and text communication in emergency and disaster scenarios. --- paper_title: On-the-fly establishment of multihop wireless access networks for disaster recovery paper_content: This article proposes a novel approach to on-the-fly establishment of multihop wireless access networks (OEMAN) for disaster response. OEMAN extends Internet connectivity from surviving access points to disaster victims using their own mobile devices. OEMAN is set up on demand using wireless virtualization to create virtual access points on mobile devices. Virtual access points greedily form a tree-based topology, configured automatically for naming and addressing, which is then used to provide multihop wireless Internet access to users. Ordinary users can easily connect to the Internet through OEMAN as if they are connected through conventional access points. After connecting, users naturally contribute to the network extension, realizing the self-supporting capability of a disaster's local communities. The proposed scheme establishes a wireless access network quickly, which is essential in emergency relief situations. Furthermore, OEMAN is transparent to users and cost effective as it does not require additional hardware. Experimental evaluations on top of our preliminary prototype over Windows-based laptops confirm OEMAN's feasibility and its effectiveness for multihop paths of up to seven hops, and standard Internet services such as audio and video streaming. --- paper_title: Extending Network Coverage by Using Static and Mobile Relays during Natural Disasters paper_content: During natural disasters, such as earthquakes, a part of the Internet access infrastructure can be damaged, leaving many users disconnected. At the same time, many people need to communicate to find their relatives and receive official notifications about the current situation. This paper presents an evaluation of different techniques for extending network coverage in such scenarios. We use real-world data to model the power outage probability of cellular base stations in the Tokyo area and combine it with information about batteries/power generators to create accurate maps of network coverage for different time periods after an earthquake. In our simulation, we use a real map of evacuation sites, provided by the Japanese government. We first considered mobile nodes, moving between evacuation sites, and investigated their impact on network coverage. Then, we developed an algorithm to determine the optimal locations for static relays ensuring different levels of network coverage. Our results show that even a small number of fixed relays, carefully placed between the evacuation sites, can outperform a much higher number of mobile nodes in terms of network coverage. --- paper_title: STEM-NET: How to Deploy a Self-Organizing Network of Mobile End-User Devices for Emergency Communication paper_content: Spontaneous wireless networks constructed out of mobile end-user devices (e.g.
smartphones or tablets) are currently receiving considerable interest as they enable a wide range of novel, highly pervasive and user-centric network services and applications. In this paper, we focus on emergency-related scenarios, and we investigate the potential of spontaneous networks for providing Internet connectivity over the emergency area through the sharing of resources owned by the end-user devices. Novel and extremely flexible network deployment strategies are required in order to cope with the user mobility, the limited communication capabilities of wireless devices, and the intrinsic dynamics of traffic loads and QoS requirements. To this purpose, we propose here a novel approach toward the deployment of spontaneous networks composed by a new generation of wireless devices - called Stem Nodes (SNs) - to emphasize their ability to cover multiple network roles (e.g. gateway, router). The self-organization of the spontaneous network is then achieved through the local reconfiguration of each SN. Two complementary research contributions are provided. First, we describe the software architecture of a SN (which can be implemented on top of existing end-user devices), and we detail how a SN can manage its role set, eventually extending it through cooperation with other SNs. Second, we propose distributed algorithms, based on swarm intelligence principles, through which each SN can autonomously select its role, and self-elect to gateway or router, so that end-to-end performance are maximized while the lifetime of the spontaneous emergency network is prolonged. The ability of the proposed algorithm to guarantee adaptive and self-organizing network behaviors is demonstrated through extensive Omnet++ simulations, and through a prototype implementation of the SN architecture on a real testbed. --- paper_title: A smartphone-based post-disaster management mechanism using WiFi tethering paper_content: Natural disasters often cause the breakdown of the power grid in the affected areas hampering telecommunication services. In the absence of electrical power undamaged cellular base stations switch to battery backup to sustain communication. However, owing to the sudden substantial increase in voice and data traffic, base stations become over-congested fast and battery backups quickly get exhausted. We propose a smartphone-based post-disaster management mechanism for managing traffic in the affected areas using the concept of WiFi tethering. Smart phones in the affected areas may turn themselves into temporary WiFi hotspots to provide internet connectivity and important communication abilities to nearby WiFi-enabled user devices. The hotspots can self-assess the number of new connections they can serve based on their leftover battery energy. Client devices, approaching the affected areas, can also self-select the most suitable hotspot to connect to depending on their proximity and nature of motion with respect to individual hotspots. Long Term Evolution Advanced (LTE-A) underlying networks are considered for this work.
Whenever such undesirable events occur, it is crucial to recover the system as quickly as possible because the downtime of social infrastructure causes catastrophic consequences for society. In the business continuity context, the Recovery Time Objective (RTO) has been used as a criterion to specify the allowable maximum time to recover from system failure events. While RTO gives the requirement for system recovery time, performance degradation of infrastructure service during the recovery period is another dimension that should be taken into consideration. In this paper, we introduce survivability as a generalization of recovery behavior which can address the performance impacts during the recovery, and we show a survivability quantification example in an escalating and deferred repair system. --- paper_title: Robust Fault Tolerant uncapacitated facility location paper_content: In the uncapacitated facility location problem, given a graph, a set of demands, and opening costs, it is required to find a set of facilities $R$ so as to minimize the sum of the cost of opening the facilities in $R$ and the cost of assigning all node demands to open facilities. This paper concerns the robust fault-tolerant version of the uncapacitated facility location problem (RFTFL). In this problem, one or more facilities might fail, and each demand should be supplied by the closest open facility that did not fail. It is required to find a set of facilities $R$ so as to minimize the sum of the cost of opening the facilities in $R$ and the cost of assigning all node demands to open facilities that did not fail, after the failure of up to $\alpha$ facilities. We present a polynomial time algorithm that yields a 6.5-approximation for this problem with at most one failure and a $1.5 + 7.5\alpha$-approximation for the problem with at most $\alpha > 1$ failures. We also show that the RFTFL problem is NP-hard even on trees, and even in the case of a single failure. --- paper_title: Traveling repairman problem for optical network recovery to restore virtual networks after a disaster [invited] paper_content: Virtual networks mapped over a physical network can suffer disconnection and/or outage due to disasters. After a disaster occurs, the network operator should determine a repair schedule and then send repairmen to repair failures following the schedule. The schedule can change the overall effect of a disaster by changing the restoration order of failed components. In this study, we introduce the traveling repairman problem to help the network operator make the schedule after a disaster. We measure the overall effect of a disaster from the damage it caused, and we define the damage as the numbers of disconnected virtual networks, failed virtual links, and failed physical links. Our objective is to find an optimal schedule for a repairman to restore the optical network with minimum damage. We first state the problem; then a mixed integer linear program (MILP) and three heuristic algorithms, namely dynamic programming (DP), the greedy algorithm (GR), and simulated annealing (SA), are proposed. Finally, simulation results show that the repair schedules obtained with MILP and DP incur the least damage but have the highest complexity; GR incurs the highest damage with the lowest complexity, while SA strikes a good balance between damage and complexity.
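The repair-scheduling idea in the traveling repairman entry above can be illustrated with a tiny brute-force sketch: enumerate the possible restoration orders and pick the one minimizing the damage-weighted sum of restoration times. This is an assumption-laden toy, not the paper's MILP, DP, greedy, or simulated annealing algorithms; the travel times, repair durations, and damage weights below are invented for illustration.

```python
# Toy repair scheduling: pick the restoration order that minimizes the sum, over failed
# sites, of (damage weight) x (time at which the site is restored). Damage weights could
# stand in for the number of disconnected virtual networks, as in the entry above.

from itertools import permutations

def schedule_damage(order, travel, repair_time, damage, depot):
    t, here, total = 0.0, depot, 0.0
    for site in order:
        t += travel[(here, site)] + repair_time[site]   # travel there, then repair
        total += damage[site] * t                        # damage accrues until restoration
        here = site
    return total

def best_schedule(sites, travel, repair_time, damage, depot):
    return min(permutations(sites),
               key=lambda order: schedule_damage(order, travel, repair_time, damage, depot))

if __name__ == "__main__":
    depot, sites = "repair-base", ["s1", "s2", "s3"]
    dist = {("repair-base", "s1"): 1, ("repair-base", "s2"): 4, ("repair-base", "s3"): 2,
            ("s1", "s2"): 2, ("s1", "s3"): 3, ("s2", "s3"): 1}
    travel = {}
    for (u, v), w in dist.items():
        travel[(u, v)] = travel[(v, u)] = w
    repair_time = {"s1": 1, "s2": 2, "s3": 1}
    damage = {"s1": 5, "s2": 1, "s3": 3}
    print(best_schedule(sites, travel, repair_time, damage, depot))
```

Brute force is only feasible for a handful of failed sites; the number of restoration orders grows factorially, which is why the paper proposes DP and SA heuristics alongside the exact MILP.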
--- paper_title: On-site configuration of disaster recovery access networks made easy paper_content: Catastrophic disasters can destroy large regions and, in the process, leave many victims isolated from the rest of the world. Recovering the communication infrastructure is typically slow and expensive, which is not suitable for emergency response. Multihop wireless access networks have the potential to quickly provide Internet connectivity to victims, but so far no simple and practical solution has been proposed to help people configure these networks easily. We are pursuing the approach of utilizing wireless virtualization techniques to establish wireless access networks on-the-fly using on-site mobile devices. While our previous work has demonstrated proof-of-concept solutions, it lacked fundamental communication abstractions, a rigorous design, and a thorough analysis on the effectiveness of these solutions. The main new contributions of this article are: (1) the wireless multihop communication abstraction (WMCA) as a fundamental communication concept for a practical tree-based disaster recovery access network (TDRAN), (2) the complete design and implementation details of TDRAN, and (3) a comprehensive analysis of the effectiveness of the proposed approach based on field experiments, both in indoor and outdoor settings, at different sites in Japan. The results demonstrate the effectiveness of the proposed solution for on-site configuration of wireless access networks, as it can easily extend to 20 hops by 15 m-distance and 16 hops by 30 m-distance networks, which result in 300 m and 480 m (respectively) in radius or about 1 km in diameter. This work also confirms that our approach is ready for realization as a real disaster recovery solution. --- paper_title: The Role of the Internet of Things in Network Resilience paper_content: Disasters lead to devastating structural damage not only to buildings and transport infrastructure, but also to other critical infrastructure, such as the power grid and communication backbones. Following such an event, the availability of minimal communication services is however crucial to allow efficient and coordinated disaster response, to enable timely public information, or to provide individuals in need with a default mechanism to post emergency messages. The Internet of Things consists in the massive deployment of heterogeneous devices, most of which battery-powered, and interconnected via wireless network interfaces. Typical IoT communication architectures enables such IoT devices to not only connect to the communication backbone (i.e. the Internet) using an infrastructure-based wireless network paradigm, but also to communicate with one another autonomously, without the help of any infrastructure, using a spontaneous wireless network paradigm. In this paper, we argue that the vast deployment of IoT-enabled devices could bring benefits in terms of data network resilience in face of disaster. Leveraging their spontaneous wireless networking capabilities, IoT devices could enable minimal communication services (e.g. emergency micro-message delivery) while the conventional communication infrastructure is out of service. We identify the main challenges that must be addressed in order to realize this potential in practice. These challenges concern various technical aspects, including physical connectivity requirements, network protocol stack enhancements, data traffic prioritization schemes, as well as social and political aspects. 
--- paper_title: Comparing Strategies to Construct Local Disaster Recovery Networks paper_content: Large-scale disasters, such as earthquakes and tsunamis, damage communication infrastructure. The damaged infrastructure is then not able to provide the means for communication, which is important after a disaster. In this paper we simulate how survivors of a disaster can place smart phones or notebooks (hereafter called devices) as stationary relay chains to connect to other evacuation centers and Internet gateways to access the Internet. To determine the time necessary to set up the network and the number of evacuation centers connected to the Internet, we create a Poisson-based simulation of evacuation centers and Internet gateways. We then compare strategies how to interconnect evacuation centers and gateways. Our results show that among the strategies to place relay chains we tested, the most promising are: (1) Link every evacuation center to the closest gateway with a relay chain and (2) link each evacuation center to the 3 closest evacuation centers or gateways with relay chains. Both these strategies seem promising and should be tested in field tests in the next step. Together with step-by-step guides for disaster survivors this work will hopefully result in a smart phone and notebook application that lets untrained disaster survivors quickly set up their own recovery network in the disaster area. --- paper_title: An Overview of Algorithms for Network Survivability paper_content: Network survivability--the ability to maintain operation when one or a few network components fail--is indispensable for present-day networks. In this paper, we characterize three main components in establishing network survivability for an existing network, namely, (1) determining network connectivity, (2) augmenting the network, and (3) finding disjoint paths. We present a concise overview of network survivability algorithms, where we focus on presenting a few polynomial-time algorithms that could be implemented by practitioners and give references to more involved algorithms. --- paper_title: Tunable QoS-aware network survivability paper_content: Coping with network failures has been recognized as an issue of major importance in terms of social security, stability and prosperity. It has become clear that current networking standards fall short of coping with the complex challenge of surviving failures. The need to address this challenge has become a focal point of networking research. In particular, the concept of tunable survivability offers major performance improvements over traditional approaches. Indeed, while the traditional approach is to provide full (100%) protection against network failures through disjoint paths, it was realized that this requirement is too restrictive in practice. Tunable survivability provides a quantitative measure for specifying the desired level (0%-100%) of survivability and offers flexibility in the choice of the routing paths. Previous work focused on the simpler class of “bottleneck” criteria, such as bandwidth. In this study, we focus on the important and much more complex class of additive criteria, such as delay and cost. First, we establish some (in part, counter-intuitive) properties of the optimal solution. Then, we establish efficient algorithmic schemes for optimizing the level of survivability under additive end-to-end QoS bounds. 
Subsequently, through extensive simulations, we show that, at the price of negligible reduction in the level of survivability, a major improvement (up to a factor of 2) is obtained in terms of end-to-end QoS performance. Finally, we exploit the above findings in the context of a network design problem, in which we need to best invest a given “budget” for improving the performance of the network links. --- paper_title: Tunable survivable spanning trees paper_content: Coping with network failures has become a major networking challenge. The concept of tunable survivability provides a quantitative measure for specifying any desired level (0%-100%) of survivability, thus offering flexibility in the routing choice. Previous works focused on implementing this concept on unicast transmissions. However, vital network information is often broadcasted via spanning trees. Accordingly, in this study, we investigate the application of tunable survivability for efficient maintenance of spanning trees under the presence of failures. We establish efficient algorithmic schemes for optimizing the level of survivability under various QoS requirements. In addition, we derive theoretical bounds on the number of required trees for maximum survivability. Finally, through extensive simulations, we demonstrate the effectiveness of the tunable survivability concept in the construction of spanning trees. Most notably, we show that, typically, negligible reduction in the level of survivability results in major improvement in the QoS performance of the resulting spanning trees. --- paper_title: Path diversification for future internet end-to-end resilience and survivability paper_content: Path Diversification is a new mechanism that can be used to select multiple paths between a given ingress and egress node pair using a quantified diversity measure to achieve maximum flow reliability. The path diversification mechanism is targeted at the end-to-end layer, but can be applied at any level for which a path discovery service is available. Path diversification also takes into account service requirements for low-latency or maximal reliability in selecting appropriate paths. Using this mechanism will allow future internetworking architectures to exploit naturally rich physical topologies to a far greater extent than is possible with shortest-path routing or equal-cost load balancing. We describe the path diversity metric and its application at various aggregation levels, and apply the path diversification process to 13 real-world network graphs as well as 4 synthetic topologies to asses the gain in flow reliability. Based on the analysis of flow reliability across a range of networks, we then extend our path diversity metric to create a composite compensated total graph diversity metric that is representative of a particular topology's survivability with respect to distributed simultaneous link and node failures. We tune the accuracy of this metric having simulated the performance of each topology under a range of failure severities, and present the results. The topologies used are from national-scale backbone networks with a variety of characteristics, which we characterize using standard graph-theoretic metrics. The end result is a compensated total graph diversity metric that accurately predicts the survivability of a given network topology. 
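A minimal way to make the path-diversity notion from the path diversification entry above concrete is to score a candidate path by the fraction of the base (shortest) path's links it avoids. The sketch below is a simplified reading of that metric: it only counts shared links, whereas the published definition also accounts for shared intermediate nodes and aggregates over whole path sets. The BFS base path and the toy topology are illustrative assumptions.

```python
# Simplified link-only path-diversity score: 1.0 means the candidate shares no links
# with the base path, 0.0 means it reuses every link of the base path.

from collections import deque

def bfs_path(adj, s, t):
    """Hop-count shortest path from s to t on an undirected adjacency dict, or None."""
    prev, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = [t]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def links(path):
    """Undirected links traversed by a node sequence."""
    return {frozenset(e) for e in zip(path, path[1:])}

def path_diversity(base, candidate):
    shared = links(base) & links(candidate)
    return 1.0 - len(shared) / len(links(base))

if __name__ == "__main__":
    adj = {"a": ["b"], "b": ["a", "c", "e"], "c": ["b", "d"],
           "d": ["c", "e"], "e": ["b", "d"]}
    base = bfs_path(adj, "a", "d")                        # a-b-c-d
    print(path_diversity(base, ["a", "b", "e", "d"]))     # shares only link a-b -> ~0.67
```

Selecting, for each ingress-egress pair, a small set of candidates with high mutual diversity is the core of the path diversification mechanism; the compensated total graph diversity metric then summarizes such scores over the whole topology.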
--- paper_title: Protection coordination for dual failure on two-layer networks paper_content: Network layers such as IP/MPLS and OTN/ASON each has its own failure protection scheme. We propose a coordinated protection plan, so called protection synergy, to protect all possible dual failures by utilizing existing single failure protection schemes. There are two aspects essential for effective dual failure protection: One is to guarantee the connectivity under any dual fiber failures, the other is to allocate minimum but enough spare capacity on both layers. Our model achieves both goals using a novel topology mapping technique and computing working and backup paths with an accurate path disjoint criterion. The experimental results on four networks demonstrate complete dual failure restorability and spare capacity savings of the protection synergy approach. --- paper_title: Spare capacity allocation using shared backup path protection for dual link failures paper_content: This paper extends the spare capacity allocation (SCA) problem from single failures to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans traffic flows with mutually disjoint one working and two backup paths using the shared backup path protection (SBPP) scheme. The spare provision matrix (SPM) method aggregates per-flow based information and computes the shared spare capacity for dual link failures. When compared to previous two-flow based methods, it has better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: one finds all primary backup paths, and the other finds all secondary backup paths. We extend the terminologies in the 1+1 and 1:1 link protection for the backup path protection: using '':'' to indicate backup paths with shared spare capacity; and using ''+'' to indicate backup paths with dedicated capacity. Numerical results from five networks show that the network redundancy of the 1+1+1 dedicated path protection is in the range of 313-400%. It drops to 96-180% in the 1:1:1 shared backup path protection without loss of dual-link resiliency, but with the trade-off of the highest complexity on spare capacity shared by all backup paths. The 1+1:1 hybrid path protection provides intermediate redundancy at 187-310% with the moderate complexity. It has dedicated primary backup paths and shared secondary backup paths. We also compare passive sharing with active sharing. They perform spare capacity sharing either after or during the backup path routing, i.e., the active sharing approach performs share spare capacity within the backup path routing, while the passive sharing does so only after all backup paths are found. The active sharing approaches always achieve lower redundancy values than the passive sharing. The reduction percentages are about 12% for 1+1:1 and 25% for 1:1:1 respectively. The extension of the Successive Survivable Routing (SSR) heuristic algorithm to the dual failure case is given and the numerical results show that SSR maintains a 4-11% gap from optimal on small or medium networks, and scales up well on large networks. --- paper_title: Minimizing the Risk From Disaster Failures in Optical Backbone Networks paper_content: Failures caused by disasters (e.g., weapons of mass destruction (WMD) attacks, earthquakes, hurricanes, etc.) can create huge amount of loss and disruptions in optical backbone networks. 
Due to major network disruptions in recent disasters, network operators need solutions to prevent connections from disasters, and recover them after disasters. 1) Prevention requires proactive approaches where the damage from a possible disaster should be estimated. Specifically, we propose disaster-risk-aware provisioning, which minimizes loss to a network operator in case of a disaster. 2) Recovery methods should consider that, after the initial failure, more connections might be disconnected by correlated cascading failures. Thus, we investigate a reprovisioning scheme to recover disrupted connections. Numerical examples conducted for different disaster types (WMD attack, earthquake, and tornado) show that our schemes significantly reduce the risk and loss in case of a disaster. --- paper_title: Finding critical regions and region-disjoint paths in a network paper_content: Due to their importance to society, communication networks should be built and operated to withstand failures. However, cost considerations make network providers less inclined to take robustness measures against failures that are unlikely to manifest, like several failures coinciding simultaneously in different geographic regions of their network. Considering networks embedded in a two-dimensional plane, we study the problem of finding a critical region—a part of the network that can be enclosed by a given elementary figure of predetermined size—whose destruction would lead to the highest network disruption. We determine that only a polynomial, in the input, number of nontrivial positions for such a figure needs to be considered and propose a corresponding polynomial-time algorithm. In addition, we consider region-aware network augmentation to decrease the impact of a regional failure. We subsequently address the region-disjoint paths problem, which asks for two paths with minimum total weight between a source $(s)$ and a destination $(d)$ that cannot both be cut by a single regional failure of diameter $D$ (unless that failure includes $s$ or $d$ ). We prove that deciding whether region-disjoint paths exist is NP-hard and propose a heuristic region-disjoint paths algorithm. --- paper_title: Enhancing network service survivability in large-scale failure scenarios paper_content: Large-scale failures resulting from natural disasters or intentional attacks are now causing serious concerns for communication network infrastructure, as the impact of large-scale network connection disruptions may cause significant costs for service providers and subscribers. In this paper, we propose a new framework for the analysis and prevention of network service disruptions in large-scale failure scenarios. We build dynamic deterministic and probabilistic models to capture the impact of regional failures as they evolve with time. A probabilistic failure model is proposed based on wave energy behaviour. Then, we develop a novel approach for preventive protection of the network in such probabilistic large-scale failure scenarios. We show that our method significantly improves uninterrupted delivery of data in the network and reduces service disruption times in large-scale regional failure scenarios. --- paper_title: Spatiotemporal risk-averse routing paper_content: A cyber-physical system is often designed as a network in which critical information is transmitted. However, network links may fail, possibly as the result of a disaster. 
Disasters tend to display spatiotemporal characteristics, and consequently link availabilities may vary in time. Yet, the requested connection availability of traffic must be satisfied at all times, even under disasters. In this paper, we argue that often the spatiotemporal impact of disasters can be predicted, such that suitable actions can be taken, before the disaster manifests, to ensure the availability of connections. Our main contributions are three-fold: (1) we propose a generic grid-based model to represent the risk profile of a network area and relate the risk profile to the availability of links and connections, (2) we propose a polynomial-time algorithm to identify connections that are vulnerable to an emerging disaster risk, and (3) we consider the predicted spatiotemporal disaster impact, and propose a polynomial-time algorithm based on an auxiliary graph to find the most risk-averse path under a time constraint. --- paper_title: An approach for short message resilience in disaster-stricken areas paper_content: Large scale disasters may destroy essential network infrastructures and cause network-based service failure over a large area. Communication between two nodes in a resilient network should continue without any support of network infrastructure. Delay and Disruption Tolerant Network (DTN) is the choice for network resilience because it is not dependent on any infrastructure and supports flexibility in link delay and availability. Network resilience must consider the power limitations of the nodes and unusually very high traffic demand after a disaster hits. Existing DTN routing protocols' performances are not satisfactory in terms of the two most important metrics of communication in a disaster-stricken area-message delivery ratio and residual energy of the communicating devices. We propose a new message routing mechanism, called Location-aware Message Delivery (LMD), for communicating among wireless devices that uses greedy forwarding towards the intended destinations. Simulation results show the effectiveness of our proposed approach in the delivery ratio and energy saving of the devices. Overall efficiency of our scheme is superior to those of compared DTN routing protocols. --- paper_title: A Spectrum- and Energy-Efficient Scheme for Improving the Utilization of MDRU-Based Disaster Resilient Networks paper_content: The movable and deployable resource unit (MDRU)-based network provides communication services in disaster-struck areas where the lack of spectrum and energy resources is intensified due to the high demand from users and the power outages after a disaster. The MDRU-based network attempts to apply spectrum- and energy-efficient methods to provide communications services to users. However, existing works in this field only consider spectrum efficiency or energy efficiency separately, in spite of the tradeoff relationship between them. Thus, we propose a scheme to improve the utilization of both spectrum and energy resources for better system performance. The considered MDRU-based network is composed of gateways deployed in the disaster area, which can replenish their energy by using solar panels. Our proposed scheme constructs a topology based on the top $k$ spectrum-efficient paths from each sender and applies a max flow algorithm with vertex capacities, which are the number of transmissions that each gateway can send, which is referred to as transmission capability. 
The transmission capability of each gateway is determined by its energy resource and distances to its neighbors. Furthermore, we show that the proposal can be used for multisender–multireceiver topologies. A new metric named spectrum–energy efficiency to measure both spectrum efficiency and energy efficiency of the network is defined. Through analyses, we prove that a value of $k$ exists such that the spectrum–energy efficiency of a given topology is maximized. Furthermore, our simulation results show that, by dynamically selecting appropriate value of $k$ , the proposed scheme can provide better spectrum–energy efficiency than existing approaches. Moreover, our experimental results verify the findings of our analysis. ---
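The "max flow with vertex capacities" step mentioned in the last entry above is usually handled with the standard node-splitting transformation: each gateway g becomes an internal arc g_in -> g_out whose capacity equals g's transmission capability, so an ordinary edge-capacity max-flow solver enforces the per-gateway limit. The sketch below illustrates only that transformation; the construction of the topology from the top-k spectrum-efficient paths is not reproduced, the tiny topology and capability numbers are assumptions, and the code relies on the networkx library.

```python
# Node-splitting sketch for max flow with per-gateway transmission capabilities.
# Requires networkx; the topology and the capability values are illustrative.

import networkx as nx

def max_flow_with_vertex_capacities(edges, capability, source, sink):
    """edges: (u, v, link_capacity); capability: node -> transmission limit (None = unbounded)."""
    G = nx.DiGraph()
    for n, cap in capability.items():
        if cap is None:
            G.add_edge((n, "in"), (n, "out"))                 # no capacity attr = unbounded
        else:
            G.add_edge((n, "in"), (n, "out"), capacity=cap)   # per-gateway limit
    for u, v, c in edges:
        G.add_edge((u, "out"), (v, "in"), capacity=c)         # ordinary link capacity
    value, _ = nx.maximum_flow(G, (source, "out"), (sink, "in"))
    return value

if __name__ == "__main__":
    capability = {"mdru": None, "g1": 2, "g2": 1, "net": None}
    edges = [("mdru", "g1", 5), ("mdru", "g2", 5), ("g1", "net", 5), ("g2", "net", 5)]
    # Link capacities alone would allow 10 units, but gateway capabilities cap the flow at 3.
    print(max_flow_with_vertex_capacities(edges, capability, "mdru", "net"))
```

Re-running such a flow computation while sweeping k in the top-k path construction is, roughly, how the entry above trades spectrum efficiency against the energy-limited transmission capabilities of the gateways.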
Title: A survey of strategies for communication networks to protect against large-scale natural disasters Section 1: INTRODUCTION Description 1: Introduce the significance of communication networks in disaster scenarios and outline the goals and structure of the survey. Section 2: VULNERABILITY OF COMMUNICATION NETWORKS TO DISASTER-BASED DISRUPTIONS Description 2: Discuss the assessment of communication networks' vulnerability and review existing literature on identifying and addressing vulnerabilities. Section 3: ENHANCING THE DISASTER-RESISTANCE OF EXISTING NETWORKS AND THE DEPLOYMENT OF EMERGENCY NETWORKS Description 3: Explore methodologies for improving the robustness of existing communication networks and strategies for deploying emergency networks post-disaster. Section 4: DISASTER-RESILIENT ROUTING ALGORITHMS Description 4: Review various techniques and algorithms proposed for disaster-resilient routing to ensure continued network functionality in disaster scenarios. Section 5: CONCLUSION Description 5: Summarize the survey's findings and emphasize the importance of resilient communication strategies in mitigating the impact of natural disasters.
Should I Raise The Red Flag? A comprehensive survey of anomaly scoring methods toward mitigating false alarms
27
--- paper_title: Setting the threshold for high throughput detectors: A mathematical approach for ensembles of dynamic, heterogeneous, probabilistic anomaly detectors paper_content: Anomaly detection (AD) has garnered ample attention in security research, as such algorithms complement existing signature-based methods but promise detection of never-before-seen attacks. Cyber operations manage a high volume of heterogeneous log data; hence, AD in such operations involves multiple (e.g., per IP, per data type) ensembles of detectors modeling heterogeneous characteristics (e.g., rate, size, type) often with adaptive online models producing alerts in near real time. Because of high data volume, setting the threshold for each detector in such a system is an essential yet underdeveloped configuration issue that, if slightly mistuned, can leave the system useless, either producing a myriad of alerts and flooding downstream systems, or giving none. In this work, we build on the foundations of Ferragut et al. to provide a set of rigorous results for understanding the relationship between threshold values and alert quantities, and we propose an algorithm for setting the threshold in practice. Specifically, we give an algorithm for setting the threshold of multiple, heterogeneous, possibly dynamic detectors completely a priori, in principle. Indeed, if the underlying distribution of the incoming data is known (closely estimated), the algorithm provides provably manageable thresholds. If the distribution is unknown (e.g., has changed over time) our analysis reveals how the model distribution differs from the actual distribution, indicating a period of model refitting is necessary. We provide empirical experiments showing the efficacy of the capability by regulating the alert rate of a system with $\approx$2,500 adaptive detectors scoring over 1.5M events in 5 hours. Further, we demonstrate on the real network data and detection framework of Harshaw et al. the alternative case, showing how the inability to regulate alerts indicates the detection model is a bad fit to the data. --- paper_title: Systematic construction of anomaly detection benchmarks from real data paper_content: Research in anomaly detection suffers from a lack of realistic and publicly-available problem sets. This paper discusses what properties such problem sets should possess. It then introduces a methodology for transforming existing classification data sets into ground-truthed benchmark data sets for anomaly detection. The methodology produces data sets that vary along three important dimensions: (a) point difficulty, (b) relative frequency of anomalies, and (c) clusteredness. We apply our generated datasets to benchmark several popular anomaly detection algorithms under a range of different conditions. --- paper_title: Tracking User Mobility to Detect Suspicious Behavior paper_content: Popularity of mobile devices is accompanied by widespread security problems, such as MAC address spoofing in wireless networks. We propose a probabilistic approach to temporal anomaly detection using smoothing technique for sparse data. Our technique builds up on the Markov chain, and clustering is presented for reduced storage requirements. Wireless networks suffer from oscillations between locations, which result in weaker statistical models. Our technique identifies such oscillations, resulting in higher accuracy. 
Experimental results on publicly available wireless network data sets indicate that our technique is more effective than Markov chain to detect anomalies for location, time, or --- paper_title: Time Series Anomaly Detection; Detection of anomalous drops with limited features and sparse examples in noisy highly periodic data paper_content: Google uses continuous streams of data from industry partners in order to deliver accurate results to users. Unexpected drops in traffic can be an indication of an underlying issue and may be an early warning that remedial action may be necessary. Detecting such drops is non-trivial because streams are variable and noisy, with roughly regular spikes (in many different shapes) in traffic data. We investigated the question of whether or not we can predict anomalies in these data streams. Our goal is to utilize Machine Learning and statistical approaches to classify anomalous drops in periodic, but noisy, traffic patterns. Since we do not have a large body of labeled examples to directly apply supervised learning for anomaly classification, we approached the problem in two parts. First we used TensorFlow to train our various models including DNNs, RNNs, and LSTMs to perform regression and predict the expected value in the time series. Secondly we created anomaly detection rules that compared the actual values to predicted values. Since the problem requires finding sustained anomalies, rather than just short delays or momentary inactivity in the data, our two detection methods focused on continuous sections of activity rather than just single points. We tried multiple combinations of our models and rules and found that using the intersection of our two anomaly detection methods proved to be an effective method of detecting anomalies on almost all of our models. In the process we also found that not all data fell within our experimental assumptions, as one data stream had no periodicity, and therefore no time based model could predict it. --- paper_title: Review: False alarm minimization techniques in signature-based intrusion detection systems: A survey paper_content: A network based Intrusion Detection System (IDS) gathers and analyzes network packets and report possible low level security violations to a system administrator. In a large network setup, these low level and partial reports become unmanageable to the administrator resulting in some unattended events. Further it is known that state of the art IDS generate many false alarms. There are techniques proposed in IDS literature to minimize false alarms, many of which are widely used in practice in commercial Security Information and Event Management (SIEM) tools. In this paper, we review existing false alarm minimization techniques in signature-based Network Intrusion Detection System (NIDS). We give a taxonomy of false alarm minimization techniques in signature-based IDS and present the pros and cons of each class. We also study few of the prominent commercial SIEM tools which have implemented these techniques along with their performance. Finally, we conclude with some directions to the future research. --- paper_title: A survey of deep learning-based network anomaly detection paper_content: A great deal of attention has been given to deep learning over the past several years, and new deep learning techniques are emerging with improved functionality. Many computer and network applications actively utilize such deep learning algorithms and report enhanced performance through them. 
In this study, we present an overview of deep learning methodologies, including restricted Boltzmann machine-based deep belief network, deep neural network, and recurrent neural network, as well as the machine learning techniques relevant to network anomaly detection. In addition, this article introduces the latest work that employed deep learning techniques with the focus on network anomaly detection through the extensive literature survey. We also discuss our local experiments showing the feasibility of the deep learning approach to network traffic analysis. --- paper_title: Robust Regression and Outlier Detection paper_content: 1. Introduction. 2. Simple Regression. 3. Multiple Regression. 4. The Special Case of One-Dimensional Location. 5. Algorithms. 6. Outlier Diagnostics. 7. Related Statistical Techniques. References. Table of Data Sets. Index. --- paper_title: Identification of outliers paper_content: A computer receives one or more sets of historical data points, wherein each set of historical data points corresponds to a component. The computer normalizes the one or more sets of historical data points. The computer receives and normalizes a first set of additional data points corresponding to a first set of the one or more sets and a second set of additional data points corresponding to the second set of the one or more sets. The computer creates a first visual representation corresponding to the first set of the one or more sets and the first set of additional points and a second visual representation corresponding to the second set of the one or more sets and the second set of additional data points. --- paper_title: An overview of anomaly detection techniques: Existing solutions and latest technological trends paper_content: As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems-the cyberspace's equivalent to the burglar alarm-join ranks with firewalls as one of the fundamental technologies for network security. However, today's commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or "zero day" attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area.
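The "Identification of outliers" entry above describes normalizing sets of historical data points per component and then comparing newly received points against that baseline. As a loose, hypothetical illustration of that normalize-then-compare workflow (the entry itself builds visual representations rather than automated flags, and the 3-standard-deviation threshold here is an assumption), a z-score check might look like this:

```python
import numpy as np

def zscore_flags(history, new_points, threshold=3.0):
    """Flag new points lying more than `threshold` standard deviations from the historical mean."""
    history = np.asarray(history, dtype=float)
    mu, sigma = history.mean(), history.std()
    if sigma == 0.0:                       # degenerate baseline: nothing can be flagged
        return np.zeros(len(new_points), dtype=bool)
    z = np.abs((np.asarray(new_points, dtype=float) - mu) / sigma)
    return z > threshold

rng = np.random.default_rng(0)
history = rng.normal(10.0, 2.0, size=500)        # historical data points for one component
print(zscore_flags(history, [9.5, 11.2, 25.0]))  # -> [False False  True]
```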
--- paper_title: Deep Learning for IoT Big Data and Streaming Analytics: A Survey paper_content: In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature. --- paper_title: Anomaly detection: A survey paper_content: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with. 
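The "Anomaly detection: A survey" entry above notes that, for each category of technique, a basic detector can be given from which the variants derive. A minimal sketch of one such basic technique, the nearest-neighbour detector that scores each point by the distance to its k-th nearest neighbour (assuming scikit-learn; the choice of k and the synthetic data are illustrative only), is:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_anomaly_scores(X, k=5):
    """Score each row of X by its distance to its k-th nearest neighbour (larger = more anomalous)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # k+1 because each point is its own nearest neighbour
    distances, _ = nn.kneighbors(X)
    return distances[:, -1]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)), [[8.0, 8.0]]])  # one injected outlier
scores = knn_anomaly_scores(X)
print(int(scores.argmax()))  # -> 200, the injected outlier receives the highest score
```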
--- paper_title: A survey of machine-learning and nature-inspired based credit card fraud detection techniques paper_content: Credit card is one of the popular modes of payment for electronic transactions in many developed and developing countries. Invention of credit cards has made online transactions seamless, easier, comfortable and convenient. However, it has also provided new fraud opportunities for criminals, and in turn, increased fraud rate. The global impact of credit card fraud is alarming, millions of US dollars have been lost by many companies and individuals. Furthermore, cybercriminals are innovating sophisticated techniques on a regular basis, hence, there is an urgent task to develop improved and dynamic techniques capable of adapting to rapidly evolving fraudulent patterns. Achieving this task is very challenging, primarily due to the dynamic nature of fraud and also due to lack of dataset for researchers. This paper presents a review of improved credit card fraud detection techniques. Precisely, this paper focused on recent Machine Learning based and Nature Inspired based credit card fraud detection techniques proposed in literature. This paper provides a picture of recent trend in credit card fraud detection. Moreover, this review outlines some limitations and contributions of existing credit card fraud detection techniques, it also provides necessary background information for researchers in this domain. Additionally, this review serves as a guide and stepping stone for financial institutions and individuals seeking for new and effective credit card fraud detection techniques. --- paper_title: Outlier Detection for Temporal Data: A Survey paper_content: In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used. --- paper_title: A Survey of Outlier Detection Methodologies paper_content: Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. 
The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review. --- paper_title: A Comparative Study for Outlier Detection Techniques in Data Mining paper_content: Existing studies in data mining mostly focus on finding patterns in large datasets and further using it for organizational decision making. However, finding such exceptions and outliers has not yet received as much attention in the data mining field as some other topics have, such as association rules, classification and clustering. Thus, this paper describes the performance of control chart, linear regression, and Manhattan distance techniques for outlier detection in data mining. Experimental studies show that outlier detection technique using control chart is better than the technique modeled from linear regression because the number of outlier data detected by control chart is smaller than linear regression. Further, experimental studies shows that Manhattan distance technique outperformed compared with the other techniques when the threshold values increased. --- paper_title: A comprehensive survey of numeric and symbolic outlier mining techniques paper_content: Data that appear to have different characteristics than the rest of the population are called outliers. Identifying outliers from huge data repositories is a very complex task called outlier mining. Outlier mining has been akin to finding needles in a haystack. However, outlier mining has a number of practical applications in areas such as fraud detection, network intrusion detection, and identification of competitor and emerging business trends in e-commerce. This survey discuses practical applications of outlier mining, and provides a taxonomy for categorizing related mining techniques. A comprehensive review of these techniques with their advantages and disadvantages along with some current research issues are provided. --- paper_title: Review: False alarm minimization techniques in signature-based intrusion detection systems: A survey paper_content: A network based Intrusion Detection System (IDS) gathers and analyzes network packets and report possible low level security violations to a system administrator. In a large network setup, these low level and partial reports become unmanageable to the administrator resulting in some unattended events. Further it is known that state of the art IDS generate many false alarms. There are techniques proposed in IDS literature to minimize false alarms, many of which are widely used in practice in commercial Security Information and Event Management (SIEM) tools. In this paper, we review existing false alarm minimization techniques in signature-based Network Intrusion Detection System (NIDS). We give a taxonomy of false alarm minimization techniques in signature-based IDS and present the pros and cons of each class. We also study few of the prominent commercial SIEM tools which have implemented these techniques along with their performance. Finally, we conclude with some directions to the future research. --- paper_title: Anomaly detection: A survey paper_content: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. 
Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with. --- paper_title: A comprehensive survey of numeric and symbolic outlier mining techniques paper_content: Data that appear to have different characteristics than the rest of the population are called outliers. Identifying outliers from huge data repositories is a very complex task called outlier mining. Outlier mining has been akin to finding needles in a haystack. However, outlier mining has a number of practical applications in areas such as fraud detection, network intrusion detection, and identification of competitor and emerging business trends in e-commerce. This survey discuses practical applications of outlier mining, and provides a taxonomy for categorizing related mining techniques. A comprehensive review of these techniques with their advantages and disadvantages along with some current research issues are provided. --- paper_title: A New, Principled Approach to Anomaly Detection paper_content: Intrusion detection is often described as having two main approaches: signature-based and anomaly-based. We argue that only unsupervised methods are suitable for detecting anomalies. However, there has been a tendency in the literature to conflate the notion of an anomaly with the notion of a malicious event. As a result, the methods used to discover anomalies have typically been ad hoc, making it nearly impossible to systematically compare between models or regulate the number of alerts. We propose a new, principled approach to anomaly detection that addresses the main shortcomings of ad hoc approaches. We provide both theoretical and cyber-specific examples to demonstrate the benefits of our more principled approach. --- paper_title: Identification of outliers paper_content: A computer receives one or more sets of historical data points, wherein each set of historical data points corresponds to a component. The computer normalizes the one or more sets of historical data points. 
The computer receives and normalizes a first set of additional data points corresponding to a first set of the one or more sets and a second set of additional data points corresponding to the second set of the one or more sets. The computer creates a first visual representation corresponding to the first set of the one or more sets and the first set of additional points and a second visual representation corresponding to the second set of the one or more sets and the second set of additional data points. --- paper_title: General notions of statistical depth function paper_content: Statistical depth functions are being formulated ad hoc with increasing popularity in nonparametric inference for multivariate data. Here we introduce several general structures for depth functions, classify many existing examples as special cases, and establish results on the possession, or lack thereof, of four key properties desirable for depth functions in general. Roughly speaking, these properties may be described as: affine invariance, maximality at center, monotonicity relative to deepest point, and vanishing at infinity. This provides a more systematic basis for selection of a depth function. In particular, from these and other considerations it is found that the halfspace depth behaves very well overall in comparison with various competitors. --- paper_title: Anomaly detection in streaming environmental sensor data: A data-driven modeling approach paper_content: The deployment of environmental sensors has generated an interest in real-time applications of the data they collect. This research develops a real-time anomaly detection method for environmental data streams that can be used to identify data that deviate from historical patterns. The method is based on an autoregressive data-driven model of the data stream and its corresponding prediction interval. It performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no pre-classification of anomalies. Furthermore, this method can be easily deployed on a large heterogeneous sensor network. Sixteen instantiations of this method are compared based on their ability to identify measurement errors in a windspeed data stream from Corpus Christi, Texas. The results indicate that a multilayer perceptron model of the data stream, coupled with replacement of anomalous data points, performs well at identifying erroneous data in this data stream. --- paper_title: Tiresias: Black-Box Failure Prediction in Distributed Systems paper_content: Faults in distributed systems can result in errors that manifest in several ways, potentially even in parts of the system that are not collocated with the root cause. These manifestations often appear as deviations (or "errors") in performance metrics. By transparently gathering, and then identifying escalating anomalous behavior in, various node-level and system-level performance metrics, the Tiresias system makes black-box failure-prediction possible. Through the trend analysis of performance metrics, Tiresias provides a window of opportunity (look-ahead time) for system recovery prior to impending crash failures. We empirically validate the heuristic rules of the Tiresias system by analyzing fault-free and faulty performance data from a replicated middleware-based system. 
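The "Anomaly detection in streaming environmental sensor data" entry above fits an autoregressive, data-driven model to the stream and flags measurements that fall outside the model's prediction interval. The sketch below is a rough, hypothetical rendering of that idea using an ordinary least-squares AR fit (the paper's preferred model is a multilayer perceptron, and the model order and interval width chosen here are assumptions):

```python
import numpy as np

def ar_fit_predict(history, order=3):
    """One-step-ahead prediction from an AR(order) model fit by ordinary least squares."""
    h = np.asarray(history, dtype=float)
    X = np.array([np.r_[h[t - order:t][::-1], 1.0] for t in range(order, len(h))])
    y = h[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    prediction = np.r_[h[-order:][::-1], 1.0] @ coef
    return prediction, residuals.std()

def outside_prediction_interval(history, new_value, width=3.0):
    """True if new_value falls outside a +/- width * sigma one-step prediction interval."""
    prediction, sigma = ar_fit_predict(history)
    return abs(new_value - prediction) > width * sigma

rng = np.random.default_rng(1)
stream = np.sin(np.arange(300) / 10.0) + rng.normal(0.0, 0.05, size=300)
print(outside_prediction_interval(stream, 5.0))         # implausible jump -> True
print(outside_prediction_interval(stream, stream[-1]))  # plausible value  -> normally False
```

In a streaming deployment the fit would be refreshed incrementally over a sliding window rather than recomputed from scratch, but the flagging rule stays the same.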
--- paper_title: Deep Learning for Time-Series Analysis paper_content: In many real-world applications, e.g., speech recognition or sleep stage classification, data are captured over the course of time, constituting a Time-Series. Time-Series often contain temporal dependencies that cause two otherwise identical points of time to belong to different classes or predict different behavior. This characteristic generally increases the difficulty of analysing them. Existing techniques often depended on hand-crafted features that were expensive to create and required expert knowledge of the field. With the advent of Deep Learning, new models of unsupervised learning of features for Time-series analysis and forecast have been developed. Such new developments are the topic of this paper: a review of the main Deep Learning techniques is presented, and some applications on Time-Series analysis are summarized. The results make it clear that Deep Learning has a lot to contribute to the field. --- paper_title: Learning under Concept Drift: an Overview paper_content: Concept drift refers to a non-stationary learning problem over time. The training and the application data often mismatch in real-life problems. In this report we present a context of the concept drift problem. We focus on the issues relevant to adaptive training set formation. We present the framework and terminology, and formulate a global picture of concept drift learners' design. We start with formalizing the framework for the concept drifting data in Section 1. In Section 2 we discuss the adaptivity mechanisms of the concept drift learners. In Section 3 we overview the principal mechanisms of concept drift learners. In this chapter we give a general picture of the available algorithms and categorize them based on their properties. Section 5 discusses the related research fields and Section 5 groups and presents major concept drift applications. This report is intended to give a bird's-eye view of the concept drift research field, provide a context of the research and position it within a broad spectrum of research fields and applications. --- paper_title: A Meta-Analysis of the Anomaly Detection Problem paper_content: This article provides a thorough meta-analysis of the anomaly detection problem. To accomplish this we first identify approaches to benchmarking anomaly detection algorithms across the literature and produce a large corpus of anomaly detection benchmarks that vary in their construction across several dimensions we deem important to real-world applications: (a) point difficulty, (b) relative frequency of anomalies, (c) clusteredness of anomalies, and (d) relevance of features. We apply a representative set of anomaly detection algorithms to this corpus, yielding a very large collection of experimental results. We analyze these results to understand many phenomena observed in previous work. First we observe the effects of experimental design on experimental results. Second, results are evaluated with two metrics, ROC Area Under the Curve and Average Precision. We employ statistical hypothesis testing to demonstrate the value (or lack thereof) of our benchmarks. We then offer several approaches to summarizing our experimental results, drawing several conclusions about the impact of our methodology as well as the strengths and weaknesses of some algorithms. Last, we compare results against a trivial solution as an alternate means of normalizing the reported performance of algorithms.
The intended contributions of this article are many; in addition to providing a large publicly-available corpus of anomaly detection benchmarks, we provide an ontology for describing anomaly detection contexts, a methodology for controlling various aspects of benchmark creation, guidelines for future experimental design and a discussion of the many potential pitfalls of trying to measure success in this field. --- paper_title: Anomaly Detection in Application Performance Monitoring Data paper_content: Performance issues and outages in IT systems have significant impact on business. Traditional methods for identifying these issues based on rules and simple statistics have become ineffective due to the complexity of the underlying systems, the volume and variety of performance metrics collected and the desire to correlate unusual application logging to help diagnosis. This paper examines the problem of providing accurate ranking of disjoint time periods in raw IT system monitoring data by their anomalousness. Given this ranking a decision method can be used to identify certain periods as anomalous with the aim of reducing the various performance metrics and application log messages to a manageable number of timely and actionable reports about unusual system behaviour. In order to be actionable, any such report should aim to provide the minimum context necessary to understand the behaviour it describes. In this paper, we argue that this problem is well suited to analysis with a statistical model of the system state and further that Bayesian methods are particularly well suited to the formulation of this model. To do this we analyse performance data gathered for a real internet banking system. These data highlight some of the challenges for accurately modelling the system state; in brief, very high dimensionality, high overall data rates, seasonality and variability in the data rates, seasonality in the data values, transaction data, mixed data types (continuous data, integer data, lattice data), bounded data, lags between the onset of anomalous behaviour in different performance metrics and non-Gaussian distributions. In order to be successful, subject to the criteria defined above, any approach must be flexible enough to handle all these features of the data. Finally, we present the results of applying robust methods to analyse these data, which were effectively used to pre-empt and diagnose system issues. --- paper_title: Systematic construction of anomaly detection benchmarks from real data paper_content: Research in anomaly detection suffers from a lack of realistic and publicly-available problem sets. This paper discusses what properties such problem sets should possess. It then introduces a methodology for transforming existing classification data sets into ground-truthed benchmark data sets for anomaly detection. The methodology produces data sets that vary along three important dimensions: (a) point difficulty, (b) relative frequency of anomalies, and (c) clusteredness. We apply our generated datasets to benchmark several popular anomaly detection algorithms under a range of different conditions. --- paper_title: Isolation Forest paper_content: Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiles normal points. 
To the best of our knowledge, the concept of isolation has not been explored in current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest performs favourably to ORCA, a near-linear time complexity distance-based method, LOF and random forests in terms of AUC and processing time, and especially in large data sets. iForest also works well in high dimensional problems which have a large number of irrelevant attributes, and in situations where the training set does not contain any anomalies. --- paper_title: Systematic construction of anomaly detection benchmarks from real data paper_content: Research in anomaly detection suffers from a lack of realistic and publicly-available problem sets. This paper discusses what properties such problem sets should possess. It then introduces a methodology for transforming existing classification data sets into ground-truthed benchmark data sets for anomaly detection. The methodology produces data sets that vary along three important dimensions: (a) point difficulty, (b) relative frequency of anomalies, and (c) clusteredness. We apply our generated datasets to benchmark several popular anomaly detection algorithms under a range of different conditions. --- paper_title: Robust kernel density estimation paper_content: In this paper, we propose a method for robust kernel density estimation. We interpret a KDE with Gaussian kernel as the inner product between a mapped test point and the centroid of mapped training points in kernel feature space. Our robust KDE replaces the centroid with a robust estimate based on M-estimation (P. Huber, 1981). The iteratively re-weighted least squares (IRWLS) algorithm for M-estimation depends only on inner products, and can therefore be implemented using the kernel trick. We prove the IRWLS method monotonically decreases its objective value at every iteration for a broad class of robust loss functions. Our proposed method is applied to synthetic data and network traffic volumes, and the results compare favorably to the standard KDE. --- paper_title: Isolation Forest paper_content: Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiles normal points. To the best of our knowledge, the concept of isolation has not been explored in current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest performs favourably to ORCA, a near-linear time complexity distance-based method, LOF and random forests in terms of AUC and processing time, and especially in large data sets. iForest also works well in high dimensional problems which have a large number of irrelevant attributes, and in situations where the training set does not contain any anomalies.
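As a usage-level sketch of the isolation idea described in the entry above, here is how one might score points with the scikit-learn implementation (this assumes scikit-learn's IsolationForest rather than the authors' original iForest code, and the synthetic data and parameter values are illustrative only):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
X_train = rng.normal(0.0, 1.0, size=(1000, 4))            # mostly "normal" points
X_test = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),    # normal test points
                    rng.normal(6.0, 1.0, size=(2, 4))])   # easy-to-isolate anomalies

forest = IsolationForest(n_estimators=100, max_samples=256, random_state=0)
forest.fit(X_train)                        # each tree is built on a small sub-sample

scores = -forest.score_samples(X_test)     # negate so that higher means more anomalous
labels = forest.predict(X_test)            # +1 = inlier, -1 = outlier
print(np.round(scores, 3), labels)
```

The small `max_samples` value reflects the abstract's point that sub-sampling, not exhaustive profiling, is what keeps time and memory costs low.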
--- paper_title: Systematic construction of anomaly detection benchmarks from real data paper_content: Research in anomaly detection suffers from a lack of realistic and publicly-available problem sets. This paper discusses what properties such problem sets should possess. It then introduces a methodology for transforming existing classification data sets into ground-truthed benchmark data sets for anomaly detection. The methodology produces data sets that vary along three important dimensions: (a) point difficulty, (b) relative frequency of anomalies, and (c) clusteredness. We apply our generated datasets to benchmark several popular anomaly detection algorithms under a range of different conditions. --- paper_title: Toward Explainable Deep Neural Network Based Anomaly Detection paper_content: Anomaly detection in industrial processes is crucial for general process monitoring and process health assessment. Deep Neural Networks (DNNs) based anomaly detection has received increased attention in recent work. Albeit their high accuracy, the black-box nature of DNNs is a drawback in practical deployment. Especially in industrial anomaly detection systems, explanations of DNN detected anomalies are crucial. This paper presents a framework for DNN based anomaly detection which provides explanations of detected anomalies. The framework answers the following questions during online processing: 1) “why is it an anomaly?” and 2) “what is the confidence?” Further, the framework can be used offline to evaluate the “knowledge” of the trained DNN. The framework reduces the opaqueness of the DNN based anomaly detector and thus improves human operators' trust in the algorithm. This paper implements the first steps of the presented framework on the benchmark KDD-NSL dataset for Denial of Service (DoS) attack detection. Offline DNN explanations showed that the DNN was detecting DoS attacks based on features indicating destination of connection, frequency and amount of data transferred while showing an accuracy around 97%. --- paper_title: Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion paper_content: We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations. --- paper_title: Active Anomaly Detection via Ensembles paper_content: In critical applications of anomaly detection including computer security and fraud prevention, the anomaly detector must be configurable by the analyst to minimize the effort on false positives. 
One important way to configure the anomaly detector is by providing true labels for a few instances. We study the problem of label-efficient active learning to automatically tune anomaly detection ensembles and make four main contributions. First, we present an important insight into how anomaly detector ensembles are naturally suited for active learning. This insight allows us to relate the greedy querying strategy to uncertainty sampling, with implications for label-efficiency. Second, we present a novel formalism called compact description to describe the discovered anomalies and show that it can also be employed to improve the diversity of the instances presented to the analyst without loss in the anomaly discovery rate. Third, we present a novel data drift detection algorithm that not only detects the drift robustly, but also allows us to take corrective actions to adapt the detector in a principled manner. Fourth, we present extensive experiments to evaluate our insights and algorithms in both batch and streaming settings. Our results show that in addition to discovering significantly more anomalies than state-of-the-art unsupervised baselines, our active learning algorithms under the streaming-data setup are competitive with the batch setup. --- paper_title: A Meta-Analysis of the Anomaly Detection Problem paper_content: This article provides a thorough meta-analysis of the anomaly detection problem. To accomplish this we first identify approaches to benchmarking anomaly detection algorithms across the literature and produce a large corpus of anomaly detection benchmarks that vary in their construction across several dimensions we deem important to real-world applications: (a) point difficulty, (b) relative frequency of anomalies, (c) clusteredness of anomalies, and (d) relevance of features. We apply a representative set of anomaly detection algorithms to this corpus, yielding a very large collection of experimental results. We analyze these results to understand many phenomena observed in previous work. First we observe the effects of experimental design on experimental results. Second, results are evaluated with two metrics, ROC Area Under the Curve and Average Precision. We employ statistical hypothesis testing to demonstrate the value (or lack thereof) of our benchmarks. We then offer several approaches to summarizing our experimental results, drawing several conclusions about the impact of our methodology as well as the strengths and weaknesses of some algorithms. Last, we compare results against a trivial solution as an alternate means of normalizing the reported performance of algorithms. The intended contributions of this article are many; in addition to providing a large publicly-available corpus of anomaly detection benchmarks, we provide an ontology for describing anomaly detection contexts, a methodology for controlling various aspects of benchmark creation, guidelines for future experimental design and a discussion of the many potential pitfalls of trying to measure success in this field. --- paper_title: A hybrid system for reducing the false alarm rate of anomaly intrusion detection system paper_content: In this paper, we propose a hybrid intrusion detection system that combines k-Means, and two classifiers: K-nearest neighbor and Naive Bayes for anomaly detection. It consists of selecting features using an entropy based feature selection algorithm which selects the important attributes and removes the irredundant attributes. 
This algorithm operates on the KDD-99 Data set; this data set is used worldwide for evaluating the performance of different intrusion detection systems. The next step is clustering phase using k-Means. We have used the KDD99 (knowledge Discovery and Data Mining) intrusion detection contest. This system can detect the intrusions and further classify them into four categories: Denial of Service (DoS), U2R (User to Root), R2L (Remote to Local), and probe. The main goal is to reduce the false alarm rate of IDS1. --- paper_title: Anomaly Detection in Application Performance Monitoring Data paper_content: Performance issues and outages in IT systems have significant impact on business. Traditional methods for identifying these issues based on rules and simple statistics have become ineffective due to the complexity of the underlying systems, the volume and variety of performance metrics collected and the desire to correlate unusual application logging to help diagnosis. This paper examines the problem of providing accurate ranking of disjoint time periods in raw IT system monitoring data by their anomalousness. Given this ranking a decision method can be used to identify certain periods as anomalous with the aim of reducing the various performance metrics and application log messages to a manageable number of timely and actionable reports about unusual system behaviour. In order to be actionable, any such report should aim to provide the minimum context necessary to understand the behaviour it describes. In this paper, we argue that this problem is well suited to analysis with a statistical model of the system state and further that Bayesian methods are particularly well suited to the formulation of this model. To do this we analyse performance data gathered for a real internet banking system. These data highlight some of the challenges for accurately modelling the system state; in brief, very high dimensionality, high overall data rates, seasonality and variability in the data rates, seasonality in the data values, transaction data, mixed data types (continuous data, integer data, lattice data), bounded data, lags between the onset of anomalous behaviour in different performance metrics and non-Gaussian distributions. In order to be successful, subject to the criteria defined above, any approach must be flexible enough to handle all these features of the data. Finally, we present the results of applying robust methods to analyse these data, which were effectively used to pre-empt and diagnose system issues. --- paper_title: A comparative study of anomaly detection schemes in network intrusion detection paper_content: Intrusion detection corresponds to a suite of techniques that are used to identify attacks against computers and network infrastructures. Anomaly detection is a key element of intrusion detection in which perturbations of normal behavior suggest the presence of intentionally or unintentionally induced attacks, faults, defects, etc. This paper focuses on a detailed comparative study of several anomaly detection schemes for identifying different network intrusions. Several existing supervised and unsupervised anomaly detection schemes and their variations are evaluated on the DARPA 1998 data set of network connections [9] as well as on real network data using existing standard evaluation techniques as well as using several specific metrics that are appropriate when detecting attacks that involve a large number of connections. 
Our experimental results indicate that some anomaly detection schemes appear very promising when detecting novel intrusions in both DARPA’98 data and real network data. --- paper_title: Static and dynamic novelty detection methods for jet engine health monitoring paper_content: Novelty detection requires models of normality to be learnt from training data known to be normal. The first model considered in this paper is a static model trained to detect novel events associated with changes in the vibration spectra recorded from a jet engine. We describe how the distribution of energy across the harmonics of a rotating shaft can be learnt by a support vector machine model of normality. The second model is a dynamic model partially learnt from data using an expectation–maximization-based method. This model uses a Kalman filter to fuse performance data in order to characterize normal engine behaviour. Deviations from normal operation are detected using the normalized innovations squared from the Kalman filter. --- paper_title: Setting the threshold for high throughput detectors: A mathematical approach for ensembles of dynamic, heterogeneous, probabilistic anomaly detectors paper_content: Anomaly detection (AD) has garnered ample attention in security research, as such algorithms complement existing signature-based methods but promise detection of never-before-seen attacks. Cyber operations manage a high volume of heterogeneous log data; hence, AD in such operations involves multiple (e.g., per IP, per data type) ensembles of detectors modeling heterogeneous characteristics (e.g., rate, size, type) often with adaptive online models producing alerts in near real time. Because of high data volume, setting the threshold for each detector in such a system is an essential yet underdeveloped configuration issue that, if slightly mistuned, can leave the system useless, either producing a myriad of alerts and flooding downstream systems, or giving none. In this work, we build on the foundations of Ferragut et al. to provide a set of rigorous results for understanding the relationship between threshold values and alert quantities, and we propose an algorithm for setting the threshold in practice. Specifically, we give an algorithm for setting the threshold of multiple, heterogeneous, possibly dynamic detectors completely a priori, in principle. Indeed, if the underlying distribution of the incoming data is known (closely estimated), the algorithm provides provably manageable thresholds. If the distribution is unknown (e.g., has changed over time) our analysis reveals how the model distribution differs from the actual distribution, indicating a period of model refitting is necessary. We provide empirical experiments showing the efficacy of the capability by regulating the alert rate of a system with $\approx$2,500 adaptive detectors scoring over 1.5M events in 5 hours. Further, we demonstrate on the real network data and detection framework of Harshaw et al. the alternative case, showing how the inability to regulate alerts indicates the detection model is a bad fit to the data. --- paper_title: Unsupervised real-time anomaly detection for streaming data paper_content: We are seeing an enormous increase in the availability of streaming, time-series data. Largely driven by the rise of connected real-time data sources, this data presents technical challenges and opportunities.
One fundamental capability for streaming analytics is to model each stream in an unsupervised fashion and detect unusual, anomalous behaviors in real-time. Early anomaly detection is valuable, yet it can be difficult to execute reliably in practice. Application constraints require systems to process data in real-time, not batches. Streaming data inherently exhibits concept drift, favoring algorithms that learn continuously. Furthermore, the massive number of independent streams in practice requires that anomaly detectors be fully automated. In this paper we propose a novel anomaly detection algorithm that meets these constraints. The technique is based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). We also present results using the Numenta Anomaly Benchmark (NAB), a benchmark containing real-world data streams with labeled anomalies. The benchmark, the first of its kind, provides a controlled open-source environment for testing anomaly detection algorithms on streaming data. We present results and analysis for a wide range of algorithms on this benchmark, and discuss future challenges for the emerging field of streaming analytics. --- paper_title: A New, Principled Approach to Anomaly Detection paper_content: Intrusion detection is often described as having two main approaches: signature-based and anomaly-based. We argue that only unsupervised methods are suitable for detecting anomalies. However, there has been a tendency in the literature to conflate the notion of an anomaly with the notion of a malicious event. As a result, the methods used to discover anomalies have typically been ad hoc, making it nearly impossible to systematically compare between models or regulate the number of alerts. We propose a new, principled approach to anomaly detection that addresses the main shortcomings of ad hoc approaches. We provide both theoretical and cyber-specific examples to demonstrate the benefits of our more principled approach. --- paper_title: A new similarity measure using Bhattacharyya coefficient for collaborative filtering in sparse data paper_content: Collaborative filtering (CF) is the most successful approach for personalized product or service recommendations. Neighborhood based collaborative filtering is an important class of CF, which is simple, intuitive and efficient product recommender system widely used in commercial domain. Typically, neighborhood-based CF uses a similarity measure for finding similar users to an active user or similar products on which she rated. Traditional similarity measures utilize ratings of only co-rated items while computing similarity between a pair of users. Therefore, these measures are not suitable in a sparse data. In this paper, we propose a similarity measure for neighborhood based CF, which uses all ratings made by a pair of users. Proposed measure finds importance of each pair of rated items by exploiting Bhattacharyya similarity. To show effectiveness of the measure, we compared performances of neighborhood based CFs using state-of-the-art similarity measures with the proposed measured based CF. Recommendation results on a set of real data show that proposed measure based CF outperforms existing measures based CFs in various evaluation metrics. --- paper_title: The self-organizing map paper_content: The self-organized map, an architecture suggested for artificial neural networks, is explained by presenting simulation experiments and practical applications. 
The self-organizing map has the property of effectively creating spatially organized internal representations of various features of input signals and their abstractions. One result of this is that the self-organization process can discover semantic relationships in sentences. Brain maps, semantic maps, and early work on competitive learning are reviewed. The self-organizing map algorithm (an algorithm which orders responses spatially) is reviewed, focusing on best matching cell selection and adaptation of the weight vectors. Suggestions for applying the self-organizing map algorithm, demonstrations of the ordering process, and an example of hierarchical clustering of data are presented. Fine tuning the map by learning vector quantization is addressed. The use of self-organized maps in practical speech recognition and a simulation experiment on semantic mapping are discussed. --- paper_title: Anomaly Detection in Application Performance Monitoring Data paper_content: Performance issues and outages in IT systems have significant impact on business. Traditional methods for identifying these issues based on rules and simple statistics have become ineffective due to the complexity of the underlying systems, the volume and variety of performance metrics collected and the desire to correlate unusual application logging to help diagnosis. This paper examines the problem of providing accurate ranking of disjoint time periods in raw IT system monitoring data by their anomalousness. Given this ranking a decision method can be used to identify certain periods as anomalous with the aim of reducing the various performance metrics and application log messages to a manageable number of timely and actionable reports about unusual system behaviour. In order to be actionable, any such report should aim to provide the minimum context necessary to understand the behaviour it describes. In this paper, we argue that this problem is well suited to analysis with a statistical model of the system state and further that Bayesian methods are particularly well suited to the formulation of this model. To do this we analyse performance data gathered for a real internet banking system. These data highlight some of the challenges for accurately modelling the system state; in brief, very high dimensionality, high overall data rates, seasonality and variability in the data rates, seasonality in the data values, transaction data, mixed data types (continuous data, integer data, lattice data), bounded data, lags between the onset of anomalous behaviour in different performance metrics and non-Gaussian distributions. In order to be successful, subject to the criteria defined above, any approach must be flexible enough to handle all these features of the data. Finally, we present the results of applying robust methods to analyse these data, which were effectively used to pre-empt and diagnose system issues. --- paper_title: A Survey of Distance and Similarity Measures Used Within Network Intrusion Anomaly Detection paper_content: Anomaly detection (AD) use within the network intrusion detection field of research, or network intrusion AD (NIAD), is dependent on the proper use of similarity and distance measures, but the measures used are often not documented in published research. As a result, while the body of NIAD research has grown extensively, knowledge of the utility of similarity and distance measures within the field has not grown correspondingly.
NIAD research covers a myriad of domains and employs a diverse array of techniques from simple $k$-means clustering through advanced multiagent distributed AD systems. This review presents an overview of the use of similarity and distance measures within NIAD research. The analysis provides a theoretical background in distance measures and a discussion of various types of distance measures and their uses. Exemplary uses of distance measures in published research are presented, as is the overall state of the distance measure rigor in the field. Finally, areas that require further focus on improving the distance measure rigor in the NIAD field are presented. --- paper_title: A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data paper_content: Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior is outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks. --- paper_title: Anomaly Detection in Streams with Extreme Value Theory paper_content: Anomaly detection in time series has attracted considerable attention due to its importance in many real-world applications including intrusion detection, energy management and finance. Most approaches for detecting outliers rely on either manually set thresholds or assumptions on the distribution of data according to Chandola, Banerjee and Kumar. Here, we propose a new approach to detect outliers in streaming univariate time series based on Extreme Value Theory that does not require to hand-set thresholds and makes no assumption on the distribution: the main parameter is only the risk, controlling the number of false positives. Our approach can be used for outlier detection, but more generally for automatically setting thresholds, making it useful in wide number of situations. We also experiment our algorithms on various real-world datasets which confirm its soundness and efficiency. --- paper_title: P Values: What They are and What They are Not paper_content: P values (or significance probabilities) have been used in place of hypothesis tests as a means of giving more information about the relationship between the data and the hypothesis than does a simple reject/do not reject decision.
Virtually all elementary statistics texts cover the calculation of P values for one-sided and point-null hypotheses concerning the mean of a sample from a normal distribution. There is, however, a third case that is intermediate to the one-sided and point-null cases, namely the interval hypothesis, that receives no coverage in elementary texts. We show that P values are continuous functions of the hypothesis for fixed data. This allows a unified treatment of all three types of hypothesis testing problems. It also leads to the discovery that a common informal use of P values as measures of support or evidence for hypotheses has serious logical flaws. --- paper_title: A comparative study of anomaly detection schemes in network intrusion detection paper_content: Intrusion detection corresponds to a suite of techniques that are used to identify attacks against computers and network infrastructures. Anomaly detection is a key element of intrusion detection in which perturbations of normal behavior suggest the presence of intentionally or unintentionally induced attacks, faults, defects, etc. This paper focuses on a detailed comparative study of several anomaly detection schemes for identifying different network intrusions. Several existing supervised and unsupervised anomaly detection schemes and their variations are evaluated on the DARPA 1998 data set of network connections [9] as well as on real network data using existing standard evaluation techniques as well as using several specific metrics that are appropriate when detecting attacks that involve a large number of connections. Our experimental results indicate that some anomaly detection schemes appear very promising when detecting novel intrusions in both DARPA’98 data and real network data. --- paper_title: Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding paper_content: As spacecraft send back increasing amounts of telemetry data, improved anomaly detection systems are needed to lessen the monitoring burden placed on operations engineers and reduce operational risk. Current spacecraft monitoring systems only target a subset of anomaly types and often require costly expert knowledge to develop and maintain due to challenges involving scale and complexity. We demonstrate the effectiveness of Long Short-Term Memory (LSTMs) networks, a type of Recurrent Neural Network (RNN), in overcoming these issues using expert-labeled telemetry anomaly data from the Soil Moisture Active Passive (SMAP) satellite and the Mars Science Laboratory (MSL) rover, Curiosity. We also propose a complementary unsupervised and nonparametric anomaly thresholding approach developed during a pilot implementation of an anomaly detection system for SMAP, and offer false positive mitigation strategies along with other key improvements and lessons learned during development. --- paper_title: Tracking User Mobility to Detect Suspicious Behavior paper_content: Popularity of mobile devices is accompanied by widespread security problems, such as MAC address spoofing in wireless networks. We propose a probabilistic approach to temporal anomaly detection using smoothing technique for sparse data. Our technique builds up on the Markov chain, and clustering is presented for reduced storage requirements. Wireless networks suffer from oscillations between locations, which result in weaker statistical models. Our technique identifies such oscillations, resulting in higher accuracy. 
Experimental results on publicly available wireless network data sets indicate that our technique is more effective than Markov chain to detect anomalies for location, time, or --- paper_title: Limiting forms of the frequency distribution of the largest or smallest member of a sample paper_content: The limiting distribution, when n is large, of the greatest or least of a sample of n , must satisfy a functional equation which limits its form to one of two main types. Of these one has, apart from size and position, a single parameter h , while the other is the limit to which it tends when h tends to zero. The appropriate limiting distribution in any case may be found from the manner in which the probability of exceeding any value x tends to zero as x is increased. For the normal distribution the limiting distribution has h = 0. From the normal distribution the limiting distribution is approached with extreme slowness; the final series of forms passed through as the ultimate form is approached may be represented by the series of limiting distributions in which h tends to zero in a definite manner as n increases to infinity. Numerical values are given for the comparison of the actual with the penultimate distributions for samples of 60 to 1000, and of the penultimate with the ultimate distributions for larger samples. --- paper_title: Time Series Anomaly Detection; Detection of anomalous drops with limited features and sparse examples in noisy highly periodic data paper_content: Google uses continuous streams of data from industry partners in order to deliver accurate results to users. Unexpected drops in traffic can be an indication of an underlying issue and may be an early warning that remedial action may be necessary. Detecting such drops is non-trivial because streams are variable and noisy, with roughly regular spikes (in many different shapes) in traffic data. We investigated the question of whether or not we can predict anomalies in these data streams. Our goal is to utilize Machine Learning and statistical approaches to classify anomalous drops in periodic, but noisy, traffic patterns. Since we do not have a large body of labeled examples to directly apply supervised learning for anomaly classification, we approached the problem in two parts. First we used TensorFlow to train our various models including DNNs, RNNs, and LSTMs to perform regression and predict the expected value in the time series. Secondly we created anomaly detection rules that compared the actual values to predicted values. Since the problem requires finding sustained anomalies, rather than just short delays or momentary inactivity in the data, our two detection methods focused on continuous sections of activity rather than just single points. We tried multiple combinations of our models and rules and found that using the intersection of our two anomaly detection methods proved to be an effective method of detecting anomalies on almost all of our models. In the process we also found that not all data fell within our experimental assumptions, as one data stream had no periodicity, and therefore no time based model could predict it. --- paper_title: Identification of outliers paper_content: A computer receives one or more sets of historical data points, wherein each set of historical data points corresponds to a component. The computer normalizes the one or more sets of historical data points. 
The computer receives and normalizes a first set of additional data points corresponding to a first set of the one or more sets and a second set of additional data points corresponding to the second set of the one or more sets. The computer creates a first visual representation corresponding to the first set of the one or more sets and the first set of additional points and a second visual representation corresponding to the second set of the one or more sets and the second set of additional data points. --- paper_title: Anomaly Detection in Streams with Extreme Value Theory paper_content: Anomaly detection in time series has attracted considerable attention due to its importance in many real-world applications including intrusion detection, energy management and finance. Most approaches for detecting outliers rely on either manually set thresholds or assumptions on the distribution of data according to Chandola, Banerjee and Kumar. Here, we propose a new approach to detect outliers in streaming univariate time series based on Extreme Value Theory that does not require to hand-set thresholds and makes no assumption on the distribution: the main parameter is only the risk, controlling the number of false positives. Our approach can be used for outlier detection, but more generally for automatically setting thresholds, making it useful in wide number of situations. We also experiment our algorithms on various real-world datasets which confirm its soundness and efficiency. --- paper_title: Interpreting and Unifying Outlier Scores paper_content: Outlier scores provided by different outlier models differ widely in their meaning, range, and contrast between different outlier models and, hence, are not easily comparable or interpretable. We propose a unification of outlier scores provided by various outlier models and a translation of the arbitrary “outlier factors” to values in the range [0, 1] interpretable as values describing the probability of a data object of being an outlier. As an application, we show that this unification facilitates enhanced ensembles for outlier detection. --- paper_title: Mass Volume Curves and Anomaly Ranking paper_content: This paper aims at formulating the issue of ranking multivariate unlabeled observations depending on their degree of abnormality as an unsupervised statistical learning task. In the 1-d situation, this problem is usually tackled by means of tail estimation techniques: univariate observations are viewed as all the more `abnormal' as they are located far in the tail(s) of the underlying probability distribution. It would be desirable as well to dispose of a scalar valued `scoring' function allowing for comparing the degree of abnormality of multivariate observations. Here we formulate the issue of scoring anomalies as a M-estimation problem by means of a novel functional performance criterion, referred to as the Mass Volume curve (MV curve in short), whose optimal elements are strictly increasing transforms of the density almost everywhere on the support of the density. We first study the statistical estimation of the MV curve of a given scoring function and we provide a strategy to build confidence regions using a smoothed bootstrap approach. Optimization of this functional criterion over the set of piecewise constant scoring functions is next tackled. 
This boils down to estimating a sequence of empirical minimum volume sets whose levels are chosen adaptively from the data, so as to adjust to the variations of the optimal MV curve, while controling the bias of its approximation by a stepwise curve. Generalization bounds are then established for the difference in sup norm between the MV curve of the empirical scoring function thus obtained and the optimal MV curve. --- paper_title: Scoring anomalies: a {M}-estimation formulation paper_content: It is the purpose of this paper to formulate the issue of scoring multivariate observations depending on their degree of abnormality/novelty as an unsupervised learning task. Whereas in the 1-d situation, this problem can be dealt with by means of tail estimation techniques, observations being viewed as all the more ”abnormal” as they are located far in the tail(s) of the underlying probability distribution. In a wide variety of applications, it is desirable to dispose of a scalar valued ”scoring” function allowing for comparing the degree of abnormality of multivariate observations. Here we formulate the issue of scoring anomalies as a M -estimation problem. A (functional) performance criterion is proposed, whose optimal elements are, as expected, nondecreasing transforms of the density. The question of empirical estimation of this criterion is tackled and preliminary statistical results related to the accuracy of partition-based techniques for optimizing empirical estimates of the empirical performance measure are established. --- paper_title: Unsupervised real-time anomaly detection for streaming data paper_content: Abstract We are seeing an enormous increase in the availability of streaming, time-series data. Largely driven by the rise of connected real-time data sources, this data presents technical challenges and opportunities. One fundamental capability for streaming analytics is to model each stream in an unsupervised fashion and detect unusual, anomalous behaviors in real-time. Early anomaly detection is valuable, yet it can be difficult to execute reliably in practice. Application constraints require systems to process data in real-time, not batches. Streaming data inherently exhibits concept drift, favoring algorithms that learn continuously. Furthermore, the massive number of independent streams in practice requires that anomaly detectors be fully automated. In this paper we propose a novel anomaly detection algorithm that meets these constraints. The technique is based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). We also present results using the Numenta Anomaly Benchmark (NAB), a benchmark containing real-world data streams with labeled anomalies. The benchmark, the first of its kind, provides a controlled open-source environment for testing anomaly detection algorithms on streaming data. We present results and analysis for a wide range of algorithms on this benchmark, and discuss future challenges for the emerging field of streaming analytics. --- paper_title: Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding paper_content: As spacecraft send back increasing amounts of telemetry data, improved anomaly detection systems are needed to lessen the monitoring burden placed on operations engineers and reduce operational risk. Current spacecraft monitoring systems only target a subset of anomaly types and often require costly expert knowledge to develop and maintain due to challenges involving scale and complexity. 
We demonstrate the effectiveness of Long Short-Term Memory (LSTMs) networks, a type of Recurrent Neural Network (RNN), in overcoming these issues using expert-labeled telemetry anomaly data from the Soil Moisture Active Passive (SMAP) satellite and the Mars Science Laboratory (MSL) rover, Curiosity. We also propose a complementary unsupervised and nonparametric anomaly thresholding approach developed during a pilot implementation of an anomaly detection system for SMAP, and offer false positive mitigation strategies along with other key improvements and lessons learned during development. --- paper_title: Thwarting DoS Attacks: A Framework for Detection based on Collective Anomalies and Clustering paper_content: A hybrid learning framework uses a collective anomaly to analyze patterns in denial-of-service attacks along with data clustering to distinguish an attack from normal network traffic. In two evaluation datasets, the framework achieved higher hit rates relative to existing anomaly-detection techniques. --- paper_title: Information-theoretic measures for anomaly detection paper_content: Anomaly detection is an essential component of protection mechanisms against novel attacks. We propose to use several information-theoretic measures, namely, entropy, conditional entropy, relative conditional entropy, information gain, and information cost for anomaly detection. These measures can be used to describe the characteristics of an audit data set, suggest the appropriate anomaly detection model(s) to be built, and explain the performance of the model(s). We use case studies on Unix system call data, BSM data, and network tcpdump data to illustrate the utilities of these measures. --- paper_title: Hidden Markov based anomaly detection for water supply systems paper_content: Considering the fact that fully immunizing critical infrastructure such as water supply or power grid systems against physical and cyberattacks is not feasible, it is crucial for every public or private sector to invigorate the detective, predictive, and preventive mechanisms to minimize the risk of disruptions, resource loss or damage. This paper proposes a methodical approach to situation analysis and anomaly detection in SCADA-based water supply systems. We model normal system behavior as a hierarchy of hidden semi-Markov models, forming the basis for detecting contextual anomalies of interest in SCADA data. Our experimental evaluation on real-world water supply system data emphasizes the efficacy of our method by significantly outperforming baseline methods.
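The hierarchical hidden semi-Markov modelling summarized in the entry above can be approximated, purely for illustration, with a flat Gaussian HMM: fit the model on sensor readings assumed to reflect normal operation, then flag sliding windows whose average log-likelihood falls below a low quantile of the scores observed on normal data. The sketch below assumes the hmmlearn library; the number of states, window length, and quantile are illustrative placeholders, not values from the cited work.

```python
# Illustrative sketch only: a flat Gaussian HMM stands in for the hierarchical
# hidden semi-Markov models of the cited SCADA work. Windows whose average
# log-likelihood under a model of normal operation is unusually low are flagged.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed external dependency


def fit_normal_model(train, n_states=4):
    """Fit an HMM on readings assumed to reflect normal operation.
    train: array of shape (n_samples, n_features)."""
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(train)
    return model


def window_scores(model, stream, window=60):
    """Average per-sample log-likelihood of each sliding window."""
    return np.array([
        model.score(stream[start:start + window]) / window
        for start in range(len(stream) - window + 1)
    ])


def detect_anomalous_windows(model, stream, calib, window=60, quantile=0.01):
    """Flag windows scoring below a low quantile of the calibration (normal) scores."""
    threshold = np.quantile(window_scores(model, calib, window), quantile)
    scores = window_scores(model, stream, window)
    return np.where(scores < threshold)[0], scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(2000, 3))                    # stand-in for SCADA telemetry
    test = np.vstack([rng.normal(size=(500, 3)),
                      rng.normal(loc=4.0, size=(60, 3))])  # injected deviation at the end
    hmm_model = fit_normal_model(normal)
    flagged, _ = detect_anomalous_windows(hmm_model, test, calib=normal)
    print("anomalous window starts:", flagged[:10])
```

A hierarchy of hidden semi-Markov models, as in the cited paper, would additionally model state durations and several levels of operational context; this flat sketch deliberately omits both.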
--- paper_title: ATLANTIDES: An Architecture for Alert Verification in Network Intrusion Detection Systems paper_content: We present an architecture designed for alert verification (i.e., to reduce false positives) in network intrusion-detection systems. Our technique is based on a systematic (and automatic) anomaly-based analysis of the system output, which provides useful context information regarding the network services. The false positives raised by the NIDS analyzing the incoming traffic (which can be either signature- or anomaly-based) are reduced by correlating them with the output anomalies. We designed our architecture for TCP-based network services which have a client/server architecture (such as HTTP). Benchmarks show a substantial reduction of false positives between 50% and 100%. --- paper_title: Alert verification evasion through server response forging paper_content: Intrusion Detection Systems (IDSs) are necessary components in the defense of any computer network. Network administrators rely on IDSs to detect attacks, but ultimately it is their responsibility to investigate IDS alerts and determine the damage done. With the number of alerts increasing, IDS analysts have turned to automated methods to help with alert verification. This research investigates this next step of the intrusion detection process. Some alert verification mechanisms attempt to identify successful intrusion attempts based on server responses and protocol analysis. This research examines the server responses generated by four different exploits across four different Linux distributions. Next, three techniques capable of forging server responses on Linux operating systems are developed and implemented. This research shows that these new alert verification evasion methods can make attacks appear unsuccessful even though the exploitation occurs. This type of attack ignores detection and tries to evade the verification process. --- paper_title: Review: False alarm minimization techniques in signature-based intrusion detection systems: A survey paper_content: A network based Intrusion Detection System (IDS) gathers and analyzes network packets and report possible low level security violations to a system administrator. In a large network setup, these low level and partial reports become unmanageable to the administrator resulting in some unattended events. Further it is known that state of the art IDS generate many false alarms. There are techniques proposed in IDS literature to minimize false alarms, many of which are widely used in practice in commercial Security Information and Event Management (SIEM) tools. In this paper, we review existing false alarm minimization techniques in signature-based Network Intrusion Detection System (NIDS). We give a taxonomy of false alarm minimization techniques in signature-based IDS and present the pros and cons of each class. We also study few of the prominent commercial SIEM tools which have implemented these techniques along with their performance. Finally, we conclude with some directions to the future research. --- paper_title: An incremental frequent structure mining framework for real-time alert correlation paper_content: With the large volume of alerts produced by low-level detectors, management of intrusion alerts is becoming more challenging. Manual analysis of a large number of raw alerts is both time consuming and labor intensive. 
Alert Correlation addresses this issue by finding similarity and causality relationships between raw alerts to provide a condensed, yet more meaningful view of the network from the intrusion standpoint. While some efforts have been made in the literature by researchers to find the relationships between alerts automatically, not much attention has been given to the issue of real-time correlation of alerts. Previous learning-based approaches either fail to cope with a large number of generated alerts in a large-scale network or do not address the problem of concept drift directly. In this paper, we propose a framework for real-time alert correlation which incorporates novel techniques for aggregating alerts into structured patterns and incremental mining of frequent structured patterns. Our approach to aggregation provides a reduced view of developed patterns of alerts. At the core of the proposed framework is a new algorithm (FSP_Growth) for mining frequent patterns of alerts considering their structures. In the proposed framework, time-sensitive statistical relationships between alerts are maintained in an efficient data structure and are updated incrementally to reflect the latest trends of patterns. The results of experiments conducted with the DARPA 2000 dataset as well as artificial data clearly demonstrate the efficiency of proposed techniques. A promising reduction ratio of 96% is achieved on the DARPA 2000 dataset. The running time of the FSP_Growth algorithm scales linearly with the size of artificial datasets. Moreover, testing the proposed framework with alert logs of a real-world network shows its ability to extract interesting patterns among the alerts. The ability to answer useful time-sensitive queries regarding pattern co-occurrences is another advantage of the proposed method compared to other approaches. --- paper_title: Alert Fusion for a Computer Host Based Intrusion Detection System paper_content: Intrusions impose tremendous threats to today's computer hosts. Intrusions using security breaches to achieve unauthorized access or misuse of critical information can have catastrophic consequences. To protect computer hosts from the increasing threat of intrusion, various kinds of intrusion detection systems (IDSs) have been developed. The main disadvantages of current IDSs are a high false detection rate and the lack of post-intrusion decision support capability. To minimize these drawbacks, we propose an event-driven intrusion detection architecture which integrates subject-verb-object (SVO) multi-point monitors and an impact analysis engine. Alert fusion and verification models are implemented to provide more reasonable intrusion information from incomplete, inconsistent or imprecise alerts acquired by SVO monitors. DEVS formalism is used to describe the model based design approach. Finally we use the DEVS-JAVA simulation tool to show the feasibility of the proposed system --- paper_title: A Multiscale Approach for Spatio‐Temporal Outlier Detection paper_content: A spatial outlier is a spatially referenced object whose thematic attribute values are significantly different from those of other spatially referenced objects in its spatial neighborhood. It represents an object that is significantly different from its neighbourhoods even though it may not be significantly different from the entire population. 
Here we extend this concept to the spatio-temporal domain and define a spatial-temporal outlier (ST-outlier) to be a spatial-temporal object whose thematic attribute values are significantly different from those of other spatially and temporally referenced objects in its spatial or/ and temporal neighbourhoods. Identification of ST-outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability or deformation. Many methods have been recently proposed to detect spatial outliers, but how to detect the temporal outliers or spatial-temporal outliers has been seldom discussed. In this paper we propose a multiscale approach to detect ST-outliers by evaluating the change between consecutive spatial and temporal scales. A four-step procedure consisting of classification, aggregation, comparison and verification is put forward to address the semantic and dynamic properties of geographic phenomena for ST-outlier detection. The effectiveness of the approach is illustrated by a practical coastal geomorphic study. © 2006 The Authors. Journal compilation © 2006 Blackwell Publishing Ltd. --- paper_title: A stateful intrusion detection system for World-Wide Web servers paper_content: Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom Web-based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, Web servers are a popular target for hackers. To mitigate the security exposure associated with Web servers, intrusion detection systems are deployed to analyze and screen incoming requests. The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. Even though intrusion detection is critical for the security of Web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. In addition, most systems do not provide sophisticated attack languages that allow a system administrator to specify custom, complex attack scenarios to be detected. We present WebSTAT, an intrusion detection system that analyzes Web requests looking for evidence of malicious behavior. The system is novel in several ways. First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system-level audit data produced by the server host, and the access logs produced by the Web server. By correlating different streams of events, it is possible to achieve more effective detection of Web-based attacks. --- paper_title: Quantifying the Behavior of Stock Correlations Under Market Stress paper_content: Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). 
We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. --- paper_title: Detecting Cyber Attacks in Industrial Control Systems Using Convolutional Neural Networks paper_content: This paper presents a study on detecting cyber attacks on industrial control systems (ICS) using convolutional neural networks. The study was performed on a Secure Water Treatment testbed (SWaT) dataset, which represents a scaled-down version of a real-world industrial water treatment plant. We suggest a method for anomaly detection based on measuring the statistical deviation of the predicted value from the observed value. We applied the proposed method by using a variety of deep neural network architectures including different variants of convolutional and recurrent networks. The test dataset included 36 different cyber attacks. The proposed method successfully detected 31 attacks with three false positives thus improving on previous research based on this dataset. The results of the study show that 1D convolutional networks can be successfully used for anomaly detection in industrial control systems and outperform recurrent networks in this setting. The findings also suggest that 1D convolutional networks are effective at time series prediction tasks which are traditionally considered to be best solved using recurrent neural networks. This observation is a promising one, as 1D convolutional neural networks are simpler, smaller, and faster than the recurrent neural networks. --- paper_title: Artificial Face Recognition Using Wavelet Adaptive LBP with Directional Statistical Features paper_content: In this paper, a novel face recognition technique based on discrete wavelet transform and Adaptive Local Binary Pattern (ALBP) with directional statistical features is proposed. The proposed technique consists of three stages: preprocessing, feature extraction and recognition. In preprocessing and feature extraction stages, wavelet decomposition is used to enhance the common features of the same subject of images and the ALBP is used to extract representative features from each facial image. Then, the mean and the standard deviation of the local absolute difference between each pixel and its neighbors are used within ALBP and the nearest neighbor classifier to improve the classification accuracy of the LBP. Experiments conducted on two virtual world avatar face image datasets show that our technique performs better than LBP, PCA, multi-scale Local Binary Pattern, ALBP and ALBP with directional statistical features (ALBPF) in terms of accuracy and the time required to classify each facial image to its subject. --- paper_title: Anomaly Detection in Cyber Physical Systems Using Recurrent Neural Networks paper_content: This paper presents a novel unsupervised approach to detect cyber attacks in Cyber-Physical Systems (CPS). We describe an unsupervised learning approach using a Recurrent Neural network which is a time series predictor as our model. We then use the Cumulative Sum method to identify anomalies in a replicate of a water treatment plant. 
The proposed method not only detects anomalies in the CPS but also identifies the sensor that was attacked. The experiments were performed on a complex dataset which is collected through a Secure Water Treatment Testbed (SWaT). Through the experiments, we show that the proposed technique is able to detect majority of the attacks designed by our research team with low false positive rates. --- paper_title: Beehive: large-scale log analysis for detecting suspicious activity in enterprise networks paper_content: As more and more Internet-based attacks arise, organizations are responding by deploying an assortment of security products that generate situational intelligence in the form of logs. These logs often contain high volumes of interesting and useful information about activities in the network, and are among the first data sources that information security specialists consult when they suspect that an attack has taken place. However, security products often come from a patchwork of vendors, and are inconsistently installed and administered. They generate logs whose formats differ widely and that are often incomplete, mutually contradictory, and very large in volume. Hence, although this collected information is useful, it is often dirty. We present a novel system, Beehive, that attacks the problem of automatically mining and extracting knowledge from the dirty log data produced by a wide variety of security products in a large enterprise. We improve on signature-based approaches to detecting security incidents and instead identify suspicious host behaviors that Beehive reports as potential security incidents. These incidents can then be further analyzed by incident response teams to determine whether a policy violation or attack has occurred. We have evaluated Beehive on the log data collected in a large enterprise, EMC, over a period of two weeks. We compare the incidents identified by Beehive against enterprise Security Operations Center reports, antivirus software alerts, and feedback from enterprise security specialists. We show that Beehive is able to identify malicious events and policy violations which would otherwise go undetected. --- paper_title: Analyzing Intensive Intrusion Alerts via Correlation paper_content: Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies, and raise alerts independently, though there may be logical connections between them. In situations where there are intensive intrusions, not only will actual alerts be mixed with false alerts, but the amount of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. Several complementary alert correlation methods have been proposed to address this problem. As one of these methods, we have developed a framework to correlate intrusion alerts using prerequisites of intrusions. In this paper, we continue this work to study the feasibility of this method in analyzing real-world, intensive intrusions. In particular, we develop three utilities (called adjustable graph reduction, focused analysis, and graph decomposition) to facilitate the analysis of large sets of correlated alerts. We study the effectiveness of the alert correlation method and these utilities through a case study with the network traffic captured at the DEF CON 8 Capture the Flag (CTF) event. 
Our results show that these utilities can simplify the analysis of large amounts of alerts, and also reveals several attack strategies that were repeatedly used in the DEF CON 8 CTF event. --- paper_title: Discovering novel attack strategies from INFOSEC alerts paper_content: Correlating security alerts and discovering attack strategies are important and challenging tasks for security analysts. Recently, there have been several proposed techniques to analyze attack scenarios from security alerts. However, most of these approaches depend on a priori and hard-coded domain knowledge that lead to their limited capabilities of detecting new attack strategies. In this paper, we propose an approach to discover novel attack strategies. Our approach includes two complementary correlation mechanisms based on two hypotheses of attack step relationship. The first hypothesis is that attack steps are directly related because an earlier attack enables or positively affects the later one. For this type of attack relationship, we develop a Bayesian-based correlation engine to correlate attack steps based on security states of systems and networks. The second hypothesis is that for some related attack steps, even though they do not have obvious and direct relationship in terms of security and performance measures, they still have temporal and statistical patterns. For this category of relationship, we apply time series and statistical analysis to correlate attack steps. The security analysts are presented with aggregated information on attack strategies from these two correlation engines. We evaluate our approach using DARPA’s Grand Challenge Problem (GCP) data sets. The results show that our approach can discover novel attack strategies and provide a quantitative analysis of attack scenarios. --- paper_title: Contextual verification for false alarm reduction in maritime anomaly detection paper_content: Automated vessel anomaly detection is immensely important for preventing and reducing illegal activities (e.g., drug dealing, human trafficking, etc.) and for effective emergency response and rescue in a country's territorial waters. A major limitation of previously proposed vessel anomaly detection techniques is the high rate of false alarms as these methods mainly consider vessel kinematic information which is generally obtained from AIS data. In many cases, an anomalous vessel in terms of kinematic data can be completely normal and legitimate if the "context" at the location and time (e.g., weather and sea conditions) of the vessel is factored in. In this paper, we propose a novel anomalous vessel detection framework that utilizes such contextual information to reduce false alarms through "contextual verification". We evaluate our proposed framework for vessel anomaly detection using massive amount of real-life AIS data sets obtained from U.S. Coast Guard. Though our study and developed prototype is based on the maritime domain the basic idea of using contextual information through "contextual verification" to filter false alarms can be applied to other domains as well. --- paper_title: Deep Learning Based Forecasting of Critical Infrastructure Data paper_content: Intelligent monitoring and control of critical infrastructure such as electric power grids, public water utilities and transportation systems produces massive volumes of time series data from heterogeneous sensor networks. 
Time Series Forecasting (TSF) is essential for system safety and security, and also for improving the efficiency and quality of service delivery. Being highly dependent on various external factors, the observed system behavior is usually stochastic, which makes the next value prediction a tricky and challenging task that usually needs customized methods. In this paper we propose a novel deep learning based framework for time series analysis and prediction by ensembling parametric and nonparametric methods. Our approach takes advantage of extracting features at different time scales, which improves accuracy without compromising reliability in comparison with the state-of-the-art methods. Our experimental evaluation using real-world SCADA data from a municipal water management system shows that our proposed method outperforms the baseline methods evaluated here. --- paper_title: Hidden Markov based anomaly detection for water supply systems paper_content: Considering the fact that fully immunizing critical infrastructure such as water supply or power grid systems against physical and cyberattacks is not feasible, it is crucial for every public or private sector to invigorate the detective, predictive, and preventive mechanisms to minimize the risk of disruptions, resource loss or damage. This paper proposes a methodical approach to situation analysis and anomaly detection in SCADA-based water supply systems. We model normal system behavior as a hierarchy of hidden semi-Markov models, forming the basis for detecting contextual anomalies of interest in SCADA data. Our experimental evaluation on real-world water supply system data emphasizes the efficacy of our method by significantly outperforming baseline methods. --- paper_title: An approach to sensor correlation paper_content: We present an approach to intrusion detection (ID) sensor correlation that considers the problem in three phases: event aggregation, sensor coupling, and meta alert fusion. The approach is well suited to probabilistically based sensors such as EMERALD eBayes. We demonstrate the efficacy of the EMERALD alert thread mechanism, the sensor coupling in eBayes, and a prototype alert fusion capability towards achieving significant functionality in the field of ID sensor correlation. --- paper_title: The NIDS cluster: Scalable, stateful network intrusion detection on commodity hardware paper_content: In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS's operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring. --- paper_title: Detecting Spacecraft Anomalies Using LSTMs and Nonparametric Dynamic Thresholding paper_content: As spacecraft send back increasing amounts of telemetry data, improved anomaly detection systems are needed to lessen the monitoring burden placed on operations engineers and reduce operational risk. 
Current spacecraft monitoring systems only target a subset of anomaly types and often require costly expert knowledge to develop and maintain due to challenges involving scale and complexity. We demonstrate the effectiveness of Long Short-Term Memory (LSTMs) networks, a type of Recurrent Neural Network (RNN), in overcoming these issues using expert-labeled telemetry anomaly data from the Soil Moisture Active Passive (SMAP) satellite and the Mars Science Laboratory (MSL) rover, Curiosity. We also propose a complementary unsupervised and nonparametric anomaly thresholding approach developed during a pilot implementation of an anomaly detection system for SMAP, and offer false positive mitigation strategies along with other key improvements and lessons learned during development. --- paper_title: Active Anomaly Detection via Ensembles paper_content: In critical applications of anomaly detection including computer security and fraud prevention, the anomaly detector must be configurable by the analyst to minimize the effort on false positives. One important way to configure the anomaly detector is by providing true labels for a few instances. We study the problem of label-efficient active learning to automatically tune anomaly detection ensembles and make four main contributions. First, we present an important insight into how anomaly detector ensembles are naturally suited for active learning. This insight allows us to relate the greedy querying strategy to uncertainty sampling, with implications for label-efficiency. Second, we present a novel formalism called compact description to describe the discovered anomalies and show that it can also be employed to improve the diversity of the instances presented to the analyst without loss in the anomaly discovery rate. Third, we present a novel data drift detection algorithm that not only detects the drift robustly, but also allows us to take corrective actions to adapt the detector in a principled manner. Fourth, we present extensive experiments to evaluate our insights and algorithms in both batch and streaming settings. Our results show that in addition to discovering significantly more anomalies than state-of-the-art unsupervised baselines, our active learning algorithms under the streaming-data setup are competitive with the batch setup.
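The greedy query loop described in the ensemble-based active anomaly detection entry above can be sketched roughly as follows: score instances with a small ensemble of off-the-shelf detectors, ask the analyst about the highest-scored unlabeled instance, and nudge the ensemble weights toward members that agree with the feedback. The choice of detectors, the multiplicative weight update, and the query_analyst stub are illustrative assumptions, not the cited paper's algorithm.

```python
# Illustrative active-learning loop over an anomaly-detector ensemble.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM


def member_scores(X_train, X):
    """Rank-normalized anomaly scores (higher = more anomalous) from three detectors."""
    members = [
        IsolationForest(random_state=0).fit(X_train),
        LocalOutlierFactor(novelty=True).fit(X_train),
        OneClassSVM(nu=0.05).fit(X_train),
    ]
    raw = np.column_stack([-m.score_samples(X) for m in members])
    return raw.argsort(axis=0).argsort(axis=0) / (len(X) - 1)


def active_loop(X_train, X, query_analyst, budget=20, eta=0.5):
    """Greedily query the analyst and re-weight ensemble members by agreement."""
    S = member_scores(X_train, X)
    w = np.ones(S.shape[1]) / S.shape[1]       # start with uniform weights
    labeled = {}                                # index -> True (anomaly) / False
    for _ in range(budget):
        combined = S @ w
        combined[list(labeled)] = -np.inf       # never re-query the same instance
        idx = int(np.argmax(combined))          # greedy: top-scored instance
        labeled[idx] = query_analyst(idx)       # analyst feedback (stubbed out here)
        # members that ranked a confirmed anomaly high (or a nominal instance low)
        # are boosted; the others are damped
        agreement = S[idx] if labeled[idx] else 1.0 - S[idx]
        w *= np.exp(eta * (agreement - agreement.mean()))
        w /= w.sum()
    return S @ w, labeled


# usage sketch: active_loop(X_train, X, query_analyst=lambda i: bool(y_true[i]))
```

In a deployment, query_analyst would surface the underlying alert or record to a human analyst; here it only stands in for that feedback channel.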
--- paper_title: Efficient learning of sparse representations with an energy-based model paper_content: We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model produces "stroke detectors" when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps. --- paper_title: Direct Robust Matrix Factorizatoin for Anomaly Detection paper_content: Matrix factorization methods are extremely useful in many data mining tasks, yet their performances are often degraded by outliers. In this paper, we propose a novel robust matrix factorization algorithm that is insensitive to outliers. We directly formulate robust factorization as a matrix approximation problem with constraints on the rank of the matrix and the cardinality of the outlier set. Then, unlike existing methods that resort to convex relaxations, we solve this problem directly and efficiently. In addition, structural knowledge about the outliers can be incorporated to find outliers more effectively. We applied this method in anomaly detection tasks on various data sets. Empirical results show that this new algorithm is effective in robust modeling and anomaly detection, and our direct solution achieves superior performance over the state-of-the-art methods based on the L1-norm and the nuclear norm of matrices. --- paper_title: A novel hybridization of artificial neural networks and ARIMA models for time series forecasting paper_content: Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing decision makers in many areas. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, using ANNs to model linear problems have yielded mixed results, and hence; it is not wise to apply ANNs blindly to any type of data. Autoregressive integrated moving average (ARIMA) models are one of the most popular linear models in time series forecasting, which have been widely applied in order to construct more accurate hybrid models during the past decade. Although, hybrid techniques, which decompose a time series into its linear and nonlinear components, have recently been shown to be successful for single models, these models have some disadvantages. 
In this paper, a novel hybridization of artificial neural networks and ARIMA model is proposed in order to overcome mentioned limitation of ANNs and yield more general and more accurate forecasting model than traditional hybrid ARIMA-ANNs models. In our proposed model, the unique advantages of ARIMA models in linear modeling are used in order to identify and magnify the existing linear structure in data, and then a neural network is used in order to determine a model to capture the underlying data generating process and predict, using preprocessed data. Empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by traditional hybrid models and also either of the components models used separately. --- paper_title: A survey of deep learning-based network anomaly detection paper_content: A great deal of attention has been given to deep learning over the past several years, and new deep learning techniques are emerging with improved functionality. Many computer and network applications actively utilize such deep learning algorithms and report enhanced performance through them. In this study, we present an overview of deep learning methodologies, including restricted Bolzmann machine-based deep belief network, deep neural network, and recurrent neural network, as well as the machine learning techniques relevant to network anomaly detection. In addition, this article introduces the latest work that employed deep learning techniques with the focus on network anomaly detection through the extensive literature survey. We also discuss our local experiments showing the feasibility of the deep learning approach to network traffic analysis. --- paper_title: Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery paper_content: Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging. Models are typically based on large amounts of data with annotated examples of known markers aiming at automating detection. High annotation effort and the limitation to a vocabulary of known markers limit the power of such approaches. Here, we perform unsupervised learning to identify anomalies in imaging data as candidates for markers. We propose AnoGAN, a deep convolutional generative adversarial network to learn a manifold of normal anatomical variability, accompanying a novel anomaly scoring scheme based on the mapping from image space to a latent space. Applied to new data, the model labels anomalies, and scores image patches indicating their fit into the learned distribution. Results on optical coherence tomography images of the retina demonstrate that the approach correctly identifies anomalous images, such as images containing retinal fluid or hyperreflective foci. --- paper_title: Deep Learning for IoT Big Data and Streaming Analytics: A Survey paper_content: In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. 
In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature. --- paper_title: Deep Learning via Semi-Supervised Embedding paper_content: We show how nonlinear embedding algorithms popular for use with "shallow" semi-supervised learning techniques such as kernel methods can be easily applied to deep multi-layer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This trick provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques. --- paper_title: A survey of machine-learning and nature-inspired based credit card fraud detection techniques paper_content: Credit card is one of the popular modes of payment for electronic transactions in many developed and developing countries. Invention of credit cards has made online transactions seamless, easier, comfortable and convenient. However, it has also provided new fraud opportunities for criminals, and in turn, increased fraud rate. The global impact of credit card fraud is alarming, millions of US dollars have been lost by many companies and individuals. Furthermore, cybercriminals are innovating sophisticated techniques on a regular basis, hence, there is an urgent task to develop improved and dynamic techniques capable of adapting to rapidly evolving fraudulent patterns. Achieving this task is very challenging, primarily due to the dynamic nature of fraud and also due to lack of dataset for researchers. This paper presents a review of improved credit card fraud detection techniques. Precisely, this paper focused on recent Machine Learning based and Nature Inspired based credit card fraud detection techniques proposed in literature. This paper provides a picture of recent trend in credit card fraud detection. Moreover, this review outlines some limitations and contributions of existing credit card fraud detection techniques, it also provides necessary background information for researchers in this domain. Additionally, this review serves as a guide and stepping stone for financial institutions and individuals seeking for new and effective credit card fraud detection techniques. 
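The hybrid ARIMA–neural network forecasting idea summarized a few entries above (ARIMA captures the linear structure of a series, while a neural network models what remains in the residuals) can be sketched in a simplified, Zhang-style form as below. statsmodels and scikit-learn are assumed dependencies, and the ARIMA order, residual lag count, and network size are illustrative choices rather than the cited paper's configuration.

```python
# Simplified ARIMA + neural-network hybrid: ARIMA models the linear part, an MLP
# learns the residual (nonlinear) part from lagged residuals, and the one-step
# forecast is the sum of the two.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA      # assumed dependency
from sklearn.neural_network import MLPRegressor    # assumed dependency


def fit_hybrid(y, order=(2, 1, 2), n_lags=4):
    """y: 1-D array of observations. Returns the fitted ARIMA results, the
    residual MLP, and the latest residuals needed for the next forecast."""
    arima = ARIMA(y, order=order).fit()
    resid = np.asarray(arima.resid)  # a real implementation would trim the first, noisy residuals
    # supervised set: predict residual t from residuals t-n_lags ... t-1
    X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
    target = resid[n_lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mlp.fit(X, target)
    return arima, mlp, resid[-n_lags:]


def forecast_next(arima, mlp, last_resids):
    """One-step-ahead hybrid forecast: ARIMA forecast plus predicted residual."""
    linear_part = float(np.asarray(arima.forecast(steps=1))[0])
    nonlinear_part = float(mlp.predict(last_resids.reshape(1, -1))[0])
    return linear_part + nonlinear_part
```

The cited hybrid goes further than this residual-only correction, but the division of labor between the linear and nonlinear components is the same.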
--- paper_title: Applying convolutional neural network for network intrusion detection paper_content: Recently, Convolutional neural network (CNN) architectures in deep learning have achieved significant results in the field of computer vision. To transform this performance toward the task of intrusion detection (ID) in cyber security, this paper models network traffic as time-series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range with supervised learning methods such as multi-layer perceptron (MLP), CNN, CNN-recurrent neural network (CNN-RNN), CNN-long short-term memory (CNN-LSTM) and CNN-gated recurrent unit (GRU), using millions of known good and bad network connections. To measure the efficacy of these approaches we evaluate on the most important synthetic ID data set such as KDDCup 99. To select the optimal network architecture, comprehensive analysis of various MLP, CNN, CNN-RNN, CNN-LSTM and CNN-GRU with its topologies, network parameters and network structures is used. The models in each experiment are run up to 1000 epochs with learning rate in the range [0.01-05]. CNN and its variant architectures have significantly performed well in comparison to the classical machine learning classifiers. This is mainly due to the reason that CNN have capability to extract high level feature representations that represents the abstract form of low level feature sets of network traffic connections. --- paper_title: Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction paper_content: This paper proposes to use autoencoders with nonlinear dimensionality reduction in the anomaly detection task. The authors apply dimensionality reduction by using an autoencoder onto both artificial data and real data, and compare it with linear PCA and kernel PCA to clarify its property. The artificial data is generated from Lorenz system, and the real data is the spacecrafts' telemetry data. This paper demonstrates that autoencoders are able to detect subtle anomalies which linear PCA fails. Also, autoencoders can increase their accuracy by extending them to denoising autoenconders. Moreover, autoencoders can be useful as nonlinear techniques without complex computation as kernel PCA requires. Finaly, the authors examine the learned features in the hidden layer of autoencoders, and present that autoencoders learn the normal state properly and activate differently with anomalous input. --- paper_title: High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning paper_content: High-dimensional problem domains pose significant challenges for anomaly detection. The presence of irrelevant features can conceal the presence of anomalies. This problem, known as the 'curse of dimensionality', is an obstacle for many anomaly detection techniques. Building a robust anomaly detection model for use in high-dimensional spaces requires the combination of an unsupervised feature extractor and an anomaly detector. While one-class support vector machines are effective at producing decision surfaces from well-behaved feature vectors, they can be inefficient at modelling the variation in large, high-dimensional datasets. Architectures such as deep belief networks (DBNs) are a promising technique for learning robust features. We present a hybrid model where an unsupervised DBN is trained to extract generic underlying features, and a one-class SVM is trained from the features learned by the DBN. 
Since a linear kernel can be substituted for nonlinear ones in our hybrid model without loss of accuracy, our model is scalable and computationally efficient. The experimental results show that our proposed model yields comparable anomaly detection performance with a deep autoencoder, while reducing its training and testing time by a factor of 3 and 1000, respectively. HighlightsWe use a combination of a one-class SVM and deep learning.In our model linear kernels can be used rather than nonlinear ones.Our model delivers a comparable accuracy with a deep autoencoder.Our model executes 3times faster in training and 1000 faster than a deep autoencoder. --- paper_title: A Deep Learning Approach for Network Intrusion Detection System paper_content: A Network Intrusion Detection System (NIDS) helps system administrators to detect network security breaches in ::: ::: their organizations. However, many challenges arise while ::: ::: developing a flexible and efficient NIDS for unforeseen and unpredictable attacks. We propose a deep learning based approach for developing such an efficient and flexible NIDS. ::: ::: We use Self-taught Learning (STL), a deep learning based technique, on NSL-KDD - a benchmark dataset for network ::: ::: intrusion. We present the performance of our approach and compare it with a few previous work. Compared metrics include accuracy, precision, recall, and f-measure values. --- paper_title: Robust statistics for outlier detection paper_content: When analyzing data, outlying observations cause problems because they may strongly influence the result. Robust statistics aims at detecting the outliers by searching for the model fitted by the majority of the data. We present an overview of several robust methods and outlier detection tools. We discuss robust procedures for univariate, low-dimensional, and high-dimensional data such as estimation of location and scatter, linear regression, principal component analysis, and classification. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 73-79 DOI: 10.1002/widm.2 ::: ::: This article is categorized under: ::: ::: Algorithmic Development > Biological Data Mining ::: Algorithmic Development > Spatial and Temporal Data Mining ::: Application Areas > Health Care ::: Technologies > Structure Discovery and Clustering --- paper_title: Sum-product networks: A new deep architecture paper_content: The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs). SPNs are directed acyclic graphs with variables as leaves, sums and products as internal nodes, and weighted edges. We show that if an SPN is complete and consistent it represents the partition function and all marginals of some graphical model, and give semantics to its nodes. Essentially all tractable graphical models can be cast as SPNs, but SPNs are also strictly more general. We then propose learning algorithms for SPNs, based on backpropagation and EM. Experiments show that inference and learning with SPNs can be both faster and more accurate than with standard deep networks. For example, SPNs perform image completion better than state-of-the-art deep networks for this task. SPNs also have intriguing potential connections to the architecture of the cortex. 
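The feature-extractor-plus-linear-one-class-SVM hybrid discussed two entries above can be illustrated with a deliberately simplified stand-in: PCA replaces the deep belief network purely to keep the sketch dependency-light, and a linear-kernel one-class SVM scores the reduced features. scikit-learn is an assumed dependency and the number of components and nu are illustrative; only the shape of the pipeline is shown, not the cited model.

```python
# Sketch of an "unsupervised feature extractor + linear one-class SVM" pipeline,
# with PCA standing in for the deep feature extractor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM


def build_detector(X_train, n_features=10, nu=0.05):
    """Fit the feature extractor and linear one-class SVM on (mostly) normal data."""
    detector = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_features),
        OneClassSVM(kernel="linear", nu=nu),
    )
    detector.fit(X_train)
    return detector


def anomaly_scores(detector, X):
    """Higher values indicate more anomalous inputs."""
    return -detector.decision_function(X)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(5000, 50))                    # stand-in "normal" data
    X_test = np.vstack([rng.normal(size=(100, 50)),
                        rng.normal(loc=3.0, size=(5, 50))])  # last rows are shifted
    det = build_detector(X_train)
    print(anomaly_scores(det, X_test)[-5:])  # shifted points should score higher
```

Swapping PCA for a trained autoencoder or DBN encoder changes only the first pipeline step; the linear-SVM stage and the scoring interface stay the same.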
--- paper_title: Deep Learning Based Forecasting of Critical Infrastructure Data paper_content: Intelligent monitoring and control of critical infrastructure such as electric power grids, public water utilities and transportation systems produces massive volumes of time series data from heterogeneous sensor networks. Time Series Forecasting (TSF) is essential for system safety and security, and also for improving the efficiency and quality of service delivery. Being highly dependent on various external factors, the observed system behavior is usually stochastic, which makes the next value prediction a tricky and challenging task that usually needs customized methods. In this paper we propose a novel deep learning based framework for time series analysis and prediction by ensembling parametric and nonparametric methods. Our approach takes advantage of extracting features at different time scales, which improves accuracy without compromising reliability in comparison with the state-of-the-art methods. Our experimental evaluation using real-world SCADA data from a municipal water management system shows that our proposed method outperforms the baseline methods evaluated here. --- paper_title: Anomaly detection in aircraft data using Recurrent Neural Networks (RNN) paper_content: Anomaly detection in multivariate, time-series data collected from an aircraft's Flight Data Recorder (FDR) or Flight Operational Quality Assurance (FOQA) data provides a powerful means for identifying events and trends that reduce safety margins. The industry standard "Exceedance Detection" algorithm uses a list of specified parameters and their thresholds to identify known deviations. In contrast, Machine Learning algorithms detect unknown unusual patterns in the data either through semi-supervised or unsupervised learning. The Multiple Kernel Anomaly Detection (MKAD) algorithm based on One-class SVM identified 6 of 11 canonical anomalies in a large dataset but is limited by the need for dimensionality reduction, poor sensitivity to short term anomalies, and inability to detect anomalies in latent features. This paper describes the application of Recurrent Neural Networks (RNN) with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures which can overcome the limitations described above. The RNN algorithms detected 9 out of the 11 anomalies in the test dataset with Precision = 1, Recall = 0.818 and F1 score = 0.89. RNN architectures, designed for time-series data, are suited for implementation on the flight deck to provide real-time anomaly detection. The implications of these results are discussed. --- paper_title: Fuzzy time series forecasting with a novel hybrid approach combining fuzzy c-means and neural networks paper_content: In recent years, time series forecasting studies in which the fuzzy time series approach is utilized have received more attention. Various soft computing techniques such as fuzzy clustering, artificial neural networks and genetic algorithms have been used in the fuzzy time series method to improve it. While fuzzy clustering and genetic algorithms are used for fuzzification, the artificial neural network method is preferred for defining fuzzy relationships. In this study, a hybrid fuzzy time series approach is proposed to reach more accurate forecasts. In the proposed hybrid approach, the fuzzy c-means clustering method and artificial neural networks are employed for fuzzification and defining fuzzy relationships, respectively.
The enrollment data of the University of Alabama is forecasted by using both the proposed method and the other fuzzy time series approaches. As a result of the comparison, it is seen that the most accurate forecasts are obtained when the proposed hybrid fuzzy time series approach is used. --- paper_title: A New, Principled Approach to Anomaly Detection paper_content: Intrusion detection is often described as having two main approaches: signature-based and anomaly-based. We argue that only unsupervised methods are suitable for detecting anomalies. However, there has been a tendency in the literature to conflate the notion of an anomaly with the notion of a malicious event. As a result, the methods used to discover anomalies have typically been ad hoc, making it nearly impossible to systematically compare between models or regulate the number of alerts. We propose a new, principled approach to anomaly detection that addresses the main shortcomings of ad hoc approaches. We provide both theoretical and cyber-specific examples to demonstrate the benefits of our more principled approach. --- paper_title: An Anomaly Detection Approach Based on Isolation Forest Algorithm for Streaming Data using Sliding Window paper_content: Anomalous behavior detection is becoming more and more important in many applications, such as computer security and sensor networks. However, the inherent characteristics of streaming data, such as rapid generation, unbounded size, tremendous volume and the phenomenon of concept drift, imply that anomaly detection in streaming data is a challenging task. In this paper, using the framework of sliding windows and taking into account the concept drift phenomenon, a novel anomaly detection framework is presented and an adapted streaming data anomaly detection algorithm based on the iForest algorithm, namely iForestASD, is proposed. The experimental results on four real-world datasets derived from the UCI repository demonstrate that the proposed algorithm can effectively detect anomalous instances in streaming data. --- paper_title: Unsupervised real-time anomaly detection for streaming data paper_content: We are seeing an enormous increase in the availability of streaming, time-series data. Largely driven by the rise of connected real-time data sources, this data presents technical challenges and opportunities. One fundamental capability for streaming analytics is to model each stream in an unsupervised fashion and detect unusual, anomalous behaviors in real-time. Early anomaly detection is valuable, yet it can be difficult to execute reliably in practice. Application constraints require systems to process data in real-time, not batches.
Streaming data inherently exhibits concept drift, favoring algorithms that learn continuously. Furthermore, the massive number of independent streams in practice requires that anomaly detectors be fully automated. In this paper we propose a novel anomaly detection algorithm that meets these constraints. The technique is based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). We also present results using the Numenta Anomaly Benchmark (NAB), a benchmark containing real-world data streams with labeled anomalies. The benchmark, the first of its kind, provides a controlled open-source environment for testing anomaly detection algorithms on streaming data. We present results and analysis for a wide range of algorithms on this benchmark, and discuss future challenges for the emerging field of streaming analytics. --- paper_title: Hidden Markov based anomaly detection for water supply systems paper_content: Considering the fact that fully immunizing critical infrastructure such as water supply or power grid systems against physical and cyberattacks is not feasible, it is crucial for every public or private sector to invigorate the detective, predictive, and preventive mechanisms to minimize the risk of disruptions, resource loss or damage. This paper proposes a methodical approach to situation analysis and anomaly detection in SCADA-based water supply systems. We model normal system behavior as a hierarchy of hidden semi-Markov models, forming the basis for detecting contextual anomalies of interest in SCADA data. Our experimental evaluation on real-world water supply system data emphasizes the efficacy of our method by significantly outperforming baseline methods. --- paper_title: Active Anomaly Detection via Ensembles paper_content: In critical applications of anomaly detection including computer security and fraud prevention, the anomaly detector must be configurable by the analyst to minimize the effort on false positives. One important way to configure the anomaly detector is by providing true labels for a few instances. We study the problem of label-efficient active learning to automatically tune anomaly detection ensembles and make four main contributions. First, we present an important insight into how anomaly detector ensembles are naturally suited for active learning. This insight allows us to relate the greedy querying strategy to uncertainty sampling, with implications for label-efficiency. Second, we present a novel formalism called compact description to describe the discovered anomalies and show that it can also be employed to improve the diversity of the instances presented to the analyst without loss in the anomaly discovery rate. Third, we present a novel data drift detection algorithm that not only detects the drift robustly, but also allows us to take corrective actions to adapt the detector in a principled manner. Fourth, we present extensive experiments to evaluate our insights and algorithms in both batch and streaming settings. Our results show that in addition to discovering significantly more anomalies than state-of-the-art unsupervised baselines, our active learning algorithms under the streaming-data setup are competitive with the batch setup. --- paper_title: Anomaly Detection in Streams with Extreme Value Theory paper_content: Anomaly detection in time series has attracted considerable attention due to its importance in many real-world applications including intrusion detection, energy management and finance. 
Most approaches for detecting outliers rely on either manually set thresholds or assumptions on the distribution of the data, according to Chandola, Banerjee and Kumar. Here, we propose a new approach to detect outliers in streaming univariate time series based on Extreme Value Theory that does not require hand-set thresholds and makes no assumption on the distribution: the main parameter is only the risk, controlling the number of false positives. Our approach can be used for outlier detection, but more generally for automatically setting thresholds, making it useful in a wide number of situations. We also evaluate our algorithm on various real-world datasets, which confirms its soundness and efficiency. --- paper_title: Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark paper_content: Much of the world's data is streaming, time-series data, where anomalies give significant information in critical situations; examples abound in domains such as finance, IT, security, medical, and energy. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real-time, not batches, and learn while simultaneously making predictions. There are no benchmarks to adequately test and score the efficacy of real-time anomaly detectors. Here we propose the Numenta Anomaly Benchmark (NAB), which attempts to provide a controlled and repeatable environment of open-source tools to test and measure anomaly detection algorithms on streaming data. The perfect detector would detect all anomalies as soon as possible, trigger no false alarms, work with real-world time-series data across a variety of domains, and automatically adapt to changing statistics. Rewarding these characteristics is formalized in NAB, using a scoring algorithm designed for streaming data. NAB evaluates detectors on a benchmark dataset with labeled, real-world time-series data. We present these components, and give results and analyses for several open source, commercially-used algorithms. The goal for NAB is to provide a standard, open source framework with which the research community can compare and evaluate different algorithms for detecting anomalies in streaming data. ---
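The extreme-value thresholding idea above (fit a Generalized Pareto Distribution to the peaks over an initial high quantile, then derive a threshold for a chosen risk level) can be sketched in a few lines. This is an illustrative approximation, not the authors' SPOT implementation; the initial quantile, risk level and synthetic scores are assumptions.

```python
# Illustrative peaks-over-threshold (POT) threshold estimator in the spirit of
# the EVT-based streaming detector cited above.
import numpy as np
from scipy.stats import genpareto

def pot_threshold(values, q=1e-3, init_quantile=0.98):
    """Return a threshold z such that P(X > z) is approximately q."""
    values = np.asarray(values, dtype=float)
    t = np.quantile(values, init_quantile)        # initial high threshold
    excesses = values[values > t] - t             # peaks over the threshold
    # Fit a Generalized Pareto Distribution to the excesses (location fixed at 0).
    gamma, _, sigma = genpareto.fit(excesses, floc=0.0)
    n, n_t = len(values), len(excesses)
    if abs(gamma) < 1e-8:                         # exponential-tail limit
        return t + sigma * np.log(n_t / (q * n))
    return t + (sigma / gamma) * ((q * n / n_t) ** (-gamma) - 1.0)

scores = np.random.default_rng(1).exponential(scale=1.0, size=20000)
z = pot_threshold(scores, q=1e-4)
print(f"threshold = {z:.2f}, exceedances = {int(np.sum(scores > z))}")
```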
Title: Should I Raise The Red Flag? A comprehensive survey of anomaly scoring methods toward mitigating false alarms Section 1: Introduction Description 1: Introduce the overall context and importance of anomaly detection systems (ADS), specifically focusing on the challenges of mitigating false alarms. Section 2: Motivation and Challenges Description 2: Discuss the motivation behind performing a comprehensive survey on anomaly scoring methods and outline the key challenges addressed in the paper. Section 3: Organization Description 3: Provide an overview of the paper's structure, briefly describing what each section will cover. Section 4: Related works Description 4: Review existing surveys and studies on anomaly detection, highlighting the differences and gaps addressed by this paper. Section 5: Our contribution Description 5: Summarize the unique contributions made by this survey to the existing literature on false alarm mitigation and anomaly scoring. Section 6: Basic definitions Description 6: Define key concepts such as anomaly, anomaly scoring, and other essential terms used throughout the paper. Section 7: Problem definition Description 7: Present the core problem addressed by this survey, especially focusing on the challenges of setting thresholds and scoring anomalies in ADS. Section 8: What is an anomaly? Description 8: Discuss the various definitions and interpretations of anomalies, providing a comprehensive overview. Section 9: What is anomaly scoring? Description 9: Explain the concept of anomaly scoring and its significance in ADS. Section 10: Behavior predictor model Description 10: Describe different models used for predicting normal behavior and their importance in anomaly detection. Section 11: Concept drift and abrupt evolution of data Description 11: Discuss the concept of data drift and how it affects the accuracy of anomaly detection models over time. Section 12: Challenging Criteria for ADs Description 12: List and describe various criteria that make anomaly detection challenging, including high dimensionality, noise, and data drift. Section 13: Masking Effect Description 13: Explain the masking effect and its impact on detecting anomalies. Section 14: Swamping Effect Description 14: Describe the swamping effect and its role in generating false positives. Section 15: Variable frequency of anomalies Description 15: Discuss how the variability in the frequency of anomalies poses challenges to ADS. Section 16: High dimensionality curse Description 16: Explain the high dimensionality curse and its implications for anomaly detection. Section 17: Lag of emergence of anomalies Description 17: Describe how the lag in the emergence of anomalies affects their detection. Section 18: Domain specific challenges Description 18: Discuss challenges specific to different application domains of ADS. Section 19: Automatic false alarm scaling Description 19: Address methods and techniques to automatically scale and manage false alarms in ADS. Section 20: Improved individual scoring Description 20: Dive into enhanced scoring mechanisms that improve upon basic anomaly scoring methods. Section 21: Improved Threshold Computation Description 21: Present techniques aimed at optimizing threshold computation for better anomaly detection. Section 22: Sequence based scoring Description 22: Examine methods that take advantage of sequences to improve anomaly detection and address false alarms. 
Section 23: Alarm Verification Description 23: Discuss techniques to verify alarms post-hoc to reduce the false alarm rate. Section 24: Collective analysis Description 24: Review methods that use collective data to analyze and mitigate false alarms. Section 25: Concept Drift Description 25: Explore various strategies for detecting and handling concept drift in dynamic data environments. Section 26: Research questions Description 26: Propose critical research questions that arise from the survey and suggest directions for future research. Section 27: Conclusion Description 27: Summarize the findings from the survey, reiterating the importance of false-alarm mitigation in ADS and outlining future research paths.
A Survey on Traffic Signal Control Methods
16
--- paper_title: 2015 Urban Mobility Scorecard paper_content: Findings in the 2015 Urban Mobility Scorecard are drawn from traffic speed data collected by INRIX on 1.3 million miles of urban streets and highways, along with highway performance data from the Federal Highway Administration. This edition provides a comprehensive analysis of traffic conditions in 471 urban areas across the United States. Travel delays due to traffic congestion caused drivers to waste more than 3 billion gallons of fuel and kept travelers stuck in their cars for nearly 7 billion extra hours – 42 hours per rush-hour commuter. The total nationwide price tag: $160 billion, or $960 per commuter. Washington, D.C. tops the list of gridlock-plagued cities, with 82 hours of delay per commuter, followed by Los Angeles (80 hours), San Francisco (78 hours), New York (74 hours), and San Jose (67 hours). Drivers on America's Top 10 worst roads waste on average 84 hours, or 3.5 days, a year in gridlock – twice the national average. Of these roads, six are in Los Angeles, two are in New York and the remaining two are in Chicago. The report predicts urban roadway congestion will continue to get worse without more assertive approaches on the project, program, and policy fronts. --- paper_title: THE SCOOT ON-LINE TRAFFIC SIGNAL OPTIMISATION TECHNIQUE paper_content: Many large cities have, or plan to have, urban traffic control (UTC) systems that centrally monitor and control the traffic signals in their jurisdiction. The present generation of UTC systems usually co-ordinates the signals on fixed-time plans, which consist of sets of timings that determine when each signal turns red and green. The plans are precalculated to suit average conditions during each part of the day (e.g. A.M. peak) and do not respond to variations in flows in the network. Since 1973 the UK Transport and Road Research Laboratory has been researching a vehicle-responsive method of signal control called SCOOT (Split, Cycle and Offset Optimisation Technique). Research was carried out in Glasgow by a small team from TRRL and the Ferranti, GEC and Plessey traffic companies, with assistance from Strathclyde Regional Council. In 1976 the success of the research phase led to a development project between the departments of transport and of industry and the three traffic companies. TRRL continued research into SCOOT and in 1979 carried out a full-scale trial of SCOOT in Glasgow. As part of the development project, and with the co-operation of West Midlands County Council, SCOOT was installed in Coventry. A further full-scale trial of the developed system was carried out in 1980. This paper describes the SCOOT system and the results of the trials which compared SCOOT with up-to-date fixed-time systems. It is concluded that SCOOT reduced vehicle delay by an average of about 12 percent during the working day. The surveys demonstrate that SCOOT rapidly adapts to unusual traffic conditions as well as to the usual variations in demand that occur throughout the day and night. It is an important benefit of SCOOT that there is no need to periodically prepare new fixed-time plans and that the signal timings are automatically kept up-to-date. The traffic model in SCOOT provides real-time information on flows and queues and is likely to be a key element in the development of new traffic management strategies that make the best overall use of roads in urban areas. This paper is a shortened version of TRRL Report LR 1014 (see TRIS 348845).
This paper was presented at the IEE's Conference on Road Traffic Signalling, London, March 1982. See also TRIS abstracts 368871 and 368872. (TRRL) --- paper_title: An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control paper_content: Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field. --- paper_title: A Survey of Traffic Control With Vehicular Communications paper_content: During the last 60 years, incessant efforts have been made to improve the efficiency of traffic control systems to meet ever-increasing traffic demands. Some recent works attempt to enhance traffic efficiency via vehicle-to-vehicle communications. In this paper, we aim to give a survey of some research frontiers in this trend, identifying early-stage key technologies and discussing potential benefits that will be gained. Our survey focuses on the control side and aims to highlight that the design philosophy for traffic control systems is undergoing a transition from feedback character to feedforward character. Moreover, we discuss some contrasting preferences in the design of traffic control systems and their relations to vehicular communications. The first pair of contrasting preferences are model-based predictive control versus simulation-based predictive control. The second pair are global planning-based control versus local self-organization-based control. The third pair are control using rich information that may be highly redundant versus control using concise information that is necessary. Both the potentials and drawbacks of these control strategies are explained. We hope these comparisons can shed some interesting light on future traffic control studies. --- paper_title: Review of road traffic control strategies paper_content: Traffic congestion in urban road and freeway networks leads to a strong degradation of the network infrastructure and accordingly reduced throughput, which can be countered via suitable control measures and strategies. After illustrating the main reasons for infrastructure deterioration due to traffic congestion, a comprehensive overview of proposed and implemented control strategies is provided for three areas: urban road networks, freeway networks, and route guidance. Selected application results, obtained from either simulation studies or field implementations, are briefly outlined to illustrate the impact of various control actions and strategies. The paper concludes with a brief discussion of future needs in this important technical area.
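The fixed-time plans that adaptive systems such as SCOOT are benchmarked against are classically derived with Webster's formula for the optimal cycle length and green splits. The sketch below is illustrative only (not taken from any cited paper); the lost time and flow ratios are assumed example values.

```python
# Illustrative sketch of Webster's classic fixed-time signal plan calculation.
def webster_plan(critical_flow_ratios, lost_time_per_cycle):
    """critical_flow_ratios: y_i = demand / saturation flow for each stage's critical movement."""
    Y = sum(critical_flow_ratios)
    assert Y < 1.0, "intersection is over-saturated; no feasible fixed-time plan"
    cycle = (1.5 * lost_time_per_cycle + 5.0) / (1.0 - Y)   # Webster's optimal cycle length (s)
    effective_green = cycle - lost_time_per_cycle
    greens = [effective_green * y / Y for y in critical_flow_ratios]  # proportional splits
    return cycle, greens

cycle, greens = webster_plan([0.30, 0.25], lost_time_per_cycle=10.0)
print(f"cycle ~ {cycle:.1f}s, stage greens ~ {[round(g, 1) for g in greens]}")
```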
--- paper_title: Adaptive Traffic Control Systems: Domestic and Foreign State of Practice paper_content: Adaptive Traffic Control Systems (ATCSs), also known as real-time traffic control systems, adjust, in real time, signal timings based on the current traffic conditions, demand, and system capacity. Although there are at least 25 ATCS deployments in the United States, these systems may not be well understood by many traffic signal practitioners in the country. Their operational benefits are demonstrated, but there are still some reservations among the people in the traffic signal community. These systems are considered expensive and complex and they require high maintenance of detectors and communications. The study methodology included three sequential efforts. The first focused on the selection of ATCSs, which are typically deployed in the United States (and worldwide) and identification of ATCS agencies. The next effort undertaken was a literature review that gathered and reported information about ATCS operations and deployments from previous studies. Finally, two electronic surveys were conducted: a shorter e-mail survey for ATCS vendors and a longer website-based survey for ATCS users. Responses were obtained from 34 of 42 agencies in North America, an 81% response rate. Also, 11 responses from agencies in other countries were obtained. Municipal and county traffic operations agencies were the major contributors among the 45 agencies that responded to the survey. --- paper_title: MULTIBAND-96: A Program for Variable-Bandwidth Progression Optimization of Multiarterial Traffic Networks paper_content: Progression schemes are widely used for traffic signal control in urban arterial streets. Commonly available programs such as the MAXBAND or PASSER programs use the traditional approach, which consists of a uniform bandwidth design for each arterial. The multiband criterion, on the other hand, has the ability to adapt the progressions to the specific characteristics of each link in the network and thus obtain improved performance. The development and application of the multiband signal optimization scheme in multiarterial grid networks are described. The MULTIBAND-96 model optimizes all the signal control variables, including phase lengths, offsets, cycle time, and phase sequences, and generates variable bandwidth progressions on each arterial in the network. It uses the MINOS mathematical programming package for the optimization and offers considerable advantages compared with existing models. Simulation results using TRAF-NETSIM are given. --- paper_title: A multi-band approach to arterial traffic signal optimization paper_content: Progression schemes are widely used for traffic signal control in arterial streets. Under such a scheme a continuous green band of uniform width is provided in each direction along the artery at the desired speed of travel. A basic limitation of existing bandwidth-based programs is that they do not consider the actual traffic volumes and flow capacities on each link in their optimization criterion. Consequently they cannot guarantee the most suitable progression scheme for different traffic flow patterns. In this paper we present a new optimization approach for arterial progression that incorporates a systematic traffic-dependent criterion. The method generates a variable bandwidth progression in which each directional road section can obtain an individually weighted bandwidth (hence, the term multi-band). 
Mixed-integer linear programming is used for the optimization. Simulation results indicate that this method can produce considerable gains in performance when compared with traditional progression methods. It also lends itself to a natural extension for the optimization of grid networks. --- paper_title: Self-Organizing Traffic Lights paper_content: Steering traffic in cities is a very complex task, since improving efficiency involves the coordination of many actors. Traditional approaches attempt to optimize traffic lights for a particular configuration of traffic and density. The disadvantage of this lies in the fact that traffic configurations change constantly. Traffic seems to be an adaptation problem rather than an optimization problem. We propose a simple and feasible alternative, in which traffic lights self-organize to improve traffic flow. We use a multi-agent simulation to study two self-organizing methods, which are able to outperform two traditional rigid methods. Using simple rules, traffic lights are able to self-organize and adapt to changing traffic conditions, reducing waiting times and stopped cars, and increasing average speeds. Even when the scenario simplifies real traffic, results are very promising, and encourage further research in more realistic environments. --- paper_title: Self-Organizing Traffic Lights: A Realistic Simulation paper_content: We have previously shown in an abstract simulation (Gershenson in Complex Syst. 16(1):29–53, 2005) that self-organizing traffic lights can greatly improve traffic flow for any density. In this chapter, we extend these results to a realistic setting, implementing self-organizing traffic lights in an advanced traffic simulator using real data from a Brussels avenue. In the next section, a brief introduction to the concept of self-organization is given. The SOTL control method is then presented, followed by the moreVTS simulator. In Sect. 3.5, results from our simulations are shown, followed by Discussion, Future Work, and Conclusions. --- paper_title: The Max-Pressure Controller for Arbitrary Networks of Signalized Intersections paper_content: The control of an arbitrary network of signalized intersections is considered. At the beginning of each cycle, a controller selects the duration of every stage at each intersection as a function of all queues in the network. A stage is a set of permissible (non-conflicting) phases along which vehicles may move at pre-specified saturation rates. Demand is modeled by vehicles entering the network at a constant average rate with an arbitrary burst size and moving with pre-specified average turn ratios. The movement of vehicles is modeled as a "store and forward" queuing network. A controller is said to stabilize a demand if all queues remain bounded. The max-pressure controller is introduced. It differs from other network controllers analyzed in the literature in three respects. First, max-pressure requires only local information: the stage durations selected at any intersection depend only on queues adjacent to that intersection. Second, max-pressure is provably stable: it stabilizes a demand whenever there exists any stabilizing controller. Third, max-pressure requires no knowledge of the demand, although it needs turn ratios. The analysis is conducted within the framework of "network calculus," which, for fixed-time controllers, gives guaranteed bounds on queue size, delay, and queue clearance times.
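The core of the max-pressure idea above can be sketched very compactly: each phase is scored by the difference between upstream and downstream queues over the movements it serves, and the highest-scoring phase is activated. This simplified illustration assumes unit saturation rates and ignores turn ratios, unlike the cited controller; the lane names and queue values are made up for the example.

```python
# Minimal max-pressure phase selection sketch (unit saturation rates, no turn ratios).
def max_pressure_phase(phases, queue):
    """
    phases: dict phase_id -> list of (in_lane, out_lane) movements the phase permits
    queue:  dict lane_id -> current queue length (exit lanes may be 0)
    """
    def pressure(movements):
        return sum(queue.get(i, 0) - queue.get(o, 0) for i, o in movements)
    return max(phases, key=lambda p: pressure(phases[p]))

phases = {"NS": [("N_in", "S_out"), ("S_in", "N_out")],
          "EW": [("E_in", "W_out"), ("W_in", "E_out")]}
queue = {"N_in": 12, "S_in": 9, "E_in": 3, "W_in": 5,
         "N_out": 2, "S_out": 1, "E_out": 0, "W_out": 4}
print(max_pressure_phase(phases, queue))   # -> "NS" (pressure 18 vs 4)
```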
--- paper_title: IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control paper_content: The intelligent traffic light control is critical for an efficient transportation system. While existing traffic lights are mostly operated by hand-crafted rules, an intelligent traffic light control system should be dynamically adjusted to real-time traffic. There is an emerging trend of using deep reinforcement learning technique for traffic light control and recent studies have shown promising results. However, existing studies have not yet tested the methods on the real-world traffic data and they only focus on studying the rewards without interpreting the policies. In this paper, we propose a more effective deep reinforcement learning model for traffic light control. We test our method on a large-scale real traffic dataset obtained from surveillance cameras. We also show some interesting case studies of policies learned from the real data. --- paper_title: An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control paper_content: Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field. --- paper_title: Soilse: A decentralized approach to optimization of fluctuating urban traffic using Reinforcement Learning paper_content: Increasing traffic congestion is a major problem in urban areas, which incurs heavy economic and environmental costs in both developing and developed countries. Efficient urban traffic control (UTC) can help reduce traffic congestion. However, the increasing volume and the dynamic nature of urban traffic pose particular challenges to UTC. Reinforcement Learning (RL) has been shown to be a promising approach to efficient UTC. However, most existing work on RL-based UTC does not adequately address the fluctuating nature of urban traffic. This paper presents Soilse1, a decentralized RL-based UTC optimization scheme that includes a nonparametric pattern change detection mechanism to identify local traffic pattern changes that adversely affect an RL agent's performance. Hence, Soilse is adaptive as agents learn to optimize for different traffic patterns and responsive as agents can detect genuine traffic pattern changes and trigger relearning. We compare the performance of Soilse to two baselines, a fixed-time approach and a saturation balancing algorithm that emulates SCATS, a well-known UTC system. The comparison was performed based on a simulation of traffic in Dublin's inner city centre. Results from using our scheme show an approximate 35%–43% and 40%–54% better performance in terms of average vehicle waiting time and average number of vehicle stops respectively against the best baseline performance in our simulation. 
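The Q-learning-based controllers surveyed in this block all build on the same tabular update rule. The sketch below is generic and illustrative only: the state (binned queue lengths), action set and reward are placeholder assumptions, not the design of any cited paper.

```python
# Generic tabular Q-learning step of the kind the RL signal controllers above build on.
import random
from collections import defaultdict

Q = defaultdict(float)                      # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.05
ACTIONS = ["keep_phase", "switch_phase"]

def choose_action(state):
    if random.random() < EPSILON:           # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])

# One interaction step: state = discretized queues, reward = negative total queue.
s = (3, 1, 4, 0)                            # binned queues on the four approaches
a = choose_action(s)
r = -sum(s)
s_next = (2, 2, 3, 1)
update(s, a, r, s_next)
```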
--- paper_title: A Collaborative Reinforcement Learning Approach to Urban Traffic Control Optimization paper_content: The high growth rate of vehicles per capita now poses a real challenge to efficient urban traffic control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly-dynamic nature of urban traffic. In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction's traffic in order to optimize traffic control. Global UTC optimization is achieved using a local adaptive round robin (ARR) phase switching model optimized using collaborative reinforcement learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timing based on the traffic pattern. We compare our approach to a non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin's inner city centre. We show that the ARR-CRL approach can provide significant improvement resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm. --- paper_title: Exploring Q-Learning Optimization in Traffic Signal Timing Plan Management paper_content: Traffic congestion often occurs across the entire traffic network of urban areas due to increasing traffic demand from the growing number of vehicles on the road. The problem may be solved by a good traffic signal timing plan, but unfortunately most of the timing plans currently available are not fully optimized based on on-the-spot traffic conditions. Because traffic intersections cannot learn from their past experiences, they lack the ability to adapt to dynamic changes in traffic flow. The proposed Q-learning approach can manage the traffic signal timing plan more effectively via optimization of the traffic flows. Q-learning uses rewards from past experience, together with estimates of future actions, to determine the best possible actions. The proposed learning algorithm shows valuable performance and is able to improve the traffic signal timing plan for dynamic traffic flows within a traffic network. --- paper_title: Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto paper_content: Population is steadily increasing worldwide, resulting in intractable traffic congestion in dense urban areas. Adaptive traffic signal control (ATSC) has shown strong potential to effectively alleviate urban traffic congestion by adjusting signal timing plans in real time in response to traffic fluctuations to achieve desirable objectives (e.g., minimize delay). Efficient and robust ATSC can be designed using a multiagent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. Applying MARL approaches to the ATSC problem is associated with a few challenges as agents typically react to changes in the environment at the individual level, but the overall behavior of all agents may not be optimal.
This paper presents the development and evaluation of a novel system of multiagent reinforcement learning for integrated network of adaptive traffic signal controllers (MARLIN-ATSC). MARLIN-ATSC offers two possible modes: 1) independent mode, where each intersection controller works independently of other agents; and 2) integrated mode, where each controller coordinates signal control actions with neighboring intersections. MARLIN-ATSC is tested on a large-scale simulated network of 59 intersections in the lower downtown core of the City of Toronto, ON, Canada, for the morning rush hour. The results show unprecedented reduction in the average intersection delay ranging from 27% in mode 1 to 39% in mode 2 at the network level and travel-time savings of 15% in mode 1 and 26% in mode 2, along the busiest routes in Downtown Toronto. --- paper_title: Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran paper_content: Abstract Traffic signal control plays a pivotal role in reducing traffic congestion. Traffic signals cannot be adequately controlled with conventional methods due to the high variations and complexity in traffic environments. In recent years, reinforcement learning (RL) has shown great potential for traffic signal control because of its high adaptability, flexibility, and scalability. However, designing RL-embedded traffic signal controllers (RLTSCs) for traffic systems with a high degree of realism is faced with several challenges, among others system disturbances and large state-action spaces are considered in this research. The contribution of the present work is founded on three features: (a) evaluating the robustness of different RLTSCs against system disturbances including incidents, jaywalking, and sensor noise, (b) handling a high-dimensional state-action space by both employing different continuous state RL algorithms and reducing the state-action space in order to improve the performance and learning speed of the system, and (c) presenting a detailed empirical study of traffic signals control of downtown Tehran through seven RL algorithms: discrete state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), continuous state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), and residual actor-critic( λ ). In this research, first a real-world microscopic traffic simulation of downtown Tehran is carried out, then four experiments are performed in order to find the best RLTSC with convincing robustness and strong performance. The results reveal that the RLTSC based on continuous state actor-critic( λ ) has the best performance. In addition, it is found that the best RLTSC leads to saving average travel time by 22% (at the presence of high system disturbances) when it is compared with an optimized fixed-time controller. --- paper_title: Deep Deterministic Policy Gradient for Urban Traffic Light Control paper_content: Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. 
This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep learning approaches to handle large input spaces, in the form of the Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section). --- paper_title: An agent-based learning towards decentralized and coordinated traffic signal control paper_content: Adaptive traffic signal control is a promising technique for alleviating traffic congestion. Reinforcement Learning (RL) has the potential to tackle the optimal traffic control problem for a single agent. However, the ultimate goal is to develop integrated traffic control for multiple intersections. Integrated traffic control can be efficiently achieved using decentralized controllers. Multi-Agent Reinforcement Learning (MARL) is an extension of RL techniques that makes it possible to decentralize multiple agents in non-stationary environments. Most of the studies in the field of traffic signal control consider a stationary environment, an approach whose shortcomings are highlighted in this paper. A Q-Learning-based acyclic signal control system that uses a variable phasing sequence is developed. To investigate the appropriate state model for different traffic conditions, three models were developed, each with a different state representation. The models were tested on a typical multiphase intersection to minimize the vehicle delay and were compared to the pre-timed control strategy as a benchmark. The Q-Learning control system consistently outperformed the widely used Webster pre-timed optimized signal control strategy under various traffic conditions. --- paper_title: Distributed Geometric Fuzzy Multiagent Urban Traffic Signal Control paper_content: Rapid urbanization and the growing demand for faster transportation have led to heavy congestion in road traffic networks, necessitating traffic-responsive intelligent signal control systems. The developed signal control system must be capable of determining the green time that minimizes the network-wide travel time delay based on limited information of the environment. This paper adopts a distributed multiagent-based approach to develop a traffic-responsive signal control system, i.e., the geometric fuzzy multiagent system (GFMAS), which is based on a geometric type-2 fuzzy inference system. GFMAS is capable of handling the various levels of uncertainty found in the inputs and rule base of the traffic signal controller. Simulation models of the agents designed in PARAMICS were tested on a virtual road network replicating a section of the central business district in Singapore. A comprehensive analysis and comparison was performed against the existing traffic-control algorithms green link determining (GLIDE) and hierarchical multiagent system (HMS). The proposed GFMAS signal control outperformed both benchmarks when tested for typical traffic-flow scenarios. Further tests show the superior performance of the proposed GFMAS in handling unplanned and planned incidents and obstructions. The promising results demonstrate the efficiency of the proposed multiagent architecture and scope for future development.
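To make the deep value-function controllers discussed above concrete, the following is an illustrative sketch of a small Q-network for a single intersection. The lane count, phase set, network sizes and reward are assumptions for the example, not the architecture of any cited paper.

```python
# Sketch of a deep value-function agent for one intersection.
# Assumed state: a vector of normalized lane queues; assumed actions: candidate phases.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_lanes, n_phases):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_lanes, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_phases))
    def forward(self, x):
        return self.net(x)

q_net = QNet(n_lanes=8, n_phases=4)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def td_step(state, action, reward, next_state):
    """One temporal-difference update on a single transition."""
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

state = torch.rand(8)                 # normalized queue lengths per lane
next_state = torch.rand(8)
td_step(state, action=2, reward=-state.sum().item(), next_state=next_state)
```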
--- paper_title: Traffic Light Control by Multiagent Reinforcement Learning Systems paper_content: Traffic light control is one of the main means of controlling road traffic. Improving traffic control is important because it can lead to higher traffic throughput and reduced traffic congestion. This chapter describes multiagent reinforcement learning techniques for automatic optimization of traffic light controllers. Such techniques are attractive because they can automatically discover efficient control strategies for complex tasks, such as traffic control, for which it is hard or impossible to compute optimal solutions directly and hard to develop hand-coded solutions. First, the general multi-agent reinforcement learning framework is described, which is used to control traffic lights in this work. In this framework, multiple local controllers (agents) are each responsible for the optimization of traffic lights around a single traffic junction, making use of locally perceived traffic state information (sensed cars on the road), a learned probabilistic model of car behavior, and a learned value function which indicates how traffic light decisions affect long-term utility, in terms of the average waiting time of cars. Next, three extensions are described which improve upon the basic framework in various ways: agents (traffic junction controllers) taking into account congestion information from neighboring agents; handling partial observability of traffic states; and coordinating the behavior of multiple agents by coordination graphs and the max-plus algorithm. --- paper_title: Reinforcement learning with average cost for adaptive control of traffic lights at intersections paper_content: We propose for the first time two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We show performance comparisons on various network settings of these algorithms with a range of fixed timing algorithms, as well as a Q-learning algorithm with full state representation that we also implement. We observe that whereas (as expected) on a two-junction corridor, the full state representation algorithm shows the best results, this algorithm is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance. --- paper_title: The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment paper_content: The urban traffic self-adaptive control problem is dynamic and uncertain, so the states of the traffic environment are hard to observe. An efficient agent that controls a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of the previous works on this approach, each agent needed perfectly observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model for TSCAs' interaction is built based on a nonzero-sum Markov game, which is applied to let TSCAs learn how to cooperate.
A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under the joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting. --- paper_title: IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control paper_content: The intelligent traffic light control is critical for an efficient transportation system. While existing traffic lights are mostly operated by hand-crafted rules, an intelligent traffic light control system should be dynamically adjusted to real-time traffic. There is an emerging trend of using deep reinforcement learning technique for traffic light control and recent studies have shown promising results. However, existing studies have not yet tested the methods on the real-world traffic data and they only focus on studying the rewards without interpreting the policies. In this paper, we propose a more effective deep reinforcement learning model for traffic light control. We test our method on a large-scale real traffic dataset obtained from surveillance cameras. We also show some interesting case studies of policies learned from the real data. --- paper_title: Distributed learning and multi-objectivity in traffic light control paper_content: Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against s... --- paper_title: Traffic light control in non-stationary environments based on multi agent Q-learning paper_content: In many urban areas where traffic congestion does not have the peak pattern, conventional traffic signal timing methods do not result in efficient control. One alternative is to let traffic signal controllers learn how to adjust the lights based on the traffic situation. However, this creates a classical non-stationary environment since each controller is adapting to the changes caused by other controllers. In multi-agent learning this is likely to be inefficient and computationally challenging, i.e., the efficiency decreases with the increase in the number of agents (controllers). In this paper, we model a relatively large traffic network as a multi-agent system and use techniques from multi-agent reinforcement learning. In particular, Q-learning is employed, where the average queue length in approaching links is used to estimate states.
A parametric representation of the action space has made the method extendable to different types of intersection. The simulation results demonstrate that the proposed Q-learning outperformed the fixed time method under different traffic demands. --- paper_title: Traffic Signal Control Based on Reinforcement Learning with Graph Convolutional Neural Nets paper_content: Traffic signal control can mitigate traffic congestion and reduce travel time. A model-free reinforcement learning (RL) approach is a powerful framework for learning a responsive traffic control policy for short-term traffic demand changes without prior environmental knowledge. Previous RL approaches could handle high-dimensional feature space using a standard neural network, e.g., a convolutional neural network; however, to control traffic on a road network with multiple intersections, the geometric features between roads had to be created manually. Rather than using manually crafted geometric features, we developed an RL-based traffic signal control method that employs a graph convolutional neural network (GCNN). GCNNs can automatically extract features considering the traffic features between distant roads by stacking multiple neural network layers. We numerically evaluated the proposed method in a six-intersection environment. The results demonstrate that the proposed method can find comparable policies twice as fast as the conventional RL method with a neural network and can adapt to more extensive traffic demand changes. --- paper_title: Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control paper_content: Reinforcement learning (RL) is a promising data-driven approach for adaptive traffic signal control (ATSC) in complex urban traffic networks, and deep neural networks further enhance its learning power. However, centralized RL is infeasible for large-scale ATSC due to the extremely high dimension of the joint action space. Multi-agent RL (MARL) overcomes the scalability issue by distributing the global control to each local RL agent, but it introduces new challenges: now the environment becomes partially observable from the viewpoint of each local agent due to limited communication among agents. Most existing studies in MARL focus on designing efficient communication and coordination among traditional Q-learning agents. This paper presents, for the first time, a fully scalable and decentralized MARL algorithm for the state-of-the-art deep RL agent: advantage actor critic (A2C), within the context of ATSC. In particular, two methods are proposed to stabilize the learning procedure, by improving the observability and reducing the learning difficulty of each local agent. The proposed multi-agent A2C is compared against independent A2C and independent Q-learning algorithms, in both a large synthetic traffic grid and a large real-world traffic network of Monaco city, under simulated peak-hour traffic dynamics. Results demonstrate its optimality, robustness, and sample efficiency over other state-of-the-art decentralized MARL algorithms. --- paper_title: Reinforcement learning-based multi-agent system for network traffic signal control paper_content: A challenging application of artificial intelligence systems involves the scheduling of traffic signals in multi-intersection vehicular networks. This paper introduces a novel use of a multi-agent system and reinforcement learning (RL) framework to obtain an efficient traffic signal control policy. 
The latter is aimed at minimising the average delay, congestion and likelihood of intersection cross-blocking. A five-intersection traffic network has been studied in which each intersection is governed by an autonomous intelligent agent. Two types of agents, a central agent and an outbound agent, were employed. The outbound agents schedule traffic signals by following the longest-queue-first (LQF) algorithm, which has been proved to guarantee stability and fairness, and collaborate with the central agent by providing it local traffic statistics. The central agent learns a value function driven by its local and neighbours' traffic conditions. The novel methodology proposed here utilises the Q-Learning algorithm with a feedforward neural network for value function approximation. Experimental results clearly demonstrate the advantages of multi-agent RL-based control over LQF governed isolated single-intersection control, thus paving the way for efficient distributed traffic signal control in complex settings. --- paper_title: Hierarchical control of traffic signals using Q-learning with tile coding paper_content: Multi-agent systems are rapidly growing as powerful tools for Intelligent Transportation Systems (ITS). It is desirable that traffic signal control, as a part of ITS, is performed in a distributed manner. Therefore agent-based technologies can be efficiently used for traffic signal control. For traffic networks which are composed of multiple intersections, distributed control achieves better results in comparison to centralized methods. Hierarchical structures are useful to decompose the network into multiple sub-networks and provide a mechanism for distributed control of the traffic signals. In this paper, a two-level hierarchical control of traffic signals based on Q-learning is presented. Traffic signal controllers, located at intersections, can be seen as autonomous agents in the first level (at the bottom of the hierarchy) which use Q-learning to learn a control policy. The network is divided into some regions where an agent is assigned to control each region at the second level (top of the hierarchy). Due to the combinatorial explosion in the number of states and actions, i.e. features, the use of Q-learning is impractical. Therefore, in the top level, tile coding is used as a linear function approximation method. A network composed of 9 intersections arranged in a 3×3 grid is used for the simulation. Experimental results show that the proposed hierarchical control improves the Q-learning efficiency of the bottom level agents. The impact of the parameters used in tile coding is also analyzed. --- paper_title: Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events paper_content: The transportation demand is rapidly growing in metropolises, resulting in chronic traffic congestion in dense downtown areas. Adaptive traffic signal control, as the principal part of intelligent transportation systems, has a primary role in effectively reducing traffic congestion by making a real-time adaptation in response to the changing traffic network dynamics. Reinforcement learning (RL) is an effective approach in machine learning that has been applied for designing adaptive traffic signal controllers.
One of the most efficient and robust classes of RL algorithms is the continuous-state actor-critic family, which has the advantage of fast learning and the ability to generalize to new and unseen traffic conditions. These algorithms are utilized in this paper to design adaptive traffic signal controllers called actor-critic adaptive traffic signal controllers (A-CATs controllers). The contribution of the present work rests on the integration of three threads: (a) showing performance comparisons of both discrete and continuous A-CATs controllers in a traffic network with recurring congestion (24-h traffic demand) in the upper downtown core of Tehran city, (b) analyzing the effects of different traffic disruptions, including opportunistic pedestrian crossings, parking lanes, non-recurring congestion, and different levels of sensor noise, on the performance of A-CATs controllers, and (c) comparing the performance of different function approximators (tile coding and radial basis functions) on the learning of A-CATs controllers. To this end, first an agent-based traffic simulation of the study area is carried out. Then six different scenarios are conducted to find the best A-CATs controller that is robust enough against different traffic disruptions. We observe that the A-CATs controller based on radial basis function networks (RBF(5)) outperforms the others. This controller is benchmarked against discrete-state Q-learning, Bayesian Q-learning, fixed-time and actuated controllers, and the results reveal that it consistently outperforms them. --- paper_title: Multiagent Reinforcement Learning for Urban Traffic Control using Coordination Graphs paper_content: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties. --- paper_title: Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning paper_content: Recent advances in combining deep neural network architectures with reinforcement learning (RL) techniques have shown promising potential results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this study, the authors built two kinds of RL algorithms: deep policy-gradient (PG) and value-function-based agents, which can predict the best possible traffic signal for a traffic intersection.
At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The PG-based agent maps its observation directly to the control signal; however, the value-function-based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Their methods show promising results in a traffic network simulated in the simulation of urban mobility traffic simulator, without suffering from instability issues during the training process. --- paper_title: An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control paper_content: Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field. --- paper_title: Soilse: A decentralized approach to optimization of fluctuating urban traffic using Reinforcement Learning paper_content: Increasing traffic congestion is a major problem in urban areas, which incurs heavy economic and environmental costs in both developing and developed countries. Efficient urban traffic control (UTC) can help reduce traffic congestion. However, the increasing volume and the dynamic nature of urban traffic pose particular challenges to UTC. Reinforcement Learning (RL) has been shown to be a promising approach to efficient UTC. However, most existing work on RL-based UTC does not adequately address the fluctuating nature of urban traffic. This paper presents Soilse1, a decentralized RL-based UTC optimization scheme that includes a nonparametric pattern change detection mechanism to identify local traffic pattern changes that adversely affect an RL agent's performance. Hence, Soilse is adaptive as agents learn to optimize for different traffic patterns and responsive as agents can detect genuine traffic pattern changes and trigger relearning. We compare the performance of Soilse to two baselines, a fixed-time approach and a saturation balancing algorithm that emulates SCATS, a well-known UTC system. The comparison was performed based on a simulation of traffic in Dublin's inner city centre. Results from using our scheme show an approximate 35%–43% and 40%–54% better performance in terms of average vehicle waiting time and average number of vehicle stops respectively against the best baseline performance in our simulation. --- paper_title: A Collaborative Reinforcement Learning Approach to Urban Traffic Control Optimization paper_content: The high growth rate of vehicles per capita now poses a real challenge to efficient urban traffic control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly-dynamic nature of urban traffic. 
In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction's traffic in order to optimize traffic control. Global UTC optimization is achieved using a local adaptive round robin (ARR) phase switching model optimized using collaborative reinforcement learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timing based on the traffic pattern. We compare our approach to a non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin's inner city centre. We show that the ARR-CRL approach can provide significant improvement resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm. --- paper_title: Exploring Q-Learning Optimization in Traffic Signal Timing Plan Management paper_content: Traffic congestion often occurs across the entire traffic network of urban areas due to increasing traffic demand from the growing number of vehicles on the road. The problem may be alleviated by a good traffic signal timing plan, but most timing plans currently in use are not fully optimized for on-the-spot traffic conditions. Because intersections cannot learn from their past experience, they lack the ability to adapt to dynamic changes in traffic flow. The proposed Q-learning approach can manage the traffic signal timing plan more effectively by optimizing traffic flows. Q-learning combines rewards gained from past experience with estimates of future actions to determine the best possible actions. The proposed learning algorithm shows good performance and is able to improve the traffic signal timing plan for dynamic traffic flows within a traffic network. --- paper_title: Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto paper_content: Population is steadily increasing worldwide, resulting in intractable traffic congestion in dense urban areas. Adaptive traffic signal control (ATSC) has shown strong potential to effectively alleviate urban traffic congestion by adjusting signal timing plans in real time in response to traffic fluctuations to achieve desirable objectives (e.g., minimize delay). Efficient and robust ATSC can be designed using a multiagent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. Applying MARL approaches to the ATSC problem is associated with a few challenges, as agents typically react to changes in the environment at the individual level, but the overall behavior of all agents may not be optimal. This paper presents the development and evaluation of a novel system of multiagent reinforcement learning for integrated network of adaptive traffic signal controllers (MARLIN-ATSC).
MARLIN-ATSC offers two possible modes: 1) independent mode, where each intersection controller works independently of other agents; and 2) integrated mode, where each controller coordinates signal control actions with neighboring intersections. MARLIN-ATSC is tested on a large-scale simulated network of 59 intersections in the lower downtown core of the City of Toronto, ON, Canada, for the morning rush hour. The results show unprecedented reduction in the average intersection delay ranging from 27% in mode 1 to 39% in mode 2 at the network level and travel-time savings of 15% in mode 1 and 26% in mode 2, along the busiest routes in Downtown Toronto. --- paper_title: Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran paper_content: Abstract Traffic signal control plays a pivotal role in reducing traffic congestion. Traffic signals cannot be adequately controlled with conventional methods due to the high variations and complexity in traffic environments. In recent years, reinforcement learning (RL) has shown great potential for traffic signal control because of its high adaptability, flexibility, and scalability. However, designing RL-embedded traffic signal controllers (RLTSCs) for traffic systems with a high degree of realism is faced with several challenges, among others system disturbances and large state-action spaces are considered in this research. The contribution of the present work is founded on three features: (a) evaluating the robustness of different RLTSCs against system disturbances including incidents, jaywalking, and sensor noise, (b) handling a high-dimensional state-action space by both employing different continuous state RL algorithms and reducing the state-action space in order to improve the performance and learning speed of the system, and (c) presenting a detailed empirical study of traffic signals control of downtown Tehran through seven RL algorithms: discrete state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), continuous state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), and residual actor-critic( λ ). In this research, first a real-world microscopic traffic simulation of downtown Tehran is carried out, then four experiments are performed in order to find the best RLTSC with convincing robustness and strong performance. The results reveal that the RLTSC based on continuous state actor-critic( λ ) has the best performance. In addition, it is found that the best RLTSC leads to saving average travel time by 22% (at the presence of high system disturbances) when it is compared with an optimized fixed-time controller. --- paper_title: Deep Deterministic Policy Gradient for Urban Traffic Light Control paper_content: Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. 
In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section). --- paper_title: Distributed Geometric Fuzzy Multiagent Urban Traffic Signal Control paper_content: Rapid urbanization and the growing demand for faster transportation has led to heavy congestion in road traffic networks, necessitating the need for traffic-responsive intelligent signal control systems. The developed signal control system must be capable of determining the green time that minimizes the network-wide travel time delay based on limited information of the environment. This paper adopts a distributed multiagent-based approach to develop a traffic-responsive signal control system, i.e., the geometric fuzzy multiagent system (GFMAS), which is based on a geometric type-2 fuzzy inference system. GFMAS is capable of handling the various levels of uncertainty found in the inputs and rule base of the traffic signal controller. Simulation models of the agents designed in PARAMICS were tested on virtual road network replicating a section of the central business district in Singapore. A comprehensive analysis and comparison was performed against the existing traffic-control algorithms green link determining (GLIDE) and hierarchical multiagent system (HMS). The proposed GFMAS signal control outperformed both the benchmarks when tested for typical traffic-flow scenarios. Further tests show the superior performance of the proposed GFMAS in handling unplanned and planned incidents and obstructions. The promising results demonstrate the efficiency of the proposed multiagent architecture and scope for future development. --- paper_title: Traffic Light Control by Multiagent Reinforcement Learning Systems paper_content: Traffic light control is one of the main means of controlling road traffic. Improving traffic control is important because it can lead to higher traffic throughput and reduced traffic congestion. This chapter describes multiagent reinforcement learning techniques for automatic optimization of traffic light controllers. Such techniques are attractive because they can automatically discover efficient control strategies for complex tasks, such as traffic control, for which it is hard or impossible to compute optimal solutions directly and hard to develop hand-coded solutions. First, the general multi-agent reinforcement learning framework is described, which is used to control traffic lights in this work. In this framework, multiple local controllers (agents) are each responsible for the optimization of traffic lights around a single traffic junction, making use of locally perceived traffic state information (sensed cars on the road), a learned probabilistic model of car behavior, and a learned value function which indicates how traffic light decisions affect long-term utility, in terms of the average waiting time of cars. Next, three extensions are described which improve upon the basic framework in various ways: agents (traffic junction controllers) taking into account congestion information from neighboring agents; handling partial observability of traffic states; and coordinating the behavior of multiple agents by coordination graphs and the max-plus algorithm. 
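Most of the controllers surveyed in the entries above share the same value-based core: an agent observes a discretised traffic state (typically queue lengths per approach), picks the next green phase, and updates its value estimates from the delay it subsequently observes. The following sketch is a minimal, self-contained illustration of that core as a single-intersection tabular Q-learning agent; the environment interface (reset/step), the queue-based state discretisation, and the negative-total-queue reward are assumptions chosen for illustration rather than the design of any specific paper listed here.

```python
import random
from collections import defaultdict

class TabularSignalAgent:
    """Minimal tabular Q-learning agent for one intersection (illustrative only)."""

    def __init__(self, n_phases, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_phases = n_phases          # actions = which phase gets green next
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_phases)

    @staticmethod
    def discretise(queues, bin_size=4, max_bin=5):
        # Map raw queue lengths (vehicles per approach) to a small discrete state.
        return tuple(min(q // bin_size, max_bin) for q in queues)

    def act(self, state):
        # Epsilon-greedy phase selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_phases)
        values = self.q[state]
        return max(range(self.n_phases), key=values.__getitem__)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def train(env, episodes=100):
    """env is any object with reset() -> queues and step(phase) -> (queues, done); assumed here."""
    agent = TabularSignalAgent(n_phases=env.n_phases)
    for _ in range(episodes):
        queues = env.reset()
        state, done = agent.discretise(queues), False
        while not done:
            action = agent.act(state)
            next_queues, done = env.step(action)
            reward = -sum(next_queues)        # fewer queued vehicles = higher reward
            next_state = agent.discretise(next_queues)
            agent.update(state, action, reward, next_state)
            state = next_state
    return agent
```

The multi-agent systems above can largely be read as many such agents running in parallel, differing mainly in how the state is represented and in whether neighbouring agents exchange information.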
--- paper_title: Reinforcement learning with average cost for adaptive control of traffic lights at intersections paper_content: We propose for the first time two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation, while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We show performance comparisons of these algorithms on various network settings against a range of fixed timing algorithms, as well as against a Q-learning algorithm with full state representation that we also implement. We observe that whereas (as expected) on a two-junction corridor the full state representation algorithm shows the best results, this algorithm is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance. --- paper_title: The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment paper_content: The urban traffic self-adaptive control problem is dynamic and uncertain, so the states of the traffic environment are hard to observe. Efficient agents that control a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of previous works on this approach, each agent needed perfectly observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model of the TSCAs' interaction is built based on a nonzero-sum Markov game, which is applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting. --- paper_title: IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control paper_content: Intelligent traffic light control is critical for an efficient transportation system. While existing traffic lights are mostly operated by hand-crafted rules, an intelligent traffic light control system should be dynamically adjusted to real-time traffic. There is an emerging trend of using deep reinforcement learning techniques for traffic light control, and recent studies have shown promising results. However, existing studies have not yet tested the methods on real-world traffic data, and they focus only on studying the rewards without interpreting the policies. In this paper, we propose a more effective deep reinforcement learning model for traffic light control. We test our method on a large-scale real traffic dataset obtained from surveillance cameras. We also show some interesting case studies of policies learned from the real data.
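Several of the deep, value-function-based entries above (the feedforward Q-network used alongside LQF scheduling, the value-based deep agent, and IntelliLight) replace the Q-table with a neural network trained from replayed experience. The sketch below shows that general pattern with a small PyTorch Q-network and target network; the feature vector (e.g., lane queues plus a one-hot of the current phase), layer sizes, and hyperparameters are illustrative assumptions and do not reproduce the architecture of any surveyed system.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class PhaseQNet(nn.Module):
    """Small Q-network: traffic features in, one Q-value per signal phase out."""
    def __init__(self, n_features, n_phases, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_phases),
        )

    def forward(self, x):
        return self.net(x)

class DQNSignalController:
    def __init__(self, n_features, n_phases, gamma=0.99, lr=1e-3, epsilon=0.1):
        self.q_net = PhaseQNet(n_features, n_phases)
        self.target_net = PhaseQNet(n_features, n_phases)
        self.target_net.load_state_dict(self.q_net.state_dict())
        self.optim = torch.optim.Adam(self.q_net.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)          # experience replay buffer
        self.gamma, self.epsilon, self.n_phases = gamma, epsilon, n_phases

    def act(self, obs):
        # Epsilon-greedy over the network's Q-values for the current observation.
        if random.random() < self.epsilon:
            return random.randrange(self.n_phases)
        with torch.no_grad():
            q = self.q_net(torch.as_tensor(obs, dtype=torch.float32))
        return int(q.argmax())

    def remember(self, obs, action, reward, next_obs, done):
        self.replay.append((obs, action, reward, next_obs, done))

    def learn(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        obs, act, rew, nxt, done = map(list, zip(*batch))
        obs = torch.as_tensor(obs, dtype=torch.float32)
        nxt = torch.as_tensor(nxt, dtype=torch.float32)
        act = torch.as_tensor(act, dtype=torch.int64).unsqueeze(1)
        rew = torch.as_tensor(rew, dtype=torch.float32)
        done = torch.as_tensor(done, dtype=torch.float32)
        q_sa = self.q_net(obs).gather(1, act).squeeze(1)
        with torch.no_grad():
            # Bootstrapped target from the slowly-updated target network.
            target = rew + self.gamma * (1 - done) * self.target_net(nxt).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

    def sync_target(self):
        # Called periodically to copy online weights into the target network.
        self.target_net.load_state_dict(self.q_net.state_dict())
```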
--- paper_title: Distributed learning and multi-objectivity in traffic light control paper_content: Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against s... --- paper_title: Traffic light control in non-stationary environments based on multi agent Q-learning paper_content: In many urban areas where traffic congestion does not have the peak pattern, conventional traffic signal timing methods does not result in an efficient control. One alternative is to let traffic signal controllers learn how to adjust the lights based on the traffic situation. However this creates a classical non-stationary environment since each controller is adapting to the changes caused by other controllers. In multi-agent learning this is likely to be inefficient and computationally challenging, i.e., the efficiency decreases with the increase in the number of agents (controllers). In this paper, we model a relatively large traffic network as a multi-agent system and use techniques from multi-agent reinforcement learning. In particular, Q-learning is employed, where the average queue length in approaching links is used to estimate states. A parametric representation of the action space has made the method extendable to different types of intersection. The simulation results demonstrate that the proposed Q-learning outperformed the fixed time method under different traffic demands. --- paper_title: Traffic Signal Control Based on Reinforcement Learning with Graph Convolutional Neural Nets paper_content: Traffic signal control can mitigate traffic congestion and reduce travel time. A model-free reinforcement learning (RL) approach is a powerful framework for learning a responsive traffic control policy for short-term traffic demand changes without prior environmental knowledge. Previous RL approaches could handle high-dimensional feature space using a standard neural network, e.g., a convolutional neural network; however, to control traffic on a road network with multiple intersections, the geometric features between roads had to be created manually. Rather than using manually crafted geometric features, we developed an RL-based traffic signal control method that employs a graph convolutional neural network (GCNN). GCNNs can automatically extract features considering the traffic features between distant roads by stacking multiple neural network layers. We numerically evaluated the proposed method in a six-intersection environment. 
The results demonstrate that the proposed method can find comparable policies twice as fast as the conventional RL method with a neural network and can adapt to more extensive traffic demand changes. --- paper_title: Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control paper_content: Reinforcement learning (RL) is a promising data-driven approach for adaptive traffic signal control (ATSC) in complex urban traffic networks, and deep neural networks further enhance its learning power. However, centralized RL is infeasible for large-scale ATSC due to the extremely high dimension of the joint action space. Multi-agent RL (MARL) overcomes the scalability issue by distributing the global control to each local RL agent, but it introduces new challenges: now the environment becomes partially observable from the viewpoint of each local agent due to limited communication among agents. Most existing studies in MARL focus on designing efficient communication and coordination among traditional Q-learning agents. This paper presents, for the first time, a fully scalable and decentralized MARL algorithm for the state-of-the-art deep RL agent: advantage actor critic (A2C), within the context of ATSC. In particular, two methods are proposed to stabilize the learning procedure, by improving the observability and reducing the learning difficulty of each local agent. The proposed multi-agent A2C is compared against independent A2C and independent Q-learning algorithms, in both a large synthetic traffic grid and a large real-world traffic network of Monaco city, under simulated peak-hour traffic dynamics. Results demonstrate its optimality, robustness, and sample efficiency over other state-of-the-art decentralized MARL algorithms. --- paper_title: Reinforcement learning-based multi-agent system for network traffic signal control paper_content: A challenging application of artificial intelligence systems involves the scheduling of traffic signals in multi-intersection vehicular networks. This paper introduces a novel use of a multi-agent system and reinforcement learning (RL) framework to obtain an efficient traffic signal control policy. The latter is aimed at minimising the average delay, congestion and likelihood of intersection cross-blocking. A five-intersection traffic network has been studied in which each intersection is governed by an autonomous intelligent agent. Two types of agents, a central agent and an outbound agent, were employed. The outbound agents schedule traffic signals by following the longest-queue-first (LQF) algorithm, which has been proved to guarantee stability and fairness, and collaborate with the central agent by providing it local traffic statistics. The central agent learns a value function driven by its local and neighbours' traffic conditions. The novel methodology proposed here utilises the Q-Learning algorithm with a feedforward neural network for value function approximation. Experimental results clearly demonstrate the advantages of multi-agent RL-based control over LQF governed isolated single-intersection control, thus paving the way for efficient distributed traffic signal control in complex settings. --- paper_title: Hierarchical control of traffic signals using Q-learning with tile coding paper_content: Multi-agent systems are rapidly growing as powerful tools for Intelligent Transportation Systems (ITS). It is desirable that traffic signals control, as a part of ITS, is performed in a distributed model. 
Agent-based technologies can therefore be used efficiently for traffic signal control. For traffic networks composed of multiple intersections, distributed control achieves better results than centralized methods, and hierarchical structures are useful for decomposing the network into multiple sub-networks while providing a mechanism for distributed control of the traffic signals. In this paper, a two-level hierarchical control of traffic signals based on Q-learning is presented. Traffic signal controllers, located at intersections, can be seen as autonomous agents in the first level (at the bottom of the hierarchy) which use Q-learning to learn a control policy. The network is divided into regions, and an agent is assigned to control each region at the second level (top of the hierarchy). Due to the combinatorial explosion in the number of states and actions (i.e., features), the use of Q-learning is impractical at this level; therefore, in the top level, tile coding is used as a linear function approximation method. A network composed of 9 intersections arranged in a 3×3 grid is used for the simulation. Experimental results show that the proposed hierarchical control improves the Q-learning efficiency of the bottom-level agents. The impact of the parameters used in tile coding is also analyzed. --- paper_title: Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events paper_content: The transportation demand is rapidly growing in metropolises, resulting in chronic traffic congestion in dense downtown areas. Adaptive traffic signal control, as a principal part of intelligent transportation systems, has a primary role in effectively reducing traffic congestion by adapting in real time to changing traffic network dynamics. Reinforcement learning (RL) is an effective machine learning approach that has been applied to the design of adaptive traffic signal controllers. One of the most efficient and robust classes of RL algorithms is the continuous-state actor-critic family, which has the advantage of fast learning and the ability to generalize to new and unseen traffic conditions. These algorithms are utilized in this paper to design adaptive traffic signal controllers called actor-critic adaptive traffic signal controllers (A-CATs controllers). The contribution of the present work rests on the integration of three threads: (a) showing performance comparisons of both discrete and continuous A-CATs controllers in a traffic network with recurring congestion (24-h traffic demand) in the upper downtown core of Tehran city, (b) analyzing the effects of different traffic disruptions, including opportunistic pedestrian crossings, parking lanes, non-recurring congestion, and different levels of sensor noise, on the performance of A-CATs controllers, and (c) comparing the performance of different function approximators (tile coding and radial basis functions) on the learning of A-CATs controllers. To this end, first an agent-based traffic simulation of the study area is carried out. Then six different scenarios are conducted to find the best A-CATs controller that is robust enough against different traffic disruptions. We observe that the A-CATs controller based on radial basis function networks (RBF(5)) outperforms the others.
This controller is benchmarked against controllers of discrete state Q-learning, Bayesian Q-learning, fixed time and actuated controllers; and the results reveal that it consistently outperforms them. --- paper_title: Multiagent Reinforcement Learning for Urban Traffic Control using Coordination Graphs paper_content: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties. --- paper_title: Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning paper_content: Recent advances in combining deep neural network architectures with reinforcement learning (RL) techniques have shown promising potential results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this study, the authors built two kinds of RL algorithms: deep policy-gradient (PG) and value-function-based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The PG-based agent maps its observation directly to the control signal; however, the value-function-based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Their methods show promising results in a traffic network simulated in the simulation of urban mobility traffic simulator, without suffering from instability issues during the training process. --- paper_title: A Brief Survey of Deep Reinforcement Learning paper_content: Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. 
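As a contrast to the value-based sketches above, and as a minimal illustration of the policy-based stream referred to in the deep policy-gradient entry, the fragment below implements a REINFORCE-style policy-gradient controller that samples a signal phase from a learned distribution and updates the policy with Monte-Carlo returns. The environment helpers, network size, and return normalisation are assumptions for illustration only and are not taken from any surveyed paper.

```python
import torch
import torch.nn as nn

class PhasePolicy(nn.Module):
    """Stochastic policy: traffic features in, a distribution over signal phases out."""
    def __init__(self, n_features, n_phases, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_phases),
        )

    def forward(self, x):
        return torch.distributions.Categorical(logits=self.net(x))

def run_episode(env, policy):
    """Collect one episode; env.reset() -> obs and env.step(a) -> (obs, reward, done) are assumed helpers."""
    obs, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        dist = policy(torch.as_tensor(obs, dtype=torch.float32))
        action = dist.sample()
        obs, reward, done = env.step(int(action))
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    return log_probs, rewards

def reinforce_update(policy, optimiser, log_probs, rewards, gamma=0.99):
    # Monte-Carlo returns, then gradient ascent on the expected return.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.as_tensor(list(reversed(returns)), dtype=torch.float32)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # crude variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Typical driver (assumed environment):
#   policy = PhasePolicy(n_features=8, n_phases=4)
#   optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
#   log_probs, rewards = run_episode(env, policy)
#   reinforce_update(policy, optimiser, log_probs, rewards)
```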
Our survey will cover central algorithms in deep reinforcement learning, including the deep $Q$-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field. --- paper_title: Reinforcement Learning: A Survey paper_content: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. --- paper_title: An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control paper_content: Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field. --- paper_title: Soilse: A decentralized approach to optimization of fluctuating urban traffic using Reinforcement Learning paper_content: Increasing traffic congestion is a major problem in urban areas, which incurs heavy economic and environmental costs in both developing and developed countries. Efficient urban traffic control (UTC) can help reduce traffic congestion. However, the increasing volume and the dynamic nature of urban traffic pose particular challenges to UTC. Reinforcement Learning (RL) has been shown to be a promising approach to efficient UTC. However, most existing work on RL-based UTC does not adequately address the fluctuating nature of urban traffic. This paper presents Soilse1, a decentralized RL-based UTC optimization scheme that includes a nonparametric pattern change detection mechanism to identify local traffic pattern changes that adversely affect an RL agent's performance. Hence, Soilse is adaptive as agents learn to optimize for different traffic patterns and responsive as agents can detect genuine traffic pattern changes and trigger relearning. 
We compare the performance of Soilse to two baselines, a fixed-time approach and a saturation balancing algorithm that emulates SCATS, a well-known UTC system. The comparison was performed based on a simulation of traffic in Dublin's inner city centre. Results from using our scheme show an approximate 35%–43% and 40%–54% better performance in terms of average vehicle waiting time and average number of vehicle stops, respectively, against the best baseline performance in our simulation. --- paper_title: A Collaborative Reinforcement Learning Approach to Urban Traffic Control Optimization paper_content: The high growth rate of vehicles per capita now poses a real challenge to efficient urban traffic control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly-dynamic nature of urban traffic. In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction's traffic in order to optimize traffic control. Global UTC optimization is achieved using a local adaptive round robin (ARR) phase switching model optimized using collaborative reinforcement learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timing based on the traffic pattern. We compare our approach to a non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin's inner city centre. We show that the ARR-CRL approach can provide significant improvement resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm. --- paper_title: Exploring Q-Learning Optimization in Traffic Signal Timing Plan Management paper_content: Traffic congestion often occurs across the entire traffic network of urban areas due to increasing traffic demand from the growing number of vehicles on the road. The problem may be alleviated by a good traffic signal timing plan, but most timing plans currently in use are not fully optimized for on-the-spot traffic conditions. Because intersections cannot learn from their past experience, they lack the ability to adapt to dynamic changes in traffic flow. The proposed Q-learning approach can manage the traffic signal timing plan more effectively by optimizing traffic flows. Q-learning combines rewards gained from past experience with estimates of future actions to determine the best possible actions. The proposed learning algorithm shows good performance and is able to improve the traffic signal timing plan for dynamic traffic flows within a traffic network. --- paper_title: Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto paper_content: Population is steadily increasing worldwide, resulting in intractable traffic congestion in dense urban areas.
Adaptive traffic signal control (ATSC) has shown strong potential to effectively alleviate urban traffic congestion by adjusting signal timing plans in real time in response to traffic fluctuations to achieve desirable objectives (e.g., minimize delay). Efficient and robust ATSC can be designed using a multiagent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. Applying MARL approaches to the ATSC problem is associated with a few challenges as agents typically react to changes in the environment at the individual level, but the overall behavior of all agents may not be optimal. This paper presents the development and evaluation of a novel system of multiagent reinforcement learning for integrated network of adaptive traffic signal controllers (MARLIN-ATSC). MARLIN-ATSC offers two possible modes: 1) independent mode, where each intersection controller works independently of other agents; and 2) integrated mode, where each controller coordinates signal control actions with neighboring intersections. MARLIN-ATSC is tested on a large-scale simulated network of 59 intersections in the lower downtown core of the City of Toronto, ON, Canada, for the morning rush hour. The results show unprecedented reduction in the average intersection delay ranging from 27% in mode 1 to 39% in mode 2 at the network level and travel-time savings of 15% in mode 1 and 26% in mode 2, along the busiest routes in Downtown Toronto. --- paper_title: Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran paper_content: Abstract Traffic signal control plays a pivotal role in reducing traffic congestion. Traffic signals cannot be adequately controlled with conventional methods due to the high variations and complexity in traffic environments. In recent years, reinforcement learning (RL) has shown great potential for traffic signal control because of its high adaptability, flexibility, and scalability. However, designing RL-embedded traffic signal controllers (RLTSCs) for traffic systems with a high degree of realism is faced with several challenges, among others system disturbances and large state-action spaces are considered in this research. The contribution of the present work is founded on three features: (a) evaluating the robustness of different RLTSCs against system disturbances including incidents, jaywalking, and sensor noise, (b) handling a high-dimensional state-action space by both employing different continuous state RL algorithms and reducing the state-action space in order to improve the performance and learning speed of the system, and (c) presenting a detailed empirical study of traffic signals control of downtown Tehran through seven RL algorithms: discrete state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), continuous state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), and residual actor-critic( λ ). In this research, first a real-world microscopic traffic simulation of downtown Tehran is carried out, then four experiments are performed in order to find the best RLTSC with convincing robustness and strong performance. The results reveal that the RLTSC based on continuous state actor-critic( λ ) has the best performance. In addition, it is found that the best RLTSC leads to saving average travel time by 22% (at the presence of high system disturbances) when it is compared with an optimized fixed-time controller. 
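The discrete-versus-continuous comparisons in the entry above rely on linear value functions over tile-coding or radial-basis features, usually combined with eligibility traces (the (λ) variants). The sketch below illustrates that combination for a single controller: SARSA(λ) over a deliberately simplified, hand-rolled tile coder. The tiling scheme, hashing, and step-size convention are assumptions made to keep the example short; the surveyed controllers differ in these details.

```python
import numpy as np

class SimpleTileCoder:
    """Very small tile coder: several offset grids over [0, 1]^dims, hashed to indices."""
    def __init__(self, dims, tiles_per_dim=8, n_tilings=8, size=4096):
        self.dims, self.tiles, self.tilings, self.size = dims, tiles_per_dim, n_tilings, size
        rng = np.random.default_rng(0)
        self.offsets = rng.random((n_tilings, dims)) / tiles_per_dim

    def features(self, x):
        # x: state scaled to [0, 1]^dims; returns one active feature index per tiling.
        idx = []
        for t in range(self.tilings):
            coords = np.floor((np.asarray(x) + self.offsets[t]) * self.tiles).astype(int)
            idx.append(hash((t, *coords.tolist())) % self.size)
        return idx

class LinearSarsaLambda:
    """SARSA(lambda) with a linear value function over tile-coded features."""
    def __init__(self, coder, n_actions, alpha=0.1, gamma=0.95, lam=0.9, epsilon=0.05):
        self.coder, self.n_actions = coder, n_actions
        self.alpha = alpha / coder.tilings      # step size spread over active features
        self.gamma, self.lam, self.epsilon = gamma, lam, epsilon
        self.w = np.zeros((n_actions, coder.size))
        self.z = np.zeros_like(self.w)          # eligibility traces

    def q(self, feats, a):
        return self.w[a, feats].sum()

    def act(self, feats):
        if np.random.random() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax([self.q(feats, a) for a in range(self.n_actions)]))

    def step(self, feats, a, reward, next_feats, next_a, done):
        # TD error for the on-policy (SARSA) target, then trace-weighted update.
        delta = reward - self.q(feats, a)
        if not done:
            delta += self.gamma * self.q(next_feats, next_a)
        self.z *= self.gamma * self.lam          # decay all traces
        self.z[a, feats] = 1.0                   # replacing traces for active features
        self.w += self.alpha * delta * self.z
        if done:
            self.z[:] = 0.0
```

Swapping the tile coder for radial basis function features changes only how `features` is computed; the learning update stays the same, which is essentially the comparison carried out in the Tehran studies.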
--- paper_title: Developing adaptive traffic signal control by actor-critic and direct exploration methods paper_content: Designing efficient traffic signal controllers has always been an important concern in traffic engineering. This is owing to the complex and uncertain nature of traffic environments. Within such a ... --- paper_title: Deep Deterministic Policy Gradient for Urban Traffic Light Control paper_content: Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section). --- paper_title: Distributed Geometric Fuzzy Multiagent Urban Traffic Signal Control paper_content: Rapid urbanization and the growing demand for faster transportation has led to heavy congestion in road traffic networks, necessitating the need for traffic-responsive intelligent signal control systems. The developed signal control system must be capable of determining the green time that minimizes the network-wide travel time delay based on limited information of the environment. This paper adopts a distributed multiagent-based approach to develop a traffic-responsive signal control system, i.e., the geometric fuzzy multiagent system (GFMAS), which is based on a geometric type-2 fuzzy inference system. GFMAS is capable of handling the various levels of uncertainty found in the inputs and rule base of the traffic signal controller. Simulation models of the agents designed in PARAMICS were tested on virtual road network replicating a section of the central business district in Singapore. A comprehensive analysis and comparison was performed against the existing traffic-control algorithms green link determining (GLIDE) and hierarchical multiagent system (HMS). The proposed GFMAS signal control outperformed both the benchmarks when tested for typical traffic-flow scenarios. Further tests show the superior performance of the proposed GFMAS in handling unplanned and planned incidents and obstructions. The promising results demonstrate the efficiency of the proposed multiagent architecture and scope for future development. --- paper_title: Traffic Light Control by Multiagent Reinforcement Learning Systems paper_content: Traffic light control is one of the main means of controlling road traffic. Improving traffic control is important because it can lead to higher traffic throughput and reduced traffic congestion. This chapter describes multiagent reinforcement learning techniques for automatic optimization of traffic light controllers. 
Such techniques are attractive because they can automatically discover efficient control strategies for complex tasks, such as traffic control, for which it is hard or impossible to compute optimal solutions directly and hard to develop hand-coded solutions. First, the general multi-agent reinforcement learning framework is described, which is used to control traffic lights in this work. In this framework, multiple local controllers (agents) are each responsible for the optimization of traffic lights around a single traffic junction, making use of locally perceived traffic state information (sensed cars on the road), a learned probabilistic model of car behavior, and a learned value function which indicates how traffic light decisions affect long-term utility, in terms of the average waiting time of cars. Next, three extensions are described which improve upon the basic framework in various ways: agents (traffic junction controllers) taking into account congestion information from neighboring agents; handling partial observability of traffic states; and coordinating the behavior of multiple agents by coordination graphs and the max-plus algorithm. --- paper_title: Reinforcement learning with average cost for adaptive control of traffic lights at intersections paper_content: We propose for the first time two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We show performance comparisons on various network settings of these algorithms with a range of fixed timing algorithms, as well as a Q-learning algorithm with full state representation that we also implement. We observe that whereas (as expected) on a two-junction corridor, the full state representation algorithm shows the best results, this algorithm is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance. --- paper_title: Policy Gradient Methods for Reinforcement Learning with Function Approximation paper_content: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy. --- paper_title: The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment paper_content: Urban traffic self-adaptive control problem is dynamic and uncertain, so the states of traffic environment are hard to be observed. Efficient agent which controls a single intersection can be discovered automatically via multiagent reinforcement learning. 
However, in the majority of previous works on this approach, each agent needed perfectly observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model of the TSCAs' interaction is built based on a nonzero-sum Markov game, which is applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting. --- paper_title: IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control paper_content: Intelligent traffic light control is critical for an efficient transportation system. While existing traffic lights are mostly operated by hand-crafted rules, an intelligent traffic light control system should be dynamically adjusted to real-time traffic. There is an emerging trend of using deep reinforcement learning techniques for traffic light control, and recent studies have shown promising results. However, existing studies have not yet tested the methods on real-world traffic data, and they focus only on studying the rewards without interpreting the policies. In this paper, we propose a more effective deep reinforcement learning model for traffic light control. We test our method on a large-scale real traffic dataset obtained from surveillance cameras. We also show some interesting case studies of policies learned from the real data. --- paper_title: Distributed learning and multi-objectivity in traffic light control paper_content: Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against s... --- paper_title: Traffic light control in non-stationary environments based on multi agent Q-learning paper_content: In many urban areas where traffic congestion does not follow a peak pattern, conventional traffic signal timing methods do not result in efficient control. One alternative is to let traffic signal controllers learn how to adjust the lights based on the traffic situation.
However this creates a classical non-stationary environment since each controller is adapting to the changes caused by other controllers. In multi-agent learning this is likely to be inefficient and computationally challenging, i.e., the efficiency decreases with the increase in the number of agents (controllers). In this paper, we model a relatively large traffic network as a multi-agent system and use techniques from multi-agent reinforcement learning. In particular, Q-learning is employed, where the average queue length in approaching links is used to estimate states. A parametric representation of the action space has made the method extendable to different types of intersection. The simulation results demonstrate that the proposed Q-learning outperformed the fixed time method under different traffic demands. --- paper_title: Traffic Signal Control Based on Reinforcement Learning with Graph Convolutional Neural Nets paper_content: Traffic signal control can mitigate traffic congestion and reduce travel time. A model-free reinforcement learning (RL) approach is a powerful framework for learning a responsive traffic control policy for short-term traffic demand changes without prior environmental knowledge. Previous RL approaches could handle high-dimensional feature space using a standard neural network, e.g., a convolutional neural network; however, to control traffic on a road network with multiple intersections, the geometric features between roads had to be created manually. Rather than using manually crafted geometric features, we developed an RL-based traffic signal control method that employs a graph convolutional neural network (GCNN). GCNNs can automatically extract features considering the traffic features between distant roads by stacking multiple neural network layers. We numerically evaluated the proposed method in a six-intersection environment. The results demonstrate that the proposed method can find comparable policies twice as fast as the conventional RL method with a neural network and can adapt to more extensive traffic demand changes. --- paper_title: Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control paper_content: Reinforcement learning (RL) is a promising data-driven approach for adaptive traffic signal control (ATSC) in complex urban traffic networks, and deep neural networks further enhance its learning power. However, centralized RL is infeasible for large-scale ATSC due to the extremely high dimension of the joint action space. Multi-agent RL (MARL) overcomes the scalability issue by distributing the global control to each local RL agent, but it introduces new challenges: now the environment becomes partially observable from the viewpoint of each local agent due to limited communication among agents. Most existing studies in MARL focus on designing efficient communication and coordination among traditional Q-learning agents. This paper presents, for the first time, a fully scalable and decentralized MARL algorithm for the state-of-the-art deep RL agent: advantage actor critic (A2C), within the context of ATSC. In particular, two methods are proposed to stabilize the learning procedure, by improving the observability and reducing the learning difficulty of each local agent. The proposed multi-agent A2C is compared against independent A2C and independent Q-learning algorithms, in both a large synthetic traffic grid and a large real-world traffic network of Monaco city, under simulated peak-hour traffic dynamics. 
Results demonstrate its optimality, robustness, and sample efficiency over other state-of-the-art decentralized MARL algorithms. --- paper_title: Reinforcement learning-based multi-agent system for network traffic signal control paper_content: A challenging application of artificial intelligence systems involves the scheduling of traffic signals in multi-intersection vehicular networks. This paper introduces a novel use of a multi-agent system and reinforcement learning (RL) framework to obtain an efficient traffic signal control policy. The latter is aimed at minimising the average delay, congestion and likelihood of intersection cross-blocking. A five-intersection traffic network has been studied in which each intersection is governed by an autonomous intelligent agent. Two types of agents, a central agent and an outbound agent, were employed. The outbound agents schedule traffic signals by following the longest-queue-first (LQF) algorithm, which has been proved to guarantee stability and fairness, and collaborate with the central agent by providing it local traffic statistics. The central agent learns a value function driven by its local and neighbours' traffic conditions. The novel methodology proposed here utilises the Q-Learning algorithm with a feedforward neural network for value function approximation. Experimental results clearly demonstrate the advantages of multi-agent RL-based control over LQF governed isolated single-intersection control, thus paving the way for efficient distributed traffic signal control in complex settings. --- paper_title: Hierarchical control of traffic signals using Q-learning with tile coding paper_content: Multi-agent systems are rapidly growing as powerful tools for Intelligent Transportation Systems (ITS). It is desirable that traffic signals control, as a part of ITS, is performed in a distributed model. Therefore agent-based technologies can be efficiently used for traffic signals control. For traffic networks which are composed of multiple intersections, distributed control achieves better results in comparison to centralized methods. Hierarchical structures are useful to decompose the network into multiple sub-networks and provide a mechanism for distributed control of the traffic signals. In this paper, a two-level hierarchical control of traffic signals based on Q-learning is presented. Traffic signal controllers, located at intersections, can be seen as autonomous agents in the first level (at the bottom of the hierarchy) which use Q-learning to learn a control policy. The network is divided into some regions where an agent is assigned to control each region at the second level (top of the hierarchy). Due to the combinatorial explosion in the number of states and actions, i.e. features, the use of Q-learning is impractical. Therefore, in the top level, tile coding is used as a linear function approximation method. A network composed of 9 intersections arranged in a 3×3 grid is used for the simulation. Experimental results show that the proposed hierarchical control improves the Q-learning efficiency of the bottom level agents. The impact of the parameters used in tile coding is also analyzed. --- paper_title: Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events paper_content: The transportation demand is rapidly growing in metropolises, resulting in chronic traffic congestion in dense downtown areas.
Adaptive traffic signal control as the principal part of intelligent transportation systems has a primary role in effectively reducing traffic congestion by adapting in real time to changing traffic network dynamics. Reinforcement learning (RL) is an effective approach in machine learning that has been applied to designing adaptive traffic signal controllers. Among the most efficient and robust types of RL algorithms are continuous state actor-critic algorithms, which have the advantage of fast learning and the ability to generalize to new and unseen traffic conditions. These algorithms are utilized in this paper to design adaptive traffic signal controllers called actor-critic adaptive traffic signal controllers (A-CATs controllers). The contribution of the present work rests on the integration of three threads: (a) showing performance comparisons of both discrete and continuous A-CATs controllers in a traffic network with recurring congestion (24-h traffic demand) in the upper downtown core of Tehran city, (b) analyzing the effects of different traffic disruptions including opportunistic pedestrians crossing, parking lane, non-recurring congestion, and different levels of sensor noise on the performance of A-CATs controllers, and (c) comparing the performance of different function approximators (tile coding and radial basis function) on the learning of A-CATs controllers. To this end, first an agent-based traffic simulation of the study area is carried out. Then six different scenarios are conducted to find the best A-CATs controller that is robust enough against different traffic disruptions. We observe that the A-CATs controller based on radial basis function networks (RBF (5)) outperforms others. This controller is benchmarked against controllers of discrete state Q-learning, Bayesian Q-learning, fixed time and actuated controllers; and the results reveal that it consistently outperforms them. --- paper_title: Multiagent Reinforcement Learning for Urban Traffic Control using Coordination Graphs paper_content: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties.
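Several of the abstracts collected here describe essentially the same single-intersection setup: a learning agent observes discretized queue lengths, chooses which phase to serve, and receives a delay-related reward, with tabular Q-learning as the baseline learner. The minimal sketch below is included only to illustrate that recurring formulation; the toy arrival/discharge dynamics, the two-phase layout, the queue binning, and all hyperparameter values are illustrative assumptions and are not taken from any one of the cited papers.

    # Minimal illustrative sketch (assumptions as noted above): tabular Q-learning
    # for one signalized intersection with two phases and queue-based states.
    import random
    from collections import defaultdict

    N_PHASES = 2          # assumed phases: 0 = north-south green, 1 = east-west green
    MAX_QUEUE_BIN = 5     # queue lengths are capped/binned to keep the table small
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # assumed learning rate, discount, exploration rate

    def discretize(queues):
        # Map raw queue lengths (NS, EW) to a coarse discrete state.
        return tuple(min(q, MAX_QUEUE_BIN) for q in queues)

    def step(queues, phase):
        # Toy dynamics: random arrivals; the approach with green discharges up to
        # 3 vehicles per step. Reward is the negative total queue (a delay proxy).
        arrivals = [random.randint(0, 2), random.randint(0, 2)]
        queues = [q + a for q, a in zip(queues, arrivals)]
        queues[phase] = max(0, queues[phase] - 3)
        return queues, -sum(queues)

    Q = defaultdict(float)  # Q[(state, action)]; missing entries default to 0.0

    def choose_action(state):
        # Epsilon-greedy selection over the available phases.
        if random.random() < EPS:
            return random.randrange(N_PHASES)
        return max(range(N_PHASES), key=lambda a: Q[(state, a)])

    queues = [0, 0]
    for _ in range(50000):
        s = discretize(queues)
        a = choose_action(s)
        queues, r = step(queues, a)
        s2 = discretize(queues)
        best_next = max(Q[(s2, a2)] for a2 in range(N_PHASES))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # standard Q-learning update

    print("Greedy phase when the NS queue is full and EW is empty:",
          max(range(N_PHASES), key=lambda a: Q[((MAX_QUEUE_BIN, 0), a)]))

The multi-agent variants surveyed in the entries above and below replace this single table with one learner per intersection and add some form of coordination on top of the same basic loop, for example shared observations, max-plus message passing, or joint-action learning.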
--- paper_title: Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning paper_content: Recent advances in combining deep neural network architectures with reinforcement learning (RL) techniques have shown promising potential results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this study, the authors built two kinds of RL algorithms: deep policy-gradient (PG) and value-function-based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The PG-based agent maps its observation directly to the control signal; however, the value-function-based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Their methods show promising results in a traffic network simulated in the simulation of urban mobility traffic simulator, without suffering from instability issues during the training process. --- paper_title: An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control paper_content: Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field. --- paper_title: Soilse: A decentralized approach to optimization of fluctuating urban traffic using Reinforcement Learning paper_content: Increasing traffic congestion is a major problem in urban areas, which incurs heavy economic and environmental costs in both developing and developed countries. Efficient urban traffic control (UTC) can help reduce traffic congestion. However, the increasing volume and the dynamic nature of urban traffic pose particular challenges to UTC. Reinforcement Learning (RL) has been shown to be a promising approach to efficient UTC. However, most existing work on RL-based UTC does not adequately address the fluctuating nature of urban traffic. This paper presents Soilse1, a decentralized RL-based UTC optimization scheme that includes a nonparametric pattern change detection mechanism to identify local traffic pattern changes that adversely affect an RL agent's performance. Hence, Soilse is adaptive as agents learn to optimize for different traffic patterns and responsive as agents can detect genuine traffic pattern changes and trigger relearning. We compare the performance of Soilse to two baselines, a fixed-time approach and a saturation balancing algorithm that emulates SCATS, a well-known UTC system. The comparison was performed based on a simulation of traffic in Dublin's inner city centre. 
Results from using our scheme show an approximate 35%–43% and 40%–54% better performance in terms of average vehicle waiting time and average number of vehicle stops respectively against the best baseline performance in our simulation. --- paper_title: A Collaborative Reinforcement Learning Approach to Urban Traffic Control Optimization paper_content: The high growth rate of vehicles per capita now poses a real challenge to efficient urban traffic control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly-dynamic nature of urban traffic. In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction's traffic in order to optimize traffic control. Global UTC optimization is achieved using a local adaptive round robin (ARR) phase switching model optimized using collaborative reinforcement learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timing based on the traffic pattern. We compare our approach to non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin's inner city centre. We show that the ARR-CRL approach can provide significant improvement resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm. --- paper_title: Exploring Q-Learning Optimization in Traffic Signal Timing Plan Management paper_content: Traffic congestions often occur within the entire traffic network of the urban areas due to the increasing of traffic demands by the outnumbered vehicles on road. The problem may be solved by a good traffic signal timing plan, but unfortunately most of the timing plans available currently are not fully optimized based on the on spot traffic conditions. The incapability of the traffic intersections to learn from their past experiences has cost them the lack of ability to adapt into the dynamic changes of the traffic flow. The proposed Q-learning approach can manage the traffic signal timing plan more effectively via optimization of the traffic flows. Q-learning gains rewards from its past experiences including its future actions to learn from its experience and determine the best possible actions. The proposed learning algorithm shows a good valuable performance that able to improve the traffic signal timing plan for the dynamic traffic flows within a traffic network. --- paper_title: Multiagent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC): Methodology and Large-Scale Application on Downtown Toronto paper_content: Population is steadily increasing worldwide, resulting in intractable traffic congestion in dense urban areas. Adaptive traffic signal control (ATSC) has shown strong potential to effectively alleviate urban traffic congestion by adjusting signal timing plans in real time in response to traffic fluctuations to achieve desirable objectives (e.g., minimize delay). Efficient and robust ATSC can be designed using a multiagent reinforcement learning (MARL) approach in which each controller (agent) is responsible for the control of traffic lights around a single traffic junction. 
Applying MARL approaches to the ATSC problem is associated with a few challenges as agents typically react to changes in the environment at the individual level, but the overall behavior of all agents may not be optimal. This paper presents the development and evaluation of a novel system of multiagent reinforcement learning for integrated network of adaptive traffic signal controllers (MARLIN-ATSC). MARLIN-ATSC offers two possible modes: 1) independent mode, where each intersection controller works independently of other agents; and 2) integrated mode, where each controller coordinates signal control actions with neighboring intersections. MARLIN-ATSC is tested on a large-scale simulated network of 59 intersections in the lower downtown core of the City of Toronto, ON, Canada, for the morning rush hour. The results show unprecedented reduction in the average intersection delay ranging from 27% in mode 1 to 39% in mode 2 at the network level and travel-time savings of 15% in mode 1 and 26% in mode 2, along the busiest routes in Downtown Toronto. --- paper_title: Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran paper_content: Abstract Traffic signal control plays a pivotal role in reducing traffic congestion. Traffic signals cannot be adequately controlled with conventional methods due to the high variations and complexity in traffic environments. In recent years, reinforcement learning (RL) has shown great potential for traffic signal control because of its high adaptability, flexibility, and scalability. However, designing RL-embedded traffic signal controllers (RLTSCs) for traffic systems with a high degree of realism is faced with several challenges, among others system disturbances and large state-action spaces are considered in this research. The contribution of the present work is founded on three features: (a) evaluating the robustness of different RLTSCs against system disturbances including incidents, jaywalking, and sensor noise, (b) handling a high-dimensional state-action space by both employing different continuous state RL algorithms and reducing the state-action space in order to improve the performance and learning speed of the system, and (c) presenting a detailed empirical study of traffic signals control of downtown Tehran through seven RL algorithms: discrete state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), continuous state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), and residual actor-critic( λ ). In this research, first a real-world microscopic traffic simulation of downtown Tehran is carried out, then four experiments are performed in order to find the best RLTSC with convincing robustness and strong performance. The results reveal that the RLTSC based on continuous state actor-critic( λ ) has the best performance. In addition, it is found that the best RLTSC leads to saving average travel time by 22% (at the presence of high system disturbances) when it is compared with an optimized fixed-time controller. --- paper_title: Deep Deterministic Policy Gradient for Urban Traffic Light Control paper_content: Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. 
One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section). --- paper_title: An agent-based learning towards decentralized and coordinated traffic signal control paper_content: Adaptive traffic signal control is a promising technique for alleviating traffic congestion. Reinforcement Learning (RL) has the potential to tackle the optimal traffic control problem for a single agent. However, the ultimate goal is to develop integrated traffic control for multiple intersections. Integrated traffic control can be efficiently achieved using decentralized controllers. Multi-Agent Reinforcement Learning (MARL) is an extension of RL techniques that makes it possible to decentralize multiple agents in a non-stationary environments. Most of the studies in the field of traffic signal control consider a stationary environment, an approach whose shortcomings are highlighted in this paper. A Q-Learning-based acyclic signal control system that uses a variable phasing sequence is developed. To investigate the appropriate state model for different traffic conditions, three models were developed, each with different state representation. The models were tested on a typical multiphase intersection to minimize the vehicle delay and were compared to the pre-timed control strategy as a benchmark. The Q-Learning control system consistently outperformed the widely used Webster pre-timed optimized signal control strategy under various traffic conditions. --- paper_title: Distributed Geometric Fuzzy Multiagent Urban Traffic Signal Control paper_content: Rapid urbanization and the growing demand for faster transportation has led to heavy congestion in road traffic networks, necessitating the need for traffic-responsive intelligent signal control systems. The developed signal control system must be capable of determining the green time that minimizes the network-wide travel time delay based on limited information of the environment. This paper adopts a distributed multiagent-based approach to develop a traffic-responsive signal control system, i.e., the geometric fuzzy multiagent system (GFMAS), which is based on a geometric type-2 fuzzy inference system. GFMAS is capable of handling the various levels of uncertainty found in the inputs and rule base of the traffic signal controller. Simulation models of the agents designed in PARAMICS were tested on virtual road network replicating a section of the central business district in Singapore. A comprehensive analysis and comparison was performed against the existing traffic-control algorithms green link determining (GLIDE) and hierarchical multiagent system (HMS). The proposed GFMAS signal control outperformed both the benchmarks when tested for typical traffic-flow scenarios. 
Further tests show the superior performance of the proposed GFMAS in handling unplanned and planned incidents and obstructions. The promising results demonstrate the efficiency of the proposed multiagent architecture and scope for future development. --- paper_title: Traffic Light Control by Multiagent Reinforcement Learning Systems paper_content: Traffic light control is one of the main means of controlling road traffic. Improving traffic control is important because it can lead to higher traffic throughput and reduced traffic congestion. This chapter describes multiagent reinforcement learning techniques for automatic optimization of traffic light controllers. Such techniques are attractive because they can automatically discover efficient control strategies for complex tasks, such as traffic control, for which it is hard or impossible to compute optimal solutions directly and hard to develop hand-coded solutions. First, the general multi-agent reinforcement learning framework is described, which is used to control traffic lights in this work. In this framework, multiple local controllers (agents) are each responsible for the optimization of traffic lights around a single traffic junction, making use of locally perceived traffic state information (sensed cars on the road), a learned probabilistic model of car behavior, and a learned value function which indicates how traffic light decisions affect long-term utility, in terms of the average waiting time of cars. Next, three extensions are described which improve upon the basic framework in various ways: agents (traffic junction controllers) taking into account congestion information from neighboring agents; handling partial observability of traffic states; and coordinating the behavior of multiple agents by coordination graphs and the max-plus algorithm. --- paper_title: Reinforcement learning with average cost for adaptive control of traffic lights at intersections paper_content: We propose for the first time two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We show performance comparisons on various network settings of these algorithms with a range of fixed timing algorithms, as well as a Q-learning algorithm with full state representation that we also implement. We observe that whereas (as expected) on a two-junction corridor, the full state representation algorithm shows the best results, this algorithm is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance. --- paper_title: A Brief Survey of Deep Reinforcement Learning paper_content: Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world. Currently, deep learning is enabling reinforcement learning to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. 
In this survey, we begin with an introduction to the general field of reinforcement learning, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep reinforcement learning, including the deep $Q$-network, trust region policy optimisation, and asynchronous advantage actor-critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via reinforcement learning. To conclude, we describe several current areas of research within the field. --- paper_title: The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment paper_content: Urban traffic self-adaptive control problem is dynamic and uncertain, so the states of traffic environment are hard to be observed. Efficient agent which controls a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of the previous works on this approach, each agent needed perfect observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model for TSCAs’ interaction is built based on nonzero-sum markov game which has been applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under the joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in realistic traffic self-adaptive control setting. --- paper_title: IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control paper_content: The intelligent traffic light control is critical for an efficient transportation system. While existing traffic lights are mostly operated by hand-crafted rules, an intelligent traffic light control system should be dynamically adjusted to real-time traffic. There is an emerging trend of using deep reinforcement learning technique for traffic light control and recent studies have shown promising results. However, existing studies have not yet tested the methods on the real-world traffic data and they only focus on studying the rewards without interpreting the policies. In this paper, we propose a more effective deep reinforcement learning model for traffic light control. We test our method on a large-scale real traffic dataset obtained from surveillance cameras. We also show some interesting case studies of policies learned from the real data. --- paper_title: Distributed learning and multi-objectivity in traffic light control paper_content: Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. 
Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against s... --- paper_title: Traffic light control in non-stationary environments based on multi agent Q-learning paper_content: In many urban areas where traffic congestion does not have the peak pattern, conventional traffic signal timing methods does not result in an efficient control. One alternative is to let traffic signal controllers learn how to adjust the lights based on the traffic situation. However this creates a classical non-stationary environment since each controller is adapting to the changes caused by other controllers. In multi-agent learning this is likely to be inefficient and computationally challenging, i.e., the efficiency decreases with the increase in the number of agents (controllers). In this paper, we model a relatively large traffic network as a multi-agent system and use techniques from multi-agent reinforcement learning. In particular, Q-learning is employed, where the average queue length in approaching links is used to estimate states. A parametric representation of the action space has made the method extendable to different types of intersection. The simulation results demonstrate that the proposed Q-learning outperformed the fixed time method under different traffic demands. --- paper_title: Traffic Signal Control Based on Reinforcement Learning with Graph Convolutional Neural Nets paper_content: Traffic signal control can mitigate traffic congestion and reduce travel time. A model-free reinforcement learning (RL) approach is a powerful framework for learning a responsive traffic control policy for short-term traffic demand changes without prior environmental knowledge. Previous RL approaches could handle high-dimensional feature space using a standard neural network, e.g., a convolutional neural network; however, to control traffic on a road network with multiple intersections, the geometric features between roads had to be created manually. Rather than using manually crafted geometric features, we developed an RL-based traffic signal control method that employs a graph convolutional neural network (GCNN). GCNNs can automatically extract features considering the traffic features between distant roads by stacking multiple neural network layers. We numerically evaluated the proposed method in a six-intersection environment. The results demonstrate that the proposed method can find comparable policies twice as fast as the conventional RL method with a neural network and can adapt to more extensive traffic demand changes. --- paper_title: Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control paper_content: Reinforcement learning (RL) is a promising data-driven approach for adaptive traffic signal control (ATSC) in complex urban traffic networks, and deep neural networks further enhance its learning power. However, centralized RL is infeasible for large-scale ATSC due to the extremely high dimension of the joint action space. 
Multi-agent RL (MARL) overcomes the scalability issue by distributing the global control to each local RL agent, but it introduces new challenges: now the environment becomes partially observable from the viewpoint of each local agent due to limited communication among agents. Most existing studies in MARL focus on designing efficient communication and coordination among traditional Q-learning agents. This paper presents, for the first time, a fully scalable and decentralized MARL algorithm for the state-of-the-art deep RL agent: advantage actor critic (A2C), within the context of ATSC. In particular, two methods are proposed to stabilize the learning procedure, by improving the observability and reducing the learning difficulty of each local agent. The proposed multi-agent A2C is compared against independent A2C and independent Q-learning algorithms, in both a large synthetic traffic grid and a large real-world traffic network of Monaco city, under simulated peak-hour traffic dynamics. Results demonstrate its optimality, robustness, and sample efficiency over other state-of-the-art decentralized MARL algorithms. --- paper_title: Reinforcement learning-based multi-agent system for network traffic signal control paper_content: A challenging application of artificial intelligence systems involves the scheduling of traffic signals in multi-intersection vehicular networks. This paper introduces a novel use of a multi-agent system and reinforcement learning (RL) framework to obtain an efficient traffic signal control policy. The latter is aimed at minimising the average delay, congestion and likelihood of intersection cross-blocking. A five-intersection traffic network has been studied in which each intersection is governed by an autonomous intelligent agent. Two types of agents, a central agent and an outbound agent, were employed. The outbound agents schedule traffic signals by following the longest-queue-first (LQF) algorithm, which has been proved to guarantee stability and fairness, and collaborate with the central agent by providing it local traffic statistics. The central agent learns a value function driven by its local and neighbours' traffic conditions. The novel methodology proposed here utilises the Q-Learning algorithm with a feedforward neural network for value function approximation. Experimental results clearly demonstrate the advantages of multi-agent RL-based control over LQF governed isolated single-intersection control, thus paving the way for efficient distributed traffic signal control in complex settings. --- paper_title: Hierarchical control of traffic signals using Q-learning with tile coding paper_content: Multi-agent systems are rapidly growing as powerful tools for Intelligent Transportation Systems (ITS). It is desirable that traffic signals control, as a part of ITS, is performed in a distributed model. Therefore agent-based technologies can be efficiently used for traffic signals control. For traffic networks which are composed of multiple intersections, distributed control achieves better results in comparison to centralized methods. Hierarchical structures are useful to decompose the network into multiple sub-networks and provide a mechanism for distributed control of the traffic signals. In this paper, a two-level hierarchical control of traffic signals based on Q-learning is presented.
Traffic signal controllers, located at intersections, can be seen as autonomous agents in the first level (at the bottom of the hierarchy) which use Q-learning to learn a control policy. The network is divided into some regions where an agent is assigned to control each region at the second level (top of the hierarchy). Due to the combinatorial explosion in the number of states and actions, i.e. features, the use of Q-learning is impractical. Therefore, in the top level, tile coding is used as a linear function approximation method. A network composed of 9 intersections arranged in a 3×3 grid is used for the simulation. Experimental results show that the proposed hierarchical control improves the Q-learning efficiency of the bottom level agents. The impact of the parameters used in tile coding is also analyzed. --- paper_title: Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events paper_content: The transportation demand is rapidly growing in metropolises, resulting in chronic traffic congestion in dense downtown areas.
Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties. --- paper_title: Traffic Light Control Using Deep Policy-Gradient and Value-Function Based Reinforcement Learning paper_content: Recent advances in combining deep neural network architectures with reinforcement learning (RL) techniques have shown promising potential results in solving complex control problems with high-dimensional state and action spaces. Inspired by these successes, in this study, the authors built two kinds of RL algorithms: deep policy-gradient (PG) and value-function-based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The PG-based agent maps its observation directly to the control signal; however, the value-function-based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Their methods show promising results in a traffic network simulated in the simulation of urban mobility traffic simulator, without suffering from instability issues during the training process. --- paper_title: An Experimental Review of Reinforcement Learning Algorithms for Adaptive Traffic Signal Control paper_content: Urban traffic congestion has become a serious issue, and improving the flow of traffic through cities is critical for environmental, social and economic reasons. Improvements in Adaptive Traffic Signal Control (ATSC) have a pivotal role to play in the future development of Smart Cities and in the alleviation of traffic congestion. Here we describe an autonomic method for ATSC, namely, reinforcement learning (RL). This chapter presents a comprehensive review of the applications of RL to the traffic control problem to date, along with a case study that showcases our developing multi-agent traffic control architecture. Three different RL algorithms are presented and evaluated experimentally. We also look towards the future and discuss some important challenges that still need to be addressed in this field. --- paper_title: Soilse: A decentralized approach to optimization of fluctuating urban traffic using Reinforcement Learning paper_content: Increasing traffic congestion is a major problem in urban areas, which incurs heavy economic and environmental costs in both developing and developed countries. 
Efficient urban traffic control (UTC) can help reduce traffic congestion. However, the increasing volume and the dynamic nature of urban traffic pose particular challenges to UTC. Reinforcement Learning (RL) has been shown to be a promising approach to efficient UTC. However, most existing work on RL-based UTC does not adequately address the fluctuating nature of urban traffic. This paper presents Soilse1, a decentralized RL-based UTC optimization scheme that includes a nonparametric pattern change detection mechanism to identify local traffic pattern changes that adversely affect an RL agent's performance. Hence, Soilse is adaptive as agents learn to optimize for different traffic patterns and responsive as agents can detect genuine traffic pattern changes and trigger relearning. We compare the performance of Soilse to two baselines, a fixed-time approach and a saturation balancing algorithm that emulates SCATS, a well-known UTC system. The comparison was performed based on a simulation of traffic in Dublin's inner city centre. Results from using our scheme show an approximate 35%–43% and 40%–54% better performance in terms of average vehicle waiting time and average number of vehicle stops respectively against the best baseline performance in our simulation. --- paper_title: A Collaborative Reinforcement Learning Approach to Urban Traffic Control Optimization paper_content: The high growth rate of vehicles per capita now poses a real challenge to efficient urban traffic control (UTC). An efficient solution to UTC must be adaptive in order to deal with the highly-dynamic nature of urban traffic. In the near future, global positioning systems and vehicle-to-vehicle/infrastructure communication may provide a more detailed local view of the traffic situation that could be employed for better global UTC optimization. In this paper we describe the design of a next-generation UTC system that exploits such local knowledge about a junction's traffic in order to optimize traffic control. Global UTC optimization is achieved using a local adaptive round robin (ARR) phase switching model optimized using collaborative reinforcement learning (CRL). The design employs an ARR-CRL-based agent controller for each signalized junction that collaborates with neighbouring agents in order to learn appropriate phase timing based on the traffic pattern. We compare our approach to non-adaptive fixed-time UTC system and to a saturation balancing algorithm in a large-scale simulation of traffic in Dublin's inner city centre. We show that the ARR-CRL approach can provide significant improvement resulting in up to ~57% lower average waiting time per vehicle compared to the saturation balancing algorithm. --- paper_title: Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown Tehran paper_content: Abstract Traffic signal control plays a pivotal role in reducing traffic congestion. Traffic signals cannot be adequately controlled with conventional methods due to the high variations and complexity in traffic environments. In recent years, reinforcement learning (RL) has shown great potential for traffic signal control because of its high adaptability, flexibility, and scalability. However, designing RL-embedded traffic signal controllers (RLTSCs) for traffic systems with a high degree of realism is faced with several challenges, among others system disturbances and large state-action spaces are considered in this research. 
The contribution of the present work is founded on three features: (a) evaluating the robustness of different RLTSCs against system disturbances including incidents, jaywalking, and sensor noise, (b) handling a high-dimensional state-action space by both employing different continuous state RL algorithms and reducing the state-action space in order to improve the performance and learning speed of the system, and (c) presenting a detailed empirical study of traffic signals control of downtown Tehran through seven RL algorithms: discrete state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), continuous state Q-learning( λ ), SARSA( λ ), actor-critic( λ ), and residual actor-critic( λ ). In this research, first a real-world microscopic traffic simulation of downtown Tehran is carried out, then four experiments are performed in order to find the best RLTSC with convincing robustness and strong performance. The results reveal that the RLTSC based on continuous state actor-critic( λ ) has the best performance. In addition, it is found that the best RLTSC leads to saving average travel time by 22% (at the presence of high system disturbances) when it is compared with an optimized fixed-time controller. --- paper_title: Deep Deterministic Policy Gradient for Urban Traffic Light Control paper_content: Traffic light timing optimization is still an active line of research despite the wealth of scientific literature on the topic, and the problem remains unsolved for any non-toy scenario. One of the key issues with traffic light optimization is the large scale of the input information that is available for the controlling agent, namely all the traffic data that is continually sampled by the traffic detectors that cover the urban network. This issue has in the past forced researchers to focus on agents that work on localized parts of the traffic network, typically on individual intersections, and to coordinate every individual agent in a multi-agent setup. In order to overcome the large scale of the available state information, we propose to rely on the ability of deep Learning approaches to handle large input spaces, in the form of Deep Deterministic Policy Gradient (DDPG) algorithm. We performed several experiments with a range of models, from the very simple one (one intersection) to the more complex one (a big city section). --- paper_title: An agent-based learning towards decentralized and coordinated traffic signal control paper_content: Adaptive traffic signal control is a promising technique for alleviating traffic congestion. Reinforcement Learning (RL) has the potential to tackle the optimal traffic control problem for a single agent. However, the ultimate goal is to develop integrated traffic control for multiple intersections. Integrated traffic control can be efficiently achieved using decentralized controllers. Multi-Agent Reinforcement Learning (MARL) is an extension of RL techniques that makes it possible to decentralize multiple agents in a non-stationary environments. Most of the studies in the field of traffic signal control consider a stationary environment, an approach whose shortcomings are highlighted in this paper. A Q-Learning-based acyclic signal control system that uses a variable phasing sequence is developed. To investigate the appropriate state model for different traffic conditions, three models were developed, each with different state representation. 
The models were tested on a typical multiphase intersection to minimize the vehicle delay and were compared to the pre-timed control strategy as a benchmark. The Q-Learning control system consistently outperformed the widely used Webster pre-timed optimized signal control strategy under various traffic conditions. --- paper_title: Distributed Geometric Fuzzy Multiagent Urban Traffic Signal Control paper_content: Rapid urbanization and the growing demand for faster transportation has led to heavy congestion in road traffic networks, necessitating the need for traffic-responsive intelligent signal control systems. The developed signal control system must be capable of determining the green time that minimizes the network-wide travel time delay based on limited information of the environment. This paper adopts a distributed multiagent-based approach to develop a traffic-responsive signal control system, i.e., the geometric fuzzy multiagent system (GFMAS), which is based on a geometric type-2 fuzzy inference system. GFMAS is capable of handling the various levels of uncertainty found in the inputs and rule base of the traffic signal controller. Simulation models of the agents designed in PARAMICS were tested on virtual road network replicating a section of the central business district in Singapore. A comprehensive analysis and comparison was performed against the existing traffic-control algorithms green link determining (GLIDE) and hierarchical multiagent system (HMS). The proposed GFMAS signal control outperformed both the benchmarks when tested for typical traffic-flow scenarios. Further tests show the superior performance of the proposed GFMAS in handling unplanned and planned incidents and obstructions. The promising results demonstrate the efficiency of the proposed multiagent architecture and scope for future development. --- paper_title: Learning Multiagent Communication with Backpropagation paper_content: Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand. --- paper_title: Reinforcement learning with average cost for adaptive control of traffic lights at intersections paper_content: We propose for the first time two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We show performance comparisons on various network settings of these algorithms with a range of fixed timing algorithms, as well as a Q-learning algorithm with full state representation that we also implement. 
We observe that whereas (as expected) on a two-junction corridor, the full state representation algorithm shows the best results, this algorithm is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance. --- paper_title: The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment paper_content: Urban traffic self-adaptive control problem is dynamic and uncertain, so the states of traffic environment are hard to be observed. Efficient agent which controls a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of the previous works on this approach, each agent needed perfect observed information when interacting with the environment and learned individually with less efficient coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model for TSCAs’ interaction is built based on nonzero-sum markov game which has been applied to let TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under the joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. The simulation results show that the proposed method is convergent and effective in realistic traffic self-adaptive control setting. --- paper_title: IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control paper_content: The intelligent traffic light control is critical for an efficient transportation system. While existing traffic lights are mostly operated by hand-crafted rules, an intelligent traffic light control system should be dynamically adjusted to real-time traffic. There is an emerging trend of using deep reinforcement learning technique for traffic light control and recent studies have shown promising results. However, existing studies have not yet tested the methods on the real-world traffic data and they only focus on studying the rewards without interpreting the policies. In this paper, we propose a more effective deep reinforcement learning model for traffic light control. We test our method on a large-scale real traffic dataset obtained from surveillance cameras. We also show some interesting case studies of policies learned from the real data. --- paper_title: Distributed learning and multi-objectivity in traffic light control paper_content: Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, the interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation approaches (DCOP) have also been shown to be successful, but are limited to cases where the traffic flows are known. 
The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning in the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each, comparing them against s... --- paper_title: Traffic light control in non-stationary environments based on multi agent Q-learning paper_content: In many urban areas where traffic congestion does not have the peak pattern, conventional traffic signal timing methods do not result in efficient control. One alternative is to let traffic signal controllers learn how to adjust the lights based on the traffic situation. However, this creates a classical non-stationary environment since each controller is adapting to the changes caused by other controllers. In multi-agent learning this is likely to be inefficient and computationally challenging, i.e., the efficiency decreases with the increase in the number of agents (controllers). In this paper, we model a relatively large traffic network as a multi-agent system and use techniques from multi-agent reinforcement learning. In particular, Q-learning is employed, where the average queue length in approaching links is used to estimate states. A parametric representation of the action space has made the method extendable to different types of intersection. The simulation results demonstrate that the proposed Q-learning outperformed the fixed time method under different traffic demands. --- paper_title: The Dynamics of Reinforcement Learning in Cooperative Multiagent Systems paper_content: Reinforcement learning can provide a robust and natural means for agents to learn how to coordinate their action choices in multi agent systems. We examine some of the factors that can influence the dynamics of the learning process in such a setting. We first distinguish reinforcement learners that are unaware of (or ignore) the presence of other agents from those that explicitly attempt to learn the value of joint actions and the strategies of their counterparts. We study (a simple form of) Q-learning in cooperative multi agent systems under these two perspectives, focusing on the influence of game structure and exploration strategies on convergence to (optimal and suboptimal) Nash equilibria. We then propose alternative optimistic exploration strategies that increase the likelihood of convergence to an optimal equilibrium. --- paper_title: Traffic Signal Control Based on Reinforcement Learning with Graph Convolutional Neural Nets paper_content: Traffic signal control can mitigate traffic congestion and reduce travel time. A model-free reinforcement learning (RL) approach is a powerful framework for learning a responsive traffic control policy for short-term traffic demand changes without prior environmental knowledge. Previous RL approaches could handle high-dimensional feature space using a standard neural network, e.g., a convolutional neural network; however, to control traffic on a road network with multiple intersections, the geometric features between roads had to be created manually. Rather than using manually crafted geometric features, we developed an RL-based traffic signal control method that employs a graph convolutional neural network (GCNN). GCNNs can automatically extract features considering the traffic features between distant roads by stacking multiple neural network layers. We numerically evaluated the proposed method in a six-intersection environment.
The results demonstrate that the proposed method can find comparable policies twice as fast as the conventional RL method with a neural network and can adapt to more extensive traffic demand changes. --- paper_title: Multi-Agent Deep Reinforcement Learning for Large-Scale Traffic Signal Control paper_content: Reinforcement learning (RL) is a promising data-driven approach for adaptive traffic signal control (ATSC) in complex urban traffic networks, and deep neural networks further enhance its learning power. However, centralized RL is infeasible for large-scale ATSC due to the extremely high dimension of the joint action space. Multi-agent RL (MARL) overcomes the scalability issue by distributing the global control to each local RL agent, but it introduces new challenges: now the environment becomes partially observable from the viewpoint of each local agent due to limited communication among agents. Most existing studies in MARL focus on designing efficient communication and coordination among traditional Q-learning agents. This paper presents, for the first time, a fully scalable and decentralized MARL algorithm for the state-of-the-art deep RL agent: advantage actor critic (A2C), within the context of ATSC. In particular, two methods are proposed to stabilize the learning procedure, by improving the observability and reducing the learning difficulty of each local agent. The proposed multi-agent A2C is compared against independent A2C and independent Q-learning algorithms, in both a large synthetic traffic grid and a large real-world traffic network of Monaco city, under simulated peak-hour traffic dynamics. Results demonstrate its optimality, robustness, and sample efficiency over other state-of-the-art decentralized MARL algorithms. --- paper_title: Reinforcement learning-based multi-agent system for network traffic signal control paper_content: A challenging application of artificial intelligence systems involves the scheduling of traffic signals in multi-intersection vehicular networks. This paper introduces a novel use of a multi-agent system and reinforcement learning (RL) framework to obtain an efficient traffic signal control policy. The latter is aimed at minimising the average delay, congestion and likelihood of intersection cross-blocking. A five-intersection traffic network has been studied in which each intersection is governed by an autonomous intelligent agent. Two types of agents, a central agent and an outbound agent, were employed. The outbound agents schedule traffic signals by following the longest-queue-first (LQF) algorithm, which has been proved to guarantee stability and fairness, and collaborate with the central agent by providing it local traffic statistics. The central agent learns a value function driven by its local and neighbours' traffic conditions. The novel methodology proposed here utilises the Q-Learning algorithm with a feedforward neural network for value function approximation. Experimental results clearly demonstrate the advantages of multi-agent RL-based control over LQF governed isolated single-intersection control, thus paving the way for efficient distributed traffic signal control in complex settings. --- paper_title: Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events paper_content: Abstract The transportation demand is rapidly growing in metropolises, resulting in chronic traffic congestions in dense downtown areas. 
Adaptive traffic signal control, as the principal part of intelligent transportation systems, has a primary role in effectively reducing traffic congestion by making a real-time adaptation in response to the changing traffic network dynamics. Reinforcement learning (RL) is an effective approach in machine learning that has been applied for designing adaptive traffic signal controllers. Among the most efficient and robust types of RL algorithms are continuous state actor-critic algorithms, which have the advantage of fast learning and the ability to generalize to new and unseen traffic conditions. These algorithms are utilized in this paper to design adaptive traffic signal controllers called actor-critic adaptive traffic signal controllers (A-CATs controllers). The contribution of the present work rests on the integration of three threads: (a) showing performance comparisons of both discrete and continuous A-CATs controllers in a traffic network with recurring congestion (24-h traffic demand) in the upper downtown core of Tehran city, (b) analyzing the effects of different traffic disruptions including opportunistic pedestrian crossings, parking lanes, non-recurring congestion, and different levels of sensor noise on the performance of A-CATs controllers, and (c) comparing the performance of different function approximators (tile coding and radial basis function) on the learning of A-CATs controllers. To this end, first an agent-based traffic simulation of the study area is carried out. Then six different scenarios are conducted to find the best A-CATs controller that is robust enough against different traffic disruptions. We observe that the A-CATs controller based on radial basis function networks (RBF (5)) outperforms the others. This controller is benchmarked against discrete state Q-learning, Bayesian Q-learning, fixed time and actuated controllers, and the results reveal that it consistently outperforms them. --- paper_title: Multiagent Reinforcement Learning for Urban Traffic Control using Coordination Graphs paper_content: Since traffic jams are ubiquitous in the modern world, optimizing the behavior of traffic lights for efficient traffic flow is a critically important goal. Though most current traffic lights use simple heuristic protocols, more efficient controllers can be discovered automatically via multiagent reinforcement learning, where each agent controls a single traffic light. However, in previous work on this approach, agents select only locally optimal actions without coordinating their behavior. This paper extends this approach to include explicit coordination between neighboring traffic lights. Coordination is achieved using the max-plus algorithm, which estimates the optimal joint action by sending locally optimized messages among connected agents. This paper presents the first application of max-plus to a large-scale problem and thus verifies its efficacy in realistic settings. It also provides empirical evidence that max-plus performs well on cyclic graphs, though it has been proven to converge only for tree-structured graphs. Furthermore, it provides a new understanding of the properties a traffic network must have for such coordination to be beneficial and shows that max-plus outperforms previous methods on networks that possess those properties. --- paper_title: Game Theory and Multi-agent Reinforcement Learning paper_content: Reinforcement Learning was originally developed for Markov Decision Processes (MDPs).
It allows a single agent to learn a policy that maximizes a possibly delayed reward signal in a stochastic stationary environment. It guarantees convergence to the optimal policy, provided that the agent can sufficiently experiment and the environment in which it is operating is Markovian. However, when multiple agents apply reinforcement learning in a shared environment, this might be beyond the MDP model. In such systems, the optimal policy of an agent depends not only on the environment, but on the policies of the other agents as well. These situations arise naturally in a variety of domains, such as robotics, telecommunications, economics, distributed control, auctions, traffic light control, etc. In these domains multi-agent learning is used, either because of the complexity of the domain or because control is inherently decentralized. In such systems it is important that agents are capable of discovering good solutions to the problem at hand, either by coordinating with other learners or by competing with them. This chapter focuses on the application of reinforcement learning techniques in multi-agent systems. We describe a basic learning framework based on the economic research into game theory, and illustrate the additional complexity that arises in such systems. We also describe a representative selection of algorithms for the different areas of multi-agent reinforcement learning research. ---
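Several of the abstracts above build their controllers on the one-step Q-learning update (the multi-agent Q-learning network controller, the LQF-coordinated five-intersection system, and the independent Q-learning baselines). The following is a minimal, hypothetical sketch of such an agent for a single intersection; the state discretization, the two-action phase set, the negative-total-queue reward, and the `step_env` simulator interface are illustrative assumptions and not the exact formulation of any cited paper.

```python
# Minimal tabular Q-learning sketch for one traffic-signal agent (assumptions noted above).
import random
from collections import defaultdict

ACTIONS = [0, 1]              # assumed: 0 = keep current phase, 1 = switch phase
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(lambda: [0.0 for _ in ACTIONS])   # Q-table: state -> list of action values

def discretize(queues, bins=(2, 5, 10)):
    """Map per-approach queue lengths to coarse bins so the tabular state space stays small."""
    return tuple(sum(q > b for b in bins) for q in queues)

def choose_action(state):
    """Epsilon-greedy exploration over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    values = Q[state]
    return max(ACTIONS, key=lambda a: values[a])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def train(step_env, episodes=100, steps=3600):
    """step_env is an assumed simulator hook: reset or apply a phase action, return queue lengths."""
    for _ in range(episodes):
        queues = step_env(reset=True)
        state = discretize(queues)
        for _ in range(steps):
            action = choose_action(state)
            queues = step_env(action=action)
            reward = -sum(queues)          # assumed reward: negative total queue length
            next_state = discretize(queues)
            q_update(state, action, reward, next_state)
            state = next_state
```

In the multi-intersection settings surveyed above, each intersection would typically run its own copy of such an agent, optionally sharing neighbor observations or coordinating the joint action choice.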
Title: A Survey on Traffic Signal Control Methods
Section 1: INTRODUCTION
Description 1: This section introduces the problem of traffic congestion, its impacts, and the importance of traffic signal control in mitigating congestion.
Section 2: Current Situation
Description 2: This section discusses the existing adaptive traffic signal control systems and their limitations in modern urban environments.
Section 3: Opportunities
Description 3: This section highlights the new data sources, computational models, and machine learning techniques that can be leveraged to improve traffic signal control.
Section 4: Motivation of This Survey
Description 4: This section explains the reasons behind conducting the survey, including the need for interdisciplinary research that integrates transportation and machine learning approaches.
Section 5: Scope of This Survey
Description 5: This section outlines the scope of the survey, covering both classical transportation approaches and RL-based traffic signal control methods.
Section 6: PRELIMINARY
Description 6: This section defines important terms and concepts related to road structure, traffic movement, and traffic signals, as well as objectives and special considerations in traffic signal control.
Section 7: METHODS IN TRANSPORTATION ENGINEERING
Description 7: This section introduces several classical methods in traffic signal control used in transportation engineering, including Webster, GreenWave, Maxband, Actuated Control, Self-organizing Traffic Light Control, Max-pressure, and SCATS.
Section 8: Preliminaries
Description 8: This section introduces the basic concepts and formulation of reinforcement learning (RL) in the context of traffic signal control.
Section 9: State Definitions
Description 9: This section describes various elements of the state representation used in RL-based traffic signal control methods.
Section 10: Reward Functions
Description 10: This section discusses different reward functions used in RL for traffic signal control and their effectiveness.
Section 11: Action Definitions
Description 11: This section explains various types of actions defined in RL-based traffic signal control methods.
Section 12: Learning Approaches
Description 12: This section provides an overview of different RL methods, including value-based, policy-based, and actor-critic methods, and their applications in traffic signal control.
Section 13: Coordination Strategies
Description 13: This section discusses different strategies for coordinating multiple RL agents in multi-intersection traffic signal control scenarios.
Section 14: Experimental Settings
Description 14: This section describes the experimental settings that influence the performance of traffic signal control strategies, including simulation environments, road networks, and traffic flow settings.
Section 15: Challenges in RL for Traffic Signal Control
Description 15: This section outlines the current challenges faced by RL-based traffic signal control methods.
Section 16: CONCLUSION
Description 16: This section summarizes the survey, highlighting key insights and future research directions in traffic signal control methods.
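Section 7 of the outline above lists max-pressure among the classical transportation-engineering controllers. As a point of comparison with the learned controllers, the following is a minimal, hypothetical illustration of the max-pressure rule (activate the phase whose permitted movements have the largest total upstream-minus-downstream queue); the phase/lane data structures and the example numbers are assumptions, not taken from the survey.

```python
# Minimal max-pressure phase selection sketch (illustrative data structures).
def movement_pressure(upstream_queue, downstream_queue):
    """Pressure of one movement: vehicles queued upstream minus vehicles queued downstream."""
    return upstream_queue - downstream_queue

def max_pressure_phase(phases, queues):
    """
    phases: dict mapping phase id -> list of (upstream_lane, downstream_lane) movements.
    queues: dict mapping lane id -> current queue length.
    Returns the phase id with the highest total pressure.
    """
    def phase_pressure(movements):
        return sum(movement_pressure(queues[u], queues[d]) for u, d in movements)
    return max(phases, key=lambda p: phase_pressure(phases[p]))

# Example with two phases at a simple four-approach intersection (hypothetical numbers).
phases = {
    "NS_through": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW_through": [("E_in", "W_out"), ("W_in", "E_out")],
}
queues = {"N_in": 12, "S_in": 8, "E_in": 3, "W_in": 2,
          "N_out": 1, "S_out": 0, "E_out": 4, "W_out": 5}
print(max_pressure_phase(phases, queues))   # prints "NS_through" for these numbers
```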
A Survey on Evolutionary Computation: Methods and Their Applications in Engineering
29
--- paper_title: Task Scheduling Optimization in Cloud Computing Based on Heuristic Algorithm paper_content: Cloud computing is an emerging technology that allows users to pay as needed and offers high performance. Cloud computing is also a heterogeneous system, and it holds large amounts of application data. When scheduling data-intensive or computation-intensive applications, optimizing the transfer and processing time is crucial to the application program. In this paper, in order to minimize the cost of processing, we formulate a model for task scheduling and propose a particle swarm optimization (PSO) algorithm based on the small position value rule. Comparing the PSO algorithm with a PSO algorithm embedded with crossover and mutation and with local search, the experimental results show that the PSO algorithm not only converges faster but also runs faster than the other two algorithms at a large scale. The experimental results show that the PSO algorithm is well suited to cloud computing. --- paper_title: Evolutionary computation: an overview paper_content: We present an overview of the most important representatives of algorithms gleaned from natural evolution, so-called evolutionary algorithms. Evolution strategies, evolutionary programming, and genetic algorithms are summarized, with special emphasis on the principle of strategy parameter self-adaptation utilized by the first two algorithms to learn their own strategy parameters such as mutation variances and covariances. Some experimental results are presented which demonstrate the working principle and robustness of the self-adaptation methods used in evolution strategies and evolutionary programming. General principles of evolutionary algorithms are discussed, and we identify certain properties of natural evolution which might help to improve the problem solving capabilities of evolutionary algorithms even further. --- paper_title: Community Cloud Computing paper_content: Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue. --- paper_title: A multilevel image thresholding using the honey bee mating optimization paper_content: Image thresholding is an important technique for image processing and pattern recognition. Many thresholding techniques have been proposed in the literature. Among them, the maximum entropy thresholding (MET) has been widely applied.
In this paper, a new multilevel MET algorithm based on the technology of the honey bee mating optimization (HBMO) is proposed. This proposed method is called the maximum entropy based honey bee mating optimization thresholding (MEHBMOT) method. Three other methods, namely the particle swarm optimization (PSO), the hybrid cooperative-comprehensive learning based PSO algorithm (HCOCLPSO) and the Fast Otsu's method, are also implemented for comparison with the results of the proposed method. The experimental results show that the proposed MEHBMOT algorithm can search for multiple thresholds which are very close to the optimal ones examined by the exhaustive search method. In comparison with the other three thresholding methods, the segmentation results using the MEHBMOT algorithm are the best and its computation time is relatively low. Furthermore, the convergence of the MEHBMOT algorithm can be achieved rapidly, and the results validate that the proposed MEHBMOT algorithm is efficient. --- paper_title: Efficient resource management for Cloud computing environments paper_content: The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead. --- paper_title: Efficient Graph-Based Image Segmentation paper_content: This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. --- paper_title: A Swarm Intelligence inspired algorithm for contour detection in images paper_content: Swarm Intelligence uses a set of agents which are able to move and gather local information in a search space and utilize communication, limited memory, and intelligence for problem solving. In this work, we present an agent-based algorithm which is specifically tailored to detect contours in images. Following a novel movement and communication scheme, the agents are able to position themselves distributed over the entire image to cover all important image positions.
To generate global contours, the agents examine the local windowed image information, and based on a set of fitness functions and via communicating with each other, they establish connections. Instead of a centralized paradigm, the global solution is discovered by some principal rules each agent is following. The algorithm is independent of object models or training steps. In our evaluation we focus on boundary detection as a major step towards image segmentation. We therefore evaluate our algorithm using the Berkeley Segmentation Dataset (BSDS) and compare its performance to existing methods via the BSDS benchmark and Pratt's Figure of Merit. --- paper_title: GENETIC ALGORITHM: REVIEW AND APPLICATION paper_content: Genetic algorithms are considered a search process used in computing to find exact or approximate solutions for optimization and search problems. They are also termed global search heuristics. These techniques are inspired by evolutionary biology, drawing on inheritance, mutation, selection and crossover. These algorithms provide a technique for programs to automatically improve their parameters. This paper is an introduction to the genetic algorithm approach, including various applications, and describes the integration of genetic algorithms with object-oriented programming approaches. The genetic algorithm is an adaptive heuristic search method based on population genetics. Genetic algorithms were introduced by John Holland in the early 1970s (1). The genetic algorithm is a probabilistic search algorithm based on the mechanics of natural selection and natural genetics. A genetic algorithm starts with a set of solutions called a population. A solution is represented by a chromosome. The population size is preserved throughout each generation. At each generation, the fitness of each chromosome is evaluated, and then chromosomes for the next generation are probabilistically selected according to their fitness values. Some of the selected chromosomes randomly mate and produce offspring. When producing offspring, crossover and mutation randomly occur. Because chromosomes with high fitness values have a high probability of being selected, chromosomes of the new generation may have a higher average fitness value than those of the old generation. The process of evolution is repeated until the end condition is satisfied. The solutions in genetic algorithms are called chromosomes or strings (2). In most cases, chromosomes are represented by lists or strings. Thus, many operations in a genetic algorithm are operations on lists or strings. Very high-level languages like Python or Perl are more productive in list processing or string processing than C/C++/Java. In bioinformatics, Python or Perl is widely used. A genetic algorithm is a search technique used in computing to find exact or approximate solutions to --- paper_title: Natural Evolution Strategies paper_content: This paper presents natural evolution strategies (NES), a novel algorithm for performing real-valued 'black box' function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural evolution strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions.
NES shares this property with covariance matrix adaptation (CMA), an evolution strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The natural evolution strategies algorithm, however, is simpler, less ad-hoc and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the 'vanilla' gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show NES has competitive performance with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima. --- paper_title: Evolutionary Electronics: Automatic Design of Electronic Circuits and Systems by Genetic Algorithms paper_content: From the explosion of interest, research, and applications of evolutionary computation, a new field emerges: evolutionary electronics. Focused on applying evolutionary computation concepts and techniques to the domain of electronics, many researchers now see it as holding the greatest potential for overcoming the drawbacks of conventional design techniques. Evolutionary Electronics: Automatic Design of Electronic Circuits and Systems by Genetic Algorithms formally introduces and defines this area of research, presents its main challenges in electronic design, and explores emerging technologies. It describes the evolutionary computation paradigm and its primary algorithms, and explores topics of current interest, such as multi-objective optimization. The authors examine numerous evolutionary electronics applications, draw conclusions about those applications, and sketch the future of evolutionary computation and its applications in electronics. In coming years, the appearance of more and more advanced technologies will increase the complexity of optimization and synthesis problems, and evolutionary electronics will almost certainly become a key to solving those problems. Evolutionary Electronics is your key to discovering and unlocking the potential of this promising new field.
--- paper_title: Particle swarm optimization paper_content: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described. --- paper_title: Economic dispatch using particle swarm optimization: A review paper_content: Electrical power industry restructuring has created a highly vibrant and competitive market that has altered many aspects of the power industry. In this changed scenario, the scarcity of energy resources, increasing power generation cost, environmental concerns, and the ever-growing demand for electrical energy necessitate optimal economic dispatch. Practical economic dispatch (ED) problems have nonlinear, non-convex objective functions with intense equality and inequality constraints. Conventional optimization methods are not able to solve such problems due to convergence to local optima. Meta-heuristic optimization techniques, especially particle swarm optimization (PSO), have gained incredible recognition as solution algorithms for such ED problems in the last decade. The application of PSO to the ED problem, which is considered one of the most complex optimization problems, is summarized in the present paper.
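The particle swarm optimization entries in this reference list describe the canonical swarm scheme: each particle keeps a velocity and is attracted toward its own best-known position and the swarm's global best. The following is a minimal, hypothetical sketch of that update; the inertia and acceleration constants, bounds, and the sphere test function are assumptions, not values from any cited paper.

```python
# Minimal canonical PSO sketch (minimization); all constants are illustrative assumptions.
import random

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example usage: minimize the sphere function (an assumed benchmark).
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```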
--- paper_title: Particle swarm optimization paper_content: Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. As researchers have learned about the technique, they have derived new versions, developed new applications, and published theoretical studies of the effects of the various parameters and aspects of the algorithm. This paper comprises a snapshot of particle swarming from the authors' perspective, including variations in the algorithm, current and ongoing research, applications and open problems. --- paper_title: Genetic simulated annealing algorithm for task scheduling based on cloud computing environment paper_content: Scheduling is a very important part of the cloud computing system. This paper introduces an optimized algorithm for task scheduling based on a genetic simulated annealing algorithm in cloud computing and its implementation. The algorithm considers the QoS requirements of different task types, and the QoS parameters are made dimensionless. The algorithm efficiently completes task scheduling in the cloud computing environment. --- paper_title: Cloud Task Scheduling Based on Load Balancing Ant Colony Optimization paper_content: Cloud computing is the development of distributed computing, parallel computing and grid computing, or can be defined as the commercial implementation of these computer science concepts. One of the fundamental issues in this environment is related to task scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. This paper proposes a cloud task scheduling policy based on the Load Balancing Ant Colony Optimization (LBACO) algorithm. The main contribution of our work is to balance the entire system load while trying to minimize the makespan of a given task set. The new scheduling strategy was simulated using the CloudSim toolkit package. Experimental results showed the proposed LBACO algorithm outperformed FCFS (First Come First Serve) and the basic ACO (Ant Colony Optimization). --- paper_title: Honey bee behavior inspired load balancing of tasks in cloud computing environments paper_content: Scheduling of tasks in cloud computing is an NP-hard optimization problem. Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing (HBB-LB), which aims to achieve a well balanced load across virtual machines for maximizing the throughput.
The proposed algorithm also balances the priorities of tasks on the machines in such a way that the amount of waiting time of the tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach illustrates that there is a significant improvement in average execution time and reduction in waiting time of tasks on queue. --- paper_title: Swarm Intelligence Approaches for Grid Load Balancing paper_content: With the rapid growth of data and computational needs, distributed systems and computational Grids are gaining more and more attention. The huge amount of computations a Grid can fulfill in a specific amount of time cannot be performed by the best supercomputers. However, Grid performance can still be improved by making sure all the resources available in the Grid are utilized optimally using a good load balancing algorithm. This research proposes two new distributed swarm intelligence inspired load balancing algorithms. One algorithm is based on ant colony optimization and the other algorithm is based on particle swarm optimization. A simulation of the proposed approaches using a Grid simulation toolkit (GridSim) is conducted. The performance of the algorithms are evaluated using performance criteria such as makespan and load balancing level. A comparison of our proposed approaches with a classical approach called State Broadcast Algorithm and two random approaches is provided. Experimental results show the proposed algorithms perform very well in a Grid environment. Especially the application of particle swarm optimization, can yield better performance results in many scenarios than the ant colony approach. --- paper_title: A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding paper_content: Segmentation is a critical task in image processing. Bi-level segmentation involves dividing the whole image into partitions based on a threshold value, whereas multilevel segmentation involves multiple threshold values. A successful segmentation assigns proper threshold values to optimise a criterion such as entropy or between-class variance. High computational cost and inefficiency of an exhaustive search for the optimal thresholds leads to the use of global search heuristics to set the optimal thresholds. An emerging area in global heuristics is swarm-intelligence, which models the collective behaviour of the organisms. In this paper, two successful swarm-intelligence-based global optimisation algorithms, particle swarm optimisation (PSO) and artificial bee colony (ABC), have been employed to find the optimal multilevel thresholds. Kapur's entropy, one of the maximum entropy techniques, and between-class variance have been investigated as fitness functions. Experiments have been performed on test images using various numbers of thresholds. The results were assessed using statistical tools and suggest that Otsu's technique, PSO and ABC show equal performance when the number of thresholds is two, while the ABC algorithm performs better than PSO and Otsu's technique when the number of thresholds is greater than two. Experiments based on Kapur's entropy indicate that the ABC algorithm can be efficiently used in multilevel thresholding. Moreover, segmentation methods are required to have a minimum running time in addition to high performance. 
Therefore, the CPU times of ABC and PSO have been investigated to check their validity in real-time. The CPU time results show that the algorithms are scalable and that the running times of the algorithms seem to grow at a linear rate as the problem size increases. --- paper_title: An artificial ant colonies approach to medical image segmentation paper_content: The success of image analysis depends heavily upon accurate image segmentation algorithms. This paper presents a novel segmentation algorithm based on artificial ant colonies (AC). Recent studies show that the self-organization of ants is similar to neurons in the human brain in many respects. Therefore, it has been used successfully for understanding biological systems. It is also widely used in many applications in robotics, computer graphics, etc. Considering the features of artificial ant colonies, we present an extended model for image segmentation. In our model, each ant can memorize a reference object, which will be refreshed when it finds a new target. A fuzzy connectedness measure is adopted to evaluate the similarity between the target and the reference object. The behavior of an ant is affected by its neighbors, and the cooperation between ants is performed by exchanging information through pheromone updating. Experimental results show that the new algorithm can preserve the detail of the object and is also insensitive to noise. --- paper_title: Dynamic particle swarm optimization and K-means clustering algorithm for image segmentation paper_content: Abstract K-means clustering is usually used in image segmentation due to its simplicity and rapidity. However, K-means is heavily dependent on the initial number of clusters and easily falls into local optima. As a result, it is often difficult to obtain satisfactory visual effects. As an evolutionary computation technique, particle swarm optimization (PSO) has good global optimization capability. Combined with PSO, K-means clustering can enhance its global optimization capability. But PSO also has the shortcoming of easily falling into local optima. This study proposes a new image segmentation algorithm called the dynamic particle swarm optimization and K-means clustering algorithm (DPSOK), which is based on dynamic particle swarm optimization (DPSO) and K-means clustering. The calculation methods of its inertia weight and learning factors have been improved to ensure that the DPSOK algorithm keeps a balanced optimization capability. Experimental results show that the DPSOK algorithm can effectively improve the global search capability of K-means clustering. It has a much better visual effect than K-means clustering in image segmentation. Compared with the classic particle swarm optimization K-means clustering algorithm (PSOK), the DPSOK algorithm has obvious superiority in improving image segmentation quality and efficiency. --- paper_title: Effective Scheduling Algorithm for Load balancing (SALB) using Ant Colony Optimization in Cloud Computing paper_content: In today's environment, the day to day business operations of organizations heavily rely on the automated processes from enterprise IT infrastructures. Cloud computing is an internet-based concept that is dynamically scalable in nature and provides virtualized resources as a service over the internet. Load balancing is one of the main challenging issues in cloud computing, which requires distributing the dynamic workload across multiple nodes to ensure that no single node is overloaded.
This paper implements the SALB algorithm. The main contribution of our work is to balance the entire system load while trying to optimize different parameters (performance, SLA violations, overhead, and energy). Our objective is to study existing ACO variants and to develop an effective load balancing algorithm using ant colony optimization. --- paper_title: Design of robust cellular manufacturing system for dynamic part population considering multiple processing routes using genetic algorithm paper_content: Abstract In this paper, a comprehensive mathematical model is proposed for designing robust machine cells for dynamic part production. The proposed model incorporates the machine cell configuration design problem bridged with the machine allocation problem, the dynamic production problem and the part routing problem. Multiple process plans for each part and alternative process routes for each of those plans are considered. The design of robust cell configurations is based on the selected best part process route from user-specified multiple process routes for each part type, considering average product demand during the planning horizon. The dynamic part demand can be satisfied from internal production having limited capacity and/or through subcontracting part operations without affecting the machine cell configuration in successive period segments of the planning horizon. A genetic algorithm based heuristic is proposed to solve the model for minimization of the overall cost, considering various manufacturing aspects such as production volume, multiple process routes, machine capacity, material handling and subcontracting part operations. --- paper_title: A robust optimization approach for an integrated dynamic cellular manufacturing system and production planning with unreliable machines paper_content: Abstract In this study, a robust optimization approach is developed for a new integrated mixed-integer linear programming (MILP) model to solve a dynamic cellular manufacturing system (DCMS) with unreliable machines and a production planning problem simultaneously. This model is incorporated with dynamic cell formation, inter-cell layout, machine reliability, operator assignment, alternative process routings and production planning concepts. To cope with the uncertainty in part processing times, a robust optimization approach immunized against even the worst case is adopted. In fact, this approach enables the system’s planner to assess different levels of uncertainty and conservatism throughout the planning horizon. This study minimizes the costs of machine breakdown and relocation, operator training and hiring, inter- and intra-cell part trips, and shortage and inventory. To verify the performance of the presented model and proposed approach, some numerical examples are solved in hypothetical limits using the CPLEX solver. The experimental results demonstrate the validity of the presented model and the performance of the developed approach in finding an optimal solution. Finally, the conclusion is presented. --- paper_title: The swarm and the queen: towards a deterministic and adaptive particle swarm optimization paper_content: A very simple particle swarm optimization iterative algorithm is presented, with just one equation and one social/confidence parameter.
We define a "no-hope" convergence criterion and a "rehope" method so that, from time to time, the swarm re-initializes its position, according to some gradient estimations of the objective function and to the previous re-initialization (it means it has a kind of very rudimentary memory). We then study two different cases, a quite "easy" one (the Alpine function) and a "difficult" one (the Banana function), but both just in dimension two. The process is improved by taking into account the swarm gravity center (the "queen") and the results are good enough so that it is certainly worthwhile trying the method on more complex problems. --- paper_title: A review of swarm robotics tasks paper_content: Swarm intelligence principles have been widely studied and applied to a number of different tasks where a group of autonomous robots is used to solve a problem with a distributed approach, i.e. without central coordination. A survey of such tasks is presented, illustrating various algorithms that have been used to tackle the challenges imposed by each task. Aggregation, flocking, foraging, object clustering and sorting, navigation, path formation, deployment, collaborative manipulation and task allocation problems are described in detail, and a high-level overview is provided for other swarm robotics tasks. For each of the main tasks, (1) swarm design methods are identified, (2) past works are divided in task-specific categories, and (3) mathematical models and performance metrics are described. Consistently with the swarm intelligence paradigm, the main focus is on studies characterized by distributed control, simplicity of individual robots and locality of sensing and communication. Distributed algorithms are shown to bring cooperation between agents, obtained in various forms and often without explicitly programming a cooperative behavior in the single robot controllers. Offline and online learning approaches are described, and some examples of past works utilizing these approaches are reviewed. --- paper_title: Wind energy and green economy in Europe: Measuring policy-induced innovation using patent data paper_content: The green economy policy discourse has devoted a lot of attention to the design of public policy addressing low-carbon technologies. In this paper we examine the impacts of public R&D support and feed-in tariff schemes on innovation in the wind energy sector. The analysis is conducted using patent application data for four western European countries over the period 1977–2009. Different model specifications are tested, and the analysis highlights important policy interaction effects. The results indicate that both public R&D support and feed-in tariffs have positively affected patent application counts in the wind power sector. The (marginal) impact on patent applications of increases in feed-tariffs has also become more profound as the wind power technology has matured. There is also some evidence of policy interaction effects in that the impact of public R&D support to wind power is greater at the margin if it is accompanied by the use of feed-in tariff schemes. These results support the notion that technological innovation requires both R&D and learning-by-doing, and for this reason public R&D programs should typically not be designed in isolation from practical applications. The paper ends by outlining some important avenues for future research. 
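Several abstracts earlier in this reference list (the honey-bee MET thresholding method and the PSO/ABC multilevel-thresholding study) cast image thresholding as maximizing a histogram-based objective, either Kapur's entropy or Otsu's between-class variance, with a swarm optimizer searching over candidate threshold vectors. The following is a minimal sketch of one such objective, the multilevel between-class variance; the 8-bit (256-bin) histogram input is an assumption and the code is illustrative, not the exact formulation used in those papers.

```python
# Minimal multilevel between-class-variance objective for threshold selection (assumptions above).
def between_class_variance(hist, thresholds):
    """hist: list of 256 pixel counts; thresholds: gray levels splitting the histogram into classes."""
    total = float(sum(hist))
    probs = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(probs))      # global mean gray level
    edges = [0] + sorted(thresholds) + [len(hist)]          # class boundaries [lo, hi)
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = sum(probs[lo:hi])                               # class probability mass
        if w == 0:
            continue
        mu = sum(i * probs[i] for i in range(lo, hi)) / w   # class mean
        variance += w * (mu - mu_total) ** 2
    return variance   # a swarm optimizer maximizes this over candidate threshold vectors
```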
--- paper_title: On-Off control based particle swarm optimization for maximum power point tracking of wind turbine equipped by DFIG connected to the grid with energy storage paper_content: Abstract In this paper, particle swarm optimization (PSO) is proposed to generate an On-Off Controller. On-Off Control scheme based maximum power point tracking is proposed to control the rotor side converter of wind turbine equipped with doubly fed induction generator connected to the grid with battery storage. The Grid Side Converter (GSC) is controlled in such a way to guarantee a smooth DC voltage and ensure sinusoidal current in the grid side. Simulation results show that the wind turbine can operate at its optimum power point for variable speed and power quality can be improved. ---
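The genetic algorithm review cited above spells out the basic GA cycle: evaluate fitness, probabilistically select parents, apply crossover and mutation, and repeat until a stopping condition is met. The following is a minimal, hypothetical sketch of that cycle; the bit-string encoding, the roulette-wheel selection variant, and the OneMax fitness are illustrative assumptions rather than the setup of any cited paper.

```python
# Minimal genetic-algorithm loop: selection, one-point crossover, bit-flip mutation (assumptions above).
import random

def genetic_algorithm(fitness, length=20, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]

        def select():
            # fitness-proportionate (roulette-wheel) selection
            pick, acc = random.uniform(0, sum(scores)), 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= pick:
                    return ind
            return pop[-1]

        children = []
        while len(children) < pop_size:
            p1, p2 = select()[:], select()[:]
            if random.random() < crossover_rate:          # one-point crossover
                cut = random.randint(1, length - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                # bit-flip mutation applied gene by gene
                child = [1 - g if random.random() < mutation_rate else g for g in child]
                children.append(child)
        pop = children[:pop_size]
    return max(pop, key=fitness)

# Example usage: OneMax, i.e. maximize the number of 1-bits (an assumed toy fitness).
best = genetic_algorithm(fitness=sum)
```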
Title: A Survey on Evolutionary Computation: Methods and Their Applications in Engineering
Section 1: Introduction
Description 1: This section introduces evolutionary computation (EC), discussing its principles, significance, and how it aligns with biological evolution to solve complex computational problems.
Section 2: Main Applications of EC
Description 2: This section explores the various applications of evolutionary computation, especially in the fields of image and signal processing and cloud computing (CC).
Section 3: General Steps in an EA
Description 3: This section outlines the general steps involved in evolutionary algorithms (EAs), detailing the iterative process to find optimized solutions.
Section 4: Differential Evolution (DE)
Description 4: This section describes the DE algorithm, its purpose in minimizing objective functions, and its ability to find global solutions efficiently.
Section 5: Differential Search Algorithm (DSA)
Description 5: This section explains the DSA, which uses the concept of migration and movement to solve real-valued numerical optimization problems.
Section 6: Genetic Programming (GP)
Description 6: This section discusses GP, focusing on its ability to optimize computer populations to perform specific user tasks.
Section 7: Evolutionary Programming (EP)
Description 7: This section elaborates on EP, highlighting its flexibility and unique characteristic of not following a constant pattern.
Section 8: Evolution Strategy (ES)
Description 8: This section overviews ES, describing its iterative process, mutation, and selection as search mechanisms.
Section 9: Genetic Algorithm (GA)
Description 9: This section covers GA, detailing its use of natural selection and heuristic search to solve problems like automatic electronic circuit generation.
Section 10: Gene Expressing Programming (GEP)
Description 10: This section explains GEP, which uses tree structures and genotypic information to adapt and solve complex problems.
Section 11: Swarm Intelligence Algorithms (SIAs)
Description 11: This section introduces swarm intelligence (SI) algorithms, with examples such as ant colony optimization, bees algorithm, cuckoo bird search, and particle swarm optimization.
Section 12: Ant Colony Optimization (ACO)
Description 12: This section describes ACO, a technique for finding optimal paths in graphs inspired by the behavior of ants.
Section 13: Bees Algorithm (BA)
Description 13: This section explores BA, inspired by the food foraging behavior of bees, used for combinatorial and continuous optimization.
Section 14: Cuckoo Bird Search Algorithm (CSA)
Description 14: This section details CSA, an optimization method inspired by the reproductive strategy of cuckoo birds.
Section 15: Particle Swarm Optimization (PSO)
Description 15: This section covers PSO, an iterative method inspired by the social behavior of birds and fish to find optimized solutions.
Section 16: Classic Real World Applications of EAs and SIAs
Description 16: This section outlines classic applications of EAs and SIAs, such as cloud computing scheduling and load balancing.
Section 17: CC Scheduling by GA
Description 17: This section discusses the use of GA in scheduling tasks in cloud computing environments for better load balancing.
Section 18: CC Scheduling by ACO and BA
Description 18: This section explores how combining ACO and BA can improve resource management and power consumption in cloud computing.
Section 19: Grid Load Balancing by ACO
Description 19: This section describes the application of ACO for uniform load distribution in grid computing.
Section 20: Image Processing by ACO and BA
Description 20: This section details the use of ACO and BA in image feature extraction, segmentation, and noise reduction.
Section 21: Newly Introduced Trends
Description 21: This section discusses newly emerging trends and improved methods in evolutionary computation.
Section 22: Improved Image Segmentation Via K-Means Clustering and PSO
Description 22: This section explains the improvements in image segmentation using dynamic PSO integrated with K-means clustering.
Section 23: Scheduling Algorithm for Load Balancing (SALB) in CC Based on ACO
Description 23: This section elaborates on SALB, an enhanced version of ACO for dynamic workload balancing in cloud computing.
Section 24: Improved Bi-Objective Dynamic Cell Formation Problem (DCFP) by Non-Dominated Sorting GA (NSGA-II)
Description 24: This section describes the NSGA-II hybrid meta-heuristic for solving dynamic cellular manufacturing system problems.
Section 25: Improvement of Robust Machine Cells for Dynamic Part Production (RMCDPP) by GA
Description 25: This section discusses a GA-based heuristic for cost reduction in industrial production systems.
Section 26: Integrated Mixed-Integer Linear Programming (MILP) Model to DCMS Problem with Uncertainty
Description 26: This section covers the use of MILP models for handling uncertain factors in dynamic cellular manufacturing systems.
Section 27: Robotic Applications
Description 27: This section explores the various applications of SIAs in robotics, focusing on tasks like object sorting, navigation, and task assignment.
Section 28: Power Control in Wind Turbines by GA and PSO
Description 28: This section describes the use of GA and PSO in designing wind farms and controlling power conversion for optimal energy generation.
Section 29: Conclusion
Description 29: This section summarizes the significance of EC, its applications in various fields, and the discussed algorithms for solving specific problems.
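Sections 17 through 19 and 23 of the outline above, together with the LBACO and SALB abstracts in the reference list, describe ant-colony-based schedulers that map tasks to virtual machines to balance load and reduce makespan. The following is a generic, hypothetical illustration of that idea, not the algorithm of any specific cited paper: ants build task-to-VM assignments guided by pheromone and an execution-time heuristic, and pheromone is evaporated and reinforced on the best assignment found. All parameters are assumptions.

```python
# Generic ACO sketch for assigning tasks to VMs (illustrative parameters and heuristic).
import random

def aco_schedule(exec_time, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.1, q=1.0):
    """exec_time[t][v]: estimated time of task t on VM v. Returns (assignment, makespan)."""
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    pher = [[1.0] * n_vms for _ in range(n_tasks)]          # pheromone per (task, VM) pair
    best_assign, best_makespan = None, float("inf")

    def makespan(assign):
        loads = [0.0] * n_vms
        for t, v in enumerate(assign):
            loads[v] += exec_time[t][v]
        return max(loads)

    for _ in range(iters):
        for _ in range(n_ants):
            assign = []
            for t in range(n_tasks):
                # choice probability ~ pheromone^alpha * (1 / execution time)^beta
                weights = [(pher[t][v] ** alpha) * ((1.0 / exec_time[t][v]) ** beta)
                           for v in range(n_vms)]
                assign.append(random.choices(range(n_vms), weights=weights)[0])
            ms = makespan(assign)
            if ms < best_makespan:
                best_assign, best_makespan = assign, ms
        # evaporate all pheromone, then reinforce the best assignment found so far
        for t in range(n_tasks):
            for v in range(n_vms):
                pher[t][v] *= (1.0 - rho)
        for t, v in enumerate(best_assign):
            pher[t][v] += q / best_makespan
    return best_assign, best_makespan
```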
A Deep Journey into Super-resolution: A survey
20
--- paper_title: Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition paper_content: We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output. The deep belief network pre-training algorithm is a robust and often helpful way to initialize deep neural networks generatively that can aid in optimization and reduce generalization error. We illustrate the key components of our model, describe the procedure for applying CD-DNN-HMMs to LVSR, and analyze the effects of various modeling choices on performance. Experiments on a challenging business search dataset demonstrate that CD-DNN-HMMs can significantly outperform the conventional context-dependent Gaussian mixture model (GMM)-HMMs, with an absolute sentence accuracy improvement of 5.8% and 9.2% (or relative error reduction of 16.0% and 23.2%) over the CD-GMM-HMMs trained using the minimum phone error rate (MPE) and maximum-likelihood (ML) criteria, respectively. --- paper_title: Super-Resolution in Medical Imaging paper_content: This paper provides an overview on super-resolution (SR) research in medical imaging applications. Many imaging modalities exist. Some provide anatomical information and reveal information about the structure of the human body, and others provide functional information, locations of activity for specific activities and specified tasks. Each imaging system has a characteristic resolution, which is determined based on physical constraints of the system detectors that are in turn tuned to signal-to-noise and timing considerations. A common goal across systems is to increase the resolution, and as much as possible achieve true isotropic 3-D imaging. SR technology can serve to advance this goal. Research on SR in key medical imaging modalities, including MRI, fMRI and PET, has started to emerge in recent years and is reviewed herein. The algorithms used are mostly based on standard SR algorithms. Results demonstrate the potential in introducing SR techniques into practical medical applications. --- paper_title: SOD-MTGAN: Small Object Detection via Multi-Task Generative Adversarial Network paper_content: Object detection is a fundamental and important problem in computer vision. Although impressive results have been achieved on large/medium sized objects in large-scale detection benchmarks (e.g. the COCO dataset), the performance on small objects is far from satisfactory. The reason is that small objects lack sufficient detailed appearance information, which can distinguish them from the background or similar objects. To deal with the small object detection problem, we propose an end-to-end multi-task generative adversarial network (MTGAN). In the MTGAN, the generator is a super-resolution network, which can up-sample small blurred images into fine-scale ones and recover detailed information for more accurate detection. The discriminator is a multi-task network, which describes each super-resolved image patch with a real/fake score, object category scores, and bounding box regression offsets. 
Furthermore, to make the generator recover more details for easier detection, the classification and regression losses in the discriminator are back-propagated into the generator during training. Extensive experiments on the challenging COCO dataset demonstrate the effectiveness of the proposed method in restoring a clear super-resolved image from a blurred small one, and show that the detection performance, especially for small sized objects, improves over state-of-the-art methods. --- paper_title: Resolution limits in astronomical images paper_content: A method is introduced to derive resolution criteria for various a priori defined templates of brightness distribution fitted to represent structures and objects in astronomical images. The method is used for deriving criteria for the minimum and maximum resolvable sizes of a template. The minimum resolvable size of a template is determined by the ratio of (SNR-1)/SNR, and the maximum detectable size is determined by the ratio of 1/SNR. Application of these criteria is discussed in connection to data from filled-aperture instruments and interferometers, accounting for different aperture shapes and the effects of Fourier sampling, tapering, apodization and visibility weighting. Specific resolution limits are derived for four different templates of brightness distribution: (1) two-dimensional Gaussian, (2) optically thin spherical shell, (3) disk of uniform brightness, and (4) ring. The limiting resolution for these templates changes with SNR similarly to the quantum limit on resolution. Practical application of the resolution limits is discussed in two examples dealing with measurements of maximum brightness temperature in compact relativistic jets and assessments of morphology of young supernova remnants. --- paper_title: A Guide to Convolutional Neural Networks for Computer Vision paper_content: Computer vision has become increasingly important and effective in recent years due to its wide-ranging applications in areas as diverse as smart surveillance and monitoring, health and medicine, sports and recreation, robotics, drones, and self-driving cars. Visual recognition tasks, such as image classification, localization, and detection, are the core building blocks of many of these applications, and recent developments in Convolutional Neural Networks (CNNs) have led to outstanding performance in these state-of-the-art visual recognition tasks and systems. As a result, CNNs now form the crux of deep learning algorithms in computer vision. This self-contained guide will benefit those who seek to both understand the theory behind CNNs and to gain hands-on experience on the application of CNNs in computer vision. It provides a comprehensive introduction to CNNs starting with the essential concepts behind neural networks: training, regularization, and optimization of CNNs. The book also discusses a wide range of loss functions, network layers, and popular CNN architectures, reviews the different techniques for the evaluation of CNNs, and presents some popular CNN tools and libraries that are commonly used in computer vision. Further, this text describes and discusses case studies that are related to the application of CNN in computer vision, including image classification, object detection, semantic segmentation, scene understanding, and image generation.
This book is ideal for undergraduate and graduate students, as no prior background knowledge in the field is required to follow the material, as well as new researchers, developers, engineers, and practitioners who are interested in gaining a quick understanding of CNN models. --- paper_title: Low Resolution Face Recognition Across Variations in Pose and Illumination paper_content: We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm. --- paper_title: Chaining Identity Mapping Modules for Image Denoising paper_content: We propose to learn a fully-convolutional network model that consists of a Chain of Identity Mapping Modules (CIMM) for image denoising. The CIMM structure possesses two distinctive features that are important for the noise removal task. Firstly, each residual unit employs identity mappings as the skip connections and receives pre-activated input in order to preserve the gradient magnitude propagated in both the forward and backward directions. Secondly, by utilizing dilated kernels for the convolution layers in the residual branch, in other words within an identity mapping module, each neuron in the last convolution layer can observe the full receptive field of the first layer. After being trained on the BSD400 dataset, the proposed network produces remarkably higher numerical accuracy and better visual image quality than the state-of-the-art when being evaluated on conventional benchmark images and the BSD68 dataset. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks.
Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: Region-Based Convolutional Networks for Accurate Object Detection and Segmentation paper_content: Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012—achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network . Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn. --- paper_title: Ask Me Anything: Dynamic Memory Networks for Natural Language Processing paper_content: Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets. --- paper_title: Deep Underwater Image Enhancement paper_content: In an underwater scene, wavelength-dependent light absorption and scattering degrade the visibility of images, causing low contrast and distorted color casts. To address this problem, we propose a convolutional neural network based image enhancement model, i.e., UWCNN, which is trained efficiently using a synthetic underwater image database. Unlike the existing works that require the parameters of underwater imaging model estimation or impose inflexible frameworks applicable only for specific scenes, our model directly reconstructs the clear latent underwater image by leveraging on an automatic end-to-end and data-driven training mechanism. Compliant with underwater imaging models and optical properties of underwater scenes, we first synthesize ten different marine image databases. Then, we separately train multiple UWCNN models for each underwater image formation type. 
Experimental results on real-world and synthetic underwater images demonstrate that the presented method generalizes well on different underwater scenes and outperforms the existing methods both qualitatively and quantitatively. Besides, we conduct an ablation study to demonstrate the effect of each component in our network. --- paper_title: Digital image forensics via intrinsic fingerprints paper_content: Digital imaging has experienced tremendous growth in recent decades, and digital camera images have been used in a growing number of applications. With such increasing popularity and the availability of low-cost image editing software, the integrity of digital image content can no longer be taken for granted. This paper introduces a new methodology for the forensic analysis of digital camera images. The proposed method is based on the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on digital images, and these intrinsic fingerprints can be identified and employed to verify the integrity of digital data. The intrinsic fingerprints of the various in-camera processing operations can be estimated through a detailed imaging model and its component analysis. Further processing applied to the camera captured image is modelled as a manipulation filter, for which a blind deconvolution technique is applied to obtain a linear time-invariant approximation and to estimate the intrinsic fingerprints associated with these postcamera operations. The absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and is possibly generated by other image production processes. Any change or inconsistencies among the estimated camera-imposed fingerprints, or the presence of new types of fingerprints suggest that the image has undergone some kind of processing after the initial capture, such as tampering or steganographic embedding. Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics. --- paper_title: Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks paper_content: State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. 
In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available. --- paper_title: Natural Language Processing (almost) from Scratch paper_content: We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements. --- paper_title: Fast Image Super-Resolution Based on In-Place Example Regression paper_content: We propose a fast regression model for practical single image super-resolution based on in-place examples, by leveraging two fundamental super-resolution approaches- learning from an external database and learning from self-examples. Our in-place self-similarity refines the recently proposed local self-similarity by proving that a patch in the upper scale image have good matches around its origin location in the lower scale image. Based on the in-place examples, a first-order approximation of the nonlinear mapping function from low-to high-resolution image patches is learned. Extensive experiments on benchmark and real-world images demonstrate that our algorithm can produce natural-looking results with sharp edges and preserved fine details, while the current state-of-the-art algorithms are prone to visual artifacts. Furthermore, our model can easily extend to deal with noise by combining the regression results on multiple in-place examples for robust estimation. The algorithm runs fast and is particularly useful for practical applications, where the input images typically contain diverse textures and they are potentially contaminated by noise or compression artifacts. --- paper_title: Image upsampling via imposed edge statistics paper_content: In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed. --- paper_title: Example-based super-resolution paper_content: We call methods for achieving high-resolution enlargements of pixel-based images super-resolution algorithms. 
Many applications in graphics or image processing could benefit from such resolution independence, including image-based rendering (IBR), texture mapping, enlarging consumer photographs, and converting NTSC video content to high-definition television. We built on another training-based super-resolution algorithm and developed a faster and simpler algorithm for one-pass super-resolution. Our algorithm requires only a nearest-neighbor search in the training set for a vector derived from each patch of local image data. This one-pass super-resolution algorithm is a step toward achieving resolution independence in image-based representations. We don't expect perfect resolution independence-even the polygon representation doesn't have that-but increasing the resolution independence of pixel-based representations is an important task for IBR. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Improving resolution by image registration paper_content: Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence. --- paper_title: Super-resolution through neighbor embedding paper_content: In this paper, we propose a novel method for solving single-image super-resolution problems. Given a low-resolution image as input, we recover its high-resolution counterpart using a set of training examples. While this formulation resembles other learning-based methods for super-resolution, our method has been inspired by recent manifold learning methods, particularly locally linear embedding (LLE). Specifically, small image patches in the low- and high-resolution images form manifolds with similar local geometry in two distinct feature spaces. As in LLE, local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space. Besides using the training image pairs to estimate the high-resolution embedding, we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping. Experiments show that our method is very flexible and gives good empirical results.
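The SRCNN abstract above describes a pre-upsampling pipeline: the low-resolution image is first interpolated to the target size, and a three-layer CNN then restores detail. The snippet below is a minimal PyTorch sketch of that pipeline; the 9-1-5 kernel sizes and 64/32 feature maps follow the commonly reported SRCNN configuration, but this is an illustration under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNNSketch(nn.Module):
    """Three-layer SR network operating on a bicubically pre-upsampled input."""
    def __init__(self, channels=1):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)      # patch extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)                           # non-linear mapping
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)  # reconstruction

    def forward(self, lr, scale=2):
        # Interpolate to the target resolution first, then let the CNN restore detail.
        x = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

lr = torch.rand(1, 1, 32, 32)   # toy low-resolution luminance patch
sr = SRCNNSketch()(lr)          # output shape: (1, 1, 64, 64)
```

Training such a model amounts to minimizing a pixel-wise loss, typically mean squared error, between the network output and the ground-truth high-resolution patch.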
--- paper_title: Statistics of natural images and models paper_content: Large calibrated datasets of 'random' natural images have recently become available. These make possible precise and intensive statistical studies of the local nature of images. We report results ranging from the simplest single pixel intensity to joint distribution of 3 Haar wavelet responses. Some of these statistics shed light on old issues such as the near scale-invariance of image statistics and some are entirely new. We fit mathematical models to some of the statistics and explain others in terms of local image features. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Learning Deep CNN Denoiser Prior for Image Restoration paper_content: Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications. --- paper_title: Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising paper_content: The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance.
In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing. --- paper_title: Deeply-Recursive Convolutional Network for Image Super-Resolution paper_content: We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Image Super-Resolution via Deep Recursive Residual Network paper_content: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks.
Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Learning Deep CNN Denoiser Prior for Image Restoration paper_content: Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. 
But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising paper_content: The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing. --- paper_title: Deep Networks for Image Super-Resolution with Sparse Prior paper_content: Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality. --- paper_title: Very Deep Convolutional Networks for Large-Scale Image Recognition paper_content: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. 
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. --- paper_title: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification paper_content: Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset. --- paper_title: Is the deconvolution layer the same as a convolutional layer? paper_content: In this note, we want to focus on aspects related to two questions most people asked us at CVPR about the network we presented. Firstly, What is the relationship between our proposed layer and the deconvolution layer? And secondly, why are convolutions in low-resolution (LR) space a better choice? These are key questions we tried to answer in the paper, but we were not able to go into as much depth and clarity as we would have liked in the space allowance. To better answer these questions in this note, we first discuss the relationships between the deconvolution layer in the forms of the transposed convolution layer, the sub-pixel convolutional layer and our efficient sub-pixel convolutional layer. We will refer to our efficient sub-pixel convolutional layer as a convolutional layer in LR space to distinguish it from the common sub-pixel convolutional layer. We will then show that for a fixed computational budget and complexity, a network with convolutions exclusively in LR space has more representation power at the same speed than a network that first upsamples the input in high resolution space. --- paper_title: Image Super-Resolution Via Sparse Representation paper_content: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. 
By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper_content: Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods. 
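The efficient sub-pixel convolution abstract above keeps every convolution in low-resolution space and only rearranges channels into spatial positions at the end. Below is a minimal PyTorch sketch of that idea built around nn.PixelShuffle; the layer widths and tanh activations loosely follow the commonly cited ESPCN configuration and are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SubPixelSR(nn.Module):
    """All feature extraction happens at low resolution; PixelShuffle upsamples at the end."""
    def __init__(self, channels=1, scale=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            # Produce scale**2 feature maps per output channel ...
            nn.Conv2d(32, channels * scale ** 2, kernel_size=3, padding=1),
        )
        # ... which PixelShuffle rearranges from (C*r^2, H, W) into (C, r*H, r*W).
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, lr):
        return self.shuffle(self.body(lr))

sr = SubPixelSR(scale=3)(torch.rand(1, 1, 32, 32))   # output shape: (1, 1, 96, 96)
```

Because no convolution ever runs on upsampled tensors, the cost of the feature extraction stage shrinks roughly with the square of the scale factor relative to pre-upsampling designs such as SRCNN, which is the efficiency argument the abstract makes.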
--- paper_title: FormResNet: Formatted Residual Learning for Image Restoration paper_content: In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding/vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a "residual formatting layer" to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Balanced Two-Stage Residual Networks for Image Super-Resolution paper_content: In this paper, balanced two-stage residual networks (BTSRN) are proposed for single image super-resolution. The deep residual design with constrained depth achieves the optimal balance between the accuracy and the speed for super-resolving images. The experiments show that the balanced two-stage structure, together with our lightweight two-layer PConv residual block design, achieves very promising results when considering both accuracy and speed. We evaluated our models on the New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution (NTIRE SR 2017). Our final model with only 10 residual blocks ranked among the best ones in terms of not only accuracy (6th among 20 final teams) but also speed (2nd among top 6 teams in terms of accuracy). The source code both for training and evaluation is available in https://github.com/ychfan/sr_ntire2017. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. 
We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections paper_content: In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, The skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods. --- paper_title: Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network paper_content: In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their great performances, deep learning methods cannot be easily applied to real-world applications due to the requirement of heavy computation. In this paper, we address this issue by proposing an accurate and lightweight deep network for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present variant models of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that even with much fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). 
In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. 
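Several abstracts above (ResNet, EDSR, CARN) build on the same unit: a residual block whose output is the input plus a learned correction, with EDSR additionally removing batch normalization from the block. The sketch below shows such a block together with an Adam optimizer as described in the Adam abstract; the residual scaling factor, learning rate, and L1 loss are typical choices assumed for illustration rather than settings taken from these papers.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv plus an identity skip, no batch norm."""
    def __init__(self, features=64, res_scale=0.1):
        super().__init__()
        self.conv1 = nn.Conv2d(features, features, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(features, features, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.res_scale = res_scale  # assumed stabilization factor for deep stacks

    def forward(self, x):
        # The block only has to learn a residual correction F(x); x is added back unchanged.
        return x + self.res_scale * self.conv2(self.relu(self.conv1(x)))

# Hypothetical training setup: a small stack of residual blocks optimized with Adam.
model = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3, padding=1),
                      *[ResidualBlock() for _ in range(4)],
                      nn.Conv2d(64, 3, kernel_size=3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
loss_fn = nn.L1Loss()  # a pixel-wise loss commonly used for SR training
```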
--- paper_title: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network paper_content: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method. --- paper_title: Image Super-Resolution Via Sparse Representation paper_content: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. 
In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. --- paper_title: Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network paper_content: In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their great performances, deep learning methods cannot be easily applied to real-world applications due to the requirement of heavy computation. In this paper, we address this issue by proposing an accurate and lightweight deep network for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present variant models of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that even with much fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods. --- paper_title: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics paper_content: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution. --- paper_title: FormResNet: Formatted Residual Learning for Image Restoration paper_content: In this paper, we propose a deep CNN to tackle the image restoration problem by learning the structured residual. Previous deep learning based methods directly learn the mapping from corrupted images to clean images, and may suffer from the gradient exploding/vanishing problems of deep neural networks. We propose to address the image restoration problem by learning the structured details and recovering the latent clean image together, from the shared information between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we propose to add a "residual formatting layer" to format the residual to structured information, which allows the network to converge faster and boosts the performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. 
Evaluations on public datasets show that the proposed method outperforms existing approaches quantitatively and qualitatively. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Balanced Two-Stage Residual Networks for Image Super-Resolution paper_content: In this paper, balanced two-stage residual networks (BTSRN) are proposed for single image super-resolution. The deep residual design with constrained depth achieves the optimal balance between the accuracy and the speed for super-resolving images. The experiments show that the balanced two-stage structure, together with our lightweight two-layer PConv residual block design, achieves very promising results when considering both accuracy and speed. We evaluated our models on the New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution (NTIRE SR 2017). Our final model with only 10 residual blocks ranked among the best ones in terms of not only accuracy (6th among 20 final teams) but also speed (2nd among top 6 teams in terms of accuracy). The source code both for training and evaluation is available in https://github.com/ychfan/sr_ntire2017. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. 
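To make the Adam update summarized above concrete, here is a minimal NumPy sketch of one optimization step; the learning rate, beta1, beta2 and epsilon values are the commonly quoted defaults, and the toy objective is purely illustrative rather than anything from the cited papers.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update built from the first- and second-moment estimates."""
    m = beta1 * m + (1.0 - beta1) * grad          # biased first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # biased second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
theta = np.array([3.0, -2.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t, lr=0.1)
# theta ends up near the origin (within roughly the step size).
```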
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: Image Super-Resolution Via Sparse Representation paper_content: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. --- paper_title: U-Net: Convolutional Networks for Biomedical Image Segmentation paper_content: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net . --- paper_title: Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections paper_content: In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. 
The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, capturing the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackle the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods. --- paper_title: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics paper_content: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low- and high-resolution training images. Each competition had about 100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution. --- paper_title: Deeply-Recursive Convolutional Network for Image Super-Resolution paper_content: We propose an image super-resolution (SR) method using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Despite these advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.
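Since several of the entries above rely on recursion with shared weights, the following is a minimal PyTorch-style sketch of the DRCN idea of reusing one convolutional layer many times; the channel width, recursion count and single-channel input are illustrative choices, and the recursive supervision described in the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn

class RecursiveSR(nn.Module):
    """DRCN-style recursion: one inference layer applied repeatedly with shared weights."""
    def __init__(self, channels=64, recursions=16):
        super().__init__()
        self.embed = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.recursive = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.reconstruct = nn.Conv2d(channels, 1, 3, padding=1)
        self.recursions = recursions

    def forward(self, x):                      # x: bicubic-upscaled LR image, (N, 1, H, W)
        feat = self.embed(x)
        for _ in range(self.recursions):       # the same weights are reused at every step,
            feat = self.recursive(feat)        # so depth grows without adding parameters
        return x + self.reconstruct(feat)      # skip connection from input to output

sr = RecursiveSR()(torch.randn(1, 1, 48, 48))  # output has the same spatial size as the input
```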
--- paper_title: Image Super-Resolution via Deep Recursive Residual Network paper_content: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17. --- paper_title: MemNet: A Persistent Memory Network for Image Restoration paper_content: Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the long-term dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at this https URL. --- paper_title: Deeply-Recursive Convolutional Network for Image Super-Resolution paper_content: We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/ vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin. --- paper_title: Deeply-Recursive Convolutional Network for Image Super-Resolution paper_content: We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/ vanishing gradients. 
To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find that increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method outperforms existing methods in accuracy, and the visual improvements in our results are easily noticeable. --- paper_title: Image Super-Resolution via Deep Recursive Residual Network paper_content: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, while recursive learning is used to control the number of model parameters as the depth increases. Extensive benchmark evaluation shows that DRRN significantly outperforms the state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17. --- paper_title: Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections paper_content: In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, capturing the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackle the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model.
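The VDSR entry above learns only the residual between the bicubic-interpolated input and the ground truth and stabilizes training with gradient clipping; a rough PyTorch-style sketch of that recipe follows, with the depth, width, learning rate and clipping threshold chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ResidualOnlySR(nn.Module):
    """Deep stack of 3x3 convolutions that predicts only the residual image."""
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, interpolated_lr):
        return interpolated_lr + self.body(interpolated_lr)  # add the predicted residual back

model = ResidualOnlySR()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.randn(4, 1, 41, 41), torch.randn(4, 1, 41, 41)       # upscaled LR patches and HR targets
loss = nn.MSELoss()(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.4)  # keeps the high learning rate stable
optimizer.step()
```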
Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: Image Super-Resolution Via Sparse Representation paper_content: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. 
Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: MemNet: A Persistent Memory Network for Image Restoration paper_content: Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the long-term dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at this https URL. --- paper_title: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper_content: Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. 
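The SRCNN entry above maps a bicubic-upscaled input through patch extraction, non-linear mapping and reconstruction stages; a minimal PyTorch-style sketch is given below, where the 9-1-5 filter sizes and 64/32 channel widths follow a commonly cited configuration and should be read as assumptions rather than the authors' only setting.

```python
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Three-stage mapping applied to a bicubic-upscaled single-channel input."""
    def __init__(self):
        super().__init__()
        self.patch_extraction = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # local feature extraction
        self.nonlinear_mapping = nn.Conv2d(64, 32, kernel_size=1)            # per-position feature mapping
        self.reconstruction = nn.Conv2d(32, 1, kernel_size=5, padding=2)     # HR image synthesis
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.patch_extraction(x))
        x = self.relu(self.nonlinear_mapping(x))
        return self.reconstruction(x)
```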
This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods. --- paper_title: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics paper_content: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties. --- paper_title: Deep Networks for Image Super-Resolution with Sparse Prior paper_content: Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality. --- paper_title: Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution paper_content: Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. 
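To illustrate the efficient sub-pixel convolution layer described in the entry above, the sketch below keeps all feature extraction in LR space and lets a final convolution emit r*r channels that are rearranged into the HR grid; the layer sizes are placeholders, not the published configuration.

```python
import torch
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    """Features stay in LR space; the last conv produces r*r channels that PixelShuffle turns into HR pixels."""
    def __init__(self, scale=4, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_subpixels = nn.Conv2d(channels, scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)        # (N, r*r, h, w) -> (N, 1, r*h, r*w)

    def forward(self, lr):
        return self.shuffle(self.to_subpixels(self.features(lr)))

hr = SubPixelUpsampler(scale=4)(torch.randn(1, 1, 32, 32))   # -> (1, 1, 128, 128)
```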
At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitates resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy. --- paper_title: Image Super-Resolution Via Sparse Representation paper_content: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. 
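The LapSRN entry above trains with a robust Charbonnier loss rather than plain L2; a one-function sketch of that penalty is shown below, with the epsilon value being a typical choice rather than a prescribed one.

```python
import torch

def charbonnier_loss(prediction, target, eps=1e-3):
    """Differentiable L1-like penalty: mean of sqrt(diff^2 + eps^2) over all pixels."""
    diff = prediction - target
    return torch.mean(torch.sqrt(diff * diff + eps * eps))

# usage: loss = charbonnier_loss(sr_output, hr_ground_truth)
```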
We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Deep Networks for Image Super-Resolution with Sparse Prior paper_content: Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality. --- paper_title: Image Super-Resolution Via Sparse Representation paper_content: This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework. 
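The sparse-representation entry above couples an LR and an HR dictionary so that a code inferred for an LR patch can be reused to synthesize the HR patch; the toy NumPy sketch below uses a few ISTA iterations as a generic sparse solver and random matrices as stand-ins for learned dictionaries, so every quantity here is an illustrative assumption.

```python
import numpy as np

def ista_sparse_code(y, D, lam=0.1, iters=100):
    """Approximately minimize 0.5*||y - D a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2                              # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a - (D.T @ (D @ a - y)) / L                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-thresholding
    return a

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((25, 128))     # LR patch dictionary (5x5 patches), placeholder
D_hr = rng.standard_normal((100, 128))    # HR patch dictionary (10x10 patches), placeholder
lr_patch = rng.standard_normal(25)

code = ista_sparse_code(lr_patch, D_lr)   # sparse code with respect to the LR dictionary
hr_patch = D_hr @ code                    # the same code applied to the HR dictionary
```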
--- paper_title: Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks paper_content: Convolutional neural networks have recently demonstrated high-quality reconstruction for single image super-resolution. However, existing methods often require a large number of network parameters and entail heavy computational loads at runtime for generating high-accuracy super-resolution results. In this paper, we propose the deep Laplacian Pyramid Super-Resolution Network for fast and accurate image super-resolution. The proposed network progressively reconstructs the sub-band residuals of high-resolution images at multiple pyramid levels. In contrast to existing methods that involve the bicubic interpolation for pre-processing (which results in large feature maps), the proposed method directly extracts features from the low-resolution input space and thereby entails low computational loads. We train the proposed network with deep supervision using the robust Charbonnier loss functions and achieve high-quality image reconstruction. Furthermore, we utilize the recursive layers to share parameters across as well as within pyramid levels, and thus drastically reduce the number of parameters. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of run-time and image quality. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics paper_content: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties. --- paper_title: Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution paper_content: Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. 
Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitates resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy. --- paper_title: Densely Connected Convolutional Networks paper_content: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL . --- paper_title: Image Super-Resolution Using Dense Skip Connections paper_content: Recent studies have shown that the performance of single-image super-resolution methods can be significantly boosted by using deep convolutional neural networks. In this study, we present a novel single-image super-resolution method by introducing dense skip connections in a very deep network. In the proposed network, the feature maps of each layer are propagated into all subsequent layers, providing an effective way to combine the low-level features and high-level features to boost the reconstruction performance. In addition, the dense skip connections in the network enable short paths to be built directly from the output to each layer, alleviating the vanishing-gradient problem of very deep networks. Moreover, deconvolution layers are integrated into the network to learn the upsampling filters and to speedup the reconstruction process. Further, the proposed method substantially reduces the number of parameters, enhancing the computational efficiency. We evaluate the proposed method using images from four benchmark datasets and set a new state of the art. --- paper_title: Densely Connected Convolutional Networks paper_content: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. 
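Dense connectivity, as used by the DenseNet and SRDenseNet entries above, feeds every layer the concatenation of all preceding feature maps; a compact PyTorch-style sketch of one dense block follows, with the growth rate and depth chosen arbitrarily.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each 3x3 conv sees the block input concatenated with all previous layer outputs."""
    def __init__(self, in_channels=64, growth=16, num_layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_channels + i * growth, growth, 3, padding=1) for i in range(num_layers)]
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(self.relu(conv(torch.cat(features, dim=1))))
        return torch.cat(features, dim=1)       # in_channels + num_layers * growth output channels

out = DenseBlock()(torch.randn(1, 64, 24, 24))  # -> (1, 64 + 4 * 16, 24, 24)
```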
In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL . --- paper_title: Image Super-Resolution Using Dense Skip Connections paper_content: Recent studies have shown that the performance of single-image super-resolution methods can be significantly boosted by using deep convolutional neural networks. In this study, we present a novel single-image super-resolution method by introducing dense skip connections in a very deep network. In the proposed network, the feature maps of each layer are propagated into all subsequent layers, providing an effective way to combine the low-level features and high-level features to boost the reconstruction performance. In addition, the dense skip connections in the network enable short paths to be built directly from the output to each layer, alleviating the vanishing-gradient problem of very deep networks. Moreover, deconvolution layers are integrated into the network to learn the upsampling filters and to speedup the reconstruction process. Further, the proposed method substantially reduces the number of parameters, enhancing the computational efficiency. We evaluate the proposed method using images from four benchmark datasets and set a new state of the art. --- paper_title: Residual Dense Network for Image Super-Resolution paper_content: A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. 
Extensive experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods. --- paper_title: Improving resolution by image registration paper_content: Abstract Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: Image Super Resolution Based on Fusing Multiple Convolution Neural Networks paper_content: In this paper, we focus on constructing an accurate super resolution system based on multiple Convolution Neural Networks (CNNs). Each individual CNN is trained separately with different network structure. A Context-wise Network Fusion (CNF) approach is proposed to integrate the outputs of individual networks by additional convolution layers. With fine-tuning the whole fused network, the accuracy is significantly improved compared to the individual networks. We also discuss other network fusion schemes, including Pixel-Wise network Fusion (PWF) and Progressive Network Fusion (PNF). The experimental results show that the CNF outperforms PWF and PNF. Using SRCNN as individual network, the CNF network achieves the state-of-the-art accuracy on benchmark image datasets. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). 
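The classical back-projection entry above iteratively pushes the error between the observed LR image and a simulated LR image back into the HR estimate; the NumPy toy below does this for a single frame with an assumed box-blur-plus-subsampling imaging model, whereas the original method registers and fuses multiple displaced frames.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur standing in for the camera point-spread function (an assumption)."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def simulate_lr(hr, s):
    """Forward imaging model: blur, then subsample by factor s."""
    return box_blur(hr)[::s, ::s]

def back_project(lr, s=2, iters=30, step=1.0):
    hr = np.kron(lr, np.ones((s, s)))                         # crude initial HR estimate
    for _ in range(iters):
        residual = lr - simulate_lr(hr, s)                    # error in the simulated observation
        hr = hr + step * np.kron(residual, np.ones((s, s)))   # back-project the error
    return hr

hr_estimate = back_project(np.random.rand(16, 16))            # 32x32 estimate consistent with the input
```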
In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Single Image Super-Resolution via Cascaded Multi-Scale Cross Network paper_content: The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. However, as the depth of network grows, the information flow is weakened and the training becomes harder and harder. On the other hand, most of the models adopt a single-stream structure with which integrating complementary contextual information under different receptive fields is difficult. To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks is cascaded to infer high resolution features in a coarse-to-fine manner. In each cascaded subnetwork, we stack multiple multi-scale cross (MSC) modules to fuse complementary multi-scale information in an efficient way as well as to improve information flow across the layers. Meanwhile, by introducing residual-features learning in each stage, the relative information between high-resolution and low-resolution features is fully utilized to further boost reconstruction performance. We train the proposed network with cascaded-supervision and then assemble the intermediate predictions of the cascade to achieve high quality image reconstruction. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art super-resolution methods. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. 
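The EDSR entry above credits much of its gain to stripping unnecessary modules out of the conventional residual block; in the released models this is commonly understood to mean removing batch normalization and scaling the residual branch, which the hedged sketch below assumes.

```python
import torch.nn as nn

class EDSRStyleResBlock(nn.Module):
    """Residual block without batch normalization; the residual branch is scaled before the addition."""
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)
```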
Adam is straightforward to implement, computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Fast and Accurate Single Image Super-Resolution via Information Distillation Network paper_content: Recently, deep convolutional neural networks (CNNs) have demonstrated remarkable progress on single image super-resolution. However, as the depth and width of the networks increase, CNN-based super-resolution methods have been faced with the challenges of computational complexity and memory consumption in practice. To address these issues, we propose a deep but compact convolutional network to directly reconstruct the high-resolution image from the original low-resolution image. In general, the proposed model consists of three parts, namely a feature extraction block, stacked information distillation blocks, and a reconstruction block. By combining an enhancement unit with a compression unit into a distillation block, the local long- and short-path features can be effectively extracted. Specifically, the proposed enhancement unit mixes together two different types of features, and the compression unit distills more useful information for the subsequent blocks. In addition, the proposed network has the advantage of fast execution due to the comparatively small number of filters per layer and the use of group convolution. Experimental results demonstrate that the proposed method is superior to the state-of-the-art methods, especially in terms of time performance. --- paper_title: MemNet: A Persistent Memory Network for Image Restoration paper_content: Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the long-term dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denoising, super-resolution and JPEG deblocking.
Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the art. Code is available at this https URL. --- paper_title: A Deep Convolutional Neural Network with Selection Units for Super-Resolution paper_content: Rectified linear units (ReLU) are known to be effective in many deep learning methods. Inspired by the linear-mapping technique used in other super-resolution (SR) methods, we reinterpret ReLU as a point-wise multiplication of an identity mapping and a switch, and finally present a novel nonlinear unit, called a selection unit (SU). While conventional ReLU has no direct control over which data is passed, the proposed SU optimizes this on-off switching control, and is therefore capable of handling nonlinearity better than ReLU in a more flexible way. Our proposed deep network with SUs, called SelNet, was ranked fifth in the NTIRE 2017 Challenge while having much lower computational complexity than the top-4 entries. Further experimental results show that the proposed SelNet outperforms our ReLU-only baseline (without SUs) as well as other state-of-the-art deep-learning-based SR methods. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find that increasing network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy, and the visual improvements in our results are easily noticeable. --- paper_title: A Deep Convolutional Neural Network with Selection Units for Super-Resolution paper_content: Rectified linear units (ReLU) are known to be effective in many deep learning methods. Inspired by the linear-mapping technique used in other super-resolution (SR) methods, we reinterpret ReLU as a point-wise multiplication of an identity mapping and a switch, and finally present a novel nonlinear unit, called a selection unit (SU). While conventional ReLU has no direct control over which data is passed, the proposed SU optimizes this on-off switching control, and is therefore capable of handling nonlinearity better than ReLU in a more flexible way. Our proposed deep network with SUs, called SelNet, was ranked fifth in the NTIRE 2017 Challenge while having much lower computational complexity than the top-4 entries. Further experimental results show that the proposed SelNet outperforms our ReLU-only baseline (without SUs) as well as other state-of-the-art deep-learning-based SR methods. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed.
The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable from low- and high-resolution training images. Each competition had ~100 registered participants and 20 teams competed in the final testing phase. These results gauge the state of the art in single image super-resolution. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find that increasing network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy, and the visual improvements in our results are easily noticeable. --- paper_title: Residual Dense Network for Image Super-Resolution paper_content: A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of the wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Extensive experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods. --- paper_title: Squeeze-and-Excitation Networks paper_content: Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, much existing work has shown the benefits of enhancing spatial encoding. In this work, we focus on channels and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels.
We demonstrate that by stacking these blocks together, we can construct SENet architectures that generalise extremely well across challenging datasets. Crucially, we find that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at slight computational cost. SENets formed the foundation of our ILSVRC 2017 classification submission which won first place and significantly reduced the top-5 error to 2.251%, achieving a 25% relative improvement over the winning entry of 2016. --- paper_title: Learning Deep CNN Denoiser Prior for Image Restoration paper_content: Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications. --- paper_title: MemNet: A Persistent Memory Network for Image Restoration paper_content: Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the long-term dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at this https URL. 
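The Squeeze-and-Excitation entry above describes channel recalibration in two steps: a global "squeeze" of each feature map followed by an "excitation" that produces per-channel weights. A minimal PyTorch sketch of such a block is given below; the class name, the reduction ratio of 16 (the paper's stated default) and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel recalibration in the spirit of Squeeze-and-Excitation."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pool -> (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # rescale the feature maps channel-wise

feats = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Because the block only adds two small fully connected layers per stage, it can be dropped into an existing architecture at slight computational cost, which matches the claim in the entry above.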
--- paper_title: Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution paper_content: Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitates resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. 
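The Adam entry above spells out the update rule: exponential moving averages of the gradient and its square, bias correction, then a rescaled step. A small NumPy sketch may help make it concrete; the function and variable names are mine, and the defaults (step size 1e-3, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8) follow the values suggested in the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and the squared gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction counteracts the zero initialisation of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update; the step is invariant to diagonal rescaling of the gradient.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimise f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0])
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(x)  # close to [0, 0]
```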
--- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: RAM: Residual Attention Module for Single Image Super-Resolution. paper_content: Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channel-wise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. 
We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. 
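The VDSR and EDSR entries above both rely on residual learning: the network predicts only the difference between an interpolated input and the high-resolution target, which is what makes very large learning rates workable when combined with gradient clipping. The following PyTorch sketch illustrates that idea under my own assumptions (single-channel input, 20 layers, standard norm clipping standing in for the paper's adjustable clipping); it is not the authors' code.

```python
import torch
import torch.nn as nn

class ResidualSRNet(nn.Module):
    """Deep conv stack that predicts only the residual of a bicubic-upscaled input."""
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x_up):
        return x_up + self.body(x_up)  # residual learning: output = input + predicted residual

model = ResidualSRNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.MSELoss()

def train_step(lr_upscaled, hr_target):
    opt.zero_grad()
    loss = loss_fn(model(lr_upscaled), hr_target)
    loss.backward()
    # Clipping keeps training stable at a large learning rate.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.4)
    opt.step()
    return loss.item()
```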
--- paper_title: Zero-Shot Super-Resolution Using Deep Internal Learning paper_content: Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce "Zero-Shot" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method. --- paper_title: Learning a Single Convolutional Super-Resolution Network for Multiple Degradations paper_content: Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to deal with multiple degradations. To address these issues, we propose a dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the proposed super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. 
The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Single Image Super-Resolution via Cascaded Multi-Scale Cross Network paper_content: The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. However, as the depth of network grows, the information flow is weakened and the training becomes harder and harder. On the other hand, most of the models adopt a single-stream structure with which integrating complementary contextual information under different receptive fields is difficult. To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks is cascaded to infer high resolution features in a coarse-to-fine manner. In each cascaded subnetwork, we stack multiple multi-scale cross (MSC) modules to fuse complementary multi-scale information in an efficient way as well as to improve information flow across the layers. Meanwhile, by introducing residual-features learning in each stage, the relative information between high-resolution and low-resolution features is fully utilized to further boost reconstruction performance. We train the proposed network with cascaded-supervision and then assemble the intermediate predictions of the cascade to achieve high quality image reconstruction. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art super-resolution methods. --- paper_title: Learning Deep CNN Denoiser Prior for Image Restoration paper_content: Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. 
Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications. --- paper_title: Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising paper_content: The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing. --- paper_title: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper_content: Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. 
By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods. --- paper_title: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics paper_content: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent. Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution. --- paper_title: Waterloo Exploration Database: New Challenges for Image Quality Assessment Models paper_content: The great content diversity of real-world digital images poses a grand challenge to image quality assessment (IQA) models, which are traditionally designed and validated on a handful of commonly used IQA databases with very limited content variation. To test the generalization capability and to facilitate the wide usage of IQA techniques in real-world applications, we establish a large-scale database named the Waterloo Exploration Database, which in its current state contains 4744 pristine natural images and 94 880 distorted images created from them. Instead of collecting the mean opinion score for each image via subjective testing, which is extremely difficult if not impossible, we present three alternative test criteria to evaluate the performance of IQA models, namely, the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test (P-test). We compare 20 well-known IQA models using the proposed criteria, which not only provide a stronger test in a more challenging testing environment for existing models, but also demonstrate the additional benefits of using the proposed database. For example, in the P-test, even for the best performing no-reference IQA model, more than 6 million failure cases against the model are “discovered” automatically out of over 1 billion test pairs. 
Furthermore, we discuss how the new database may be exploited using innovative approaches in the future, to reveal the weaknesses of existing IQA models, to provide insights on how to improve the models, and to shed light on how the next-generation IQA models may be developed. The database and codes are made publicly available at https://ece.uwaterloo.ca/~k29ma/exploration/. --- paper_title: EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis paper_content: Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR), which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks. --- paper_title: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network paper_content: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
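The EnhanceNet and SRGAN entries above both combine a feature-space content (perceptual) loss with an adversarial loss. A compact sketch of such a generator objective is shown below; `phi` stands for any fixed pretrained feature extractor (e.g. a truncated VGG network) and `discriminator` for a trained critic, both assumed to be supplied by the caller, and the 1e-3 weighting follows the SRGAN formulation. The stand-ins at the end are toys that only demonstrate the call shape.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def generator_loss(sr, hr, phi, discriminator, adv_weight=1e-3):
    """Content (feature-space) loss plus adversarial loss for the SR generator."""
    content = mse(phi(sr), phi(hr))                # perceptual loss on fixed features
    fake_logits = discriminator(sr)                # raw, pre-sigmoid scores
    adversarial = bce(fake_logits, torch.ones_like(fake_logits))  # reward fooling the critic
    return content + adv_weight * adversarial

# Toy stand-ins for phi and the discriminator, just to show the interfaces.
phi = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
disc = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
sr, hr = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(generator_loss(sr, hr, phi, disc).item())
```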
--- paper_title: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks paper_content: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. --- paper_title: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network paper_content: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method. --- paper_title: Fully convolutional networks for semantic segmentation paper_content: Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. 
We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image. --- paper_title: EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis paper_content: Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. ::: We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks paper_content: The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. 
Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL . --- paper_title: Perceptual Losses for Real-Time Style Transfer and Super-Resolution paper_content: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results. --- paper_title: Adam: A Method for Stochastic Optimization paper_content: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm. --- paper_title: Deep Residual Learning for Image Recognition paper_content: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. 
On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. --- paper_title: SRFeat: Single Image Super-Resolution with Feature Discrimination paper_content: Generative adversarial networks (GANs) have recently been adopted to single image super-resolution (SISR) and showed impressive results with realistically synthesized high-frequency textures. However, the results of such GAN-based approaches tend to include less meaningful high-frequency noise that is irrelevant to the input image. In this paper, we propose a novel GAN-based SISR method that overcomes the limitation and produces more realistic results by attaching an additional discriminator that works in the feature domain. Our additional discriminator encourages the generator to produce structural high-frequency features rather than noisy artifacts as it distinguishes synthetic and real images in terms of features. We also design a new generator that utilizes long-range skip connections so that information between distant layers can be transferred more effectively. Experiments show that our method achieves the state-of-the-art performance in terms of both PSNR and perceptual quality compared to recent GAN-based methods. --- paper_title: ImageNet: A large-scale hierarchical image database paper_content: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. 
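The Deep Residual Learning entry above reformulates layers as learning a residual function F(x) that is added back to the input through an identity shortcut. A minimal PyTorch version of such a basic block is sketched below; it keeps the channel count and spatial size fixed and omits the strided and projection-shortcut variants, so it is an illustration rather than the reference architecture.

```python
import torch
import torch.nn as nn

class BasicResBlock(nn.Module):
    """Basic residual block: the stacked layers learn F(x) and the output is F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut carries the input forward unchanged

x = torch.randn(1, 64, 56, 56)
print(BasicResBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```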
--- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution. --- paper_title: ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks paper_content: The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL . --- paper_title: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network paper_content: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. 
Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method. --- paper_title: The relativistic discriminator: a key element missing from standard GAN paper_content: In the standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for the a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimates the probability that the given real data is more realistic than randomly sampled fake data. We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state of the art by 400%), and 3) RaGANs are able to generate plausible high-resolution images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable from low- and high-resolution training images. Each competition had ~100 registered participants and 20 teams competed in the final testing phase. These results gauge the state of the art in single image super-resolution. --- paper_title: Manga109 dataset and creation of metadata paper_content: We have created Manga109, a dataset of a variety of 109 Japanese comic books publicly available for use for academic purposes. This dataset provides numerous comic images but lacks the annotations of elements in the comics that are necessary for use by machine learning algorithms or evaluation of methods. In this paper, we present our ongoing project to build metadata for Manga109.
We first define the metadata in terms of frames, texts and characters. We then present our web-based software for efficiently creating the ground truth for these images. In addition, we provide a guideline for the annotation with the intent of improving the quality of the metadata. --- paper_title: Low-Complexity Single-Image Super-Resolution based on Nonnegative Neighbor Embedding paper_content: This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low resolution (LR) and high resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation), (ii) an accurate estimation of the patches by their nearest neighbors (weight computation), and (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time. --- paper_title: Single image super-resolution from transformed self-exemplars paper_content: Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms. --- paper_title: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics paper_content: This paper presents a database containing 'ground truth' segmentations produced by humans for images of a wide variety of natural scenes. We define an error measure which quantifies the consistency between segmentations of differing granularities and find that different human segmentations of the same image are highly consistent.
Use of this dataset is demonstrated in two applications: (1) evaluating the performance of segmentation algorithms and (2) measuring probability distributions associated with Gestalt grouping factors as well as statistics of image region properties. --- paper_title: NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results paper_content: This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∼100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution. --- paper_title: ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks paper_content: The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL . --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge.
--- paper_title: Image quality assessment: from error visibility to structural similarity paper_content: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. --- paper_title: Learning a Deep Convolutional Network for Image Super-Resolution paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. --- paper_title: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network paper_content: Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method.
Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Image Super-Resolution via Deep Recursive Residual Network paper_content: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers.
Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network paper_content: In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their great performances, deep learning methods cannot be easily applied to real-world applications due to the requirement of heavy computation. In this paper, we address this issue by proposing an accurate and lightweight deep network for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present variant models of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that even with much fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods. --- paper_title: Deeply-Recursive Convolutional Network for Image Super-Resolution paper_content: We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin. --- paper_title: Accurate Image Super-Resolution Using Very Deep Convolutional Networks paper_content: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable. --- paper_title: ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks paper_content: The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN).
In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL . --- paper_title: Enhanced Deep Residual Networks for Single Image Super-Resolution paper_content: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge. --- paper_title: Residual Dense Network for Image Super-Resolution paper_content: A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Extensive experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods. --- paper_title: Image Super-Resolution via Deep Recursive Residual Network paper_content: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. 
This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17. --- paper_title: Image Super-Resolution Using Deep Convolutional Networks paper_content: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. --- paper_title: Deep Image Prior paper_content: Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity (Code and supplementary material are available at https://dmitryulyanov.github.io/deep_image_prior). --- paper_title: Learning a Single Convolutional Super-Resolution Network for Multiple Degradations paper_content: Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to deal with multiple degradations. 
To address these issues, we propose a dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the proposed super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications. --- paper_title: Multiscale structural similarity for image quality assessment paper_content: The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method. --- paper_title: The Unreasonable Effectiveness of Deep Features as a Perceptual Metric paper_content: While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations. --- paper_title: Image quality assessment: from error visibility to structural similarity paper_content: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information.
As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. --- paper_title: Unsupervised Image Super-Resolution Using Cycle-in-Cycle Generative Adversarial Networks paper_content: We consider the single image super-resolution problem in a more general case that the low-/high-resolution pairs and the down-sampling process are unavailable. Different from traditional super-resolution formulation, the low-resolution input is further degraded by noises and blurring. This complicated setting makes supervised learning and accurate kernel estimation impossible. To solve this problem, we resort to unsupervised learning without paired data, inspired by the recent successful image-to-image translation applications. With generative adversarial networks (GAN) as the basic component, we propose a Cycle-in-Cycle network structure to tackle the problem within three steps. First, the noisy and blurry input is mapped to a noise-free low-resolution space. Then the intermediate image is up-sampled with a pre-trained deep model. Finally, we fine-tune the two modules in an end-to-end manner to get the high-resolution output. Experiments on NTIRE2018 datasets demonstrate that the proposed unsupervised method achieves comparable results as the state-of-the-art supervised models. --- paper_title: Learning Hybrid Sparsity Prior for Image Restoration: Where Deep Learning Meets Sparse Coding paper_content: State-of-the-art approaches toward image restoration can be classified into model-based and learning-based. The former - best represented by sparse coding techniques - strive to exploit intrinsic prior knowledge about the unknown high-resolution images; while the latter - popularized by recently developed deep learning techniques - leverage external image prior from some training dataset. It is natural to explore their middle ground and pursue a hybrid image prior capable of achieving the best in both worlds. In this paper, we propose a systematic approach of achieving this goal called Structured Analysis Sparse Coding (SASC). Specifically, a structured sparse prior is learned from extrinsic training data via a deep convolutional neural network (in a similar way to previous learning-based approaches); meantime another structured sparse prior is internally estimated from the input observation image (similar to previous model-based approaches). Two structured sparse priors will then be combined to produce a hybrid prior incorporating the knowledge from both domains. To manage the computational complexity, we have developed a novel framework of implementing hybrid structured sparse coding processes by deep convolutional neural networks. Experimental results show that the proposed hybrid image restoration method performs comparably with and often better than the current state-of-the-art techniques. --- paper_title: 2018 PIRM Challenge on Perceptual Image Super-resolution paper_content: This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018.
In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, therefore enabling perceptual-driven methods to compete alongside algorithms that target PSNR maximization. Twenty-one participating teams introduced algorithms which well-improved upon the existing state-of-the-art methods in perceptual SR, as confirmed by a human opinion study. We also analyze popular image quality measures and draw conclusions regarding which of them correlates best with human opinion scores. We conclude with an analysis of the current trends in perceptual SR, as reflected from the leading submissions. --- paper_title: PieAPP: Perceptual Image-Error Assessment through Pairwise Preference paper_content: The ability to estimate the perceptual error between images is an important problem in computer vision with many applications. Although it has been studied extensively, however, no method currently exists that can robustly predict visual differences like humans. Some previous approaches used hand-coded models, but they fail to model the complexity of the human visual system. Others used machine learning to train models on human-labeled datasets, but creating large, high-quality datasets is difficult because people are unable to assign consistent error labels to distorted images. In this paper, we present a new learning-based method that is the first to predict perceptual image error like human observers. Since it is much easier for people to compare two given images and identify the one more similar to a reference than to assign quality scores to each, we propose a new, large-scale dataset labeled with the probability that humans will prefer one image over another. We then train a deep-learning model using a novel, pairwise-learning framework to predict the preference of one distorted image over the other. Our key observation is that our trained network can then be used separately with only one distorted image and a reference to predict its perceptual error, without ever being trained on explicit human perceptual-error labels. The perceptual error estimated by our new metric, PieAPP, is well-correlated with human opinion. Furthermore, it significantly outperforms existing algorithms, beating the state-of-the-art by almost 3x on our test set in terms of binary error rate, while also generalizing to new kinds of distortions, unlike previous learning-based methods. --- paper_title: Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution paper_content: Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitates resource-aware applications. 
Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy. --- paper_title: Zero-Shot Super-Resolution Using Deep Internal Learning paper_content: Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce "Zero-Shot" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method. --- paper_title: To learn image super-resolution, use a GAN to learn how to do image degradation first paper_content: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling). We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can now be used to effectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories. ---
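Illustration (a minimal sketch, not the reference code of any paper cited above): two recurring ingredients in the entries above are sub-pixel (pixel-shuffle) upsampling performed in low-resolution feature space, as described in the efficient sub-pixel convolution entry, and the PSNR distortion measure used throughout the benchmark reports. The sketch below shows both in PyTorch; the class and function names, layer widths, x4 scale factor, and toy tensors are assumptions made for illustration only.

```python
# Minimal sketch, assuming PyTorch is available; hyperparameters are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubPixelUpsampler(nn.Module):
    """ESPCN-style tail: convolutions in low-resolution space, then a
    pixel-shuffle that rearranges channels into an r x r spatial upscaling."""
    def __init__(self, in_channels=3, features=64, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # produce scale^2 * in_channels maps, then reshuffle them spatially
            nn.Conv2d(features, in_channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        return self.body(lr)

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = F.mse_loss(sr, hr)
    return 10.0 * torch.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    lr = torch.rand(1, 3, 32, 32)     # toy low-resolution input
    hr = torch.rand(1, 3, 128, 128)   # toy high-resolution target
    sr = SubPixelUpsampler()(lr)      # output shape: (1, 3, 128, 128)
    print(sr.shape, float(psnr(sr.clamp(0, 1), hr)))
```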
Title: A Deep Journey into Super-resolution: A survey Section 1: INTRODUCTION Description 1: Write about the increasing attention to image super-resolution (SR) in recent years, the importance of high-resolution images, and the main focus of the survey on deep learning algorithms for single image super-resolution (SISR). Section 2: BACKGROUND Description 2: Explain the foundational concepts of super-resolution, degradation processes, and categorization of SR methods based on image priors and techniques. Section 3: SINGLE IMAGE SUPER-RESOLUTION Description 3: Provide an overview of deep learning-based techniques for SISR, categorizing them into various network designs and architectures. Section 3.1: Linear networks Description 3.1: Discuss linear network architectures for SR, subdividing them into early upsampling and late upsampling designs. Section 3.2: Residual Networks Description 3.2: Detail the concept of residual networks for SR, providing examples of single-stage and multi-stage residual networks. Section 3.3: Recursive networks Description 3.3: Explain recursive network designs and their application in SR tasks. Section 3.4: Progressive reconstruction designs Description 3.4: Introduce progressive reconstruction methods for handling large scaling factors in super-resolution. Section 3.5: Densely Connected Networks Description 3.5: Explore the application of densely connected network architectures inspired by DenseNet for SR. Section 3.6: Multi-branch designs Description 3.6: Explain multi-branch network designs and their usage in capturing features at multiple context scales for SR. Section 3.7: Attention-based Networks Description 3.7: Discuss attention-based models in SR and how they selectively attend to more important features. Section 3.8: Multiple-degradation handling networks Description 3.8: Describe networks designed to handle multiple degradations simultaneously, moving beyond the common assumption of bicubic degradation. Section 3.9: GAN Models Description 3.9: Explain the role of Generative Adversarial Networks (GAN) in SR and provide examples of GAN-based SR models. Section 4: Dataset Description 4: Compare and contrast the various publicly available benchmark datasets used for evaluating state-of-the-art SR algorithms. Section 5: Quantitative Measures Description 5: Evaluate SR algorithms using common quantitative measures such as PSNR and SSIM, discussing the merits and limitations of these metrics. Section 6: Choice of network loss Description 6: Discuss the different loss functions employed in SR networks and the recent trend toward mean absolute error (L1) over mean square error (L2). Section 7: Network depth Description 7: Investigate the impact of network depth on SR performance and the trend towards deeper networks for improved results. Section 8: Skip Connections Description 8: Provide an overview of the different types of skip connections used in SR networks and their influence on the performance improvements. Section 9: FUTURE DIRECTIONS/OPEN PROBLEMS Description 9: Outline the open research problems and future research directions for SR, focusing on incorporating priors, improving objective functions, handling multiple degradations, and exploring unsupervised SR. Section 10: Real vs Artificial Degradation Description 10: Discuss the challenges of generalizing SR models trained on synthetic degradations to real-world scenarios and potential solutions. 
Section 11: CONCLUSION Description 11: Summarize the main points covered in the survey, the progress in deep learning-based SR methods, and the ongoing challenges that need to be addressed.
A survey of hidden convex optimization
5
--- paper_title: On the copositive representation of binary and continuous nonconvex quadratic programs paper_content: In this paper, we model any nonconvex quadratic program having a mix of binary and continuous variables as a linear program over the dual of the cone of copositive matrices. This result can be viewed as an extension of earlier separate results, which have established the copositive representation of a small collection of NP-hard problems. A simplification, which reduces the dimension of the linear conic program, and an extension to complementarity constraints are established, and computational issues are discussed. --- paper_title: NP-hardness of Deciding Convexity of Quartic Polynomials and Related Problems paper_content: We show that unless P = NP, there exists no polynomial time (or even pseudo-polynomial time) algorithm that can decide whether a multivariate polynomial of degree four (or higher even degree) is globally convex. This solves a problem that has been open since 1992 when N. Z. Shor asked for the complexity of deciding convexity for quartic polynomials. We also prove that deciding strict convexity, strong convexity, quasiconvexity, and pseudoconvexity of polynomials of even degree four or higher is strongly NP-hard. By contrast, we show that quasiconvexity and pseudoconvexity of odd degree polynomials can be decided in polynomial time. --- paper_title: On The Reduction of Duality Gap in Box Constrained Nonconvex Quadratic Program paper_content: In this paper, we investigate the reduction of the duality gap between box constrained nonconvex quadratic programming and its semidefinite programming (SDP) relaxation (or Lagrangian dual). Characterizing the zero duality gap by a set of saddle-point-type conditions, we propose a parameterized distance measure δ(θ) between a polyhedral set C and a perturbed nonconvex set Λ(θ) to measure the dissatisfaction degree of the optimality conditions for zero duality gap. An underestimation of the duality gap is then derived which leads to a reduction of the duality gap proportional to δ^2(θ*) for the identified best parameter θ*. This reduction of duality gap can be extended to the cases with both box and linear equality constraints. We demonstrate that the computation of δ(θ*) can be reduced to the cell enumeration of hyperplane arrangement in discrete geometry. In particular, we show that the reduction of duality gap can be achieved in polynomial time for a fixed degeneracy degree of the modified ... --- paper_title: Strong duality in optimization: shifted power reformulation paper_content: For a general class of non-convex optimization problems, a class of power reformulation closes the duality gap between the primal problem and its Lagrangian dual, when the order of the power is sufficiently large. In this paper, we first estimate a lower bound of the power above which the attainment of the zero duality gap can be ensured. After introducing a suitable shifting, we further show, surprisingly, that order three is always sufficient to guarantee the zero duality gap. We then extend the proposed shifted power reformulation to discrete optimization. --- paper_title: Local saddle point and a class of convexification methods for nonconvex optimization problems paper_content: A class of general transformation methods are proposed to convert a nonconvex optimization problem to another equivalent problem.
It is shown that under certain assumptions the existence of a local saddle point or local convexity of the Lagrangian function of the equivalent problem (EP) can be guaranteed. Numerical experiments are given to demonstrate the main results geometrically. --- paper_title: Zero duality gap for a class of nonconvex optimization problems paper_content: By an equivalent transformation using the pth power of the objective function and the constraint, a saddle point can be generated for a general class of nonconvex optimization problems. Zero duality gap is thus guaranteed when the primal-dual method is applied to the constructed equivalent form. --- paper_title: Towards Strong Duality in Integer Programming paper_content: We consider in this paper the Lagrangian dual method for solving general integer programming. New properties of Lagrangian duality are derived by means of perturbation analysis. In particular, a necessary and sufficient condition for a primal optimal solution to be generated by the Lagrangian relaxation is obtained. The solution properties of Lagrangian relaxation problem are studied systematically. To overcome the difficulties caused by duality gap between the primal problem and the dual problem, we introduce an equivalent reformulation for the primal problem via applying a pth power to the constraints. We prove that this reformulation possesses an asymptotic strong duality property. Primal feasibility and primal optimality of the Lagrangian relaxation problems can be achieved in this reformulation when the parameter p is larger than a threshold value, thus ensuring the existence of an optimal primal-dual pair. We further show that duality gap for this partial pth power reformulation is a strictly decreasing function of p in the case of a single constraint. --- paper_title: Hidden Convex Minimization paper_content: A class of nonconvex minimization problems can be classified as hidden convex minimization problems. A nonconvex minimization problem is called a hidden convex minimization problem if there exists an equivalent transformation such that the equivalent transformation of it is a convex minimization problem. Sufficient conditions that are independent of transformations are derived in this paper for identifying such a class of seemingly nonconvex minimization problems that are equivalent to convex minimization problems. Thus, a global optimality can be achieved for this class of hidden convex optimization problems by using local search methods. The results presented in this paper extend the reach of convex minimization by identifying its equivalent with a nonconvex representation. --- paper_title: Peeling Off a Nonconvex Cover of an Actual Convex Problem: Hidden Convexity paper_content: Convexity is, without a doubt, one of the most desirable features in optimization. Many optimization problems that are nonconvex in their original settings may become convex after performing certain equivalent transformations. This paper studies the conditions for such hidden convexity. More specifically, some transformation-independent sufficient conditions have been derived for identifying hidden convexity. The derived sufficient conditions are readily verifiable for quadratic optimization problems. The global minimizer of a hidden convex programming problem can be identified using a local search algorithm.
--- paper_title: NP-hardness of Linear Multiplicative Programming and Related Problems paper_content: The linear multiplicative programming problem minimizes a product of two (positive) variables subject to linear inequality constraints. In this paper, we show NP-hardness of linear multiplicative programming problems and related problems. --- paper_title: MULTIPLICATIVE PROGRAMMING PROBLEMS paper_content: This chapter reviews recent algorithmic developments in multiplicative programming. The multiplicative programming problem is a class of minimization problems containing a product of several convex functions either in its objective or in its constraints. It has various practical applications in such areas as microeconomics, geometric optimization, multicriteria optimization and so on. A product of convex functions is in general not (quasi)convex, and hence the problem can have multiple local minima. However, some types of multiplicative problems can be solved in a practical sense. The types to be discussed in this chapter are minimization of a product of p convex functions over a convex set, minimization of a sum of p convex multiplicative functions, and minimization of a convex function subject to a constraint on a product of p convex functions. If p is less than four or five, it is shown that parametric simplex algorithms or global optimization algorithms work very well for these problems. --- paper_title: NP-hardness of Linear Multiplicative Programming and Related Problems paper_content: The linear multiplicative programming problem minimizes a product of two (positive) variables subject to linear inequality constraints. In this paper, we show NP-hardness of linear multiplicative programming problems and related problems. --- paper_title: Minimizing the sum of a linear and a linear fractional function applying conic quadratic representation: continuous and discrete problems paper_content: This paper tries to minimize the sum of a linear and a linear fractional function over a closed convex set defined by some linear and conic quadratic constraints. At first, we represent some necessary and sufficient conditions for the pseudoconvexity of the problem. For each of the conditions, under some reasonable assumptions, an appropriate second-order cone programming (SOCP) reformulation of the problem is stated and a new applicable solution procedure is proposed. Efficiency of the proposed reformulations is demonstrated by numerical experiments. Secondly, we limit our attention to binary variables and derive a sufficient condition for SOCP representability. Using the experimental results on random instances, we show that the proposed conic reformulation is more efficient in comparison with the well-known linearization technique and it produces more eligible cuts for the branch and bound algorithm. --- paper_title: Fractional Programming. I, Duality paper_content: This paper, which is presented in two parts, is a contribution to the theory of fractional programming, i.e., maximization of quotients subject to constraints. In Part I a duality theory for linear and concave-convex fractional programs is developed and related to recent results by Bector, Craven-Mond, Jagannathan, Sharma-Swarup, et al. Basic duality theorems of linear, quadratic and convex programming are extended. In Part II Dinkelbach's algorithm solving fractional programs is considered. The rate of convergence as well as a priori and a posteriori error estimates are determined. 
In view of these results the stopping rule of the algorithm is changed. Also the starting rule is modified using duality as introduced in Part I. Furthermore a second algorithm is proposed. In contrast to Dinkelbach's procedure the rate of convergence is still controllable. Error estimates are obtained too. --- paper_title: A sequential method for a class of pseudoconcave fractional problems paper_content: The aim of the paper is to maximize a pseudoconcave function which is the sum of a linear and a linear fractional function subject to linear constraints. Theoretical properties of the problem are first established and then a sequential method based on a simplex-like procedure is suggested. --- paper_title: A convex optimization approach for minimizing the ratio of indefinite quadratic functions over an ellipsoid paper_content: We consider the nonconvex problem (RQ) of minimizing the ratio of two nonconvex quadratic functions over a possibly degenerate ellipsoid. This formulation is motivated by the so-called regularized total least squares problem (RTLS), which is a special case of the problem’s class we study. We prove that under a certain mild assumption on the problem’s data, problem (RQ) admits an exact semidefinite programming relaxation. We then study a simple iterative procedure which is proven to converge superlinearly to a global solution of (RQ) and show that the dependency of the number of iterations on the optimality tolerance $\varepsilon$ grows as $O(\sqrt{\ln \varepsilon^{-1}})$. --- paper_title: Finding a global optimal solution for a quadratically constrained fractional quadratic problem with applications to the regularized total least squares paper_content: We consider the problem of minimizing a fractional quadratic problem involving the ratio of two indefinite quadratic functions, subject to a two-sided quadratic form constraint. This formulation is motivated by the so-called regularized total least squares (RTLS) problem. A key difficulty with this problem is its nonconvexity, and all current known methods to solve it are guaranteed only to converge to a point satisfying first order necessary optimality conditions. We prove that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter. As a result, we derive an efficient algorithm that produces an $\epsilon$-global optimal solution in a computational effort of $O(n^3 \log \epsilon^{-1})$. The algorithm is tested on problems arising from the inverse Laplace transform and image deblurring. Comparison to other well-known RTLS solvers illustrates the attractiveness of our new method. --- paper_title: Parametric Lagrangian dual for the binary quadratic programming problem paper_content: Based on a difference between convex decomposition of the Lagrangian function, we propose and study a family of parametric Lagrangian dual for the binary quadratic program. Then we show they improve several lower bounds in recent literature. --- paper_title: A new semidefinite programming relaxation scheme for a class of quadratic matrix problems paper_content: We consider a special class of quadratic matrix optimization problems which often arise in applications. By exploiting the special structure of these problems, we derive a new semidefinite relaxation which, under mild assumptions, is proven to be tight for a larger number of constraints than could be achieved via a direct approach.
We show the potential usefulness of these results when applied to robust least-squares and sphere-packing problems. --- paper_title: An SDP approach for quadratic fractional problems with a two-sided quadratic constraint paper_content: We consider a fractional programming problem (P) which minimizes a ratio of quadratic functions subject to a two-sided quadratic constraint. On one hand, (P) can be solved under some technical conditions by the Dinkelbach iterative method [W. Dinkelbach, On nonlinear fractional programming, Manag. Sci. 13 (1967), pp. 492–498] which has dominated the development of the area for nearly half a century. On the other hand, some special case of (P), typically the one in Beck and Teboulle [A convex optimization approach for minimizing the ratio of indefinite quadratic functions over an ellipsoid, Math. Program. Ser. A 118 (2009), pp. 13–35], could be directly solved via an exact semi-definite reformulation, rather than iteratively. In this paper, by a recent breakthrough of Xia et al. [S-Lemma with equality and its applications. Available at http://arxiv.org/abs/1403.2816] on the S-lemma with equality, we propose to analyse (P) with three cases and show that each of them admits an exact SDP relaxation. As a resu... --- paper_title: On Lagrangian duality gap of quadratic fractional programming with a two-sided quadratic constraint paper_content: Strong Lagrangian duality holds for the quadratic programming with a two-sided quadratic constraint. In this paper, we show that the two-sided quadratic constrained quadratic fractional programming, if well scaled, also has zero Lagrangian duality gap. However, this is not always true without scaling. For a special case, the identical regularized total least squares problem, we establish the necessary and sufficient condition under which the Lagrangian duality gap is positive. --- paper_title: Hidden convexity in some nonconvex quadratically constrained quadratic programming paper_content: We consider the problem of minimizing an indefinite quadratic objective function subject to two-sided indefinite quadratic constraints. Under a suitable simultaneous diagonalization assumption (which trivially holds for trust region type problems), we prove that the original problem is equivalent to a convex minimization problem with simple linear constraints. We then consider a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints, which is also shown to be equivalent to a minimax convex problem. In both cases we derive the explicit nonlinear transformations which allow for recovering the optimal solution of the nonconvex problems via their equivalent convex counterparts. Special cases and applications are also discussed. We outline interior-point polynomial-time algorithms for the solution of the equivalent convex programs. --- paper_title: Local Minimizers of Quadratic Functions on Euclidean Balls and Spheres paper_content: In this paper a characterization of the local-nonglobal minimizers of a quadratic function defined on a Euclidean ball or a sphere is given. It is proven that there exists at most one local-nonglobal minimizer and that the Lagrange multiplier that corresponds to this minimizer is the largest solution of a nonlinear scalar equation. An algorithm is proposed for computing the local-nonglobal minimizer.
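Illustration (a minimal sketch, not taken from any cited paper): the Dinkelbach iterative method referred to in the fractional-programming entries above replaces the minimization of a ratio by a sequence of parametric, non-fractional subproblems and updates the ratio estimate with the value attained at each subproblem solution. The sketch below applies the idea to a toy linear-fractional program, where each subproblem is a linear program solved with SciPy; the data, variable names, and tolerance are illustrative assumptions.

```python
# Dinkelbach-type iteration for min (c^T x + c0) / (d^T x + d0) over {A x <= b, x >= 0},
# assuming the denominator is positive on the feasible set.  Toy data only.
import numpy as np
from scipy.optimize import linprog

c, c0 = np.array([2.0, -1.0]), 3.0      # numerator   c^T x + c0
d, d0 = np.array([1.0,  1.0]), 2.0      # denominator d^T x + d0
A = np.array([[1.0, 2.0], [3.0, 1.0]])  # feasible set: A x <= b, x >= 0
b = np.array([4.0, 6.0])

x = np.zeros(2)                          # x = 0 is feasible for this toy data
lam = (c @ x + c0) / (d @ x + d0)        # start from the ratio at a feasible point
for _ in range(50):
    # parametric subproblem: minimize (c - lam*d)^T x + (c0 - lam*d0) over the polyhedron
    res = linprog(c - lam * d, A_ub=A, b_ub=b)  # default bounds already impose x >= 0
    x = res.x
    F = (c @ x + c0) - lam * (d @ x + d0)       # Dinkelbach merit value F(lam)
    if abs(F) < 1e-9:                           # F(lam*) = 0 at the optimal ratio
        break
    lam = (c @ x + c0) / (d @ x + d0)           # update with the ratio attained at x

print("approximate optimal ratio:", lam, "attained at x =", x)
```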
--- paper_title: Quadratically constrained least squares and quadratic problems paper_content: We consider the following problem: Compute a vector $x$ such that $\|Ax-b\|_2=\min$, subject to the constraint $\|x\|_2=\alpha$. A new approach to this problem based on Gauss quadrature is given. The method is especially well suited when the dimensions of $A$ are large and the matrix is sparse. It is also possible to extend this technique to a constrained quadratic form: For a symmetric matrix $A$ we consider the minimization of $x^TAx-2b^Tx$ subject to the constraint $\|x\|_2=\alpha$. Some numerical examples are given. --- paper_title: Second-order-cone constraints for extended trust-region subproblems paper_content: The classical trust-region subproblem (TRS) minimizes a nonconvex quadratic objective over the unit ball. In this paper, we consider extensions of TRS having extra constraints. When two parallel cuts are added to TRS, we show that the resulting nonconvex problem has an exact representation as a semidefinite program with additional linear and second-order-cone constraints. For the case where an additional ellipsoidal constraint is added to TRS, resulting in the "two trust-region subproblem" (TTRS), we provide a new relaxation including second-order-cone constraints that strengthens the usual SDP relaxation. --- paper_title: Minimizing the sum of a linear and a linear fractional function applying conic quadratic representation: continuous and discrete problems paper_content: This paper tries to minimize the sum of a linear and a linear fractional function over a closed convex set defined by some linear and conic quadratic constraints. At first, we represent some necessary and sufficient conditions for the pseudoconvexity of the problem. For each of the conditions, under some reasonable assumptions, an appropriate second-order cone programming (SOCP) reformulation of the problem is stated and a new applicable solution procedure is proposed. Efficiency of the proposed reformulations is demonstrated by numerical experiments. Secondly, we limit our attention to binary variables and derive a sufficient condition for SOCP representability. Using the experimental results on random instances, we show that the proposed conic reformulation is more efficient in comparison with the well-known linearization technique and it produces more eligible cuts for the branch and bound algorithm. --- paper_title: Quadratic Assignment Problems paper_content: This paper surveys quadratic assignment problems (QAP). At first several applications of this problem class are described and mathematical formulations of QAPs are given. Then some exact solution methods and good heuristics are outlined. Their computational behaviour is illustrated by numerical results. Further recent results on the asymptotic probabilistic behaviour of QAPs are outlined. --- paper_title: On Equivalence of Semidefinite Relaxations for Quadratic Matrix Programming paper_content: We analyze two popular semidefinite programming relaxations for quadratically constrained quadratic programs with matrix variables. These relaxations are based on vector lifting and on matrix lifting; they are of different size and expense. We prove, under mild assumptions, that these two relaxations provide equivalent bounds. Thus, our results provide a theoretical guideline for how to choose a less expensive semidefinite programming relaxation and still obtain a strong bound.
The main technique used to show the equivalence and that allows for the simplified constraints is the recognition of a class of nonchordal sparse patterns that admit a smaller representation of the positive semidefinite constraint. --- paper_title: Strong Duality for a Trust-Region Type Relaxation of the Quadratic Assignment Problem paper_content: Abstract Lagrangian duality underlies many efficient algorithms for convex minimization problems. A key ingredient is strong duality. Lagrangian relaxation also provides lower bounds for non-convex problems, where the quality of the lower bound depends on the duality gap. Quadratically constrained quadratic programs (QQPs) provide important examples of non-convex programs. For the simple case of one quadratic constraint (the trust-region subproblem) strong duality holds. In addition, necessary and sufficient (strengthened) second-order optimality conditions exist. However, these duality results already fail for the two trust-region sub-problem. Surprisingly, there are classes of more complex, non-convex QQPs where strong duality holds. One example is the special case of orthogonality constraints, which arise naturally in relaxations for the quadratic assignment problem (QAP). In this paper we show that strong duality also holds for a relaxation of QAP where the orthogonality constraint is replaced by a semidefinite inequality constraint. Using this strong duality result, and semidefinite duality, we develop new trust-region type necessary and sufficient optimality conditions for these problems. Our proof of strong duality introduces and uses a generalization of the Hoffman–Wielandt inequality. --- paper_title: Global optimization of a class of nonconvex quadratically constrained quadratic programming problems paper_content: In this paper we study a class of nonconvex quadratically constrained quadratic programming problems generalized from relaxations of quadratic assignment problems. We show that each problem is polynomially solved. Strong duality holds if a redundant constraint is introduced. As an application, a new lower bound is proposed for the quadratic assignment problem. --- paper_title: S-Lemma with Equality and Its Applications paper_content: Let $f(x)=x^TAx+2a^Tx+c$ and $h(x)=x^TBx+2b^Tx+d$ be two quadratic functions having symmetric matrices $A$ and $B$. The S-lemma with equality asks when the unsolvability of the system $f(x)<0, h(x)=0$ implies the existence of a real number $\mu$ such that $f(x) + \mu h(x)\ge 0, ~\forall x\in \mathbb{R}^n$. The problem is much harder than the inequality version which asserts that, under the Slater condition, $f(x)<0, h(x)\le 0$ is unsolvable if and only if $f(x) + \mu h(x)\ge 0, ~\forall x\in \mathbb{R}^n$ for some $\mu \ge 0$. In this paper, we show that the S-lemma with equality does not hold only when the matrix $A$ has exactly one negative eigenvalue and $h(x)$ is a non-constant linear function ($B=0, b\not =0$). As an application, we can globally solve $\inf \{f(x): h(x)=0\}$ as well as the two-sided generalized trust region subproblem $\inf \{f(x): l\le h(x)\le u\}$ without any condition.
Moreover, the convexity of the joint numerical range $\{(f(x), h_1(x),\ldots , h_p(x)):x\in \mathbb{R}^n\}$, where $f$ is a (possibly non-convex) quadratic function and $h_1(x),\ldots ,h_p(x)$ are affine functions, can be characterized using the newly developed S-lemma with equality. --- paper_title: SOCP Reformulation for the Generalized Trust Region Subproblem via a Canonical Form of Two Symmetric Matrices paper_content: We investigate in this paper the generalized trust region subproblem (GTRS) of minimizing a general quadratic objective function subject to a general quadratic inequality constraint. By applying a simultaneous block diagonalization approach, we obtain a congruent canonical form for the symmetric matrices in both the objective and constraint functions. By exploiting the block separability of the canonical form, we show that all GTRSs with an optimal value bounded from below are second order cone programming (SOCP) representable. Our result generalizes the recent work of Ben-Tal and Hertog (Math. Program. 143(1-2):1-29, 2014), which establishes the SOCP representability of the GTRS under the assumption of the simultaneous diagonalizability of the two matrices in the objective and constraint functions. Compared with the state-of-the-art approach to reformulate the GTRS as a semi-definite programming problem, our SOCP reformulation delivers a much faster solution algorithm. We further extend our method to two variants of the GTRS in which the inequality constraint is replaced by either an equality constraint or an interval constraint. Our methods also enable us to obtain simplified versions of the classical S-lemma, the S-lemma with equality, and the S-lemma with interval bounds. --- paper_title: Strong duality in nonconvex quadratic optimization with two quadratic constraints paper_content: We consider the problem of minimizing an indefinite quadratic function subject to two quadratic inequality constraints. When the problem is defined over the complex plane we show that strong duality holds and obtain necessary and sufficient optimality conditions. We then develop a connection between the image of the real and complex spaces under a quadratic mapping, which together with the results in the complex case lead to a condition that ensures strong duality in the real setting. Preliminary numerical simulations suggest that for random instances of the extended trust region subproblem, the sufficient condition is satisfied with a high probability. Furthermore, we show that the sufficient condition is always satisfied in two classes of nonconvex quadratic problems. Finally, we discuss an application of our results to robust least squares problems. --- paper_title: Parametric Lagrangian dual for the binary quadratic programming problem paper_content: Based on a difference between convex decomposition of the Lagrangian function, we propose and study a family of parametric Lagrangian duals for the binary quadratic program. Then we show they improve several lower bounds in recent literature. --- paper_title: Convexity of quadratic transformations and its use in control and optimization paper_content: Quadratic transformations have the hidden convexity property which allows one to deal with them as if they were convex functions. This phenomenon was encountered in various optimization and control problems, but it was not always recognized as a consequence of some general property.
We present a theory on convexity and closedness of a 3D quadratic image of ℝn, n≥3, which explains many disjoint known results and provides some new ones. --- paper_title: Hidden conic quadratic representation of some nonconvex quadratic optimization problems paper_content: The problem of minimizing a quadratic objective function subject to one or two quadratic constraints is known to have a hidden convexity property, even when the quadratic forms are indefinite. The equivalent convex problem is a semidefinite one, and the equivalence is based on the celebrated S-lemma. In this paper, we show that when the quadratic forms are simultaneously diagonalizable (SD), it is possible to derive an equivalent convex problem, which is a conic quadratic (CQ) one, and as such is significantly more tractable than a semidefinite problem. The SD condition holds for free for many problems arising in applications, in particular, when deriving robust counterparts of quadratic, or conic quadratic, constraints affected by implementation error. The proof of the hidden CQ property is constructive and does not rely on the S-lemma. This fact may be significant in discovering hidden convexity in some nonquadratic problems. --- paper_title: On the Solution of the GPS Localization and Circle Fitting Problems paper_content: We consider the problem of locating a user's position from a set of noisy pseudoranges to a group of satellites. We consider both the nonlinear least squares formulation of the problem, which is nonconvex and nonsmooth, and the nonlinear squared least squares variant, in which the objective function is smooth, but still nonconvex. We show that the squared least squares problem can be reformulated as a generalized trust region subproblem and as such can be solved efficiently. Conditions for attainment of the optimal solutions of both problems are derived. The nonlinear least squares problem is shown to have tight connections to the well-known geometric circle fitting and orthogonal regression problems. Finally, a fixed point method for the nonlinear least squares formulation is derived and analyzed. --- paper_title: Strong Duality for Generalized Trust Region Subproblem: S-Lemma with Interval Bounds paper_content: With the help of the newly developed S-lemma with interval bounds, we show that strong duality holds for the interval bounded generalized trust region subproblem (GTRS) under some mild assumptions, which answers an open problem raised by Pong and Wolkowicz (Comput Optim Appl 58(2), 273–322, 2014). --- paper_title: A survey of the S-lemma paper_content: In this survey we review the many faces of the S-lemma, a result about the correctness of the S-procedure. The basic idea of this widely used method came from control theory but it has important consequences in quadratic and semidefinite optimization, convex geometry, and linear algebra as well. These were all active research areas, but as there was little interaction between researchers in these different areas, their results remained mainly isolated. Here we give a unified analysis of the theory by providing three different proofs for the S-lemma and revealing hidden connections with various areas of mathematics. We prove some new duality results and present applications from control theory, error estimation, and computational geometry. 
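The S-lemma entries above all revolve around exhibiting a multiplier $\mu \ge 0$ such that $f(x) + \mu h(x) \ge 0$ for every $x$. As a hedged numerical illustration (not taken from any of the cited papers; the helper names and the toy quadratics are ours), the sketch below tests such a certificate by checking positive semidefiniteness of the homogenized $(n+1)\times(n+1)$ matrix over a grid of candidate multipliers; in practice one would search for $\mu$ with an SDP solver rather than a grid.

```python
import numpy as np

def homogenize(A, a, c):
    """(n+1)x(n+1) matrix M with [x;1]^T M [x;1] = x^T A x + 2 a^T x + c."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = a
    M[n, :n] = a
    M[n, n] = c
    return M

def s_procedure_certificate(Af, af, cf, Ah, ah, ch, mus, tol=1e-9):
    """Grid-search multipliers mu >= 0 for a certificate that
    f(x) + mu*h(x) >= 0 for all x, i.e. the homogenized matrix is PSD."""
    Mf, Mh = homogenize(Af, af, cf), homogenize(Ah, ah, ch)
    for mu in mus:
        if np.linalg.eigvalsh(Mf + mu * Mh)[0] >= -tol:
            return mu
    return None

# Toy example: f(x) = x1^2 - x2^2 + 1 and h(x) = x2^2 - 1, so h(x) <= 0
# means |x2| <= 1, on which f is indeed nonnegative; mu = 1 certifies it.
Af, af, cf = np.diag([1.0, -1.0]), np.zeros(2), 1.0
Ah, ah, ch = np.diag([0.0, 1.0]), np.zeros(2), -1.0
print(s_procedure_certificate(Af, af, cf, Ah, ah, ch, np.linspace(0, 5, 501)))
```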
--- paper_title: Semidefinite Programming Relaxations for the Quadratic Assignment Problem paper_content: Semidefinite programming (SDP) relaxations for the quadratic assignment problem (QAP) are derived using the dual of the (homogenized) Lagrangian dual of appropriate equivalent representations of QAP. These relaxations result in the interesting, special, case where only the dual problem of the SDP relaxation has strict interior, i.e., the Slater constraint qualification always fails for the primal problem. Although there is no duality gap in theory, this indicates that the relaxation cannot be solved in a numerically stable way. By exploring the geometrical structure of the relaxation, we are able to find projected SDP relaxations. These new relaxations, and their duals, satisfy the Slater constraint qualification, and so can be solved numerically using primal-dual interior-point methods. --- paper_title: A Note on Lack of Strong duality for Quadratic Problems with Orthogonal Constraints paper_content: Abstract The general quadratically constrained quadratic program (QQP) is an important modelling tool for many diverse problems. The QQP is in general NP hard, and numerically intractable. Lagrangian relaxations often provide good approximate solutions to these hard problems. Such relaxations are equivalent to semidefinite programming relaxations and can be solved efficiently. For several special cases of QQP, the Lagrangian relaxation provides the exact optimal value. This means that there is a zero duality gap and the problem is tractable. It is important to know for which cases this is true, since they can then be used as subproblems to improve Lagrangian relaxation for intractable QQPs. In this paper we study the special QQP with orthogonal (matrix) constraints XX T = I . If C =0, the zero duality gap result holds if the redundant orthogonal constraints X T X = I are added. We show that this is not true in the general case. However, we show how to close the duality gap in the pure linear case by adding variables in addition to constraints. --- paper_title: On Lagrangian Relaxation of Quadratic Matrix Constraints paper_content: Quadratically constrained quadratic programs (QQPs) play an important modeling role for many diverse problems. These problems are in general NP hard and numerically intractable. Lagrangian relaxations often provide good approximate solutions to these hard problems. Such relaxations are equivalent to semidefinite programming relaxations. ::: For several special cases of QQP, e.g., convex programs and trust region subproblems, the Lagrangian relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even the QQP with two convex constraints, but a nonconvex objective. ::: In this paper we consider a certain QQP where the quadratic constraints correspond to the matrix orthogonality condition XXT=I. For this problem we show that the Lagrangian dual based on relaxing the constraints XXT=I and the seemingly redundant constraints XT X=I has a zero duality gap. This result has natural applications to quadratic assignment and graph partitioning problems, as well as the problem of minimizing the weighted sum of the largest eigenvalues of a matrix. We also show that the technique of relaxing quadratic matrix constraints can be used to obtain a strengthened semidefinite relaxation for the max-cut problem. 
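The entries above relax the permutation constraints of the QAP to orthogonality constraints $XX^T = I$. For symmetric $A$ and $B$, minimizing $\mathrm{tr}(AXBX^T)$ over orthogonal $X$ equals the minimal scalar product of the two spectra, which gives the classical eigenvalue lower bound for the QAP (the kind of bound the Hoffman–Wielandt-type results above generalize). A hedged numerical sketch, with our own helper names and random test data, follows.

```python
import numpy as np

def qap_eigenvalue_bound(A, B):
    """Classical eigenvalue lower bound for min_X tr(A X B X^T) over
    permutation matrices X (A, B symmetric): relaxing permutations to
    orthogonal matrices gives the minimal scalar product of the spectra."""
    lamA = np.sort(np.linalg.eigvalsh(A))          # ascending
    lamB = np.sort(np.linalg.eigvalsh(B))[::-1]    # descending
    return float(lamA @ lamB)

def qap_objective(A, B, perm):
    X = np.eye(len(perm))[perm]                    # permutation matrix
    return float(np.trace(A @ X @ B @ X.T))

# The bound is never larger than the objective at any permutation.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
perm = rng.permutation(n)
print(qap_eigenvalue_bound(A, B), "<=", qap_objective(A, B, perm))
```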
--- paper_title: Nondifferentiable Optimization and Polynomial Problems paper_content: Preface. 1. Elements of Convex Analysis, Linear Algebra, and Graph Theory. 2. Subgradient and epsilon-Subgradient Methods. 3. Subgradient-Type Methods with Space Dilation. 4. Elements of Information and Numerical Complexity of Polynomial Extremal Problems. 5. Decomposition Methods Based on Nonsmooth Optimization. 6. Algorithms for Constructing Optimal on Volume Ellipsoids and Semidefinite Programming. 7. The Role of Ellipsoid Method for Complexity Analysis of Combinatorial Problems. 8. Semidefinite Programming Bounds for Extremal Graph Problems. 9. Global Minimization of Polynomial Functions and 17-th Hilbert Problem. References.
--- paper_title: Maxima for graphs and a new proof of a theorem of Turán paper_content: Maximum of a square-free quadratic form on a simplex. The following question was suggested by a problem of J. E. MacDonald Jr. (1): Given a graph $G$ with vertices $1, 2, \ldots, n$, let $S$ be the simplex in $E^n$ given by $x_i \ge 0$, $\sum x_i = 1$. What is --- paper_title: Tightening a copositive relaxation for standard quadratic optimization problems paper_content: We focus in this paper the problem of improving the semidefinite programming (SDP) relaxations for the standard quadratic optimization problem (standard QP in short) that concerns with minimizing a quadratic form over a simplex. We first analyze the duality gap between the standard QP and one of its SDP relaxations known as "strengthened Shor's relaxation". To estimate the duality gap, we utilize the duality information of the SDP relaxation to construct a graph G ?. The estimation can be then reduced to a two-phase problem of enumerating first all the minimal vertex covers of G ? and solving next a family of second-order cone programming problems. When there is a nonzero duality gap, this duality gap estimation can lead to a strictly tighter lower bound than the strengthened Shor's SDP bound. With the duality gap estimation improving scheme, we develop further a heuristic algorithm for obtaining a good approximate solution for standard QP. --- paper_title: On the complexity of quadratic programming with two quadratic constraints paper_content: The complexity of quadratic programming problems with two quadratic constraints is an open problem. In this paper we show that when one constraint is a ball constraint and the Hessian of the quadratic function defining the other constraint is positive definite, then, under quite general conditions, the problem can be solved in polynomial time in the real-number model of computation through an approach based on the analysis of the dual space of the Lagrange multipliers. However, the degree of the polynomial is rather large, thus making the result mostly of theoretical interest. --- paper_title: Convex hull of the orthogonal similarity set with applications in quadratic assignment problems paper_content: In this paper, we study thoroughly the convex hull of the orthogonal similarity set and give a new representation. When applied in quadratic assignment problems, it motivates two new lower bounds. The first is equivalent to the projected eigenvalue bound, while the second highly outperforms several well-known lower bounds in the literature. --- paper_title: Duality and sensitivity in nonconvex quadratic optimization over an ellipsoid paper_content: Abstract In this paper a duality framework is discussed for the problem of optimizing a nonconvex quadratic function over an ellipsoid. Additional insight is obtained from the observation that this nonconvex problem is in a sense equivalent to a convex problem of the same type, from which known necessary and sufficient conditions for optimality readily follow. Based on the duality results, some existing solution procedures are interpreted as in fact solving the dual. The duality relations are also shown to provide a natural framework for sensitivity analysis. --- paper_title: Quadratic programs with hollows paper_content: Let \(\mathcal {F}\) be a quadratically constrained, possibly nonconvex, bounded set, and let \(\mathcal {E}_1, \ldots , \mathcal {E}_l\) denote ellipsoids contained in \(\mathcal {F}\) with non-intersecting interiors. We prove that minimizing an arbitrary quadratic \(q(\cdot )\) over \(\mathcal {G}:= \mathcal {F}{\setminus } \cup _{k=1}^\ell {{\mathrm{int}}}(\mathcal {E}_k)\) is no more difficult than minimizing \(q(\cdot )\) over \(\mathcal {F}\) in the following sense: if a given semidefinite-programming (SDP) relaxation for \(\min \{ q(x) : x \in \mathcal {F}\}\) is tight, then the addition of l linear constraints derived from \(\mathcal {E}_1, \ldots , \mathcal {E}_l\) yields a tight SDP relaxation for \(\min \{ q(x) : x \in \mathcal {G}\}\). We also prove that the convex hull of \(\{ (x,xx^T) : x \in \mathcal {G}\}\) equals the intersection of the convex hull of \(\{ (x,xx^T) : x \in \mathcal {F}\}\) with the same l linear constraints.
Inspired by these results, we resolve a related question in a seemingly unrelated area, mixed-integer nonconvex quadratic programming. --- paper_title: Polynomial Solvability of Variants of the Trust-Region Subproblem paper_content: We consider an optimization problem of the form [EQUATION] where $P \subseteq \mathbb{R}^n$ is a polyhedron defined by $m$ inequalities, $Q$ is general, and the $\mu_h \in \mathbb{R}^n$ and the $r_h$ quantities are given. In the case |S| = 1, |K| = 0 and m = 0 one obtains the classical trust-region subproblem; a strongly NP-hard problem which has been the focus of much interest because of applications to combinatorial optimization and nonlinear programming. We prove that for each fixed pair |S| and |K| our problem can be solved in polynomial time provided that either (1) |K| > 0 and the number of faces of P that intersect [EQUATION] is polynomially bounded, or (2) |K| = 0 and m is bounded. --- paper_title: Efficiently solving total least squares with Tikhonov identical regularization paper_content: The Tikhonov identical regularized total least squares (TI) is to deal with the ill-conditioned system of linear equations where the data are contaminated by noise. A standard approach for (TI) is to reformulate it as a problem of finding a zero point of some decreasing concave non-smooth univariate function such that the classical bisection search and Dinkelbach's method can be applied. In this paper, by exploring the hidden convexity of (TI), we reformulate it as a new problem of finding a zero point of a strictly decreasing, smooth and concave univariate function. This allows us to apply the classical Newton's method to the reformulated problem, which converges globally to the unique root with an asymptotic quadratic convergence rate. Moreover, in every iteration of Newton's method, no optimization subproblem such as the extended trust-region subproblem is needed to evaluate the new univariate function value as it has an explicit expression. Promising numerical results based on the new algorithm are reported. --- paper_title: Finding a global optimal solution for a quadratically constrained fractional quadratic problem with applications to the regularized total least squares paper_content: We consider the problem of minimizing a fractional quadratic problem involving the ratio of two indefinite quadratic functions, subject to a two-sided quadratic form constraint. This formulation is motivated by the so-called regularized total least squares (RTLS) problem. A key difficulty with this problem is its nonconvexity, and all current known methods to solve it are guaranteed only to converge to a point satisfying first order necessary optimality conditions. We prove that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter. As a result, we derive an efficient algorithm that produces an $\epsilon$-global optimal solution in a computational effort of $O(n^3 \log \epsilon^{-1})$. The algorithm is tested on problems arising from the inverse Laplace transform and image deblurring. Comparison to other well-known RTLS solvers illustrates the attractiveness of our new method. --- paper_title: On the solution of the Tikhonov regularization of the total least squares problem paper_content: Total least squares (TLS) is a method for treating an overdetermined system of linear equations ${\bf A} {\bf x} \approx {\bf b}$, where both the matrix ${\bf A}$ and the vector ${\bf b}$ are contaminated by noise.
Tikhonov regularization of the TLS (TRTLS) leads to an optimization problem of minimizing the sum of fractional quadratic and quadratic functions. As such, the problem is nonconvex. We show how to reduce the problem to a single variable minimization of a function ${\mathcal{G}}$ over a closed interval. Computing a value and a derivative of ${\mathcal{G}}$ consists of solving a single trust region subproblem. For the special case of regularization with a squared Euclidean norm we show that ${\mathcal{G}}$ is unimodal and provide an alternative algorithm, which requires only one spectral decomposition. A numerical example is given to illustrate the effectiveness of our method. --- paper_title: A Linear-Time Algorithm for Trust Region Problems paper_content: We consider the fundamental problem of maximizing a general quadratic function over an ellipsoidal domain, also known as the trust region problem. We give the first provable linear-time (in the number of non-zero entries of the input) algorithm for approximately solving this problem. Specifically, our algorithm returns an $\epsilon$-approximate solution in time $\tilde{O}(N/\sqrt{\epsilon})$, where $N$ is the number of non-zero entries in the input. This matches the runtime of Nesterov's accelerated gradient descent, suitable for the special case in which the quadratic function is concave, and the runtime of the Lanczos method which is applicable when the problem is purely quadratic. --- paper_title: The trust region subproblem with non-intersecting linear constraints paper_content: This paper studies an extended trust region subproblem (eTRS) in which the trust region intersects the unit ball with $m$ linear inequality constraints. When $m=0$, $m=1$, or $m=2$ and the linear constraints are parallel, it is known that the eTRS optimal value equals the optimal value of a particular convex relaxation, which is solvable in polynomial time. However, it is also known that, when $m \ge 2$ and at least two of the linear constraints intersect within the ball, i.e., some feasible point of the eTRS satisfies both linear constraints at equality, then the same convex relaxation may admit a gap with eTRS. This paper shows that the convex relaxation has no gap for arbitrary $m$ as long as the linear constraints are non-intersecting. --- paper_title: Convex Relaxations of the Weighted Maxmin Dispersion Problem paper_content: Consider the weighted maxmin dispersion problem of locating point(s) in a given region $ \mathcal{X} \subseteq \mathbb{R}^n$ that is/are furthest from a given set of $m$ points. The region is assumed to be convex under componentwise squaring. We show that this problem is NP-hard even when $\mathcal{X}$ is a box and the weights are equal. We then propose a convex relaxation of this problem for finding an approximate solution and derive an approximation bound of $\frac{1-O(\sqrt{\ln({m})\gamma^{*}})}{2}$, where $\gamma^{*}$ depends on $\mathcal{X}$. When $\mathcal{X}$ is a box or a product of low-dimensional spheres, $\gamma^{*}=O(\frac{1}{n})$ and the convex relaxation reduces to a semidefinite program and a second-order cone program. --- paper_title: On the Ball-Constrained Weighted Maximin Dispersion Problem paper_content: The ball-constrained weighted maximin dispersion problem $(P_ball)$ is to find a point in an $n$-dimensional Euclidean ball such that the minimum of the weighted Euclidean distance from given $m$ points is maximized.
We propose a new second-order cone programming relaxation for $(P_ball)$. Under the condition $m\le n$, $(P_ball)$ is polynomial-time solvable since the new relaxation is shown to be tight. In general, we prove that $(P_ball)$ is NP-hard. Then, we propose a new randomized approximation algorithm for solving $(P_ball)$, which provides a new approximation bound of $\frac{1-O(\sqrt{\ln(m)/n})}{2}$. --- paper_title: A semidefinite framework for trust region subproblems with applications to large scale minimization paper_content: Primal-dual pairs of semidefinite programs provide a general framework for the theory and algorithms for the trust region subproblem (TRS). This latter problem consists in minimizing a general quadratic function subject to a convex quadratic constraint and, therefore, it is a generalization of the minimum eigenvalue problem. The importance of (TRS) is due to the fact that it provides the step in trust region minimization algorithms. The semidefinite framework is studied as an interesting instance of semidefinite programming as well as a tool for viewing known algorithms and deriving new algorithms for (TRS). In particular, a dual simplex type method is studied that solves (TRS) as a parametric eigenvalue problem. This method uses the Lanczos algorithm for the smallest eigenvalue as a black box. Therefore, the essential cost of the algorithm is the matrix-vector multiplication and, thus, sparsity can be exploited. A primal simplex type method provides steps for the so-called hard case. Extensive numerical tests for large sparse problems are discussed. These tests show that the cost of the algorithm is 1 +ź(n) times the cost of finding a minimum eigenvalue using the Lanczos algorithm, where 0 --- paper_title: A linear-time algorithm for the trust region subproblem based on hidden convexity paper_content: We present a linear-time approximation scheme for solving the trust region subproblem (TRS). It employs Nesterov’s accelerated gradient descent algorithm to solve a convex programming reformulation of (TRS). The total time complexity is less than that of the recent linear-time algorithm. The algorithm is further extended to the two-sided trust region subproblem. --- paper_title: Quadratic programs with hollows paper_content: Let \(\mathcal {F}\) be a quadratically constrained, possibly nonconvex, bounded set, and let \(\mathcal {E}_1, \ldots , \mathcal {E}_l\) denote ellipsoids contained in \(\mathcal {F}\) with non-intersecting interiors. We prove that minimizing an arbitrary quadratic \(q(\cdot )\) over \(\mathcal {G}:= \mathcal {F}{\setminus } \cup _{k=1}^\ell {{\mathrm{int}}}(\mathcal {E}_k)\) is no more difficult than minimizing \(q(\cdot )\) over \(\mathcal {F}\) in the following sense: if a given semidefinite-programming (SDP) relaxation for \(\min \{ q(x) : x \in \mathcal {F}\}\) is tight, then the addition of l linear constraints derived from \(\mathcal {E}_1, \ldots , \mathcal {E}_l\) yields a tight SDP relaxation for \(\min \{ q(x) : x \in \mathcal {G}\}\). We also prove that the convex hull of \(\{ (x,xx^T) : x \in \mathcal {G}\}\) equals the intersection of the convex hull of \(\{ (x,xx^T) : x \in \mathcal {F}\}\) with the same l linear constraints. Inspired by these results, we resolve a related question in a seemingly unrelated area, mixed-integer nonconvex quadratic programming. --- paper_title: Low-Rank Semidefinite Programming: Theory and Applications paper_content: Finding low-rank solutions of semidefinite programs is important in many applications. 
For example, semidefinite programs that arise as relaxations of polynomial optimization problems are exact relaxations when the semidefinite program has a rank-1 solution. Unfortunately, computing a minimum-rank solution of a semidefinite program is an NP-hard problem. In this paper we review the theory of low-rank semidefinite programming, presenting theorems that guarantee the existence of a low-rank solution, heuristics for computing low-rank solutions, and algorithms for finding low-rank approximate solutions. Then we present applications of the theory to trust-region problems and signal processing. --- paper_title: Max-Min Fairness Linear Transceiver Design Problem for a Multi-User SIMO Interference Channel is Polynomial Time Solvable paper_content: Consider the linear transceiver design problem for a multi-user single-input multi-output (SIMO) interference channel. Assuming perfect channel knowledge, we formulate this problem as one of maximizing the minimum signal to interference plus noise ratio (SINR) among all the users, subject to individual power constraints at each transmitter. We prove in this letter that the max-min fairness linear transceiver design problem for the SIMO interference channel can be solved to global optimality in polynomial time. We further propose a low-complexity inexact cyclic coordinate ascent algorithm (ICCAA) to solve this problem. Numerical simulations show the proposed algorithm can efficiently find the global optimal solution of the considered problem. --- paper_title: Quadratic matrix programming paper_content: We introduce and study a special class of nonconvex quadratic problems in which the objective and constraint functions have the form $f(\boldmath $X$)={Tr}(\boldmath $X$^T \boldmath $A$ \boldmath $X$) + 2 Tr(\boldmath $B$^T \boldmath $X$) +c, \boldmath $X$ \in {\real R}^{n \times r}$. The latter formulation is termed quadratic matrix programming (QMP) of order $r$. We construct a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and show that under some mild conditions strong duality holds for QMP problems with at most $r$ constraints. Using a result on the equivalence of two characterizations of the nonnegativity property of quadratic functions of the above form, we are able to compare the constructed SDR and dual problems to other known SDRs and dual formulations of the problem. An application to robust least squares problems is discussed. --- paper_title: A new semidefinite programming relaxation scheme for a class of quadratic matrix problems paper_content: Abstract We consider a special class of quadratic matrix optimization problems which often arise in applications. By exploiting the special structure of these problems, we derive a new semidefinite relaxation which, under mild assumptions, is proven to be tight for a larger number of constraints than could be achieved via a direct approach. We show the potential usefulness of these results when applied to robust least-squares and sphere-packing problems. --- paper_title: On the complexity of quadratic programming with two quadratic constraints paper_content: The complexity of quadratic programming problems with two quadratic constraints is an open problem. 
In this paper we show that when one constraint is a ball constraint and the Hessian of the quadratic function defining the other constraint is positive definite, then, under quite general conditions, the problem can be solved in polynomial time in the real-number model of computation through an approach based on the analysis of the dual space of the Lagrange multipliers. However, the degree of the polynomial is rather large, thus making the result mostly of theoretical interest. --- paper_title: Convexity Properties Associated with Nonconvex Quadratic Matrix Functions and Applications to Quadratic Programming ∗ paper_content: We establish several convexity results which are concerned with nonconvex quadratic matrix (QM) functions: strong duality of quadratic matrix programming problems, convexity of the image of mappings comprised of several QM functions and existence of a corresponding S-lemma. As a consequence of our results, we prove that a class of quadratic problems involving several functions with similar matrix terms has a zero duality gap. We present applications to robust optimization, to solution of linear systems immune to implementation errors and to the problem of computing the Chebyshev center of an intersection of balls. --- paper_title: On a self-consistent-field-like iteration for maximizing the sum of the Rayleigh quotients paper_content: In this paper, we consider efficient methods for maximizing x^@?Bxx^@?Wx+x^@?Dx over the unit sphere, where B,D are symmetric matrices, and W is symmetric and positive definite. This problem can arise in the downlink of a multi-user MIMO system and in the sparse Fisher discriminant analysis in pattern recognition. It is already known that the problem of finding a global maximizer is closely associated with a nonlinear extreme eigenvalue problem. Rather than resorting to some general optimization methods, we introduce a self-consistent-field-like (SCF-like) iteration for directly solving the resulting nonlinear eigenvalue problem. The SCF iteration is widely used for solving the nonlinear eigenvalue problems arising in electronic structure calculations. One attractive feature of the SCF for our problem is that once it converges, the limit point not only satisfies the necessary local optimality conditions automatically, but also, and most importantly, satisfies a global optimality condition, which generally is not achievable in some optimization-based methods. The global convergence and local quadratic convergence rate are proved for certain situations. For the general case, we then discuss a trust-region SCF iteration for stabilizing the SCF iteration, which is of good global convergence behavior. Our preliminary numerical experiments show that these algorithms are more efficient than some optimization-based methods. --- paper_title: The Legendre–Fenchel Conjugate of the Product of Two Positive Definite Quadratic Forms paper_content: It is well known that the Legendre-Fenchel conjugate of a positive definite quadratic form can be explicitly expressed as another positive definite quadratic form and that the conjugate of the sum of several positive definite quadratic forms can be expressed via inf-convolution. However, the Legendre-Fenchel conjugate of the product of two positive definite quadratic forms is not clear at present. Hiriart-Urruty posted it as an open question in the field of nonlinear analysis and optimization [Question 11 in SIAM Rev., 49 (2007), pp. 255-273]. 
From a convex analysis point of view, it is interesting and important to address such a question. The purpose of this paper is to answer this question and to provide a formula for the conjugate of the product of two positive definite quadratic forms. We prove that the computation of the conjugate can be implemented via finding a root of a certain univariate polynomial equation, and we also identify the situations in which the conjugate can be explicitly expressed as a single function without involving any parameter. Some other issues, including the convexity condition for the product function, are also investigated as well. Our analysis shows that the relationship between the matrices of quadratic forms plays a vital role in determining whether the conjugate can be explicitly expressed or not. --- paper_title: Potpourri of Conjectures and Open Questions in Nonlinear Analysis and Optimization paper_content: We present a collection of fourteen conjectures and open problems in the fields of nonlinear analysis and optimization. These problems can be classified into three groups: problems of pure mathematical interest, problems motivated by scientific computing and applications, and problems whose solutions are known but for which we would like to know better proofs. For each problem we provide a succinct presentation, a list of appropriate references, and a view of the state of the art of the subject. --- paper_title: Solving Generalized CDT Problems via Two-Parameter Eigenvalues paper_content: We consider solving a nonconvex quadratic minimization problem with two quadratic constraints, one of which being convex.
This problem is a generalization of the Celis--Denis--Tapia (CDT) problem and thus we refer to it as GCDT (Generalized CDT). The CDT problem has been widely studied, but no polynomial-time algorithm was known until Bienstock's recent work. His algorithm solves the CDT problem in polynomial time with respect to the number of bits in data and $\log\epsilon^{-1}$ by admitting an $\epsilon$ error in the constraints. The algorithm, however, appears to be difficult to implement. In this paper, we present another algorithm for GCDT, which is guaranteed to find a global solution for almost all GCDT instances (and slightly perturbed ones in some exceptionally rare cases), in exact arithmetic (including eigenvalue computation). Our algorithm is based on the approach proposed by Iwata, Nakatsukasa, and Takeda (2015) for computing the signed distance between overlapping ellipsoids. Our algorithm computes all the Lagrange multipliers of GCDT by solving a two-parameter linear eigenvalue problem, obtains the corresponding KKT points, and finds a global solution as the KKT point with the smallest objective value. In practice, in finite precision arithmetic, our algorithm requires $O(n^6\log\log u^{-1})$ computational time, where $n$ is the number of variables and $u$ is the unit roundoff. Although we derive our algorithm under the unrealistic assumption that exact eigenvalues can be computed, numerical experiments illustrate that our algorithm performs well in finite precision arithmetic. --- paper_title: On the solution of the Tikhonov regularization of the total least squares problem paper_content: Total least squares (TLS) is a method for treating an overdetermined system of linear equations ${\bf A} {\bf x} \approx {\bf b}$, where both the matrix ${\bf A}$ and the vector ${\bf b}$ are contaminated by noise. Tikhonov regularization of the TLS (TRTLS) leads to an optimization problem of minimizing the sum of fractional quadratic and quadratic functions. As such, the problem is nonconvex. We show how to reduce the problem to a single variable minimization of a function ${\mathcal{G}}$ over a closed interval. Computing a value and a derivative of ${\mathcal{G}}$ consists of solving a single trust region subproblem. For the special case of regularization with a squared Euclidean norm we show that ${\mathcal{G}}$ is unimodal and provide an alternative algorithm, which requires only one spectral decomposition. A numerical example is given to illustrate the effectiveness of our method. --- paper_title: Quadratic matrix programming paper_content: We introduce and study a special class of nonconvex quadratic problems in which the objective and constraint functions have the form $f(\boldmath $X$)={Tr}(\boldmath $X$^T \boldmath $A$ \boldmath $X$) + 2 Tr(\boldmath $B$^T \boldmath $X$) +c, \boldmath $X$ \in {\real R}^{n \times r}$. The latter formulation is termed quadratic matrix programming (QMP) of order $r$. We construct a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and show that under some mild conditions strong duality holds for QMP problems with at most $r$ constraints. Using a result on the equivalence of two characterizations of the nonnegativity property of quadratic functions of the above form, we are able to compare the constructed SDR and dual problems to other known SDRs and dual formulations of the problem. An application to robust least squares problems is discussed. 
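The quadratic matrix programming entry above mentions an application to robust least squares. As a hedged, self-contained illustration of that application area (using the classical unstructured worst-case model of El Ghaoui and Lebret rather than the exact formulation of the cited paper; all names below are ours), the sketch minimizes the worst-case residual over spectral-norm-bounded perturbations of $A$, which collapses to the convex problem $\min_x \|Ax-b\|_2 + \rho\|x\|_2$.

```python
import numpy as np
from scipy.optimize import minimize

def robust_least_squares(A, b, rho, eps=1e-12):
    """min_x  max_{||dA||_2 <= rho} ||(A + dA)x - b||_2
            = min_x  ||Ax - b||_2 + rho * ||x||_2   (worst-case residual).
    Solved here by a generic smooth solver after a tiny eps-smoothing of
    the two norms; an SOCP solver would be the usual choice in practice."""
    def obj(x):
        r = A @ x - b
        return np.sqrt(r @ r + eps) + rho * np.sqrt(x @ x + eps)
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # start from ordinary LS
    return minimize(obj, x0, method="BFGS").x

# Tiny usage example on random data.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
print(robust_least_squares(A, b, rho=0.5))
```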
--- paper_title: Trust Region Subproblem with a Fixed Number of Additional Linear Inequality Constraints has Polynomial Complexity paper_content: The trust region subproblem with a fixed number $m$ of additional linear inequality constraints, denoted by $(T_m)$, has drawn much attention recently. The question as to whether Problem $(T_m)$ is in Class P or Class NP remains open. So far, the only affirmative general result is that $(T_1)$ has an exact SOCP/SDP reformulation and thus is polynomially solvable. By adopting an early result of Martínez on local non-global minimizers of the trust region subproblem, we can inductively reduce any instance in $(T_m)$ to a sequence of trust region subproblems $(T_0)$. Although the total number of $(T_0)$ to be solved is of exponential order in $m$, the reduction scheme still provides an argument that the class $(T_m)$ has polynomial complexity for each fixed $m$. In contrast, we show by a simple example that solving the class of extended trust region subproblems which contain more linear inequality constraints than the problem dimension, or the class of instances consisting of an arbitrary number of linear constraints, namely $\bigcup_{m=1}^{\infty}(T_m)$, is NP-hard. When $m$ is small such as $m = 1,2$, our inductive algorithm should be more efficient than the SOCP/SDP reformulation since at most 2 or 5 subproblems of $(T_0)$, respectively, are to be handled. At the end of the paper, we improve a very recent dimension condition by Jeyakumar and Li under which $(T_m)$ admits an exact SDP relaxation. Examples show that such an improvement can be strict indeed. --- paper_title: A generalized solution of the orthogonal procrustes problem paper_content: A solution $T$ of the least-squares problem $AT = B + E$, given $A$ and $B$, so that $\mathrm{trace}(E'E)$ = minimum and $T'T = I$, is presented. It is compared with a less general solution of the same problem which was given by Green [5]. The present solution, in contrast to Green's, is applicable to matrices $A$ and $B$ which are of less than full column rank. Some technical suggestions for the numerical computation of $T$ and an illustrative example are given. --- paper_title: A note on polynomial solvability of the CDT problem paper_content: We describe a simple polynomial-time algorithm for the CDT problem that relies on a construction of Barvinok. --- paper_title: On Local Convexity of Quadratic Transformations paper_content: In this paper, we improve Polyak's local convexity result for quadratic transformations. Extension and open problems are also presented. ---
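The orthogonal Procrustes entry above has a classical closed-form answer via the singular value decomposition of $A^{T}B$. The short sketch below (helper name and the synthetic test data are ours) recovers an orthogonal $T$ minimizing $\|AT-B\|_F$ and checks it on a noiseless instance.

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Schonemann's closed-form solution of min_T ||A T - B||_F s.t. T^T T = I:
    T = U V^T from the SVD of A^T B. It also applies when A has deficient
    column rank, although the minimizer is then not unique."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Usage example: recover a known orthogonal transformation exactly.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))
T_true, _ = np.linalg.qr(rng.standard_normal((4, 4)))
B = A @ T_true
T = orthogonal_procrustes(A, B)
print(np.allclose(T, T_true), np.linalg.norm(A @ T - B))
```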
Title: A Survey of Hidden Convex Optimization Section 1: Introduction Description 1: Present the overview of convex programming, its challenges, and the concept of hidden convex optimization. Summarize the structure of the paper. Section 2: Nonlinear transformation Description 2: Describe various nonlinear transformation techniques used to convert nonconvex problems into convex problems. Section 3: Lagrangian dual and its variation Description 3: Discuss how strong Lagrangian duality can be achieved for nonconvex problems and explore variations of Lagrangian duality. Section 4: Tight primal relaxation Description 4: Explain how primal relaxation approaches can reveal hidden convexity in nonconvex optimization problems. Section 5: Open problems Description 5: Identify and elaborate on open research problems that remain in the field of hidden convex optimization.
TAXONOMY AND SURVEY OF COMMUNITY DISCOVERY METHODS IN COMPLEX NETWORKS
6
--- paper_title: Community detection in complex networks using extremal optimization. paper_content: The description of the structure of complex networks has been one of the focus of attention of the physicist’s community in the recent years. The levels of description range from the microscopic (degree, clustering coefficient, centrality measures, etc., of individual nodes) to the macroscopic description in terms of statistical properties of the whole network (degree distribution, total clustering coefficient, degree-degree correlations, etc.) [1, 2, 3, 4]. Between these two extremes there is a ”mesoscopic” description of networks that tries to explain its community structure. The general notion of community structure in complex networks was first pointed out in the physics literature by Girvan and Newman [5], and refers to the fact that nodes in many real networks appear to group in subgraphs in which the density of internal connections is larger than the connections with the rest of nodes in the network. The community structure has been empirically found in many real technological, biological and social networks [6, 7, 8, 9, 10] and its emergence seems to be at the heart of the network formation process [11]. The existing methods intended to devise the community structure in complex networks have been recently reviewed in [10]. All these methods require a definition of community that imposes the limit up to which a group should be considered a community. However, the concept of community itself is qualitative: nodes must be more connected within its community than with the rest of the network, and its quantification is still a subject of debate. Some quantitative definitions that came from sociology have been used in recent studies [12], but in general, the physics community has widely accepted a recent measure for the community structure based on the concept of modularity Q introduced by Newman and Girvan [13]: --- paper_title: Statistical mechanics of community detection. paper_content: Starting from a general ansatz, we show how community detection can be interpreted as finding the ground state of an infinite range spin glass. Our approach applies to weighted and directed networks alike. It contains the ad hoc introduced quality function from [J. Reichardt and S. Bornholdt, Phys. Rev. Lett. 93, 218701 (2004)] and the modularity Q as defined by Newman and Girvan [Phys. Rev. E 69, 026113 (2004)] as special cases. The community structure of the network is interpreted as the spin configuration that minimizes the energy of the spin glass with the spin states being the community indices. We elucidate the properties of the ground state configuration to give a concise definition of communities as cohesive subgroups in networks that is adaptive to the specific class of network under study. Further, we show how hierarchies and overlap in the community structure can be detected. Computationally efficient local update rules for optimization procedures to find the ground state are given. We show how the ansatz may be used to discover the community around a given node without detecting all communities in the full network and we give benchmarks for the performance of this extension. Finally, we give expectation values for the modularity of random graphs, which can be used in the assessment of statistical significance of community structure. --- paper_title: Detecting fuzzy community structures in complex networks with a Potts model. 
paper_content: A fast community detection algorithm based on a q-state Potts model is presented. Communities (groups of densely interconnected nodes that are only loosely connected to the rest of the network) are found to coincide with the domains of equal spin value in the minima of a modified Potts spin glass Hamiltonian. Comparing global and local minima of the Hamiltonian allows for the detection of overlapping ("fuzzy") communities and quantifying the association of nodes with multiple communities as well as the robustness of a community. No prior knowledge of the number of communities has to be assumed. --- paper_title: Detecting community structure in complex networks based on a measure of information discrepancy paper_content: Properties of complex networks, such as small-world property, power-law degree distribution, network transitivity, and network- community structure which seem to be common to many real-world networks have attracted great interest among researchers. In this study, global information of the networks is considered by defining the profile of any node based on the shortest paths between it and all the other nodes in the network; then a useful iterative procedure for community detection based on a measure of information discrepancy and the popular modular function Q is presented. The new iterative method does not need any prior knowledge about the community structure and can detect an appropriate number of communities, which can be hub communities or non-hub communities. The computational results of the method on real networks confirm its capability. --- paper_title: Mixture models and exploratory analysis in networks paper_content: Networks are widely used in the biological, physical, and social sciences as a concise mathematical representation of the topology of systems of interacting components. Understanding the structure of these networks is one of the outstanding challenges in the study of complex systems. Here we describe a general technique for detecting structural features in large-scale network data that works by dividing the nodes of a network into classes such that the members of each class have similar patterns of connection to other nodes. Using the machinery of probabilistic mixture models and the expectation–maximization algorithm, we show that it is possible to detect, without prior knowledge of what we are looking for, a very broad range of types of structure in networks. We give a number of examples demonstrating how the method can be used to shed light on the properties of real-world networks, including social and information networks. --- paper_title: Extracting the hierarchical organization of complex systems paper_content: Extracting understanding from the growing “sea” of biological and socioeconomic data is one of the most pressing scientific challenges facing us. Here, we introduce and validate an unsupervised method for extracting the hierarchical organization of complex biological, social, and technological networks. We define an ensemble of hierarchically nested random graphs, which we use to validate the method. We then apply our method to real-world networks, including the air-transportation network, an electronic circuit, an e-mail exchange network, and metabolic networks. Our analysis of model and real networks demonstrates that our method extracts an accurate multiscale representation of a complex system. --- paper_title: Community detection in complex networks using extremal optimization. 
paper_content: The description of the structure of complex networks has been one of the focus of attention of the physicist’s community in the recent years. The levels of description range from the microscopic (degree, clustering coefficient, centrality measures, etc., of individual nodes) to the macroscopic description in terms of statistical properties of the whole network (degree distribution, total clustering coefficient, degree-degree correlations, etc.) [1, 2, 3, 4]. Between these two extremes there is a ”mesoscopic” description of networks that tries to explain its community structure. The general notion of community structure in complex networks was first pointed out in the physics literature by Girvan and Newman [5], and refers to the fact that nodes in many real networks appear to group in subgraphs in which the density of internal connections is larger than the connections with the rest of nodes in the network. The community structure has been empirically found in many real technological, biological and social networks [6, 7, 8, 9, 10] and its emergence seems to be at the heart of the network formation process [11]. The existing methods intended to devise the community structure in complex networks have been recently reviewed in [10]. All these methods require a definition of community that imposes the limit up to which a group should be considered a community. However, the concept of community itself is qualitative: nodes must be more connected within its community than with the rest of the network, and its quantification is still a subject of debate. Some quantitative definitions that came from sociology have been used in recent studies [12], but in general, the physics community has widely accepted a recent measure for the community structure based on the concept of modularity Q introduced by Newman and Girvan [13]: --- paper_title: Motif-based communities in complex networks paper_content: Community definitions usually focus on edges, inside and between the communities. However, the high density of edges within a community determines correlations between nodes going beyond nearest-neighbours, and which are indicated by the presence of motifs. We show how motifs can be used to define general classes of nodes, including communities, by extending the mathematical expression of Newman-Girvan modularity. We construct then a general framework and apply it to some synthetic and real networks. --- paper_title: Weighted network modules paper_content: A search technique locating network modules, i.e. internally densely connected groups of nodes in directed networks is introduced by extending the clique percolation method originally proposed for undirected networks. After giving a suitable definition for directed modules we investigate their percolation transition in the Erdős–Renyi graph both analytically and numerically. We also analyse four real-world directed networks, including Google's own web-pages, an email network, a word association graph and the transcriptional regulatory network of the yeast Saccharomyces cerevisiae. The obtained directed modules are validated by additional information available for the nodes. We find that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules. 
Accordingly, in the word-association network and Google's web-pages, overlaps are likely to contain in-hubs, whereas the modules in the email and transcriptional regulatory network tend to overlap via out-hubs. --- paper_title: Finding community structure in very large networks paper_content: The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m ~ n and d ~ log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers. --- paper_title: Community detection in graphs paper_content: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks. --- paper_title: Community structure in social and biological networks paper_content: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. 
We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases. --- paper_title: Maximizing Modularity is hard paper_content: Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances. --- paper_title: Distance, dissimilarity index, and network community structure paper_content: We address the question of finding the community structure of a complex network. In an earlier effort [H. Zhou, Phys. Rev. E 67, 041908 (2003)], the concept of network random walking is introduced and a distance measure defined. Here we calculate, based on this distance measure, the dissimilarity index between nearest-neighboring vertices of a network and design an algorithm to partition these vertices into communities that are hierarchically organized. Each community is characterized by an upper and a lower dissimilarity threshold. The algorithm is applied to several artificial and real-world networks, and excellent results are obtained. In the case of artificially generated random modular networks, this method outperforms the algorithm based on the concept of edge betweenness centrality. For yeast's protein-protein interaction network, we are able to identify many clusters that have well defined biological functions. --- paper_title: Directed network modules paper_content: The inclusion of link weights into the analysis of network properties allows a deeper insight into the (often overlapping) modular structure of real-world webs. We introduce a clustering algorithm clique percolation method with weights (CPMw) for weighted networks based on the concept of percolating k-cliques with high enough intensity. The algorithm allows overlaps between the modules. First, we give detailed analytical and numerical results about the critical point of weighted k-clique percolation on (weighted) Erdős–Rényi graphs. Then, for a scientist collaboration web and a stock correlation graph we compute three-link weight correlations and with the CPMw the weighted modules. After reshuffling link weights in both networks and computing the same quantities for the randomized control graphs as well, we show that groups of three or more strong links prefer to cluster together in both original graphs. --- paper_title: Finding and evaluating community structure in networks paper_content: We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups.
Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible "betweenness" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems. --- paper_title: Synchronization Reveals Topological Scales in Complex Networks paper_content: We study the relationship between topological scales and dynamic time scales in complex networks. The analysis is based on the full dynamics towards synchronization of a system of coupled oscillators. In the synchronization process, modular structures corresponding to well-defined communities of nodes emerge in different time scales, ordered in a hierarchical way. The analysis also provides a useful connection between synchronization dynamics, complex networks topology, and spectral graph analysis. --- paper_title: Detecting fuzzy community structures in complex networks with a Potts model. paper_content: A fast community detection algorithm based on a q-state Potts model is presented. Communities (groups of densely interconnected nodes that are only loosely connected to the rest of the network) are found to coincide with the domains of equal spin value in the minima of a modified Potts spin glass Hamiltonian. Comparing global and local minima of the Hamiltonian allows for the detection of overlapping ("fuzzy") communities and quantifying the association of nodes with multiple communities as well as the robustness of a community. No prior knowledge of the number of communities has to be assumed. --- paper_title: Graph spectra and the detectability of community structure in networks paper_content: We study networks that display community structure -- groups of nodes within which connections are unusually dense. Using methods from random matrix theory, we calculate the spectra of such networks in the limit of large size, and hence demonstrate the presence of a phase transition in matrix methods for community detection, such as the popular modularity maximization method. The transition separates a regime in which such methods successfully detect the community structure from one in which the structure is present but is not detected. By comparing these results with recent analyses of maximum-likelihood methods we are able to show that spectral modularity maximization is an optimal detection method in the sense that no other method will succeed in the regime where the modularity method fails. --- paper_title: Networks: An Introduction paper_content: The scientific study of networks, including computer networks, social networks, and biological networks, has received an enormous amount of interest in the last few years. 
The rise of the Internet and the wide availability of inexpensive computers have made it possible to gather and analyze network data on a large scale, and the development of a variety of new theoretical tools has allowed us to extract new knowledge from many different kinds of networks.The study of networks is broadly interdisciplinary and important developments have occurred in many fields, including mathematics, physics, computer and information sciences, biology, and the social sciences. This book brings together for the first time the most important breakthroughs in each of these fields and presents them in a coherent fashion, highlighting the strong interconnections between work in different areas. Subjects covered include the measurement and structure of networks in many branches of science, methods for analyzing network data, including methods developed in physics, statistics, and sociology, the fundamentals of graph theory, computer algorithms, and spectral methods, mathematical models of networks, including random graph models and generative models, and theories of dynamical processes taking place on networks. --- paper_title: Improved spectral algorithm for the detection of network communities paper_content: We review and improve a recently introduced method for the detection of communities in complex networks. This method combines spectral properties of some matrices encoding the network topology, with well known hierarchical clustering techniques, and the use of the modularity parameter to quantify the goodness of any possible community subdivision. This provides one of the best available methods for the detection of community structures in complex systems. --- paper_title: Partitioning sparse matrices with eigenvectors of graphs paper_content: The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is, shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorith... --- paper_title: Defining and identifying communities in networks paper_content: The investigation of community structures in networks is an important issue in many domains and disciplines. This problem is relevant for social tasks (objective analysis of relationships on the web), biological inquiries (functional studies in metabolic and protein networks), or technological problems (optimization of large infrastructures). Several types of algorithms exist for revealing the community structure in networks, but a general and quantitative definition of community is not implemented in the algorithms, leading to an intrinsic difficulty in the interpretation of the results without any additional nontopological information. 
In this article we deal with this problem by showing how quantitative definitions of community are implemented in practice in the existing algorithms. In this way the algorithms for the identification of the community structure become fully self-contained. Furthermore, we propose a local algorithm to detect communities which outperforms the existing algorithms with respect to computational cost, keeping the same level of reliability. The algorithm is tested on artificial and real-world graphs. In particular, we show how the algorithm applies to a network of scientific collaborations, which, for its size, cannot be attacked with the usual methods. This type of local algorithm could open the way to applications to large-scale technological and biological systems. --- paper_title: Community structure in directed networks paper_content: We consider the problem of finding communities or modules in directed networks. In the past, the most common approach to this problem has been to ignore edge direction and apply methods developed for community discovery in undirected networks, but this approach discards potentially useful information contained in the edge directions. Here we show how the widely used community finding technique of modularity maximization can be generalized in a principled fashion to incorporate information contained in edge directions. We describe an explicit algorithm based on spectral optimization of the modularity and show that it gives demonstrably better results than previous methods on a variety of test networks, both real and computer generated. --- paper_title: Mixture models and exploratory analysis in networks paper_content: Networks are widely used in the biological, physical, and social sciences as a concise mathematical representation of the topology of systems of interacting components. Understanding the structure of these networks is one of the outstanding challenges in the study of complex systems. Here we describe a general technique for detecting structural features in large-scale network data that works by dividing the nodes of a network into classes such that the members of each class have similar patterns of connection to other nodes. Using the machinery of probabilistic mixture models and the expectation–maximization algorithm, we show that it is possible to detect, without prior knowledge of what we are looking for, a very broad range of types of structure in networks. We give a number of examples demonstrating how the method can be used to shed light on the properties of real-world networks, including social and information networks. --- paper_title: Uncovering the overlapping community structure of complex networks in nature and society paper_content: Many complex systems in nature and society can be described in terms of networks capturing the intricate web of connections among the units they are made of. A key question is how to interpret the global organization of such networks as the coexistence of their structural subunits (communities) associated with more highly interconnected parts. Identifying these a priori unknown building blocks (such as functionally related proteins, industrial sectors and groups of people) is crucial to the understanding of the structural and functional properties of networks. The existing deterministic methods used for large networks find separated communities, whereas most of the actual networks are made of highly overlapping cohesive groups of nodes. 
Here we introduce an approach to analysing the main statistical features of the interwoven sets of overlapping communities that makes a step towards uncovering the modular structure of complex systems. After defining a set of new characteristic quantities for the statistics of communities, we apply an efficient technique for exploring overlapping communities on a large scale. We find that overlaps are significant, and the distributions we introduce reveal universal features of networks. Our studies of collaboration, word-association and protein interaction graphs show that the web of communities has non-trivial correlations and specific scaling properties. --- paper_title: Detect overlapping and hierarchical community structure in networks paper_content: Clustering and community structure is crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigate these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. This algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. The examples of application to real world networks give excellent results. --- paper_title: Mixing patterns in networks. paper_content: We study assortative mixing in networks, the tendency for vertices in networks to be connected to other vertices that are like (or unlike) them in some way. We consider mixing according to discrete characteristics such as language or race in social networks and scalar characteristics such as age. As a special example of the latter we consider mixing according to vertex degree, i.e., according to the number of connections vertices have to other vertices: do gregarious people tend to associate with other gregarious people? We propose a number of measures of assortative mixing appropriate to the various mixing types, and apply them to a variety of real-world networks, showing that assortative mixing is a pervasive phenomenon found in many networks. We also propose several models of assortatively mixed networks, both analytic ones based on generating function methods, and numerical ones based on Monte Carlo graph generation techniques. We use these models to probe the properties of networks as their level of assortativity is varied. In the particular case of mixing by degree, we find strong variation with assortativity in the connectivity of the network and in the resilience of the network to the removal of vertices. --- paper_title: Identifying the role that animals play in their social networks paper_content: Techniques recently developed for the analysis of human social networks are applied to the social network of bottlenose dolphins living in Doubtful Sound, New Zealand. We identify communities and subcommunities within the dolphin population and present evidence that sex- and age-related homophily play a role in the formation of clusters of preferred companionship. We also identify brokers who act as links between sub-communities and who appear to be crucial to the social cohesion of the population as a whole. The network is found to be similar to human social networks in some respects but different in some others, such as the level of assortative mixing by degree within the population. 
This difference elucidates some of the means by which the network forms and evolves. --- paper_title: Community detection in complex networks using extremal optimization. paper_content: The description of the structure of complex networks has been one of the focus of attention of the physicist’s community in the recent years. The levels of description range from the microscopic (degree, clustering coefficient, centrality measures, etc., of individual nodes) to the macroscopic description in terms of statistical properties of the whole network (degree distribution, total clustering coefficient, degree-degree correlations, etc.) [1, 2, 3, 4]. Between these two extremes there is a ”mesoscopic” description of networks that tries to explain its community structure. The general notion of community structure in complex networks was first pointed out in the physics literature by Girvan and Newman [5], and refers to the fact that nodes in many real networks appear to group in subgraphs in which the density of internal connections is larger than the connections with the rest of nodes in the network. The community structure has been empirically found in many real technological, biological and social networks [6, 7, 8, 9, 10] and its emergence seems to be at the heart of the network formation process [11]. The existing methods intended to devise the community structure in complex networks have been recently reviewed in [10]. All these methods require a definition of community that imposes the limit up to which a group should be considered a community. However, the concept of community itself is qualitative: nodes must be more connected within its community than with the rest of the network, and its quantification is still a subject of debate. Some quantitative definitions that came from sociology have been used in recent studies [12], but in general, the physics community has widely accepted a recent measure for the community structure based on the concept of modularity Q introduced by Newman and Girvan [13]: --- paper_title: An efficient and principled method for detecting communities in networks paper_content: A fundamental problem in the analysis of network data is the detection of network communities, groups of densely interconnected nodes, which may be overlapping or disjoint. Here we describe a method for finding overlapping communities based on a principled statistical approach using generative network models. We show how the method can be implemented using a fast, closed-form expectation-maximization algorithm that allows us to analyze networks of millions of nodes in reasonable running times. We test the method both on real-world networks and on synthetic benchmarks and find that it gives results competitive with previous methods. We also show that the same approach can be used to extract nonoverlapping community divisions via a relaxation method, and demonstrate that the algorithm is competitively fast and accurate for the nonoverlapping problem. --- paper_title: Mixture models and exploratory analysis in networks paper_content: Networks are widely used in the biological, physical, and social sciences as a concise mathematical representation of the topology of systems of interacting components. Understanding the structure of these networks is one of the outstanding challenges in the study of complex systems. 
Here we describe a general technique for detecting structural features in large-scale network data that works by dividing the nodes of a network into classes such that the members of each class have similar patterns of connection to other nodes. Using the machinery of probabilistic mixture models and the expectation–maximization algorithm, we show that it is possible to detect, without prior knowledge of what we are looking for, a very broad range of types of structure in networks. We give a number of examples demonstrating how the method can be used to shed light on the properties of real-world networks, including social and information networks. --- paper_title: Graph spectra and the detectability of community structure in networks paper_content: We study networks that display community structure -- groups of nodes within which connections are unusually dense. Using methods from random matrix theory, we calculate the spectra of such networks in the limit of large size, and hence demonstrate the presence of a phase transition in matrix methods for community detection, such as the popular modularity maximization method. The transition separates a regime in which such methods successfully detect the community structure from one in which the structure is present but is not detected. By comparing these results with recent analyses of maximum-likelihood methods we are able to show that spectral modularity maximization is an optimal detection method in the sense that no other method will succeed in the regime where the modularity method fails. --- paper_title: Defining and identifying communities in networks paper_content: The investigation of community structures in networks is an important issue in many domains and disciplines. This problem is relevant for social tasks (objective analysis of relationships on the web), biological inquiries (functional studies in metabolic and protein networks), or technological problems (optimization of large infrastructures). Several types of algorithms exist for revealing the community structure in networks, but a general and quantitative definition of community is not implemented in the algorithms, leading to an intrinsic difficulty in the interpretation of the results without any additional nontopological information. In this article we deal with this problem by showing how quantitative definitions of community are implemented in practice in the existing algorithms. In this way the algorithms for the identification of the community structure become fully self-contained. Furthermore, we propose a local algorithm to detect communities which outperforms the existing algorithms with respect to computational cost, keeping the same level of reliability. The algorithm is tested on artificial and real-world graphs. In particular, we show how the algorithm applies to a network of scientific collaborations, which, for its size, cannot be attacked with the usual methods. This type of local algorithm could open the way to applications to large-scale technological and biological systems. --- paper_title: Community structure in directed networks paper_content: We consider the problem of finding communities or modules in directed networks. In the past, the most common approach to this problem has been to ignore edge direction and apply methods developed for community discovery in undirected networks, but this approach discards potentially useful information contained in the edge directions. 
Here we show how the widely used community finding technique of modularity maximization can be generalized in a principled fashion to incorporate information contained in edge directions. We describe an explicit algorithm based on spectral optimization of the modularity and show that it gives demonstrably better results than previous methods on a variety of test networks, both real and computer generated. --- paper_title: Community structure in social and biological networks paper_content: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases. --- paper_title: Coarse-Graining and Self-Dissimilarity of Complex Networks paper_content: Can complex engineered and biological networks be coarse-grained into smaller and more understandable versions in which each node represents an entire pattern in the original network? To address this, we define coarse-graining units (CGU) as connectivity patterns which can serve as the nodes of a coarse-grained network, and present algorithms to detect them. We use this approach to systematically reverse-engineer electronic circuits, forming understandable high-level maps from incomprehensible transistor wiring: first, a coarse-grained version in which each node is a gate made of several transistors is established. Then, the coarse-grained network is itself coarse-grained, resulting in a high-level blueprint in which each node is a circuit-module made of multiple gates. We apply our approach also to a mammalian protein-signaling network, to find a simplified coarse-grained network with three main signaling channels that correspond to cross-interacting MAP-kinase cascades. We find that both biological and electronic networks are 'self-dissimilar', with different network motifs found at each level. The present approach can be used to simplify a wide variety of directed and nondirected, natural and designed networks. --- paper_title: Community detection in graphs paper_content: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. 
Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks. --- paper_title: Networks: An Introduction paper_content: The scientific study of networks, including computer networks, social networks, and biological networks, has received an enormous amount of interest in the last few years. The rise of the Internet and the wide availability of inexpensive computers have made it possible to gather and analyze network data on a large scale, and the development of a variety of new theoretical tools has allowed us to extract new knowledge from many different kinds of networks. The study of networks is broadly interdisciplinary and important developments have occurred in many fields, including mathematics, physics, computer and information sciences, biology, and the social sciences. This book brings together for the first time the most important breakthroughs in each of these fields and presents them in a coherent fashion, highlighting the strong interconnections between work in different areas. Subjects covered include the measurement and structure of networks in many branches of science, methods for analyzing network data, including methods developed in physics, statistics, and sociology, the fundamentals of graph theory, computer algorithms, and spectral methods, mathematical models of networks, including random graph models and generative models, and theories of dynamical processes taking place on networks. --- paper_title: Method to find community structures based on information centrality paper_content: Community structures are an important feature of many social, biological and technological networks. Here we study a variation on the method for detecting such communities proposed by Girvan and Newman and based on the idea of using centrality measures to define the community boundaries (M. Girvan and M. E. J. Newman, Community structure in social and biological networks, Proc. Natl. Acad. Sci. USA 99, 7821-7826 (2002)). We develop an algorithm of hierarchical clustering that consists in finding and removing iteratively the edge with the highest information centrality. We test the algorithm on computer generated and real-world networks whose community structure is already known or has been studied by means of other methods. We show that our algorithm, although it runs to completion in a time O(n^4), is very effective especially when the communities are very mixed and hardly detectable by the other methods. --- paper_title: Introduction to Algorithms paper_content: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures.
Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning. --- paper_title: Finding and evaluating community structure in networks paper_content: We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible "betweenness" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems. --- paper_title: Extracting the hierarchical organization of complex systems paper_content: Extracting understanding from the growing "sea" of biological and socioeconomic data is one of the most pressing scientific challenges facing us. Here, we introduce and validate an unsupervised method for extracting the hierarchical organization of complex biological, social, and technological networks. We define an ensemble of hierarchically nested random graphs, which we use to validate the method. We then apply our method to real-world networks, including the air-transportation network, an electronic circuit, an e-mail exchange network, and metabolic networks. Our analysis of model and real networks demonstrates that our method extracts an accurate multiscale representation of a complex system.
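The two Newman-Girvan entries above describe the same divisive recipe: repeatedly remove the edge with the highest betweenness, recompute the betweenness after every removal, and keep the split whose modularity Q is largest. Below is a minimal sketch of that loop, assuming the Python networkx library is available; the karate-club graph is only a convenient stand-in input, not data from the surveyed papers.

```python
# Sketch of the Girvan-Newman divisive algorithm with a modularity-based
# stopping rule, assuming the networkx library is available.
import networkx as nx
from networkx.algorithms.community import modularity

def girvan_newman_best_partition(G):
    """Iteratively remove the highest-betweenness edge; return the
    partition (list of node sets) with the largest modularity Q."""
    H = G.copy()
    best_Q, best_partition = float("-inf"), [set(G.nodes())]
    while H.number_of_edges() > 0:
        # Recalculate edge betweenness after every removal (the key step).
        betweenness = nx.edge_betweenness_centrality(H)
        edge = max(betweenness, key=betweenness.get)
        H.remove_edge(*edge)
        partition = [set(c) for c in nx.connected_components(H)]
        Q = modularity(G, partition)   # Q is always measured on the original graph
        if Q > best_Q:
            best_Q, best_partition = Q, partition
    return best_Q, best_partition

if __name__ == "__main__":
    G = nx.karate_club_graph()          # standard toy example
    Q, communities = girvan_newman_best_partition(G)
    print(f"best Q = {Q:.3f}, {len(communities)} communities")
```

networkx also ships a ready-made girvan_newman generator; the explicit loop is written out here only to make the recalculate-after-removal step visible.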
--- paper_title: Community structure in social and biological networks paper_content: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases. --- paper_title: Cycles and clustering in bipartite networks paper_content: We investigate the clustering coefficient in bipartite networks where cycles of size three are absent and therefore the standard definition of clustering coefficient cannot be used. Instead, we use another coefficient given by the fraction of cycles with size four, showing that both coefficients yield the same clustering properties. The new coefficient is computed for two networks of sexual contacts, one bipartite and another where no distinction between the nodes is made (monopartite). In both cases the clustering coefficient is similar. Furthermore, combining both clustering coefficients we deduce an expression for estimating cycles of larger size, which improves previous estimations and is suitable for either monopartite and multipartite networks, and discuss the applicability of such analytical estimations. --- paper_title: Collective dynamics of ‘small-world’ networks paper_content: Networks of coupled dynamical systems have been used to model biological oscillators [1,2,3,4], Josephson junction arrays [5,6], excitable media [7], neural networks [8,9,10], spatial games [11], genetic control networks [12] and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon [13,14] (popularly known as six degrees of separation [15]). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
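The Watts-Strogatz abstract above identifies small-world networks by the combination of lattice-like clustering and random-graph-like path lengths. The following sketch shows how those two diagnostics are commonly measured with networkx; the model size and rewiring probabilities are arbitrary illustrative values, not parameters from the paper.

```python
# Sketch: measure the two small-world diagnostics (clustering coefficient
# and characteristic path length) on Watts-Strogatz graphs at several
# rewiring probabilities, assuming networkx is available.
import networkx as nx

n, k = 1000, 10                        # illustrative size and (even) mean degree
for p in (0.0, 0.01, 0.1, 1.0):        # rewiring probability
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    C = nx.average_clustering(G)               # local clustering, averaged over nodes
    L = nx.average_shortest_path_length(G)     # characteristic path length
    print(f"p={p:<5} C={C:.3f}  L={L:.2f}")
# Small-world regime: intermediate p where C stays close to the lattice
# value while L has already dropped towards the random-graph value.
```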
--- paper_title: Uncovering the overlapping community structure of complex networks in nature and society paper_content: Many complex systems in nature and society can be described in terms of networks capturing the intricate web of connections among the units they are made of. A key question is how to interpret the global organization of such networks as the coexistence of their structural subunits (communities) associated with more highly interconnected parts. Identifying these a priori unknown building blocks (such as functionally related proteins, industrial sectors and groups of people) is crucial to the understanding of the structural and functional properties of networks. The existing deterministic methods used for large networks find separated communities, whereas most of the actual networks are made of highly overlapping cohesive groups of nodes. Here we introduce an approach to analysing the main statistical features of the interwoven sets of overlapping communities that makes a step towards uncovering the modular structure of complex systems. After defining a set of new characteristic quantities for the statistics of communities, we apply an efficient technique for exploring overlapping communities on a large scale. We find that overlaps are significant, and the distributions we introduce reveal universal features of networks. Our studies of collaboration, word-association and protein interaction graphs show that the web of communities has non-trivial correlations and specific scaling properties. --- paper_title: Defining and identifying communities in networks paper_content: The investigation of community structures in networks is an important issue in many domains and disciplines. This problem is relevant for social tasks (objective analysis of relationships on the web), biological inquiries (functional studies in metabolic and protein networks), or technological problems (optimization of large infrastructures). Several types of algorithms exist for revealing the community structure in networks, but a general and quantitative definition of community is not implemented in the algorithms, leading to an intrinsic difficulty in the interpretation of the results without any additional nontopological information. In this article we deal with this problem by showing how quantitative definitions of community are implemented in practice in the existing algorithms. In this way the algorithms for the identification of the community structure become fully self-contained. Furthermore, we propose a local algorithm to detect communities which outperforms the existing algorithms with respect to computational cost, keeping the same level of reliability. The algorithm is tested on artificial and real-world graphs. In particular, we show how the algorithm applies to a network of scientific collaborations, which, for its size, cannot be attacked with the usual methods. This type of local algorithm could open the way to applications to large-scale technological and biological systems. --- paper_title: A method of matrix analysis of group structure paper_content: Matrix methods may be applied to the analysis of experimental data concerning group structure when these data indicate relationships which can be depicted by line diagrams such as sociograms. One may introduce two concepts,n-chain and clique, which have simple relationships to the powers of certain matrices. 
Using them it is possible to determine the group structure by methods which are both faster and more certain than less systematic methods. This paper describes such a matrix method and applies it to the analysis of practical examples. At several points some unsolved problems in this field are indicated. ---
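The matrix-analysis entry above rests on a simple algebraic fact: if A is the adjacency matrix of the sociogram, the (i, j) entry of A^n counts the n-chains (walks of length n) from person i to person j, and the diagonal of A^3 exposes triangles, the smallest cliques. A short numpy illustration of both facts follows; the 5-person adjacency matrix is invented purely for demonstration.

```python
# Sketch of the matrix method for group structure: powers of the adjacency
# matrix count n-chains, and trace(A^3)/6 counts triangles in a simple
# undirected graph. The 5-person sociogram below is purely illustrative.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])

A2 = A @ A          # (i, j) entry: number of 2-chains from i to j
A3 = A2 @ A         # (i, j) entry: number of 3-chains from i to j

print("2-chains between persons 0 and 3:", A2[0, 3])
# Each triangle is counted 6 times on the diagonal of A^3
# (3 starting vertices x 2 directions).
print("number of triangles:", int(np.trace(A3) // 6))
```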
Title: TAXONOMY AND SURVEY OF COMMUNITY DISCOVERY METHODS IN COMPLEX NETWORKS Section 1: INTRODUCTION Description 1: This section introduces the concept of community detection in complex networks and outlines the objectives and scope of the survey. Section 2: COMMUNITIES DESCRIPTION Description 2: This section provides definitions of community structures, elaborating on graph concepts, comparative and self-referring definitions, and quality functions. Section 3: PROPOSITION OF TAXONOMY OF COMMUNITY DISCOVERY METHODS Description 3: This section proposes and explains a taxonomy for community discovery methods, dividing them into agglomerative and divisive approaches and further categorizing them based on stochastic versus deterministic methods and their technical implementations. Section 4: Agglomerative Methods Description 4: This section describes methods that start with each node as its own community and iteratively merge them, including stochastic methods like random walk techniques, synchronization techniques, systems of spins, and deterministic methods like greedy techniques and algorithms based on topological organization. Section 5: Divisive Methods Description 5: This section details methods that split the graph into communities by removing edges and gradually dividing the network, featuring stochastic methods, spectral graph partitioning, and algorithms based on local properties and clustering coefficient properties. Section 6: CONCLUSIONS Description 6: This section summarizes the survey and taxonomy, discussing the significance of community detection and suggesting future directions for applying these methods to various types of complex networks.
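Section 4 of the outline above places greedy modularity optimisation (the fast agglomerative method of the "Finding community structure in very large networks" entry earlier in this row) in the deterministic, agglomerative branch of the taxonomy. A hedged sketch of how that branch is typically exercised in practice, assuming networkx; the toy graph is again only a stand-in.

```python
# Sketch of the agglomerative/greedy branch of the taxonomy: the
# Clauset-Newman-Moore greedy modularity optimisation as packaged in
# networkx. Any undirected graph can be substituted for the toy example.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                       # illustrative input graph
communities = greedy_modularity_communities(G)   # list of frozensets, merged greedily
Q = modularity(G, communities)

for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
print(f"modularity Q = {Q:.3f}")
```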
Survey: Natural Language Parsing For Indian Languages
6
--- paper_title: Computational Morphology and Natural Language Parsing for Indian Languages: A Literature Survey paper_content: Computational Morphology and Natural Language Parsing are the two important as well as essential tasks required for a number of natural language processing application including machine translation. Developing well fledged morphological analyzer and generator (MAG) tools or natural language parsers for highly agglutinative languages is a challenging task. The function of morphological analyzer is to return all the morphemes and their grammatical categories associated with a particular word form. For a given root word and grammatical information, morphological generator will generate the particular word form of that word. On the other hand Parsing is used to understand the syntax and semantics of a natural language sentences confined to the grammar. This literature survey is a ground work to understand the different morphology and parser developments in Indian language. In addition, the paper also deals with various approaches that are used to develop morphological analyzer and generator and natural language parsers tools. --- paper_title: Computational Morphology and Natural Language Parsing for Indian Languages: A Literature Survey paper_content: Computational Morphology and Natural Language Parsing are the two important as well as essential tasks required for a number of natural language processing application including machine translation. Developing well fledged morphological analyzer and generator (MAG) tools or natural language parsers for highly agglutinative languages is a challenging task. The function of morphological analyzer is to return all the morphemes and their grammatical categories associated with a particular word form. For a given root word and grammatical information, morphological generator will generate the particular word form of that word. On the other hand Parsing is used to understand the syntax and semantics of a natural language sentences confined to the grammar. This literature survey is a ground work to understand the different morphology and parser developments in Indian language. In addition, the paper also deals with various approaches that are used to develop morphological analyzer and generator and natural language parsers tools. --- paper_title: Parsing Free Word Order Languages In The Paninian Framework paper_content: There is a need to develop a suitable computational grammar formalism for free word order languages for two reasons: First, a suitably designed formalism is likely to be more efficient. Second, such a formalism is also likely to be linguistically more elegant and satisfying. In this paper, we describe such a formalism, called the Paninian framework, that has been successfully applied to Indian languages.This paper shows that the Paninian framework applied to modern Indian languages gives an elegant account of the relation between surface form (vibhakti) and semantic (karaka) roles. The mapping is elegant and compact. The same basic account also explains active-passives and complex sentences. This suggests that the solution is not just adhoc but has a deeper underlying unity.A constraint based parser is described for the framework. The constraints problem reduces to bipartite graph matching problem because of the nature of constraints. 
Efficient solutions are known for these problems. It is interesting to observe that such a parser (designed for free word order languages) compares well in asymptotic time complexity with the parser for context free grammars (CFGs) which are basically designed for positional languages. --- paper_title: Improving Data Driven Dependency Parsing using Clausal Information paper_content: The paper describes a data driven dependency parsing approach which uses clausal information of a sentence to improve the parser performance. The clausal information is added automatically during the parsing process. We demonstrate the experiments on Hindi, a language with relatively rich case marking system and free-word-order. All the experiments are done using a modified version of MSTParser. We did all the experiments on the ICON 2009 parsing contest data. We achieved an improvement of 0.87% and 0.77% in unlabeled attachment and labeled attachment accuracies respectively over the baseline parsing accuracies. --- paper_title: Simple Parser for Indian Languages in a Dependency Framework paper_content: This paper is an attempt to show that an intermediary level of analysis is an effective way for carrying out various NLP tasks for linguistically similar languages. We describe a process for developing a simple parser for doing such tasks. This parser uses a grammar driven approach to annotate dependency relations (both inter and intra chunk) at an intermediary level. Ease in identifying a particular dependency relation dictates the degree of analysis reached by the parser. To establish efficiency of the simple parser we show the improvement in its results over previous grammar driven dependency parsing approaches for Indian languages like Hindi. We also propose the possibility of usefulness of the simple parser for Indian languages that are similar in nature. --- paper_title: Dependency Parser for Bengali: the JU System at ICON 2009 paper_content: This paper reports about our work in the ICON 2009 NLP TOOLS CONTEST: Parsing. We submitted two runs for Bengali. A statistical CRF based model followed by a rule-based post-processing technique has been used. The system has been trained on the NLP TOOLS CONTEST: ICON 2009 datasets. The system demonstrated an unlabeled attachment score (UAS) of 74.09%, labeled attachment score (LAS) of 53.90% and labeled accuracy score (LS) of 61.71% respectively. --- paper_title: Parsing of part-of-speech tagged Assamese Texts paper_content: A natural language (or ordinary language) is a language that is spoken, written, or signed by humans for general-purpose communication, as distinguished from formal languages (such as computer-programming languages or the "languages" used in the study of formal logic). The computational activities required for enabling a computer to carry out information processing using natural language is called natural language processing. We have taken Assamese language to check the grammars of the input sentence. Our aim is to produce a technique to check the grammatical structures of the sentences in Assamese text. We have made grammar rules by analyzing the structures of Assamese sentences. Our parsing program finds the grammatical errors, if any, in the Assamese sentence.
If there is no error, the program will generate the parse tree for the Assamese sentence. --- paper_title: Solving the Noun Phrase and Verb Phrase Agreement in Kannada Sentences paper_content: This paper proposes a way of producing context free grammar for solving Noun and Verb agreement in Kannada Sentences. In most of the Indian languages including Kannada a verb ends with a token which indicates the gender of the person (Noun/Pronoun). This paper shows the implementation of this agreement using Context Free Grammar. It uses Recursive Descent Parser to parse the CFG. Around 200 sample sentences have been taken to test the agreement. --- paper_title: Exploring Semantic Information in Hindi WordNet for Hindi Dependency Parsing paper_content: In this paper, we present our efforts towards incorporating external knowledge from Hindi WordNet to aid dependency parsing. We conduct parsing experiments on Hindi, an Indo-Aryan language, utilizing the information from concept ontologies available in Hindi WordNet to complement the morpho-syntactic information already available. The work is driven by the insight that concept ontologies capture a specific real world aspect of lexical items, which is quite distinct and unlikely to be deduced from morpho-syntactic information such as morph, POS-tag and chunk. This complementing information is encoded as an additional feature for data driven parsing and experiments are conducted. We perform experiments over datasets of different sizes. We achieve an improvement of 1.1% (LAS) when training on 1,000 sentences and 0.2% (LAS) on 13,371 sentences over the baseline. The improvements are statistically significant at p<0.01. The higher improvements on 1,000 sentences suggest that the semantic information could address the data sparsity problem. ---
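The Kannada agreement entry above encodes gender agreement between the subject noun phrase and the verb ending directly in a context-free grammar and checks it with a recursive-descent parser. A language-neutral sketch of that idea using NLTK follows; the grammar rules and the placeholder tokens are invented for illustration and are not taken from the paper.

```python
# Sketch: encode subject-verb gender agreement in a CFG by splitting the
# NP/VP nonterminals per gender, then parse with NLTK's recursive-descent
# parser. Grammar and tokens are illustrative placeholders, not real
# Kannada data from the surveyed paper.
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP_M VP_M | NP_F VP_F
NP_M -> 'boy'
NP_F -> 'girl'
VP_M -> V_STEM SUFF_M
VP_F -> V_STEM SUFF_F
V_STEM -> 'read'
SUFF_M -> '-masc'
SUFF_F -> '-fem'
""")

parser = nltk.RecursiveDescentParser(grammar)

for sentence in (["boy", "read", "-masc"],      # agreement holds -> parse found
                 ["boy", "read", "-fem"]):      # agreement violated -> no parse
    trees = list(parser.parse(sentence))
    status = "grammatical" if trees else "agreement error"
    print(" ".join(sentence), "->", status)
    for t in trees:
        print(t)
```

Splitting the NP/VP nonterminals per gender is the simplest way to express agreement in a plain CFG; a feature-based grammar (for example NLTK's FeatureGrammar) would state the same constraint more compactly.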
Title: Survey: Natural Language Parsing For Indian Languages Section 1: INTRODUCTION Description 1: Introduce the importance of syntactic parsing in natural language processing and highlight the challenges specifically faced in parsing Indian languages. Section 2: BACKGROUND THEORY Description 2: Describe the different categories of natural language parsing techniques, including rule-based, statistical, and generalized parsing methods. Section 3: LITERATURE SURVEY FOR INDIAN LANGUAGES Description 3: Review the existing work on syntactic parsing for various Indian languages, detailing different parsers, methodologies, and results achieved. Section 4: ISSUES IN SYNTACTIC PARSING Description 4: Discuss the structural ambiguities and challenges faced in syntactic parsing of Indian languages, such as scope ambiguity and attachment ambiguity. Section 5: PERFORMANCE MEASURES Description 5: Explain the metrics used to evaluate the performance of syntactic parsers, such as precision, recall, and F-measure. Section 6: CONCLUSION Description 6: Summarize the survey, emphasizing the importance of annotated corpora for Indian languages and proposing future work towards creating efficient syntactic analyzers.
Cosmological surveys with multi-object spectrographs
10
--- paper_title: The Topology of the IRAS Point Source Catalogue Redshift Survey paper_content: We investigate the topology of the new Point Source Catalogue Redshift Survey (PSCz) of IRAS galaxies by means of the genus statistic. The survey maps the local Universe with approximately 15 000 galaxies over 84.1 per cent of the sky, and provides an unprecedented number of resolution elements for the topological analysis. For comparison with the PSCz data we also examine the genus of large N-body simulations of four variants of the cold dark matter (CDM) cosmogony. The simulations are part of the Virgo project to simulate the formation of structure in the Universe. We assume that the statistical properties of the galaxy distribution can be identified with those of the dark matter particles in the simulations. We extend the standard genus analysis by examining the influence of sampling noise on the genus curve and introducing a statistic able to quantify the amount of phase correlation present in the density field, the amplitude drop of the genus compared to a Gaussian field with identical power spectrum. The results for PSCz are consistent with the hypothesis of random-phase initial conditions. In particular, no strong phase correlation is detected on scales ranging from 10 to 32 h−1 Mpc, whereas there is a positive detection of phase correlation at smaller scales. Among the simulations, phase correlations are detected in all models at small scales, albeit with different strengths. When scaled to a common normalization, the amplitude drop depends primarily on the shape of the power spectrum. We find that the constant-bias standard CDM model can be ruled out at high significance, because the shape of its power spectrum is not consistent with PSCz. The other CDM models with more large-scale power all fit the PSCz data almost equally well, with a slight preference for a high-density τCDM model. --- paper_title: The Las Campanas Redshift Survey paper_content: The Las Campanas Redshift Survey (LCRS) consists of 26418 redshifts of galaxies selected from a CCD-based catalog obtained in the $R$ band. The survey covers over 700 square degrees in 6 strips, each 1.5$\arcdeg$ x 80$\arcdeg$, three each in the North and South galactic caps. The median redshift in the survey is about 30000 km~s$^{-1}$. Essential features of the galaxy selection and redshift measurement methods are described and tabulated here. These details are important for subsequent analysis of the LCRS data. Two dimensional representations of the redshift distributions reveal many repetitions of voids, on the scale of about 5000 km~s$^{-1}$, sharply bounded by large walls of galaxies as seen in nearby surveys. Statistical investigations of the mean galaxy properties and of clustering on the large scale are reported elsewhere. These include studies of the luminosity function, power spectrum in two and three dimensions, correlation function, pairwise velocity distribution, identification of large scale structures, and a group catalog. The LCRS redshift catalog will be made available to interested investigators at an internet web site and in archival form as an Astrophysical Journal CD-ROM. --- paper_title: The PSCz catalogue paper_content: We present the catalogue, mask, redshift data and selection function for the PSCz survey of 15 411 IRAS galaxies across 84 per cent of the sky. 
Most of the IRAS data are taken from the Point Source Catalog, but this has been supplemented and corrected in various ways to improve the completeness and uniformity. We quantify the known imperfections in the catalogue, and we assess the overall uniformity, completeness and data quality. We find that overall the catalogue is complete and uniform to within a few per cent at high latitudes and 10 per cent at low latitudes. Ancillary information, access details, guidelines and caveats for using the catalogue are given. --- paper_title: Measures of large-scale structure in the CfA redshift survey slices paper_content: Variations of the counts-in-cells with cell size are used here to define two statistical measures of large-scale clustering in three 6 deg slices of the CfA redshift survey. A percolation criterion is used to estimate the filling factor which measures the fraction of the total volume in the survey occupied by the large-scale structures. For the full 18 deg slice of the CfA redshift survey, f is about 0.25 ± 0.05. After removing groups with more than five members from two of the slices, variations of the counts in occupied cells with cell size have a power-law behavior with a slope beta of about 2.2 on scales from 1-10/h Mpc. Application of both this statistic and the percolation analysis to simulations suggests that a network of two-dimensional structures is a better description of the geometry of the clustering in the CfA slices than a network of one-dimensional structures. Counts-in-cells are also used to estimate the average galaxy surface density in sheets like the Great Wall at 0.3 galaxy h-squared/Mpc. 46 refs. --- paper_title: Theory of cosmological perturbations paper_content: We present in a manifestly gauge-invariant form the theory of classical linear gravitational perturbations in part I, and a quantum theory of cosmological perturbations in part II. Part I includes applications to several important examples arising in cosmology: a universe dominated by hydrodynamical matter, a universe filled with scalar-field matter, and higher-derivative theories of gravity. The growth rates of perturbations are calculated analytically in most interesting cases. The analysis is applied to study the evolution of fluctuations in inflationary universe models. Part II includes a unified description of the quantum generation and evolution of inhomogeneities about a classical Friedmann background. The method is based on standard canonical quantization of the action for cosmological perturbations which has been reduced to an expression in terms of a single gauge-invariant variable. The spectrum of density perturbations originating in quantum fluctuations is calculated in universes with hydrodynamical matter, in inflationary universe models with scalar-field matter, and in higher-derivative theories of gravity. ::: ::: The gauge-invariant theory of classical and quantized cosmological perturbations developed in parts I and II is applied in part III to several interesting physical problems. It allows a simple derivation of the relation between temperature anisotropies in the cosmic microwave background radiation and the gauge-invariant potential for metric perturbations. The generation and evolution of gravitational waves is studied. As another example, a simple analysis of entropy perturbations and non-scale-invariant spectra in inflationary universe models is presented.
The gauge-invariant theory of cosmological perturbations also allows a consistent and gauge-invariant definition of statistical fluctuations. --- paper_title: Mapping the Universe paper_content: Redshift surveys starting from a catalog compiled in 1960's to a map of about two million galaxies covering an area of about 4000 sq deg in the southern hemisphere are outlined. Attention is focused on large-scale features in the distribution of galaxies and the time-saving observation method employed in the mapping of the general galaxy distribution. Very big structures such as the Great Attractor - an agglomeration of galaxies in the direction of Hydra and Centaurus, and the Great Wall are discussed. A cold-dark-matter model with biased galaxy formation is considered, and it is noted that if cold dark matter exists, the contrast between the voids and sheets in the maps is misleading. It is pointed out that the large-scale features in the galaxy distribution, dark-matter problem, and detection of organized flows are among the important observational constraints on models for the formation and evolution of the large-scale structure of the universe. --- paper_title: The Pairwise Velocity Distribution of Galaxies in the Las Campanas Redshift Survey paper_content: We present a novel measurement of the pairwise peculiar velocity distribution function of galaxies on scales r<3200 km s-1 in the Las Campanas Redshift Survey. The distribution is well described by a scale-independent exponential with a width τ, where σ12 = √2τ=363 km s-1. The signal is very stable. Results from the northern and southern sets of slices agree within ±13 km s-1, and the fluctuations among the six individual survey slices vary as ±44 km s-1. The distribution was determined using a Fourier-space deconvolution of the redshift-space distortions in the correlation function. This technique is insensitive to the effect of rich clusters in the survey and recovers the entire distribution function rather than just its second moment. Taken together with the large effective volume of the survey, 6.0 × 106 h-1 Mpc3, we believe this to be a definitive result for r-band-selected galaxies with absolute magnitudes -18.5 < Mr < -22.5 and z < 0.2. --- paper_title: The IRAS PSCz dipole paper_content: We use the PSCzIRAS galaxy redshift survey to analyse the cosmological galaxy dipole out to a distance of 300 h−1 Mpc. The masked area is filled in three different ways, first by sampling the whole sky at random, secondly by using neighbouring areas to fill a masked region, and thirdly using a spherical harmonic analysis. The method of treatment of the mask is found to have a significant effect on the final calculated dipole. ::: ::: ::: ::: The conversion from redshift space to real space is accomplished by using an analytical model of the cluster and void distribution, based on 88 nearby groups, 854 clusters and 163 voids, with some of the clusters and all of the voids found from the PSCz data base. ::: ::: ::: ::: The dipole for the whole PSCz sample appears to have converged within a distance of 200 h−1 Mpc and yields a value for , consistent with earlier determinations from IRAS samples by a variety of methods. For b=1, the 2σ range for Ω0 is 0.43–1.02. ::: ::: ::: ::: The direction of the dipole is within 13° of the cosmic microwave background (CMB) dipole, the main uncertainty in direction being associated with the masked area behind the Galactic plane. 
The improbability of further major contributions to the dipole amplitude coming from volumes larger than those surveyed here means that the question of the origin of the CMB dipole is essentially resolved. --- paper_title: The Las Campanas Redshift Survey Galaxy-Galaxy Autocorrelation Function paper_content: Presented are measurements of the observed redshift-space galaxy-galaxy autocorrelation function, xi(s), for the Las Campanas Redshift Survey (LCRS). For separations 2.0/h Mpc < s < 16.4/h Mpc, xi(s) can be approximated by a power law with slope of -1.52 +/- 0.03 and a correlation length of s_0 = (6.28 +/- 0.27)/h Mpc. A zero-crossing occurs on scales of roughly 30 - 40/h Mpc. On larger scales, xi(s) fluctuates closely about zero, indicating a high level of uniformity in the galaxy distribution on these scales. In addition, two aspects of the LCRS selection criteria - a variable field-to-field galaxy sampling rate and a 55 arcsec galaxy pair separation limit - are tested and found to have little impact on the measurement of xi(s). Finally, the LCRS xi(s) is compared with those from numerical simulations; it is concluded that, although the LCRS xi(s) does not discriminate sharply among modern cosmological models, redshift-space distortions in the LCRS xi(s) will likely provide a strong test of theory. --- paper_title: Spherical Harmonic Analysis of the PSCz Galaxy Catalogue: Redshift distortions and the real-space power paper_content: We apply the formalism of spherical harmonic decomposition to the galaxy density field of the IRAS PSCz redshift survey. The PSCz redshift survey has almost all-sky coverage and includes IRAS galaxies to a flux limit of 0.6 Jy. Using maximum likelihood methods to examine (to first order) the distortion of the galaxy pattern resulting from redshift coordinates, we have measured the parameter β ≡ Ω^0.6/b. We also simultaneously measure either (a) the undistorted amplitude of perturbations in the galaxy distribution when a parametrized power spectrum is assumed, or (b) the shape and amplitude of the real-space power spectrum if the band-power in a set of passbands is measured in a step-wise fashion. These methods are extensively tested on a series of CDM, ΛCDM and MDM simulations and are found to be unbiased. ::: ::: We obtain consistent results for the subset of the PSCz catalogue with flux above 0.75 Jy, but inclusion of galaxies to the formal flux limit of the catalogue gives variations which are larger than our internal errors. For the 0.75-Jy catalogue we find, in the case of a parametrized power spectrum, β = 0.58 ± 0.26 and the amplitude of the real-space power measured at wavenumber k = 0.1 h Mpc^-1 is Δ_0.1 = 0.42 ± 0.03. Freeing the shape of the power spectrum we find that β = 0.47 ± 0.16 (conditional error) and Δ_0.1 = 0.47 ± 0.03. The shape of the real-space power spectrum is consistent with a Γ = 0.2 CDM-like model, but does not strongly rule out a number of other models. Finally, by combining our estimate of the amplitude of galaxy clustering and the distortion parameter, we find the amplitude of mass fluctuations on a scale k = 0.1 h Mpc^-1 is Δ_ρ = 0.24 Ω_0^-0.6, with an uncertainty of 50 per cent.
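The closing number in the PSCz spherical-harmonic abstract can be recovered with a short back-of-envelope calculation, assuming linear bias so that the mass amplitude is the galaxy amplitude divided by b, with β = Ω_0^0.6/b. This is only an illustrative consistency check using the parametrized-fit values, not the authors' likelihood analysis:

```python
# Back-of-envelope check: with Delta_mass = Delta_gal / b and
# beta = Omega_0**0.6 / b, eliminating the bias b gives
# Delta_mass = beta * Delta_gal * Omega_0**(-0.6).
beta = 0.58        # distortion parameter from the parametrized fit
delta_gal = 0.42   # galaxy fluctuation amplitude at k = 0.1 h/Mpc
prefactor = beta * delta_gal
print(f"Delta_rho ~ {prefactor:.2f} * Omega_0^-0.6")  # ~0.24, as quoted
```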
--- paper_title: On density and velocity fields and beta from the IRAS PSCz survey paper_content: We present a version of the Fourier Bessel method first introduced by Fisher et al (1994) and Zaroubi et al (1994) with two extensions: (a) we amend the formalism to allow a generic galaxy weight which can be constant rather than the more conventional overweighting of galaxies at high distances, and (b) we correct for the masked zones by extrapolation of Fourier Bessel modes rather than by cloning from the galaxy distribution in neighbouring regions. We test the procedure extensively on N-body simulations and find that it gives generally unbiased results but that the reconstructed velocities tend to be overpredicted in high-density regions. Applying the formalism to the PSCz redshift catalog, we find that beta = 0.7 +/- 0.5 from a comparison of the reconstructed Local Group velocity to the CMB dipole. From an anisotropy test of the velocity field, we find that beta = 1 CDM models normalized to the current cluster abundance can be excluded with 90% confidence. The density and velocity fields reconstructed agree with the fields found by Branchini et al (1998) in most points. We find a back-infall into the Great Attractor region (Hydra-Centaurus region) but tests suggest that this may be an artifact. We identify all the major clusters in our density field and confirm the existence of some previously identified possible ones. --- paper_title: The 2dF Galaxy Redshift Survey: Spectra and redshifts paper_content: The 2dF Galaxy Redshift Survey (2dFGRS) is designed to measure redshifts for approximately 250 000 galaxies. This paper describes the survey design, the spectroscopic observations, the redshift measurements and the survey data base. The 2dFGRS uses the 2dF multifibre spectrograph on the Anglo-Australian Telescope, which is capable of observing 400 objects simultaneously over a 2° diameter field. The source catalogue for the survey is a revised and extended version of the APM galaxy catalogue, and the targets are galaxies with extinction-corrected magnitudes brighter than b_J = 19.45. The main survey regions are two declination strips, one in the southern Galactic hemisphere spanning 80° × 15° around the SGP, and the other in the northern Galactic hemisphere spanning 75° × 10° along the celestial equator; in addition, there are 99 fields spread over the southern Galactic cap. The survey covers 2000 deg^2 and has a median depth of z = 0.11. Adaptive tiling is used to give a highly uniform sampling rate of 93 per cent over the whole survey region. Redshifts are measured from spectra covering 3600-8000 Å at a two-pixel resolution of 9.0 Å and a median S/N of 13 pixel^-1. All redshift identifications are visually checked and assigned a quality parameter Q in the range 1-5; Q ≥ 3 redshifts are 98.4 per cent reliable and have an rms uncertainty of 85 km s^-1. The overall redshift completeness for Q ≥ 3 redshifts is 91.8 per cent, but this varies with magnitude from 99 per cent for the brightest galaxies to 90 per cent for objects at the survey limit. The 2dFGRS data base is available on the World Wide Web at http://www.mso.anu.edu.au/2dFGRS.
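Before any of the clustering statistics discussed in these abstracts can be computed, each survey's angular positions and redshifts have to be mapped to comoving coordinates in an assumed background cosmology. A minimal sketch of that step using astropy, with placeholder cosmological parameters and made-up coordinates rather than any survey's actual fiducial choices:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Assumed fiducial background; any survey analysis must state its own choice.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

# Example redshifts around the 2dFGRS median depth of z ~ 0.11.
z = np.array([0.05, 0.11, 0.20])
d_c = cosmo.comoving_distance(z)          # comoving distance, in Mpc

# Convert (RA, Dec, z) to comoving Cartesian coordinates for a pair-counting
# or power-spectrum code; the RA/Dec values here are arbitrary illustrations.
ra = np.radians([10.0, 40.0, 55.0])
dec = np.radians([-30.0, -28.0, -31.0])
x = d_c.value * np.cos(dec) * np.cos(ra)
y = d_c.value * np.cos(dec) * np.sin(ra)
z_cart = d_c.value * np.sin(dec)
print(np.c_[x, y, z_cart])
```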
--- paper_title: Evidence for a non-zero Λ and a low matter density from a combined analysis of the 2dF Galaxy Redshift Survey and cosmic microwave background anisotropies paper_content: We perform a joint likelihood analysis of the power spectra of the 2dF Galaxy Redshift Survey (2dFGRS) and the cosmic microwave background (CMB) anisotropies under the assumptions that the initial fluctuations were adiabatic, Gaussian and well described by power laws with scalar and tensor indices of n(s) and n(t). On its own, the 2dFGRS sets tight limits on the parameter combination Omega(m)h, but relatively weak limits on the fraction of the cosmic matter density in baryons Omega(b)/Omega(m). (Here h is Hubble's constant H(0) in units of 100 km s(-1) Mpc(-1). The cosmic densities in baryons, cold dark matter and vacuum energy are denoted by Omega(b), Omega(c) and Omega(Lambda), respectively. The total matter density is Omega(m) = Omega(b) + Omega(c) and the curvature is fixed by Omega(k) = 1 - Omega(m) - Omega(Lambda).) The CMB anisotropy data alone set poor constraints on the cosmological constant and Hubble constant because of a 'geometrical degeneracy' among parameters. Furthermore, if tensor modes are allowed, the CMB data allow a wide range of values for the physical densities in baryons and cold dark matter (omega(b) = Omega(b)h(2) and omega(c) = Omega(c)h(2)). Combining the CMB and 2dFGRS data sets helps to break both the geometrical and tensor mode degeneracies. The values of the parameters derived here are consistent with the predictions of the simplest models of inflation, with the baryon density derived from primordial nucleosynthesis and with direct measurements of the Hubble parameter. In particular, we find strong evidence for a positive cosmological constant with a +/-2 sigma range of 0.65 < Omega(Lambda) < 0.85, independently of constraints on Omega(Lambda) derived from Type Ia supernovae. --- paper_title: A measurement of the cosmological mass density from clustering in the 2dF Galaxy Redshift Survey paper_content: The large-scale structure in the distribution of galaxies is thought to arise from the gravitational instability of small fluctuations in the initial density field of the Universe. A key test of this hypothesis is that forming superclusters of galaxies should generate a systematic infall of other galaxies. This would be evident in the pattern of recessional velocities, causing an anisotropy in the inferred spatial clustering of galaxies. Here we report a precise measurement of this clustering, using the redshifts of more than 141,000 galaxies from the two-degree-field (2dF) galaxy redshift survey. We determine the parameter β = Ω^0.6/b = 0.43 ± 0.07, where Ω is the total mass-density parameter of the Universe and b is a measure of the ‘bias’ of the luminous galaxies in the survey. (Bias is the difference between the clustering of visible galaxies and of the total mass, most of which is dark.) Combined with the anisotropy of the cosmic microwave background, our results favour a low-density Universe with Ω ≈ 0.3. --- paper_title: The 2dF Galaxy Redshift Survey: the power spectrum and the matter content of the Universe paper_content: The 2dF Galaxy Redshift Survey has now measured in excess of 160 000 galaxy redshifts. This paper presents the power spectrum of the galaxy distribution, calculated using a direct Fourier transform based technique.
We argue that, within the k-space region 0.02 less than or similar to k less than or similar to 0.15 h Mpc(-1), the shape of this spectrum should be close to that of the linear density perturbations convolved with the window function of the survey. This window function and its convolving effect on the power spectrum estimate are analysed in detail. By convolving model spectra, we are able to fit the power-spectrum data and provide a measure of the matter content of the Universe. Our results show that models containing baryon oscillations are mildly preferred over featureless power spectra. Analysis of the data yields 68 per cent confidence limits on the total matter density times the Hubble parameter Omega (m) h = 0.20 +/- 0.03, and the baryon fraction Omega (b)/Omega (m) = 0.15 +/- 0.07, assuming scale-invariant primordial fluctuations. --- paper_title: The 2dF Galaxy Redshift Survey: Final data release paper_content: The 2dF Galaxy Redshift Survey (2dFGRS) has obtained spectra for 245591 sources, mainly galaxies, brighter than a nominal extinction-corrected magnitude limit of b_J=19.45. Reliable redshifts were measured for 221414 galaxies. The galaxies are selected from the extended APM Galaxy Survey and cover an area of approximately 1500 square degrees in three regions: an NGP strip, an SGP strip and random fields scattered around the SGP strip. This paper describes the 2dFGRS final data release of 30 June 2003 and complements Colless et al. (2001), which described the survey and the initial 100k data release. The 2dFGRS database and full documentation are available on the WWW at http://www.mso.anu.edu.au/2dFGRS/ --- paper_title: The 2dF Galaxy Redshift Survey: Power-spectrum analysis of the final dataset and cosmological implications paper_content: We present a power-spectrum analysis of the final 2dF Galaxy Redshift Survey (2dFGRS), employing a direct Fourier method. The sample used comprises 221 414 galaxies with measured redshifts. We investigate in detail the modelling of the sample selection, improving on previous treatments in a number of respects. A new angular mask is derived, based on revisions to the photometric calibration. The redshift selection function is determined by dividing the survey according to rest-frame colour, and deducing a self-consistent treatment of k-corrections and evolution for each population. The covariance matrix for the power-spectrum estimates is determined using two different approaches to the construction of mock surveys, which are used to demonstrate that the input cosmological model can be correctly recovered. We discuss in detail the possible differences between the galaxy and mass power spectra, and treat these using simulations, analytic models and a hybrid empirical approach. Based on these investigations, we are confident that the 2dFGRS power spectrum can be used to infer the matter content of the universe. On large scales, our estimated power spectrum shows evidence for the ‘baryon oscillations’ that are predicted in cold dark matter (CDM) models. Fitting to a CDM model, assuming a primordial n s = 1 spectrum, h = 0.72 and negligible neutrino mass, the preferred --- paper_title: The 6dF Galaxy Survey: samples, observational techniques and the first data release paper_content: The 6dF Galaxy Survey (6dFGS) aims to measure the redshifts of around 150 000 galaxies, and the peculiar velocities of a 15 000-member subsample, over almost the entire southern sky. 
When complete, it will be the largest redshift survey of the nearby Universe, reaching out to about z similar to 0.15, and more than an order of magnitude larger than any peculiar velocity survey to date. The targets are all galaxies brighter than K-tot = 12.75 in the 2MASS Extended Source Catalog (XSC), supplemented by 2MASS and SuperCOSMOS galaxies that complete the sample to limits of (H, J, r(F), b(J)) = (13.05, 13.75, 15.6, 16.75). Central to the survey is the Six-Degree Field (6dF) multifibre spectrograph, an instrument able to record 150 simultaneous spectra over the 5.7-field of the UK Schmidt Telescope. An adaptive tiling algorithm has been employed to ensure around 95 per cent fibring completeness over the 17 046 deg(2) of the southern sky with \b\ > 10degrees. Spectra are obtained in two observations using separate V and R gratings, that together give R similar to 1000 over at least 4000-7500 Angstrom and signal-to-noise ratio similar to10 per pixel. Redshift measurements are obtained semi-automatically, and are assigned a quality value based on visual inspection. The 6dFGS data base is available at http://www-wfau.roe.ac.uk/6dFGS/, with public data releases occurring after the completion of each third of the survey. --- paper_title: Simultaneous constraints on the growth of structure and cosmic expansion from the multipole power spectra of the SDSS DR7 LRG sample paper_content: The anisotropic galaxy clustering on large scales provides us with a unique opportunity to probe into the gravity theory through the redshift-space distortions (RSDs) and the Alcock-Paczynski effect. Using the multipole power spectra up to hexadecapole (ell=4), of the Luminous Red Galaxy (LRG) sample in the data release 7 (DR7) of the Sloan Digital Sky Survey II (SDSS-II), we obtain simultaneous constraints on the linear growth rate f, angular diameter distance D_A, and Hubble parameter H at redshift z = 0.3. For this purpose, we first extensively examine the validity of a theoretical model for the non-linear RSDs using mock subhalo catalogues from N-body simulations, which are constructed to match with the observed multipole power spectra. We show that the input cosmological parameters of the simulations can be recovered well within the error bars by comparing the multipole power spectra of our theoretical model and those of the mock subhalo catalogues. We also carefully examine systematic uncertainties in our analysis by testing the dependence on prior assumption of the theoretical model and the range of wavenumbers to be used in the fitting. These investigations validate that the theoretical model can be safely applied to the real data. Thus, our results from the SDSS DR7 LRG sample are robust including systematics of theoretical modeling; f(z = 0.3) sigma_8(z = 0.3) =0.49+-0.08(stat.)+-0.04(sys.), D_A (z = 0.3) =968+-42(stat.)+-17(sys.)[Mpc], H (z = 0.3) =81.7+-5.0(stat.)+-3.7(sys.)[km/s/Mpc]. We believe that our method to constrain the cosmological parameters using subhaloes catalogues will be useful for more refined samples like CMASS and LOWZ catalogues in the Baryon Oscillation Spectroscopic Survey in SDSS-III. --- paper_title: Improved cosmological constraints from a joint analysis of the SDSS-II and SNLS supernova samples paper_content: Aims. We present cosmological constraints from a joint analysis of type Ia supernova (SN Ia) observations obtained by the SDSS-II and SNLS collaborations. 
The dataset includes several low-redshift samples (z< 0.1), all three seasons from the SDSS-II (0.05 <z< 0.4), and three years from SNLS (0.2 <z< 1), and it totals 740 spectroscopically confirmed type Ia supernovae with high-quality light curves. ::: ::: Methods. We followed the methods and assumptions of the SNLS three-year data analysis except for the following important improvements: 1) the addition of the full SDSS-II spectroscopically-confirmed SN Ia sample in both the training of the SALT2 light-curve model and in the Hubble diagram analysis (374 SNe); 2) intercalibration of the SNLS and SDSS surveys and reduced systematic uncertainties in the photometric calibration, performed blindly with respect to the cosmology analysis; and 3) a thorough investigation of systematic errors associated with the SALT2 modeling of SN Ia light curves. ::: ::: Results. We produce recalibrated SN Ia light curves and associated distances for the SDSS-II and SNLS samples. The large SDSS-II sample provides an effective, independent, low-z anchor for the Hubble diagram and reduces the systematic error from calibration systematics in the low-z SN sample. For a flat ΛCDM cosmology, we find Ωm =0.295 ± 0.034 (stat+sys), a value consistent with the most recent cosmic microwave background (CMB) measurement from the Planck and WMAP experiments. Our result is 1.8σ (stat+sys) different than the previously published result of SNLS three-year data. The change is due primarily to improvements in the SNLS photometric calibration. When combined with CMB constraints, we measure a constant dark-energy equation of state parameter w =−1.018 ± 0.057 (stat+sys) for a flat universe. Adding baryon acoustic oscillation distance measurements gives similar constraints: w =−1.027 ± 0.055. Our supernova measurements provide the most stringent constraints to date on the nature of dark energy. --- paper_title: Observational Probes of Cosmic Acceleration paper_content: The accelerating expansion of the universe is the most surprising cosmological discovery in many decades, implying that the universe is dominated by some form of “dark energy” with exotic physical properties, or that Einstein’s theory of gravity breaks down on cosmological scales. The profound implications of cosmic acceleration have inspired ambitious efforts to understand its origin, with experiments that aim to measure the history of expansion and growth of structure with percent-level precision or higher. We review in detail the four most well established methods for making such measurements: Type Ia supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, and the abundance of galaxy clusters. We pay particular attention to the systematic uncertainties in these techniques and to strategies for controlling them at the level needed to exploit “Stage IV” dark energy facilities such as BigBOSS, LSST, Euclid, and WFIRST. We briefly review a number of other approaches including redshift-space distortions, the Alcock–Paczynski effect, and direct measurements of the Hubble constant H_0. We present extensive forecasts for constraints on the dark energy equation of state and parameterized deviations from General Relativity, achievable with Stage III and Stage IV experimental programs that incorporate supernovae, BAO, weak lensing, and cosmic microwave background data. We also show the level of precision required for clusters or other methods to provide constraints competitive with those of these fiducial programs. 
We emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. Surveys to probe cosmic acceleration produce data sets that support a wide range of scientific investigations, and they continue the longstanding astronomical tradition of mapping the universe in ever greater detail over ever larger scales. --- paper_title: The Eleventh and Twelfth Data Releases of the Sloan Digital Sky Survey: Final Data from SDSS-III paper_content: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer. All the data from SDSS-III are now made public. In particular, this paper describes Data Release 11 (DR11) including all data acquired through 2013 July, and Data Release 12 (DR12) adding data acquired through 2014 July (including all data included in previous data releases), marking the end of SDSS-III observing. Relative to our previous public release (DR10), DR12 adds one million new spectra of galaxies and quasars from the Baryon Oscillation Spectroscopic Survey (BOSS) over an additional 3000 sq. deg of sky, more than triples the number of H-band spectra of stars as part of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE), and includes repeated accurate radial velocity measurements of 5500 stars from the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). The APOGEE outputs now include measured abundances of 15 different elements for each star. In total, SDSS-III added 2350 sq. deg of ugriz imaging; 155,520 spectra of 138,099 stars as part of the Sloan Exploration of Galactic Understanding and Evolution 2 (SEGUE-2) survey; 2,497,484 BOSS spectra of 1,372,737 galaxies, 294,512 quasars, and 247,216 stars over 9376 sq. deg; 618,080 APOGEE spectra of 156,593 stars; and 197,040 MARVELS spectra of 5,513 stars. Since its first light in 1998, SDSS has imaged over 1/3 of the Celestial sphere in five bands and obtained over five million astronomical spectra. --- paper_title: The Clustering of the SDSS DR7 Main Galaxy Sample I: A 4 per cent Distance Measure at z=0.15 paper_content: We create a sample of spectroscopically identified galaxies with z < 0.2 from the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7), covering 6813 deg(2). Galaxies are chosen to sample the highest mass haloes, with an effective bias of 1.5, allowing us to construct 1000 mock galaxy catalogues (described in Paper II), which we use to estimate statistical errors and test our methods. We use an estimate of the gravitational potential to 'reconstruct' the linear density fluctuations, enhancing the baryon acoustic oscillation (BAO) signal in the measured correlation function and power spectrum. Fitting to these measurements, we determine D-V(z(eff) = 0.15) = (664 +/- 25)(r(d)/r(d, fid)) Mpc; this is a better than 4 per cent distance measurement. This 'fills the gap' in BAO distance ladder between previously measured local and higher redshift measurements, and affords significant improvement in constraining the properties of dark energy. 
Combining our measurement with other BAO measurements from Baryon Oscillation Spectroscopic Survey and 6-degree Field Galaxy Redshift Survey galaxy samples provides a 15 per cent improvement in the determination of the equation of state of dark energy and the value of the Hubble parameter at z = 0 (H_0). Our measurement is fully consistent with the Planck results and the Lambda cold dark matter concordance cosmology, but increases the tension between Planck+BAO H_0 determinations and direct H_0 measurements. --- paper_title: The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: Testing gravity with redshift-space distortions using the power spectrum multipoles paper_content: We analyse the anisotropic clustering of the Baryon Oscillation Spectroscopic Survey (BOSS) CMASS Data Release 11 (DR11) sample, which consists of 690827 galaxies in the redshift range 0.43 < z < 0.7 and has a sky coverage of 8498 deg^2. We perform our analysis in Fourier space using a power spectrum estimator suggested by Yamamoto et al. (2006). We measure the multipole power spectra in a self-consistent manner for the first time in the sense that we provide a proper way to treat the survey window function and the integral constraint, without the commonly used assumption of an isotropic power spectrum and without the need to split the survey into sub-regions. The main cosmological signals exploited in our analysis are the Baryon Acoustic Oscillations and the signal of redshift space distortions, both of which are distorted by the Alcock-Paczynski effect. Together, these signals allow us to constrain the distance ratio D_V(z_eff)/r_s(z_d) = 13.89 ± 0.18, the Alcock-Paczynski parameter F_AP(z_eff) = 0.679 ± 0.031 and the growth rate of structure f(z_eff)σ8(z_eff) = 0.419 ± 0.044 at the effective redshift z_eff = 0.57. We emphasise that our constraints are robust against possible systematic uncertainties. In order to ensure this, we perform a detailed systematics study against CMASS mock galaxy catalogues and N-body simulations. We find that such systematics will lead to 3.1% uncertainty for fσ8 if we limit our fitting range to k = 0.01 - 0.20 h/Mpc, where the statistical uncertainty is expected to be three times larger. We did not find significant systematic uncertainties for D_V/r_s or F_AP. Combining our dataset with Planck to test General Relativity (GR) through the simple γ-parameterisation, where the growth rate is given by f(z) = Ω_m(z)^γ, reveals a ~2σ tension between the data and the prediction by GR. The tension between our result and GR can be traced back to a tension in the clustering amplitude σ8 between CMASS and Planck. --- paper_title: Large Scale Structure Observations paper_content: Galaxy Surveys are enjoying a renaissance thanks to the advent of multi-object spectrographs on ground-based telescopes. The last 15 years have seen the fruits of this experimental advance, including the 2-degree Field Galaxy Redshift Survey (2dFGRS; Colless et al. 2003) and the Sloan Digital Sky Survey (SDSS; York et al. 2000). Most recently, the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013), part of the SDSS-III project (Eisenstein et al. 2011), has provided the largest volume of the low-redshift Universe ever surveyed with a galaxy density useful for high-precision cosmology.
This set of lecture notes looks at some of the physical processes that underpin these measurements, the evolution of measurements themselves, and looks ahead to the next 15 years and the advent of surveys such as the enhanced Baryon Oscillation Spectroscopic Survey (eBOSS), the Dark Energy Spectroscopic Instrument (DESI) and the ESA Euclid satellite mission. --- paper_title: The 6dF Galaxy Survey: Baryon Acoustic Oscillations and the Local Hubble Constant paper_content: We analyse the large-scale correlation function of the 6dF Galaxy Survey (6dFGS) and detect a Baryon Acoustic Oscillation (BAO) signal. The 6dFGS BAO detection allows us to constrain the distance-redshift relation at z_{\rm eff} = 0.106. We achieve a distance measure of D_V(z_{\rm eff}) = 456\pm27 Mpc and a measurement of the distance ratio, r_s(z_d)/D_V(z_{\rm eff}) = 0.336\pm0.015 (4.5% precision), where r_s(z_d) is the sound horizon at the drag epoch z_d. The low effective redshift of 6dFGS makes it a competitive and independent alternative to Cepheids and low-z supernovae in constraining the Hubble constant. We find a Hubble constant of H_0 = 67\pm3.2 km s^{-1} Mpc^{-1} (4.8% precision) that depends only on the WMAP-7 calibration of the sound horizon and on the galaxy clustering in 6dFGS. Compared to earlier BAO studies at higher redshift, our analysis is less dependent on other cosmological parameters. The sensitivity to H_0 can be used to break the degeneracy between the dark energy equation of state parameter w and H_0 in the CMB data. We determine that w = -0.97\pm0.13, using only WMAP-7 and BAO data from both 6dFGS and \citet{Percival:2009xn}. We also discuss predictions for the large scale correlation function of two future wide-angle surveys: the WALLABY blind H{\sc I} survey (with the Australian SKA Pathfinder, ASKAP), and the proposed TAIPAN all-southern-sky optical galaxy survey with the UK Schmidt Telescope (UKST). We find that both surveys are very likely to yield detections of the BAO peak, making WALLABY the first radio galaxy survey to do so. We also predict that TAIPAN has the potential to constrain the Hubble constant with 3% precision. --- paper_title: Measuring D_A and H at z=0.35 from the SDSS DR7 LRGs using baryon acoustic oscillations paper_content: We present measurements of the angular diameter distance DA(z) and the Hubble parameter H(z) at z = 0.35 using the anisotropy of the baryon acoustic oscillation (BAO) signal measured in the galaxy clustering distribution of the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) Luminous Red Galaxies (LRG) sample. Our work is the first to apply density-field reconstruction to an anisotropic analysis of the acoustic peak. Reconstruction partially removes the effects of non-linear evolution and redshift-space distortions in order to sharpen the acoustic signal. We present the theoretical framework behind the anisotropic BAO signal and give a detailed account of the fitting model we use to extract this signal from the data. Our method focuses only on the acoustic peak anisotropy, rather than the more model-dependent anisotropic information from the broadband power. We test the robustness of our analysis methods on 160 LasDamas DR7 mock catalogues and find that our models are unbiased at the � 0.2% level in measuring the BAO anisotropy. After reconstruction we measure DA(z = 0.35) = 1050 ± 38 Mpc and H(z = 0.35) = 84.4 ± 7.0 km/s/Mpc assuming a sound horizon of rs = 152.76 Mpc. 
Note that these measurements are correlated with a correlation coefficient of 0.57. This represents a factor of 1.4improvement in the error on DA relative to the pre-reconstruction case; a factor of 1.2 improvement is seen for H. --- paper_title: The Clustering of Galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS): measuring growth rate and geometry with anisotropic clustering paper_content: We use the observed anisotropic clustering of galaxies in the Baryon Oscillation Spectroscopic Survey Data Release 11 CMASS sample to measure the linear growth rate of structure, the Hubble expansion rate and the comoving distance scale. Our sample covers 8498 deg2 and encloses an effective volume of 6 Gpc3 at an effective redshift of z¯=0.57. We find fσ8 = 0.441 ± 0.044, H = 93.1 ± 3.0 km s−1 Mpc−1 and DA = 1380 ± 23 Mpc when fitting the growth and expansion rate simultaneously. When we fix the background expansion to the one predicted by spatially flat Λ cold dark matter (ΛCDM) model in agreement with recent Planck results, we find fσ8 = 0.447 ± 0.028 (6 per cent accuracy). While our measurements are generally consistent with the predictions of ΛCDM and general relativity, they mildly favour models in which the strength of gravitational interactions is weaker than what is predicted by general relativity. Combining our measurements with recent cosmic microwave background data results in tight constraints on basic cosmological parameters and deviations from the standard cosmological model. Separately varying these parameters, we find w = −0.983 ± 0.075 (8 per cent accuracy) and γ = 0.69 ± 0.11 (16 per cent accuracy) for the effective equation of state of dark energy and the growth rate index, respectively. Both constraints are in good agreement with the standard model values of w = −1 and γ = 0.554. --- paper_title: The Shape of the Sloan Digital Sky Survey Data Release 5 Galaxy Power Spectrum paper_content: We present a Fourier analysis of the clustering of galaxies in the combined main galaxy and LRG SDSS DR5 sample. The aim of our analysis is to consider how well we can measure the cosmological matter density using the signature of the horizon at matter-radiation equality embedded in the large-scale power spectrum. The new data constrain the power spectrum on scales 100-600 h-1 Mpc with significantly higher precision than previous analyses of just the SDSS main galaxies, due to our larger sample and the inclusion of the LRGs. This improvement means that we can now reveal a discrepancy between the shape of the measured power and linear CDM models on scales 0.01 h Mpc-1 < k < 0.15 h Mpc-1, with linear model fits favoring a lower matter density (ΩM = 0.22 ± 0.04) on scales 0.01 h Mpc-1 < k < 0.06 h Mpc-1 and a higher matter density (ΩM = 0.32 ± 0.01) when smaller scales are included, assuming a flat ΛCDM model with h = 0.73 and ns = 0.96. This discrepancy could be explained by scale-dependent bias, and by analyzing subsamples of galaxies, we find that the ratio of small-scale to large-scale power increases with galaxy luminosity, so all of the SDSS galaxies cannot trace the same power spectrum shape over 0.01 h Mpc-1 < k < 0.2 h Mpc-1. However, the data are insufficient to clearly show a luminosity-dependent change in the largest scale at which a significant increase in clustering is observed, although they do not rule out such an effect. 
Significant scale-dependent galaxy bias on large scales, which changes with the r-band luminosity of the galaxies, could potentially explain differences in our ΩM estimates and differences previously observed between 2dFGRS and SDSS power spectra and the resulting parameter constraints. --- paper_title: The Seventh Data Release of the Sloan Digital Sky Survey paper_content: This paper describes the Seventh Data Release of the Sloan Digital Sky Survey (SDSS), marking the completion of the original goals of the SDSS and the end of the phase known as SDSS-II. It includes 11,663 deg^2 of imaging data, with most of the ~2000 deg^2 increment over the previous data release lying in regions of low Galactic latitude. The catalog contains five-band photometry for 357 million distinct objects. The survey also includes repeat photometry on a 120° long, 2°.5 wide stripe along the celestial equator in the Southern Galactic Cap, with some regions covered by as many as 90 individual imaging runs. We include a co-addition of the best of these data, going roughly 2 mag fainter than the main survey over 250 deg^2. The survey has completed spectroscopy over 9380 deg^2; the spectroscopy is now complete over a large contiguous area of the Northern Galactic Cap, closing the gap that was present in previous data releases. There are over 1.6 million spectra in total, including 930,000 galaxies, 120,000 quasars, and 460,000 stars. The data release includes improved stellar photometry at low Galactic latitude. The astrometry has all been recalibrated with the second version of the USNO CCD Astrograph Catalog, reducing the rms statistical errors at the bright end to 45 milliarcseconds per coordinate. We further quantify a systematic error in bright galaxy photometry due to poor sky determination; this problem is less severe than previously reported for the majority of galaxies. Finally, we describe a series of improvements to the spectroscopic reductions, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities. --- paper_title: The three-dimensional power spectrum of galaxies from the Sloan Digital Sky Survey paper_content: We measure the large-scale real-space power spectrum P(k) by using a sample of 205,443 galaxies from the Sloan Digital Sky Survey, covering 2417 effective square degrees with mean redshift z ≈ 0.1. We employ a matrix-based method using pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well-behaved window functions in the range 0.02 h Mpc-1 < k < 0.3 h Mpc-1. We pay particular attention to modeling, quantifying, and correcting for potential systematic errors, nonlinear redshift distortions, and the artificial red-tilt caused by luminosity-dependent bias. Our results are robust to omitting angular and radial density fluctuations and are consistent between different parts of the sky. Our final result is a measurement of the real-space matter power spectrum P(k) up to an unknown overall multiplicative bias factor. 
Our calculations suggest that this bias factor is independent of scale to better than a few percent for k < 0.1 h Mpc^-1, thereby making our results useful for precision measurements of cosmological parameters in conjunction with data from other experiments such as the Wilkinson Microwave Anisotropy Probe satellite. The power spectrum is not well-characterized by a single power law but unambiguously shows curvature. As a simple characterization of the data, our measurements are well fitted by a flat scale-invariant adiabatic cosmological model with h Ωm = 0.213 ± 0.023 and σ8 = 0.89 ± 0.02 for L* galaxies, when fixing the baryon fraction Ωb/Ωm = 0.17 and the Hubble parameter h = 0.72; cosmological interpretation is given in a companion paper. --- paper_title: DETECTION OF THE BARYON ACOUSTIC PEAK IN THE LARGE-SCALE CORRELATION FUNCTION OF SDSS LUMINOUS RED GALAXIES paper_content: We present the large-scale correlation function measured from a spectroscopic sample of 46,748 luminous red galaxies from the Sloan Digital Sky Survey. The survey region covers 0.72 h^-3 Gpc^3 over 3816 square degrees and 0.16 < z < 0.47, making it the best sample yet for the study of large-scale structure. We find a well-detected peak in the correlation function at 100 h^-1 Mpc separation that is an excellent match to the predicted shape and location of the imprint of the recombination-epoch acoustic oscillations on the low-redshift clustering of matter. This detection demonstrates the linear growth of structure by gravitational instability between z ≈ 1000 and the present and confirms a firm prediction of the standard cosmological theory. The acoustic peak provides a standard ruler by which we can measure the ratio of the distances to z = 0.35 and z = 1089 to 4% fractional accuracy and the absolute distance to z = 0.35 to 5% accuracy. From the overall shape of the correlation function, we measure the matter density Ω_m h^2 to 8% and find agreement with the value from cosmic microwave background (CMB) anisotropies. Independent of the constraints provided by the CMB acoustic scale, we find Ω_m = 0.273 ± 0.025 + 0.123(1 + w_0) + 0.137 Ω_K. Including the CMB acoustic scale, we find that the spatial curvature is Ω_K = −0.010 ± 0.009 if the dark energy is a cosmological constant. More generally, our results provide a measurement of cosmological distance, and hence an argument for dark energy, based on a geometric method with the same simple physics as the microwave background anisotropies. The standard cosmological model convincingly passes these new and robust tests of its fundamental properties. Subject headings: cosmology: observations — large-scale structure of the universe — distance scale — cosmological parameters — cosmic microwave background — galaxies: elliptical and lenticular, cD --- paper_title: Galaxy Clustering in Early Sloan Digital Sky Survey Redshift Data paper_content: We present the first measurements of clustering in the Sloan Digital Sky Survey (SDSS) galaxy redshift survey. Our sample consists of 29,300 galaxies with redshifts 5700 km s^-1 ≤ cz ≤ 39,000 km s^-1, distributed in several long but narrow (2.5°-5°) segments, covering 690 deg^2. For the full, flux-limited sample, the redshift-space correlation length is approximately 8 h^-1 Mpc.
The two-dimensional correlation function ξ(rp,π) shows clear signatures of both the small-scale, fingers-of-God distortion caused by velocity dispersions in collapsed objects and the large-scale compression caused by coherent flows, though the latter cannot be measured with high precision in the present sample. The inferred real-space correlation function is well described by a power law, ξ(r) = (r/6.1 ± 0.2 h-1 Mpc)-1.75±0.03, for 0.1 h-1 Mpc ≤ r ≤ 16 h-1 Mpc. The galaxy pairwise velocity dispersion is σ12 ≈ 600 ± 100 km s-1 for projected separations 0.15 h-1 Mpc ≤ rp ≤ 5 h-1 Mpc. When we divide the sample by color, the red galaxies exhibit a stronger and steeper real-space correlation function and a higher pairwise velocity dispersion than do the blue galaxies. The relative behavior of subsamples defined by high/low profile concentration or high/low surface brightness is qualitatively similar to that of the red/blue subsamples. Our most striking result is a clear measurement of scale-independent luminosity bias at r 10 h-1 Mpc: subsamples with absolute magnitude ranges centered on M* - 1.5, M*, and M* + 1.5 have real-space correlation functions that are parallel power laws of slope ≈-1.8 with correlation lengths of approximately 7.4, 6.3, and 4.7 h-1 Mpc, respectively. --- paper_title: The 2dF Galaxy Redshift Survey: Power-spectrum analysis of the final dataset and cosmological implications paper_content: We present a power-spectrum analysis of the final 2dF Galaxy Redshift Survey (2dFGRS), employing a direct Fourier method. The sample used comprises 221 414 galaxies with measured redshifts. We investigate in detail the modelling of the sample selection, improving on previous treatments in a number of respects. A new angular mask is derived, based on revisions to the photometric calibration. The redshift selection function is determined by dividing the survey according to rest-frame colour, and deducing a self-consistent treatment of k-corrections and evolution for each population. The covariance matrix for the power-spectrum estimates is determined using two different approaches to the construction of mock surveys, which are used to demonstrate that the input cosmological model can be correctly recovered. We discuss in detail the possible differences between the galaxy and mass power spectra, and treat these using simulations, analytic models and a hybrid empirical approach. Based on these investigations, we are confident that the 2dFGRS power spectrum can be used to infer the matter content of the universe. On large scales, our estimated power spectrum shows evidence for the ‘baryon oscillations’ that are predicted in cold dark matter (CDM) models. Fitting to a CDM model, assuming a primordial n s = 1 spectrum, h = 0.72 and negligible neutrino mass, the preferred --- paper_title: 6dF: a very efficient multiobject spectroscopy system for the UK Schmidt Telescope paper_content: Multi-object spectroscopy at the Anglo-Australian Observatory's 1.2-m UK Schmidt Telescope (UKST) is carried out with the FLAIR multi-fiber system. The FLAIR front-end feeds an optically-efficient, all-Schmidt spectrograph mounted on the dome floor. However, positioning of the 92 available fibers within the 40 sq. deg. field of the telescope is essentially a manual operation, and can take from four to six hours. Typical observations of sufficient signal-to-noise usually take much less than this (e.g. about an hour for galaxy redshifts to B approximately 17). 
Clearly, therefore, the system is working at well under its potential efficiency for survey-type observations where repeated reconfigurations of fibers are required. To address the imbalance between reconfiguration time and observing time, a fully-automated, off telescope, pick-place fiber-positioning system known as 6 dF has been proposed. It will allow 150 fibers to be reconfigured across a 6-degree circular field in under an hour. Three field plates will be available with a 10 - 15 minute field-plate changeover anticipated. The resulting factor of 10 improvement in observing efficiency will deliver, for the first time, an effective means of tackling major, full-hemisphere, spectroscopic surveys. An all southern sky near-infrared-selected galaxy redshift survey is one high- priority example. The estimated cost of 6 dF is $A450k. A design study has been completed and substantial funding is already in place to build the instrument over a two-year timescale.© (1998) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only. --- paper_title: The 6dF Galaxy Survey: Final Redshift Release (DR3) and Southern Large-Scale Structures paper_content: We report the final redshift release of the 6dF Galaxy Survey (6dFGS), a combined redshift and peculiar velocity survey over the southern sky (|b| > 10°). Its 136 304 spectra have yielded 110 256 new extragalactic redshifts and a new catalogue of 125 071 galaxies making near-complete samples with (K, H, J, r_F, b_J) ≤ (12.65, 12.95, 13.75, 15.60, 16.75). The median redshift of the survey is 0.053. Survey data, including images, spectra, photometry and redshifts, are available through an online data base. We describe changes to the information in the data base since earlier interim data releases. Future releases will include velocity dispersions, distances and peculiar velocities for the brightest early-type galaxies, comprising about 10 per cent of the sample. Here we provide redshift maps of the southern local Universe with z ≤ 0.1, showing nearby large-scale structures in hitherto unseen detail. A number of regions known previously to have a paucity of galaxies are confirmed as significantly underdense regions. The URL of the 6dFGS data base is http://www-wfau.roe.ac.uk/6dFGS. --- paper_title: The 6dF Galaxy Velocity Survey: Cosmological constraints from the velocity power spectrum paper_content: We present scale-dependent measurements of the normalized growth rate of structure f sigma 8(k, z = 0) using only the peculiar motions of galaxies. We use data from the 6-degree Field Galaxy Survey velocity sample together with a newly compiled sample of low-redshift (z 300 h(-1) Mpc, which represents one of the largest scale growth rate measurement to date. We find no evidence for a scale-dependence in the growth rate, or any statistically significant variation from the growth rate as predicted by the Planck cosmology. Bringing all the scales together, we determine the normalized growth rate at z = 0 to similar to 15 per cent in a manner independent of galaxy bias and in excellent agreement with the constraint from the measurements of redshift-space distortions from 6-degree Field Galaxy Survey. We pay particular attention to systematic errors. We point out that the intrinsic scatter present in Fundamental Plane and Tully-Fisher relations is only Gaussian in logarithmic distance units; wrongly assuming it is Gaussian in linear (velocity) units can bias cosmological constraints. 
We also analytically marginalize over zero-point errors in distance indicators, validate the accuracy of all our constraints using numerical simulations, and demonstrate how to combine different (correlated) velocity surveys using a matrix 'hyperparameter' analysis. Current and forthcoming peculiar velocity surveys will allow us to understand in detail the growth of structure in the low-redshift universe, providing strong constraints on the nature of dark energy. --- paper_title: The 6dF Galaxy Survey: Peculiar Velocity Field and Cosmography paper_content: We derive peculiar velocities for the 6dF Galaxy Survey (6dFGS) and describe the velocity field of the nearby (z < 0.055) Southern hemisphere. The survey comprises 8885 galaxies for which we have previously reported Fundamental Plane data. We obtain peculiar velocity probability distributions for the redshift-space positions of each of these galaxies using a Bayesian approach. Accounting for selection bias, we find that the logarithmic distance uncertainty is 0.11 dex, corresponding to 26 per cent in linear distance. We use adaptive kernel smoothing to map the observed 6dFGS velocity field out to cz ∼ 16 000 km s^-1, and compare this to the predicted velocity fields from the PSCz Survey and the 2MASS Redshift Survey. We find a better fit to the PSCz prediction, although the reduced χ^2 for the whole sample is approximately unity for both comparisons. This means that, within the observational uncertainties due to redshift-independent distance errors, observed galaxy velocities and those predicted by the linear approximation from the density field agree. However, we find peculiar velocities that are systematically more positive than model predictions in the direction of the Shapley and Vela superclusters, and systematically more negative than model predictions in the direction of the Pisces-Cetus Supercluster, suggesting contributions from volumes not covered by the models. --- paper_title: The 6dF Galaxy Survey: samples, observational techniques and the first data release paper_content: The 6dF Galaxy Survey (6dFGS) aims to measure the redshifts of around 150 000 galaxies, and the peculiar velocities of a 15 000-member subsample, over almost the entire southern sky. When complete, it will be the largest redshift survey of the nearby Universe, reaching out to about z ∼ 0.15, and more than an order of magnitude larger than any peculiar velocity survey to date. The targets are all galaxies brighter than K_tot = 12.75 in the 2MASS Extended Source Catalog (XSC), supplemented by 2MASS and SuperCOSMOS galaxies that complete the sample to limits of (H, J, r_F, b_J) = (13.05, 13.75, 15.6, 16.75). Central to the survey is the Six-Degree Field (6dF) multifibre spectrograph, an instrument able to record 150 simultaneous spectra over the 5.7-degree field of the UK Schmidt Telescope. An adaptive tiling algorithm has been employed to ensure around 95 per cent fibring completeness over the 17 046 deg^2 of the southern sky with |b| > 10°. Spectra are obtained in two observations using separate V and R gratings, that together give R ∼ 1000 over at least 4000-7500 Angstrom and signal-to-noise ratio ∼10 per pixel. Redshift measurements are obtained semi-automatically, and are assigned a quality value based on visual inspection. The 6dFGS database is available at http://www-wfau.roe.ac.uk/6dFGS/, with public data releases occurring after the completion of each third of the survey. 
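As a rough illustration of how peculiar velocities like those quoted above follow from a redshift plus a redshift-independent (e.g. Fundamental Plane) distance, the sketch below applies the usual low-redshift approximation v_pec ≈ cz − H0·D and converts a logarithmic distance scatter in dex into a fractional distance error. The Hubble constant and the example numbers are assumed for illustration only; the actual 6dFGS analysis uses a full Bayesian treatment that is not reproduced here.

# Illustrative sketch only: low-z peculiar velocity from redshift + distance.
# H0 and the example inputs are assumed values, not survey results.
C_KMS = 299792.458   # speed of light [km/s]
H0 = 70.0            # assumed Hubble constant [km/s/Mpc]

def peculiar_velocity(cz_kms, distance_mpc, h0=H0):
    """Low-redshift approximation: v_pec ~ cz - H0 * D."""
    return cz_kms - h0 * distance_mpc

def log_distance_scatter_to_fraction(sigma_dex):
    """Convert a scatter in log10(distance), in dex, to a fractional error."""
    return 10.0 ** sigma_dex - 1.0

# Example: a galaxy at cz = 9000 km/s with an assumed FP distance of 120 Mpc,
# and a 0.11 dex log-distance scatter (roughly a 25-30 per cent distance error).
print(peculiar_velocity(9000.0, 120.0))           # -> 600.0 km/s
print(log_distance_scatter_to_fraction(0.11))     # -> ~0.29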
--- paper_title: The 6dF Galaxy Survey: The Near-Infrared Fundamental Plane of Early-Type Galaxies paper_content: We determine the near-infrared Fundamental Plane (FP) for $\sim10^4$ early-type galaxies in the 6dF Galaxy Survey (6dFGS). We fit the distribution of central velocity dispersion, near-infrared surface brightness and half-light radius with a three-dimensional Gaussian model using a maximum likelihood method. For the 6dFGS $J$ band sample we find a FP with $R_{e}$\,$\propto$\,$\sigma_0^{1.52\pm0.03}I_{e}^{-0.89\pm0.01}$, similar to previous near-IR determinations and consistent with the $H$ and $K$ band Fundamental Planes once allowance is made for differences in mean colour. The overall scatter in $R_e$ about the FP is $\sigma_r$ = 29%, and is the quadrature sum of an 18% scatter due to observational errors and a 23% intrinsic scatter. Because of the distribution of galaxies in FP space, $\sigma_r$ is not the distance error, which we find to be $\sigma_d$ = 23%. Using group richness and local density as measures of environment, and morphologies based on visual classifications, we find that the FP slopes do not vary with environment or morphology. However, for fixed velocity dispersion and surface brightness, field galaxies are on average 5% larger than galaxies in higher-density environments, and the bulges of early-type spirals are on average 10% larger than ellipticals and lenticulars. The residuals about the FP show significant trends with environment, morphology and stellar population. The strongest trend is with age, and we speculate that age is the most important systematic source of offsets from the FP, and may drive the other trends through its correlations with environment, morphology and metallicity. --- paper_title: The 6dF Galaxy Survey: Fundamental Plane Data paper_content: We report the 6dFGS Fundamental Plane (6dFGSv) catalogue that is used to estimate distances and peculiar velocities for nearly 9000 early-type galaxies in the local (z < 0.055) universe. Velocity dispersions are derived by cross-correlation from 6dF V-band spectra with typical S/N of 12.9 Å^-1 for a sample of 11 315 galaxies; the median velocity dispersion is 163 km s^-1 and the median measurement error is 12.9 per cent. The photometric Fundamental Plane (FP) parameters (effective radii and surface brightnesses) are determined from the JHK 2MASS images for 11 102 galaxies. Comparison of the independent J- and K-band measurements implies that the average uncertainty in X_FP, the combined photometric parameter that enters the FP, is 0.013 dex (3 per cent) for each band. Visual classification of morphologies was used to select a sample of nearly 9000 early-type galaxies that form 6dFGSv. This catalogue has been used to study the effects of stellar populations on galaxy scaling relations, to investigate the variation of the FP with environment and galaxy morphology, to explore trends in stellar populations through, along and across the FP, and to map and analyse the local peculiar velocity field. --- paper_title: The WiggleZ Dark Energy Survey: measuring the cosmic growth rate with the two-point galaxy correlation function paper_content: The growth history of large-scale structure in the Universe is a powerful probe of the cosmological model, including the nature of dark energy. We study the growth rate of cosmic structure to redshift $z = 0.9$ using more than $162{,}000$ galaxy redshifts from the WiggleZ Dark Energy Survey. 
We divide the data into four redshift slices with effective redshifts $z = [0.2,0.4,0.6,0.76]$ and in each of the samples measure and model the 2-point galaxy correlation function in parallel and transverse directions to the line-of-sight. After simultaneously fitting for the galaxy bias factor we recover values for the cosmic growth rate which are consistent with our assumed $\Lambda$CDM input cosmological model, with an accuracy of around 20% in each redshift slice. We investigate the sensitivity of our results to the details of the assumed model and the range of physical scales fitted, making close comparison with a set of N-body simulations for calibration. Our measurements are consistent with an independent power-spectrum analysis of a similar dataset, demonstrating that the results are not driven by systematic errors. We determine the pairwise velocity dispersion of the sample in a non-parametric manner, showing that it systematically increases with decreasing redshift, and investigate the Alcock-Paczynski effects of changing the assumed fiducial model on the results. Our techniques should prove useful for current and future galaxy surveys mapping the growth rate of structure using the 2-dimensional correlation function. --- paper_title: The WiggleZ Dark Energy Survey: mapping the distance-redshift relation with baryon acoustic oscillations paper_content: We present measurements of the baryon acoustic peak at redshifts z = 0.44, 0.6 and 0.73 in the galaxy correlation function of the final dataset of the WiggleZ Dark Energy Survey. We combine our correlation function with lower-redshift measurements from the 6-degree Field Galaxy Survey and Sloan Digital Sky Survey, producing a stacked survey correlation function in which the statistical significance of the detection of the baryon acoustic peak is 4.9-sigma relative to a zero-baryon model with no peak. We fit cosmological models to this combined baryon acoustic oscillation (BAO) dataset comprising six distance-redshift data points, and compare the results to similar fits to the latest compilation of supernovae (SNe) and Cosmic Microwave Background (CMB) data. The BAO and SNe datasets produce consistent measurements of the equation-of-state w of dark energy, when separately combined with the CMB, providing a powerful check for systematic errors in either of these distance probes. Combining all datasets we determine w = -1.03 +/- 0.08 for a flat Universe, consistent with a cosmological constant model. Assuming dark energy is a cosmological constant and varying the spatial curvature, we find Omega_k = -0.004 +/- 0.006. --- paper_title: The WiggleZ Dark Energy Survey: testing the cosmological model with baryon acoustic oscillations at z=0.6 paper_content: We measure the imprint of baryon acoustic oscillations (BAOs) in the galaxy clustering pattern at the highest redshift achieved to date, z=0.6, using the distribution of N=132,509 emission-line galaxies in the WiggleZ Dark Energy Survey. We quantify BAOs using three statistics: the galaxy correlation function, power spectrum and the band-filtered estimator introduced by Xu et al. (2010). The results are mutually consistent, corresponding to a 4.0% measurement of the cosmic distance-redshift relation at z=0.6 (in terms of the acoustic parameter "A(z)" introduced by Eisenstein et al. (2005) we find A(z=0.6) = 0.452 +/- 0.018). Both BAOs and power spectrum shape information contribute toward these constraints. 
The statistical significance of the detection of the acoustic peak in the correlation function, relative to a wiggle-free model, is 3.2-sigma. The ratios of our distance measurements to those obtained using BAOs in the distribution of Luminous Red Galaxies at redshifts z=0.2 and z=0.35 are consistent with a flat Lambda Cold Dark Matter model that also provides a good fit to the pattern of observed fluctuations in the Cosmic Microwave Background (CMB) radiation. The addition of the current WiggleZ data results in a ~ 30% improvement in the measurement accuracy of a constant equation-of-state, w, using BAO data alone. Based solely on geometric BAO distance ratios, accelerating expansion (w < -1/3) is required with a probability of 99.8%, providing a consistency check of conclusions based on supernovae observations. Further improvements in cosmological constraints will result when the WiggleZ Survey dataset is complete. --- paper_title: The 2dF-SDSS LRG and QSO (2SLAQ) Luminous Red Galaxy Survey paper_content: We present a spectroscopic survey of almost 15 000 candidate intermediate-redshift luminous red galaxies (LRGs) brighter than i = 19.8, observed with 2dF on the Anglo-Australian Telescope. The targets were selected photometrically from the Sloan Digital Sky Survey (SDSS) and lie along two narrow equatorial strips covering 180 deg^2. Reliable redshifts were obtained for 92 per cent of the targets and the selection is very efficient: over 90 per cent have 0.45 < z < 0.8. More than 80 per cent of the ∼11 000 red galaxies have pure absorption-line spectra consistent with a passively evolving old stellar population. The redshift, photometric and spatial distributions of the LRGs are described. The 2SLAQ data will be released publicly from mid-2006, providing a powerful resource for observational cosmology and the study of galaxy evolution. --- paper_title: The SAMI Galaxy Survey: Early Data Release paper_content: We present the Early Data Release of the Sydney–AAO Multi-object Integral field spectrograph (SAMI) Galaxy Survey. The SAMI Galaxy Survey is an ongoing integral field spectroscopic survey of ∼3400 low-redshift (z < 0.12) galaxies, covering galaxies in the field and in groups within the Galaxy And Mass Assembly (GAMA) survey regions, and a sample of galaxies in clusters. In the Early Data Release, we publicly release the fully calibrated datacubes for a representative selection of 107 galaxies drawn from the GAMA regions, along with information about these galaxies from the GAMA catalogues. All datacubes for the Early Data Release galaxies can be downloaded individually or as a set from the SAMI Galaxy Survey website. In this paper we also assess the quality of the pipeline used to reduce the SAMI data, giving metrics that quantify its performance at all stages in processing the raw data into calibrated datacubes. The pipeline gives excellent results throughout, with typical sky subtraction residuals in the continuum of 0.9–1.2 per cent, a relative flux calibration uncertainty of 4.1 per cent (systematic) plus 4.3 per cent (statistical), and atmospheric dispersion removed with an accuracy of 0.09 arcsec, less than a fifth of a spaxel. --- paper_title: The SAMI Galaxy Survey: instrument specification and target selection paper_content: The SAMI Galaxy Survey will observe 3400 galaxies with the Sydney-AAO Multi-object Integral-field spectrograph (SAMI) on the Anglo-Australian Telescope (AAT) in a 3-year survey which began in 2013. 
We present the throughput of the SAMI system, the science basis and specifications for the target selection, the survey observation plan and the combined properties of the selected galaxies. The survey includes four volume-limited galaxy samples based on cuts in a proxy for stellar mass, along with low-stellar-mass dwarf galaxies all selected from the Galaxy And Mass Assembly (GAMA) survey. The GAMA regions were selected because of the vast array of ancillary data available, including ultraviolet through to radio bands. These fields are on the celestial equator at 9, 12, and 14.5 hours, and cover a total of 144 square degrees (in GAMA-I). Higher density environments are also included with the addition of eight clusters. The clusters have spectroscopy from 2dFGRS and SDSS and photometry in regions covered by the Sloan Digital Sky Survey (SDSS) and/or VLT Survey Telescope/ATLAS. The aim is to cover a broad range in stellar mass and environment, and therefore the primary survey targets cover redshifts 0.004 < z < 0.095, magnitudes r_pet < 19.4, stellar masses 10^7–10^12 M_⊙, and environments from isolated field galaxies through groups to clusters of ∼10^15 M_⊙. --- paper_title: The WiggleZ Dark Energy Survey: measuring the cosmic expansion history using the Alcock-Paczynski test and distant supernovae paper_content: Astronomical observations suggest that today's Universe is dominated by a dark energy of unknown physical origin. One of the most notable results obtained from many models is that dark energy should cause the expansion of the Universe to accelerate: but the expansion rate as a function of time has proved very difficult to measure directly. We present a new determination of the cosmic expansion history by combining distant supernovae observations with a geometrical analysis of large-scale galaxy clustering within the WiggleZ Dark Energy Survey, using the Alcock–Paczynski test to measure the distortion of standard spheres. Our result constitutes a robust and non-parametric measurement of the Hubble expansion rate as a function of time, which we measure with 10–15 per cent precision in four bins within the redshift range 0.1 < z < 0.9. We demonstrate, in a manner insensitive to the assumed cosmological model, that the cosmic expansion is accelerating. Furthermore, we find that this expansion history is consistent with a cosmological-constant dark energy. --- paper_title: The WiggleZ Dark Energy Survey: Survey Design and First Data Release paper_content: The WiggleZ Dark Energy Survey is a survey of 240,000 emission line galaxies in the distant universe, measured with the AAOmega spectrograph on the 3.9-m Anglo-Australian Telescope (AAT). The target galaxies are selected using ultraviolet photometry from the GALEX satellite, with a flux limit of NUV < 22.8 mag. The redshift range containing 90% of the galaxies is 0.2 < z < 1.0. The primary aim of the survey is to precisely measure the scale of baryon acoustic oscillations (BAO) imprinted on the spatial distribution of these galaxies at look-back times of 4-8 Gyrs. Detailed forecasts indicate the survey will measure the BAO scale to better than 2% and the tangential and radial acoustic wave scales to approximately 3% and 5%, respectively. This paper provides a detailed description of the survey and its design, as well as the spectroscopic observations, data reduction, and redshift measurement techniques employed. 
It also presents an analysis of the properties of the target galaxies, including emission line diagnostics which show that they are mostly extreme starburst galaxies, and Hubble Space Telescope images, which show they contain a high fraction of interacting or distorted systems. In conjunction with this paper, we make a public data release of data for the first 100,000 galaxies measured for the project. --- paper_title: The WiggleZ Dark Energy Survey: Final data release and cosmological results paper_content: This paper presents cosmological results from the final data release of the WiggleZ Dark Energy Survey. We perform full analyses of different cosmological models using the WiggleZ power spectra measured at z=0.22, 0.41, 0.60, and 0.78, combined with other cosmological datasets. The limiting factor in this analysis is the theoretical modelling of the galaxy power spectrum, including non-linearities, galaxy bias, and redshift-space distortions. In this paper we assess several different methods for modelling the theoretical power spectrum, testing them against the Gigaparsec WiggleZ simulations (GiggleZ). We fit for a base set of 6 cosmological parameters, {Omega_b h^2, Omega_CDM h^2, H_0, tau, A_s, n_s}, and 5 supplementary parameters {n_run, r, w, Omega_k, sum m_nu}. In combination with the Cosmic Microwave Background (CMB), our results are consistent with the LambdaCDM concordance cosmology, with a measurement of the matter density of Omega_m =0.29 +/- 0.016 and amplitude of fluctuations sigma_8 = 0.825 +/- 0.017. Using WiggleZ data with CMB and other distance and matter power spectra data, we find no evidence for any of the extension parameters being inconsistent with their LambdaCDM model values. The power spectra data and theoretical modelling tools are available for use as a module for CosmoMC, which we here make publicly available at http://smp.uq.edu.au/wigglez-data . We also release the data and random catalogues used to construct the baryon acoustic oscillation correlation function. --- paper_title: The WiggleZ Dark Energy Survey: Improved Distance Measurements to z = 1 with Reconstruction of the Baryonic Acoustic Feature paper_content: We present significant improvements in cosmic distance measurements from the WiggleZ Dark Energy Survey, achieved by applying the reconstruction of the baryonic acoustic feature technique. We show using both data and simulations that the reconstruction technique can often be effective despite patchiness of the survey, significant edge effects and shot-noise. We investigate three redshift bins in the redshift range 0.2 < z < 1, and in all three find improvement after reconstruction in the detection of the baryonic acoustic feature and its usage as a standard ruler. We measure model-independent distance measures D_V(r_s^(fid)/r_s) of 1716 ± 83, 2221 ± 101, 2516 ± 86 Mpc (68 per cent CL) at effective redshifts z = 0.44, 0.6, 0.73, respectively, where D_V is the volume-averaged distance, and r_s is the sound horizon at the end of the baryon drag epoch. These significantly improved 4.8, 4.5 and 3.4 per cent accuracy measurements are equivalent to those expected from surveys with up to 2.5 times the volume of WiggleZ without reconstruction applied. These measurements are fully consistent with cosmologies allowed by the analyses of the Planck Collaboration and the Sloan Digital Sky Survey. We provide the D_V(r_s^(fid)/r_s) posterior probability distributions and their covariances. 
When combining these measurements with temperature fluctuations measurements of Planck, the polarization of Wilkinson Microwave Anisotropy Probe 9, and the 6dF Galaxy Survey baryonic acoustic feature, we do not detect deviations from a flat Λ cold dark matter (ΛCDM) model. Assuming this model, we constrain the current expansion rate to H_0 = 67.15 ± 0.98 km s^(−1)Mpc^(−1). Allowing the equation of state of dark energy to vary, we obtain w_(DE) = −1.080 ± 0.135. When assuming a curved ΛCDM model we obtain a curvature value of Ω_K = −0.0043 ± 0.0047. --- paper_title: Performance of AAOmega: the AAT multi-purpose fiber-fed spectrograph paper_content: AAOmega is the new spectrograph for the 2dF fibre-positioning system on the Anglo-Australian Telescope. It is a bench-mounted, double-beamed design, using volume phase holographic (VPH) gratings and articulating cameras. It is fed by 392 fibres from either of the two 2dF field plates, or by the 512 fibre SPIRAL integral field unit (IFU) at Cassegrain focus. Wavelength coverage is 370 to 950nm and spectral resolution 1,000-8,000 in multi-Object mode, or 1,500-10,000 in IFU mode. Multi-object mode was commissioned in January 2006 and the IFU system will be commissioned in June 2006. ::: The spectrograph is located off the telescope in a thermally isolated room and the 2dF fibres have been replaced by new 38m broadband fibres. Despite the increased fibre length, we have achieved a large increase in throughput by use of VPH gratings, more efficient coatings and new detectors - amounting to a factor of at least 2 in the red. The number of spectral resolution elements and the maximum resolution are both more than doubled, and the stability is an order of magnitude better. ::: The spectrograph comprises: an f/3.15 Schmidt collimator, incorporating a dichroic beam-splitter; interchangeable VPH gratings; and articulating red and blue f/1.3 Schmidt cameras. Pupil size is 190mm, determined by the competing demands of cost, obstruction losses, and maximum resolution. A full suite of VPH gratings has been provided to cover resolutions 1,000 to 7,500, and up to 10,000 at particular wavelengths. --- paper_title: The WiggleZ Dark Energy Survey: the selection function and z=0.6 galaxy power spectrum paper_content: We report one of the most accurate measurements of the three-dimensional large-scale galaxy power spectrum achieved to date, using 56,159 redshifts of bright emission-line galaxies at effective redshift z=0.6 from the WiggleZ Dark Energy Survey at the Anglo-Australian Telescope. We describe in detail how we construct the survey selection function allowing for the varying target completeness and redshift completeness. We measure the total power with an accuracy of approximately 5% in wavenumber bands of dk=0.01 h/Mpc. A model power spectrum including non-linear corrections, combined with a linear galaxy bias factor and a simple model for redshift-space distortions, provides a good fit to our data for scales k<0.4 h/Mpc. The large-scale shape of the power spectrum is consistent with the best-fitting matter and baryon densities determined by observations of the Cosmic Microwave Background radiation. By splitting the power spectrum measurement as a function of tangential and radial wavenumbers we delineate the characteristic imprint of peculiar velocities. We use these to determine the growth rate of structure as a function of redshift in the range 0.4<z<0.8, including a data point at z=0.78 with an accuracy of 20%. 
Our growth rate measurements are a close match to the self-consistent prediction of the LCDM model. The WiggleZ Survey data will allow a wide range of investigations into the cosmological model, cosmic expansion and growth history, topology of cosmic structure, and Gaussianity of the initial conditions. Our calculation of the survey selection function will be released at a future date via our website wigglez.swin.edu.au. --- paper_title: The WiggleZ Dark Energy Survey: the growth rate of cosmic structure since redshift z=0.9 paper_content: We present precise measurements of the growth rate of cosmic structure for the redshift range 0.1 < z < 0.9, using redshift-space distortions in the galaxy power spectrum of the WiggleZ Dark Energy Survey. Our results, which have a precision of around 10% in four independent redshift bins, are well-fit by a flat LCDM cosmological model with matter density parameter Omega_m = 0.27. Our analysis hence indicates that this model provides a self-consistent description of the growth of cosmic structure through large-scale perturbations and the homogeneous cosmic expansion mapped by supernovae and baryon acoustic oscillations. We achieve robust results by systematically comparing our data with several different models of the quasi-linear growth of structure including empirical models, fitting formulae calibrated to N-body simulations, and perturbation theory techniques. We extract the first measurements of the power spectrum of the velocity divergence field, P_vv(k), as a function of redshift (under the assumption that P_gv(k) = -sqrt[P_gg(k) P_vv(k)] where g is the galaxy overdensity field), and demonstrate that the WiggleZ galaxy-mass cross-correlation is consistent with a deterministic (rather than stochastic) scale-independent bias model for WiggleZ galaxies for scales k < 0.3 h/Mpc. Measurements of the cosmic growth rate from the WiggleZ Survey and other current and future observations offer a powerful test of the physical nature of dark energy that is complementary to distance-redshift measures such as supernovae and baryon acoustic oscillations. --- paper_title: AAOmega: a scientific and optical overview paper_content: AAOmega is a new spectrograph for the existing 2dF and SPIRAL multifibre systems on the Ango-Australian Telescope. It is a bench-mounted, dual-beamed, articulating, all-Schmidt design, using ::: volume phase holographic gratings. The wavelength range is 370-950nm, with spectral resolutions from 1400-10000. Throughput, spectral coverage, and maximum resolution are all more than doubled compared with the existing 2dF spectrographs, and stability is increased by orders of magnitude. These features allow entirely new classes of observation to be undertaken, as well as dramatically improving ::: existing ones. AAOmega is scheduled for delivery and commissioning in Semester 2005B. --- paper_title: Observational Probes of Cosmic Acceleration paper_content: The accelerating expansion of the universe is the most surprising cosmological discovery in many decades, implying that the universe is dominated by some form of “dark energy” with exotic physical properties, or that Einstein’s theory of gravity breaks down on cosmological scales. The profound implications of cosmic acceleration have inspired ambitious efforts to understand its origin, with experiments that aim to measure the history of expansion and growth of structure with percent-level precision or higher. 
We review in detail the four most well established methods for making such measurements: Type Ia supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, and the abundance of galaxy clusters. We pay particular attention to the systematic uncertainties in these techniques and to strategies for controlling them at the level needed to exploit “Stage IV” dark energy facilities such as BigBOSS, LSST, Euclid, and WFIRST. We briefly review a number of other approaches including redshift-space distortions, the Alcock–Paczynski effect, and direct measurements of the Hubble constant H_0. We present extensive forecasts for constraints on the dark energy equation of state and parameterized deviations from General Relativity, achievable with Stage III and Stage IV experimental programs that incorporate supernovae, BAO, weak lensing, and cosmic microwave background data. We also show the level of precision required for clusters or other methods to provide constraints competitive with those of these fiducial programs. We emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. Surveys to probe cosmic acceleration produce data sets that support a wide range of scientific investigations, and they continue the longstanding astronomical tradition of mapping the universe in ever greater detail over ever larger scales. --- paper_title: The DESI Experiment, a whitepaper for Snowmass 2013 paper_content: The Dark Energy Spectroscopic Instrument (DESI) is a massively multiplexed fiber-fed spectrograph that will make the next major advance in dark energy in the timeframe 2018-2022. On the Mayall telescope, DESI will obtain spectra and redshifts for at least 18 million emission-line galaxies, 4 million luminous red galaxies and 3 million quasi-stellar objects, in order to: probe the effects of dark energy on the expansion history using baryon acoustic oscillations (BAO), measure the gravitational growth history through redshift-space distortions, measure the sum of neutrino masses, and investigate the signatures of primordial inflation. The resulting 3-D galaxy maps at z 2 will make 1%-level measurements of the distance scale in 35 redshift bins, thus providing unprecedented constraints on cosmological models. ---
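Several of the abstracts above quote BAO distances through the volume-averaged quantity D_V (often as D_V(r_s^fid/r_s)). As a reference point, the sketch below evaluates the standard definition D_V(z) = [(1+z)^2 D_A(z)^2 cz/H(z)]^(1/3) for a flat ΛCDM background; the parameter values (Omega_m = 0.27, H0 = 70 km/s/Mpc) are assumptions chosen for illustration and are not taken from any of the surveys listed here.

import numpy as np

# Illustrative only: volume-averaged BAO distance D_V(z) in flat LCDM.
# OMEGA_M and H0 are assumed example values, not survey results.
C_KMS = 299792.458
OMEGA_M, H0 = 0.27, 70.0

def hubble(z):
    """H(z) in km/s/Mpc for a flat LCDM model."""
    return H0 * np.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

def comoving_distance(z, n=2048):
    """Line-of-sight comoving distance in Mpc (trapezoidal integration)."""
    zs = np.linspace(0.0, z, n)
    integrand = C_KMS / hubble(zs)
    dz = zs[1] - zs[0]
    return np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dz

def d_v(z):
    """D_V(z) = [(1+z)^2 D_A^2 * cz / H(z)]^(1/3), with D_A = D_C/(1+z) when flat."""
    d_c = comoving_distance(z)
    d_a = d_c / (1.0 + z)
    return ((1.0 + z) ** 2 * d_a ** 2 * C_KMS * z / hubble(z)) ** (1.0 / 3.0)

if __name__ == "__main__":
    for z in (0.44, 0.60, 0.73):   # the WiggleZ effective redshifts quoted above
        print(f"D_V({z:.2f}) ~ {d_v(z):.0f} Mpc")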
Title: Cosmological Surveys with Multi-Object Spectrographs
Section 1: INTRODUCTION
Description 1: This section introduces the significance of multi-object spectroscopy (MOS) in cosmological surveys, highlighting key aspects such as multiplex and field of view.
Section 2: THE CLASSICAL PERIOD
Description 2: This section describes the early applications of MOS surveys, including pioneering efforts and the initial challenges faced.
Section 3: THE ENLIGHTENMENT
Description 3: This section covers the development and achievements of the 2dF Galaxy Redshift Survey and the Sloan Digital Sky Survey, detailing their technical innovations and scientific contributions.
Section 4: THE MODERN ERA
Description 4: This section discusses subsequent cosmological MOS surveys, focusing on the 6dFGS and WiggleZ surveys and their impact on understanding large-scale structure and cosmological parameters.
Section 5: BOSS & eBOSS
Description 5: This section provides an overview of the Baryon Oscillation Spectroscopic Survey and the extended BOSS, explaining their methodologies and significant findings.
Section 6: THE FUTURE
Description 6: This section explores upcoming cosmological MOS surveys, including their goals, expected improvements in precision, and potential contributions to the understanding of dark energy and the nature of gravity.
Section 7: Low Redshift Surveys
Description 7: This section details the importance of low redshift surveys like Taipan for measuring the present-day expansion rate and testing cosmological models through precise local measurements.
Section 8: Higher Redshift Surveys
Description 8: This section outlines plans for higher redshift surveys, particularly the DESI survey, and their role in mapping the matter distribution from moderate to high redshifts.
Section 9: ELT Surveys
Description 9: This section discusses the potential of Extremely Large Telescopes (ELTs) for high-redshift cosmological surveys and their advantages in probing the matter distribution on smaller scales.
Section 10: CONCLUSIONS
Description 10: This section summarizes the significant contributions of MOS to cosmology over the past two decades and projects its future role in advancing our understanding of the universe.
Big Data Quality: A systematic literature review and future research directions
9
--- paper_title: Customer Feedback and Data Collection Techniques in Software R&D: A Literature Review paper_content: In many companies, product management struggles to get accurate customer feedback. Often, validation and confirmation of functionality with customers takes place only after the product has been deployed, and there are no mechanisms that help product managers to continuously learn from customers. Although there are techniques available for collecting customer feedback, these are typically not applied as part of a continuous feedback loop. As a result, the selection and prioritization of features becomes far from optimal, and the product deviates from what the customers need. In this paper, we present a literature review of currently recognized techniques for collecting customer feedback. We develop a model in which we categorize the techniques according to their characteristics. The purpose of this literature review is to provide an overview of current software engineering research in this area and to better understand the different techniques that are used for collecting customer feedback. --- paper_title: Data quality in big data: A review paper_content: The Data Warehousing Institute (TDWI) estimates that data quality problems cost U.S. businesses more than $600 billion a year. The problem with data is that its quality quickly degenerates over time. Experts say 2 percent of records in a customer file become obsolete in one month because customers die, divorce, marry, and move. In addition, data entry errors, system migrations, and changes in source systems, among other things, generate bucket loads of errors. To make matters more complex, as organizations fragment into different divisions and units, interpretations of data elements change to meet local business needs. However, there are several measures a company should consider: treat data as a strategic corporate resource; develop a program for managing data quality with a commitment from the top; and hire, train, or outsource experienced data quality professionals to oversee and carry out the program. Organizations can sustain a commitment to managing data quality over time, and adjust monitoring and cleansing processes to changes in the business and underlying systems, by using commercial data quality tools. Data is a vital resource. Companies that invest proportionally to manage this resource will stand a stronger chance of succeeding in today's competitive global economy than those that squander this critical resource by neglecting to ensure adequate levels of quality. This paper reviews the characteristics of big data quality and the managing processes that are involved in it. --- paper_title: Antecedents of big data quality: An empirical examination in financial service organizations paper_content: Big data has been acknowledged for its enormous potential. In contrast to the potential, in a recent survey more than half of financial service organizations reported that big data has not delivered the expected value. One of the main reasons for this is related to data quality. The objective of this research is to identify the antecedents of big data quality in financial institutions. This will help to understand how data quality from big data analysis can be improved. For this, a literature review was performed and data was collected using three case studies, followed by content analysis. The overall findings indicate that there are no fundamentally new data quality issues in big data projects. 
Nevertheless, the complexity of the issues is higher, which makes it harder to assess and attain data quality in big data projects compared to traditional projects. Ten antecedents of big data quality were identified, encompassing data, technology, people, process and procedure, organization, and external aspects. --- paper_title: Anomaly Detection and Redundancy Elimination of Big Sensor Data in Internet of Things paper_content: In the era of big data and the Internet of Things, massive sensor data are gathered through the Internet of Things. The quantities of data captured by sensor networks are considered to contain highly useful and valuable information. However, for a variety of reasons, received sensor data often appear abnormal. Therefore, effective anomaly detection methods are required to guarantee the quality of data collected by those sensor nodes. Since sensor data are usually correlated in time and space, not all the gathered data are valuable for further data processing and analysis. Preprocessing is necessary for eliminating the redundancy in gathered massive sensor data. In this paper, the proposed work defines a sensor data preprocessing framework. It is mainly composed of two parts, i.e., sensor data anomaly detection and sensor data redundancy elimination. In the first part, methods based on principal statistic analysis and Bayesian networks are proposed for sensor data anomaly detection. Then, approaches based on a static Bayesian network (SBN) and dynamic Bayesian networks (DBNs) are proposed for sensor data redundancy elimination. A static sensor data redundancy detection algorithm (SSDRDA) for eliminating redundant data in static datasets and a real-time sensor data redundancy detection algorithm (RSDRDA) for eliminating redundant sensor data in real time are proposed. The efficiency and effectiveness of the proposed methods are validated using real-world gathered sensor datasets. --- paper_title: Protection of Big Data Privacy paper_content: In recent years, big data have become a hot research topic. The increasing amount of big data also increases the chance of breaching the privacy of individuals. Since big data require high computational power and large storage, distributed systems are used. As multiple parties are involved in these systems, the risk of privacy violation is increased. There have been a number of privacy-preserving mechanisms developed for privacy protection at different stages (e.g., data generation, data storage, and data processing) of a big data life cycle. The goal of this paper is to provide a comprehensive overview of the privacy preservation mechanisms in big data and present the challenges for existing mechanisms. In particular, in this paper, we illustrate the infrastructure of big data and the state-of-the-art privacy-preserving mechanisms in each stage of the big data life cycle. Furthermore, we discuss the challenges and future research directions related to privacy preservation in big data. --- paper_title: Big data: The next frontier for innovation, competition, and productivity paper_content: The amount of data in our world has been exploding, and analyzing large data sets, so-called big data, will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus, according to research by MGI and McKinsey's Business Technology Office. Leaders in every sector will have to grapple with the implications of big data, not just a few data-oriented managers. 
The increasing volume and detail of information captured by enterprises, the rise of multimedia, social media, and the Internet of Things will fuel exponential growth in data for the foreseeable future. --- paper_title: Scene-Based Big Data Quality Management Framework paper_content: After the rise of big data to the level of national strategy, the application of big data in every industry is increasing. The quality of data will directly affect the value of data and influence the analysis and decisions of managers. Aiming at the characteristics of big data, such as volume, velocity, variety and value, a quality management framework for big data based on application scenarios is proposed, which includes data quality assessment and quality management for structured data, unstructured data and the data integration stage. Since structured data currently drives the core business of the enterprise, the research method starts from the master data and extends outwards to peripheral data layer by layer. Big data processing technology, such as Hadoop and Storm, is used to construct a big data cleaning system based on semantics. Combined with the JStorm platform, a real-time control system for big data quality is given. Finally, a big data quality evaluation system is built to detect the effect of data integration. The framework can guarantee the output of high-quality big data on the basis of a traditional data quality system. It helps enterprises to understand data rules and increase the value of core data, which has practical application value. --- paper_title: Big data for internet of things: A survey paper_content: With the rapid development of the Internet of Things (IoT), Big Data technologies have emerged as a critical data analytics tool to bring the knowledge within IoT infrastructures to better meet the purpose of the IoT systems and support critical decision making. Although the topic of Big Data analytics itself is extensively researched, the disparity between IoT domains (such as healthcare, energy, transportation and others) has isolated the evolution of Big Data approaches in each IoT domain. Thus, the mutual understanding across IoT domains can possibly advance the evolution of Big Data research in IoT. In this work, we therefore conduct a survey on Big Data technologies in different IoT domains to facilitate and stimulate knowledge sharing across the IoT domains. Based on our review, this paper discusses the similarities and differences among Big Data technologies used in different IoT domains, suggests how certain Big Data technology used in one IoT domain can be re-used in another IoT domain, and develops a conceptual framework to outline the critical Big Data technologies across all the reviewed IoT domains. --- paper_title: Quality assessment for Linked Data: A Survey paper_content: The development and standardization of semantic web technologies has resulted in an unprecedented volume of data being published on the Web as Linked Data (LD). However, we observe widely varying data quality ranging from extensively curated datasets to crowdsourced and extracted data of relatively low quality. In this article, we present the results of a systematic review of approaches for assessing the quality of LD. We gather existing approaches and analyze them qualitatively. In particular, we unify and formalize commonly used terminologies across papers related to data quality and provide a comprehensive list of 18 quality dimensions and 69 metrics. 
Additionally, we qualitatively analyze the 30 core approaches and 12 tools using a set of attributes. The aim of this article is to provide researchers and data curators a comprehensive understanding of existing work, thereby encouraging further experimentation and development of new approaches focused towards data quality, specifically for LD. --- paper_title: A Big Data Online Cleaning Algorithm Based on Dynamic Outlier Detection paper_content: To effectively clean large-scale, mixed and inaccurate monitoring or collective data, reduce the cost of the data cache and ensure consistent deviation detection on the timing data of each cycle, a big data online cleaning algorithm based on dynamic outlier detection has been proposed. The data cleaning method is improved by density-based local outlier detection and by uniformly sampling the clusters to dilute the Euclidean distance matrix, retaining some corrections for the next cycle of cleaning; this avoids a single sampling causing an overall cleaning deviation and reduces the amount of calculation within the stable period of data cleaning, greatly enhancing the speed. Finally, distributed implementations of the online cleaning algorithm on the Hadoop platform are given. --- paper_title: Distributed online outlier detection in wireless sensor networks using ellipsoidal support vector machine paper_content: Low quality sensor data limits WSN capabilities for providing reliable real-time situation-awareness. Outlier detection is a solution to ensure the quality of sensor data. An effective and efficient outlier detection technique for WSNs not only identifies outliers in a distributed and online manner with high detection accuracy and low false alarm, but also satisfies WSN constraints in terms of communication, computational and memory complexity. In this paper, we take into account the correlation between sensor data attributes and propose two distributed and online outlier detection techniques based on a hyperellipsoidal one-class support vector machine (SVM). We also take advantage of the theory of spatio-temporal correlation to identify outliers and update the ellipsoidal SVM-based model representing the changed normal behavior of sensor data for further outlier identification. Simulation results show that our adaptive ellipsoidal SVM-based outlier detection technique achieves better detection accuracy and lower false alarm as compared to existing SVM-based techniques designed for WSNs. 
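As a simplified, centralized illustration of one-class-SVM-style outlier detection on sensor readings (the paper above develops a distributed, ellipsoidal variant that is not reproduced here), a sketch using scikit-learn might look as follows; the synthetic dataset, kernel choice and nu value are assumptions made purely for illustration.

import numpy as np
from sklearn.svm import OneClassSVM

# Illustrative sketch only: centralized one-class SVM over (temperature,
# humidity) readings. All numbers below are made up for illustration.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[25.0, 40.0], scale=[1.0, 3.0], size=(500, 2))
faulty = np.array([[25.0, 95.0], [60.0, 40.0]])     # injected anomalous readings
readings = np.vstack([normal, faulty])

model = OneClassSVM(kernel="rbf", nu=0.02, gamma="scale")
model.fit(normal)                     # train on data assumed to be normal
labels = model.predict(readings)      # +1 = inlier, -1 = outlier
print(np.where(labels == -1)[0])      # indices flagged as outliers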
--- paper_title: Contextual anomaly detection framework for big sensor data paper_content: The ability to detect and process anomalies for Big Data in real-time is a difficult task. The volume and velocity of the data within many systems make it difficult for typical algorithms to scale and retain their real-time characteristics. The pervasiveness of data, combined with the problem that many existing algorithms only consider the content of the data source (e.g. a sensor reading itself) without concern for its context, leaves room for potential improvement. The proposed work defines a contextual anomaly detection framework. It is composed of two distinct steps: content detection and context detection. The content detector is used to determine anomalies in real-time, while possibly, and likely, identifying false positives. The context detector is used to prune the output of the content detector, identifying those anomalies which are considered both content and contextually anomalous. The context detector utilizes the concept of profiles, which are groups of similarly grouped data points generated by a multivariate clustering algorithm. The research has been evaluated against two real-world sensor datasets provided by a local company in Brampton, Canada. Additionally, the framework has been evaluated against the open-source Dodgers dataset, available at the UCI machine learning repository, and against the R statistical toolbox. --- paper_title: A model-driven framework for data quality management in the Internet of Things paper_content: The Internet of Things (IoT) is a data stream environment where a large-scale deployment of smart things continuously report readings. These data streams are then consumed by pervasive applications, i.e. data consumers, to offer ubiquitous services. Data quality (DQ) is a key criterion for IoT data consumers, especially when considering the inherent uncertainty of sensor-enabled data. However, DQ is a highly subjective concept and there is no standard agreement on how to determine "good" data. Moreover, the combinations of considered measured attributes and associated DQ information are as diverse as the needs of data consumers. This introduces expensive overheads for developers tasked with building DQ-aware IoT software systems which are capable of managing their own DQ information. To effectively handle these various perceptions of DQ, we propose a Model-Driven Architecture-based approach that allows each developer to easily and efficiently express, through models and other provided resources, the data consumer's vision of DQ and its requirements using an easy-to-use graphical model editor. The defined DQ specifications are then automatically transformed to generate an entire infrastructure for DQ management that perfectly fits the data consumer's requirements. 
We demonstrate the flexibility and the efficiency of our approach by generating two DQ management infrastructures built on top of different platforms and testing them through a real life data stream environment scenario. --- paper_title: Contextual Anomaly Detection in Big Sensor Data paper_content: Performing predictive modelling, such as anomaly detection, in Big Data is a difficult task. This problem is compounded as more and more sources of Big Data are generated from environmental sensors, logging applications, and the Internet of Things. Further, most current techniques for anomaly detection only consider the content of the data source, i.e. the data itself, without concern for the context of the data. As data becomes more complex it is increasingly important to bias anomaly detection techniques for the context, whether it is spatial, temporal, or semantic. The work proposed in this paper outlines a contextual anomaly detection technique for use in streaming sensor networks. The technique uses a well-defined content anomaly detection algorithm for real-time point anomaly detection. Additionally, we present a post-processing context-aware anomaly detection algorithm based on sensor profiles, which are groups of contextually similar sensors generated by a multivariate clustering algorithm. Our proposed research has been implemented and evaluated with real-world data provided by Powersmiths, located in Brampton, Ontario, Canada. --- paper_title: Automatic outlier detection for time series: an application to sensor data paper_content: In this article we consider the problem of detecting unusual values or outliers from time series data where the process by which the data are created is difficult to model. The main consideration is the fact that data closer in time are more correlated to each other than those farther apart. We propose two variations of a method that uses the median from a neighborhood of a data point and a threshold value to compare the difference between the median and the observed data value. Both variations of the method are fast and can be used for data streams that occur in quick succession such as sensor data on an airplane. --- paper_title: Statistics-based outlier detection for wireless sensor networks paper_content: Wireless sensor network WSN applications require efficient, accurate and timely data analysis in order to facilitate near real-time critical decision-making and situation awareness. Accurate analysis and decision-making relies on the quality of WSN data as well as on the additional information and context. Raw observations collected from sensor nodes, however, may have low data quality and reliability due to limited WSN resources and harsh deployment environments. This article addresses the quality of WSN data focusing on outlier detection. These are defined as observations that do not conform to the expected behaviour of the data. The developed methodology is based on time-series analysis and geostatistics. Experiments with a real data set from the Swiss Alps showed that the developed methodology accurately detected outliers in WSN data taking advantage of their spatial and temporal correlations. It is concluded that the incorporation of tools for outlier detection in WSNs can be based on current statistical methodology. This provides a usable and important tool in a novel scientific field. 
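The neighbourhood-median idea described in the time-series outlier paper above lends itself to a very small sketch. The version below is one possible reading of that family of detectors: the window size and the threshold (expressed in data units) are assumed values, and the original authors' exact variants are not reproduced.

import numpy as np

# Illustrative median-plus-threshold outlier detector for a sensor stream.
# Window size and threshold are assumed values, not taken from the paper.
def median_outliers(x, window=5, threshold=5.0):
    """Flag points whose distance from their neighbourhood median exceeds a threshold."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        neighbourhood = np.delete(x[lo:hi], i - lo)   # exclude the point itself
        flags[i] = abs(x[i] - np.median(neighbourhood)) > threshold
    return flags

stream = np.array([10.1, 10.3, 10.2, 55.0, 10.4, 10.2, 10.1])
print(np.where(median_outliers(stream))[0])   # -> [3]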
--- paper_title: Incorporating quality aspects in sensor data streams paper_content: Sensors are increasingly embedded into physical products in order to capture data about their conditions and usage for decision making in business applications. However, a major issue for such applications is the limited quality of the captured data due to inherently restricted precision and performance of the sensors. Moreover, the data quality is further decreased by data processing to meet resource constraints in streaming environments and ultimately influences business decisions. The issue of how to efficiently provide applications with information about data quality (DQ) is still an open research problem. In my Ph.D. thesis, I address this problem by developing a system to provide business applications with accurate information on data quality. Furthermore, the system will be able to incorporate and guarantee user-defined data quality levels. In this paper, I will present the major results from my research so far. This includes a novel jumping-window-based approach for the efficient transfer of data quality information as well as a flexible metamodel for storage and propagation of data quality. The comprehensive analysis of common data processing operators w.r.t. their impact on data quality allows a fruitful knowledge evaluation and thus diminishes incorrect business decisions. --- paper_title: Modelless Data Quality Improvement of Streaming Synchrophasor Measurements by Exploiting the Low-Rank Hankel Structure paper_content: This paper presents a new framework to improve the quality of streaming synchrophasor measurements with the existence of missing data and bad data. The method exploits the low-rank property of the Hankel structure to identify and correct bad data, as well as to estimate and fill in the missing data. The method is advantageous compared to existing methods in the literature that only estimate missing data by leveraging the low-rank property of the synchrophasor data observation matrix. The proposed algorithm can efficiently differentiate event data from bad data, even in the existence of simultaneous and consecutive bad data. The algorithm has been verified through numerical experiments on recorded synchrophasor datasets. 
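To make the low-rank Hankel idea referenced above concrete, the sketch below builds a Hankel matrix from a univariate measurement stream and uses a truncated SVD to produce a low-rank reconstruction from which gaps can be filled. It is a generic illustration under assumed window length and rank, not the paper's actual bad-data identification algorithm.

import numpy as np

# Generic illustration of low-rank Hankel reconstruction (not the paper's
# algorithm). Window length L and target rank are assumed parameters.
def hankel_matrix(x, L):
    """Stack length-L sliding windows of x as columns of a Hankel matrix."""
    n = len(x)
    return np.column_stack([x[i:i + L] for i in range(n - L + 1)])

def low_rank_denoise(x, L=20, rank=3):
    """Truncated-SVD reconstruction of a stream; NaNs are pre-filled crudely."""
    x = np.asarray(x, dtype=float)
    x_filled = np.where(np.isnan(x), np.nanmean(x), x)   # crude initial fill
    H = hankel_matrix(x_filled, L)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]       # rank-r approximation
    out = np.zeros_like(x_filled)                         # average anti-diagonals
    counts = np.zeros_like(x_filled)
    for j in range(H_low.shape[1]):
        out[j:j + L] += H_low[:, j]
        counts[j:j + L] += 1
    return out / counts

t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)
signal[50:55] = np.nan                    # simulated missing samples
print(low_rank_denoise(signal)[50:55])    # low-rank estimates for the gap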
--- paper_title: A software reference architecture for semantic-aware big data systems paper_content: Abstract Context: Big Data systems are a class of software systems that ingest, store, process and serve massive amounts of heterogeneous data, from multiple sources. Despite their undisputed impact in current society, their engineering is still in its infancy and companies find it difficult to adopt them due to their inherent complexity. Existing attempts to provide architectural guidelines for their engineering fail to take into account important Big Data characteristics, such as the management, evolution and quality of the data. Objective: In this paper, we follow software engineering principles to refine the λ -architecture, a reference model for Big Data systems, and use it as seed to create Bolster , a software reference architecture (SRA) for semantic-aware Big Data systems. Method: By including a new layer into the λ -architecture, the Semantic Layer, Bolster is capable of handling the most representative Big Data characteristics (i.e., Volume, Velocity, Variety, Variability and Veracity). Results: We present the successful implementation of Bolster in three industrial projects, involving five organizations. The validation results show high level of agreement among practitioners from all organizations with respect to standard quality factors. Conclusion: As an SRA, Bolster allows organizations to design concrete architectures tailored to their specific needs. A distinguishing feature is that it provides semantic-awareness in Big Data Systems. These are Big Data system implementations that have components to simplify data definition and exploitation. In particular, they leverage metadata (i.e., data describing data) to enable (partial) automation of data exploitation and to aid the user in their decision making processes. This simplification supports the differentiation of responsibilities into cohesive roles enhancing data governance. --- paper_title: Scene-Based Big Data Quality Management Framework paper_content: After the rise of big data to national strategy, the application of big data in every industry is increasing. The quality of data will directly affect the value of data and influence the analysis and decision of managers. Aiming at the characteristics of big data, such as volume, velocity, variety and value, a quality management framework of big data based on application scenario is proposed, which includes data quality assessment and quality management of structured data, unstructured data and data integration stage. In view of the current structured data leading to the core business of the enterprise, we use the research method to extend the peripheral data layer by layer on the main data. Big data processing technology, such as Hadoop and Storm, is used to construct a big data cleaning system based on semantics. Combined with JStorm platform, a real-time control system for big data quality is given. Finally, a big data quality evaluation system is built to detect the effect of data integration. The framework can guarantee the output of high quality big data on the basis of traditional data quality system. It helps enterprises to understand data rules and increase the value of core data, which has practical application value. 
--- paper_title: Big data for internet of things: A survey paper_content: Abstract With the rapid development of the Internet of Things (IoT), Big Data technologies have emerged as a critical data analytics tool to bring the knowledge within IoT infrastructures to better meet the purpose of the IoT systems and support critical decision making. Although the topic of Big Data analytics itself is extensively researched, the disparity between IoT domains (such as healthcare, energy, transportation and others) has isolated the evolution of Big Data approaches in each IoT domain. Thus, the mutual understanding across IoT domains can possibly advance the evolution of Big Data research in IoT. In this work, we therefore conduct a survey on Big Data technologies in different IoT domains to facilitate and stimulate knowledge sharing across the IoT domains. Based on our review, this paper discusses the similarities and differences among Big Data technologies used in different IoT domains, suggests how certain Big Data technology used in one IoT domain can be re-used in another IoT domain, and develops a conceptual framework to outline the critical Big Data technologies across all the reviewed IoT domains. --- paper_title: Anomaly detection in streaming environmental sensor data: A data-driven modeling approach paper_content: The deployment of environmental sensors has generated an interest in real-time applications of the data they collect. This research develops a real-time anomaly detection method for environmental data streams that can be used to identify data that deviate from historical patterns. The method is based on an autoregressive data-driven model of the data stream and its corresponding prediction interval. It performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no pre-classification of anomalies. Furthermore, this method can be easily deployed on a large heterogeneous sensor network. Sixteen instantiations of this method are compared based on their ability to identify measurement errors in a windspeed data stream from Corpus Christi, Texas. The results indicate that a multilayer perceptron model of the data stream, coupled with replacement of anomalous data points, performs well at identifying erroneous data in this data stream. --- paper_title: A model-based approach for RFID data stream cleansing paper_content: In recent years, RFID technologies have been used in many applications, such as inventory checking and object tracking. However, raw RFID data are inherently unreliable due to physical device limitations and different kinds of environmental noise. Currently, existing work mainly focuses on RFID data cleansing in a static environment (e.g. inventory checking). It is therefore difficult to cleanse RFID data streams in a mobile environment (e.g. object tracking) using the existing solutions, which do not address the data missing issue effectively. In this paper, we study how to cleanse RFID data streams for object tracking, which is a challenging problem, since a significant percentage of readings are routinely dropped. We propose a probabilistic model for object tracking in a mobile environment. We develop a Bayesian inference based approach for cleansing RFID data using the model. In order to sample data from the movement distribution, we devise a sequential sampler that cleans RFID data with high accuracy and efficiency. 
We validate the effectiveness and robustness of our solution through extensive simulations and demonstrate its performance by using two real RFID applications of human tracking and conveyor belt monitoring. --- paper_title: Distributed online outlier detection in wireless sensor networks using ellipsoidal support vector machine paper_content: Low quality sensor data limits WSN capabilities for providing reliable real-time situation-awareness. Outlier detection is a solution to ensure the quality of sensor data. An effective and efficient outlier detection technique for WSNs not only identifies outliers in a distributed and online manner with high detection accuracy and low false alarm, but also satisfies WSN constraints in terms of communication, computational and memory complexity. In this paper, we take into account the correlation between sensor data attributes and propose two distributed and online outlier detection techniques based on a hyperellipsoidal one-class support vector machine (SVM). We also take advantage of the theory of spatio-temporal correlation to identify outliers and update the ellipsoidal SVM-based model representing the changed normal behavior of sensor data for further outlier identification. Simulation results show that our adaptive ellipsoidal SVM-based outlier detection technique achieves better detection accuracy and lower false alarm as compared to existing SVM-based techniques designed for WSNs. --- paper_title: Assessing and Improving Sensors Data Quality in Streaming Context paper_content: An environmental monitoring process consists of a regular collection and analysis of sensors data streams. It aims to infer new knowledge about the environment, enabling the explorer to supervise the network and to take right decisions. Different data mining techniques are then applied to the collected data in order to infer aggregated statistics useful for anomalies detection and forecasting. The obtained results are closely dependent on the collected data quality. In fact, the data are often dirty, they contain noisy, erroneous and missing values. Poor data quality leads to defective and faulty results. One solution to overcome this problem will be presented in this paper. It consists of evaluating and improving the data quality, to be able to obtain reliable results. In this paper, we first introduce the data quality concept. Then, we discuss the existing related research studies. Finally, we propose a complete sensors data quality management system. --- paper_title: A trust assessment framework for streaming data in WSNs using iterative filtering paper_content: Trust and reputation systems are widely employed in WSNs to help decision making processes by assessing trustworthiness of sensors as well as the reliability of the reported data. Iterative filtering (IF) algorithms hold great promise for such a purpose; they simultaneously estimate the aggregate value of the readings and assess the trustworthiness of the nodes. Such algorithms, however, operate by batch processing over a widow of data reported by the nodes, which represents a difficulty in applications involving streaming data. In this paper, we propose STRIF (Streaming IF) which extends IF algorithms to data streaming by leveraging a novel method for updating the sensors' variances. We compare the performance of STRIF algorithm to several batch processing IF algorithms through extensive experiments across a wide variety of configurations over both real-world and synthetic datasets. 
Our experimental results demonstrate that STRIF can process data streams much more efficiently than the batch algorithms while keeping the accuracy of the data aggregation close to that of the batch IF algorithm. --- paper_title: A Framework for Distributed Cleaning of Data Streams paper_content: Abstract Vast and ever increasing quantities of data are produced by sensors in the Internet of Things (IoT). The quality of this data can be very variable due to problems with sensors, incorrect calibration etc. Data quality can be greatly enhanced by cleaning the data before it reaches its end user. This paper reports on the construction of a distributed cleaning system (DCS) to clean data streams in real-time for an environmental case-study. A combination of declarative and statistical model based cleaning methods are applied and initial results are reported. --- paper_title: Statistics-based outlier detection for wireless sensor networks paper_content: Wireless sensor network WSN applications require efficient, accurate and timely data analysis in order to facilitate near real-time critical decision-making and situation awareness. Accurate analysis and decision-making relies on the quality of WSN data as well as on the additional information and context. Raw observations collected from sensor nodes, however, may have low data quality and reliability due to limited WSN resources and harsh deployment environments. This article addresses the quality of WSN data focusing on outlier detection. These are defined as observations that do not conform to the expected behaviour of the data. The developed methodology is based on time-series analysis and geostatistics. Experiments with a real data set from the Swiss Alps showed that the developed methodology accurately detected outliers in WSN data taking advantage of their spatial and temporal correlations. It is concluded that the incorporation of tools for outlier detection in WSNs can be based on current statistical methodology. This provides a usable and important tool in a novel scientific field. --- paper_title: Context aware model-based cleaning of data streams paper_content: Despite advances in sensor technology, there are a number of problems that continue to require attention. Sensors fail due to low battery power, poor calibration, exposure to the elements and interference to name but a few factors. This can have a negative effect on data quality, which can however be improved by data cleaning. In particular, models can learn characteristics of data to detect and replace incorrect values. The research presented in this paper focuses on the building of models of environmental sensor data that can incorporate context awareness about the sampling locations. These models have been tested and validated both for static and streaming data. We show that contextual models demonstrate favourable outcomes when used to clean streaming data. --- paper_title: Modelless Data Quality Improvement of Streaming Synchrophasor Measurements by Exploiting the Low-Rank Hankel Structure paper_content: This paper presents a new framework to improve the quality of streaming synchrophasor measurements with the existence of missing data and bad data. The method exploits the low-rank property of the Hankel structure to identify and correct bad data, as well as to estimate and fill in the missing data. 
The method is advantageous compared to existing methods in the literature that only estimate missing data by leveraging the low-rank property of the synchrophasor data observation matrix. The proposed algorithm can efficiently differentiate event data from bad data, even in the existence of simultaneous and consecutive bad data. The algorithm has been verified through numerical experiments on recorded synchrophasor datasets. --- paper_title: Adaptive and online data anomaly detection for wireless sensor systems paper_content: Wireless sensor networks (WSNs) are increasingly used as platforms for collecting data from unattended environments and monitoring important events in phenomena. However, sensor data is affected by anomalies that occur due to various reasons, such as, node software or hardware failures, reading errors, unusual events, and malicious attacks. Therefore, effective, efficient, and real time detection of anomalous measurement is required to guarantee the quality of data collected by these networks. In this paper, two efficient and effective anomaly detection models PCCAD and APCCAD are proposed for static and dynamic environments, respectively. Both models utilize the One-Class Principal Component Classifier (OCPCC) to measure the dissimilarity between sensor measurements in the feature space. The proposed APCCAD model incorporates an incremental learning method that is able to track the dynamic normal changes of data streams in the monitored environment. The efficiency and effectiveness of the proposed models are demonstrated using real life datasets collected by real sensor network projects. Experimental results show that the proposed models have advantages over existing models in terms of efficient utilization of sensor limited resources. The results further reveal that the proposed models achieve better detection effectiveness in terms of high detection accuracy with low false alarms especially for dynamic environmental data streams compared to some existing models. --- paper_title: Low-rank singular value thresholding for recovering missing air quality data paper_content: With the increasing awareness of the harmful impacts of urban air pollution, air quality monitoring stations have been deployed in many metropolitan areas. These stations provide air quality data to the public. However, due to sampling device failures and data processing errors, missing data in air quality measurements is common. Data integrity becomes a critical challenge when such data are employed for public services. In this paper, we investigate the mathematical property of air quality measurements, and attempt to recover the missing data. First, we empirically study the low rank property of these measurements. Second, we formulate the low rank matrix completion (LRMC) optimization problem to reconstruct the missing air quality data. The problem is transformed using duality theory, and singular value thresholding (SVT) is employed to develop sub-optimal solutions. Third, to evaluate the performance of our methodology, we conduct a series of case studies including different types of missing data patterns. The simulation results demonstrate that the proposed SVT methodology can effectively recover missing air quality data, and outperform the existing Interpolation. Finally, we investigate the parameter sensitivity of SVT. Our study can serve as a guideline for missing data recovery in the real world. 
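The SVT-based recovery described in the entry above can be approximated by a short soft-impute-style iteration: the singular values of the current estimate are soft-thresholded and the observed entries are re-imposed after every step. This is a simplified sketch under assumed parameters (`tau`, `n_iter`), not the authors' exact algorithm.

```python
# Minimal sketch of singular-value-thresholding-style matrix completion in the
# spirit of the air-quality recovery entry above (not the authors' code).
# Observed entries are kept fixed; at each iteration the singular values of the
# current estimate are soft-thresholded. Parameters tau and n_iter are assumptions.
import numpy as np

def svt_complete(M, observed_mask, tau=5.0, n_iter=200):
    """M: matrix with arbitrary values at unobserved positions (mask == False)."""
    X = np.where(observed_mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)             # soft-threshold singular values
        X = (U * s) @ Vt
        X[observed_mask] = M[observed_mask]      # re-impose observed entries
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 24))  # rank-4 "stations x hours"
    mask = rng.random(true.shape) > 0.3                         # ~30% entries missing
    recovered = svt_complete(np.where(mask, true, 0.0), mask)
    print(np.abs((recovered - true)[~mask]).mean())
```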
--- paper_title: Missing value imputation using a fuzzy clustering-based EM approach paper_content: Data preprocessing and cleansing play a vital role in data mining by ensuring good quality of data. Data-cleansing tasks include imputation of missing values, identification of outliers, and identification and correction of noisy data. In this paper, we present a novel technique called A Fuzzy Expectation Maximization and Fuzzy Clustering-based Missing Value Imputation Framework for Data Pre-processing (FEMI). It imputes numerical and categorical missing values by making an educated guess based on records that are similar to the record having a missing value. While identifying a group of similar records and making a guess based on the group, it applies a fuzzy clustering approach and our novel fuzzy expectation maximization algorithm. We evaluate FEMI on eight publicly available natural data sets by comparing its performance with the performance of five high-quality existing techniques, namely EMI, GkNN, FKMI, SVR and IBLLS. We use thirty-two types (patterns) of missing values for each data set. Two evaluation criteria namely root mean squared error and mean absolute error are used. Our experimental results indicate (according to a confidence interval and t-test analysis) that FEMI performs significantly better than EMI, GkNN, FKMI, SVR, and IBLLS. --- paper_title: Anomaly Detection and Redundancy Elimination of Big Sensor Data in Internet of Things paper_content: In the era of big data and Internet of things, massive sensor data are gathered with Internet of things. Quantities of data captured by sensor networks are considered to contain highly useful and valuable information. However, for a variety of reasons, received sensor data often appear abnormal. Therefore, effective anomaly detection methods are required to guarantee the quality of data collected by those sensor nodes. Since sensor data are usually correlated in time and space, not all the gathered data are valuable for further data processing and analysis. Preprocessing is necessary for eliminating the redundancy in gathered massive sensor data. In this paper, the proposed work defines a sensor data preprocessing framework. It is mainly composed of two parts, i.e., sensor data anomaly detection and sensor data redundancy elimination. In the first part, methods based on principal statistic analysis and Bayesian network are proposed for sensor data anomaly detection. Then, approaches based on static Bayesian network (SBN) and dynamic Bayesian networks (DBNs) are proposed for sensor data redundancy elimination. Static sensor data redundancy detection algorithm (SSDRDA) for eliminating redundant data in static datasets and real-time sensor data redundancy detection algorithm (RSDRDA) for eliminating redundant sensor data in real-time are proposed. The efficiency and effectiveness of the proposed methods are validated using real-world gathered sensor datasets. --- paper_title: HMM-based predictive model for enhancing data quality in WSN paper_content: Abstract Wireless sensor network (WSN) has been widely used in areas such as health care and industrial monitoring. However, WSN systems still suffer from inevitable problems of communication interference and data failure. In this paper, an improved Hidden Markov Model (HMM) is proposed to enhance the quality of WSN sensor data. This model can be used to recover the missing data and predict the upcoming data in order to improve the data integrity and reliability ultimately.
K-means clustering is firstly used to group sensor data series on the basis of different patterns. Next, Particle Swarm Optimization is applied for optimizing HMM parameters, which is enhanced by a hybrid mutation strategy. Experiments on two real data-sets show that the proposed approach can outperform the baseline models (Naive Bayes, Grey System, BP-Neural Network and Traditional HMM) on precision of both single-step prediction and multiple-step prediction. The results also demonstrate that the proposed approach can improve data ... --- paper_title: A Data Cleaning Model for Electric Power Big Data Based on Spark Framework paper_content: The data cleaning of electrical power big data can improve the correctness, the completeness, the consistency and the reliability of the data. Aiming at the difficulties of the extracting of the unified anomaly detection pattern and the low accuracy and continuity of the anomaly data correction in the process of the electrical power big data cleaning, the data cleaning model of the electrical power big data based on Spark is proposed. Firstly, the normal clusters and the corresponding boundary samples are obtained by the improved CURE clustering algorithm. Then, the anomaly data identification algorithm based on boundary samples is designed. Finally, the anomaly data modification is realized by using exponential weighting moving mean value. The high efficiency and accuracy is proved by the experiment of the data cleaning of the wind power generation monitoring data from the wind power station. --- paper_title: An Electric Power Sensor Data Oriented Data Cleaning Solution paper_content: With the development of Smart Grid Technology, more and more electric power sensor data are utilized in various electric power systems. To guarantee the effectiveness of such systems, it is necessary to ensure the quality of electric power sensor data, especially when the scale of electric power sensor data is large. In the field of large-scale electric power sensor data cleaning, the computational efficiency and accuracy of data cleaning are two vital requirements. In order to satisfy these requirements, this paper presents an electric power sensor data oriented data cleaning solution, which is composed of a data cleaning framework and a data cleaning method. Based on Hadoop, the given framework is able to support large-scale electric power sensor data acquisition, storage and processing. Meanwhile, the proposed method which achieves outlier detection and reparation is implemented on the basis of a time-relevant k-means clustering algorithm in Spark. The feasibility and effectiveness of the proposed method is evaluated on a data set which originates from charging piles. Experimental results show that the proposed data cleaning method is able to improve the data quality of electric power sensor data by finding and repairing most outliers. For large-scale electric power sensor data, the proposed data cleaning method has high parallel performance and strong scalability. --- paper_title: Online outlier detection for data streams paper_content: Outlier detection is a well established area of statistics but most of the existing outlier detection techniques are designed for applications where the entire dataset is available for random access. A typical outlier detection technique constructs a standard data distribution or model and identifies the deviated data points from the model as outliers. 
Evidently these techniques are not suitable for online data streams where the entire dataset, due to its unbounded volume, is not available for random access. Moreover, the data distribution in data streams changes over time, which challenges the existing outlier detection techniques that assume a constant standard data distribution for the entire dataset. In addition, data streams are characterized by uncertainty, which imposes further complexity. In this paper we propose an adaptive, online outlier detection technique addressing the aforementioned characteristics of data streams, called Adaptive Outlier Detection for Data Streams (A-ODDS), which identifies outliers with respect to all the received data points as well as temporally close data points. The temporally close data points are selected based on time and change of data distribution. We also present an efficient and online implementation of the technique and a performance study showing the superiority of A-ODDS over existing techniques in terms of accuracy and execution time on a real-life dataset collected from meteorological applications. --- paper_title: A Big Data Online Cleaning Algorithm Based on Dynamic Outlier Detection paper_content: To effectively clean large-scale, mixed and inaccurate monitoring or collective data, reduce the cost of data caching and ensure consistent deviation detection on the timing data of each cycle, a big data online cleaning algorithm based on dynamic outlier detection is proposed. The cleaning method improves density-based local outlier detection by uniformly sampling clusters to dilute the Euclidean distance matrix and by carrying some corrections over into the next cleaning cycle, which avoids a single sample biasing the overall cleaning and reduces the amount of computation within the stable cleaning period, greatly increasing speed. Finally, a distributed implementation of the online cleaning algorithm on the Hadoop platform is presented. --- paper_title: Schema Extraction and Structural Outlier Detection for JSON-based NoSQL Data Stores. paper_content: Although most NoSQL Data Stores are schema-less, information on the structural properties of the persisted data is nevertheless essential during application development. Otherwise, accessing the data becomes simply impractical. In this paper, we introduce an algorithm for schema extraction that is operating outside of the NoSQL data store. Our method is specifically targeted at semi-structured data persisted in NoSQL stores, e.g., in JSON format. Rather than designing the schema up front, extracting a schema in hindsight can be seen as a reverse-engineering step. Based on the extracted schema information, we propose a set of similarity measures that capture the degree of heterogeneity of JSON data and which reveal structural outliers in the data. We evaluate our implementation on two real-life datasets: a database from the Wendelstein 7-X project and Web Performance Data. --- paper_title: Distributed Top-N local outlier detection in big data paper_content: The concept of Top-N local outlier that focuses on the detection of the N points with the largest Local Outlier Factor (LOF) score has been shown to be very effective for identifying outliers in big datasets. However, detecting Top-N local outliers is computationally expensive, since the computation of LOF scores for all data points requires a huge number of high complexity k-nearest neighbor (kNN) searches.
In this work, we thus present the first distributed solution to tackle this problem of Top-N local outlier detection (DTOLF). First, DTOLF features an innovative safe elimination strategy that efficiently identifies dually-safe points, namely those that are guaranteed to (1) not be classified as Top-N outliers and (2) not be needed as neighbors of points residing on other machines. Therefore, it effectively minimizes both the processing and communication costs of the Top-N outlier detection process. Further, based on the well-accepted observation that strong correlations among attributes are prevalent in real world datasets, we propose correlation-aware optimization strategies that ensure the effectiveness of grid-based partitioning and of the safe elimination strategy in multi-dimensional datasets. Our extensive experimental evaluation on OpenStreetMap, SDSS, and TIGER datasets demonstrates the effectiveness of DTOLF — up to 10 times faster than the alternative methods and scaling to terabyte level datasets. --- paper_title: An efficient algorithm for distributed density-based outlier detection on big data paper_content: The outlier detection is a popular issue in the area of data management and multimedia analysis, and it can be used in many applications such as detection of noisy images, credit card fraud detection, network intrusion detection. The density-based outlier is an important definition of outlier, whose target is to compute a Local Outlier Factor (LOF) for each tuple in a data set to represent the degree of this tuple to be an outlier. It shows several significant advantages comparing with other existing definitions. This paper focuses on the problem of distributed density-based outlier detection for large-scale data. First, we propose a Gird-Based Partition algorithm (GBP) as a data preparation method. GBP first splits the data set into several grids, and then allocates these grids to the datanodes in a distributed environment. Second, we propose a Distributed LOF Computing method (DLC) for detecting density-based outliers in parallel, which only needs a small amount of network communications. At last, the efficiency and effectiveness of the proposed approaches are verified through a series of simulation experiments. --- paper_title: A data stream outlier detection algorithm based on grid paper_content: The main aim of data stream outlier detection is to find the data stream outliers in rational time accurately. The existing outlier detection algorithms can find outliers in static data sets efficiently, but they are inapplicable for the dynamic data stream, and cannot find the abnormal data effectively. Due to the requirements of real-time detection, dynamic adjustment and the inapplicability of existing algorithms on data stream outlier detection, we propose a new data stream outlier detection algorithm, ODGrid, which can find the abnormal data in data stream in real time and adjust the detection results dynamically. According to the experiments on real datasets and synthetic datasets, ODGrid is superior to the existing data stream outlier detection algorithms, and it has good scalability to the dimensionality of data space. --- paper_title: Automatic outlier detection for time series: an application to sensor data paper_content: In this article we consider the problem of detecting unusual values or outliers from time series data where the process by which the data are created is difficult to model. 
The main consideration is the fact that data closer in time are more correlated to each other than those farther apart. We propose two variations of a method that uses the median from a neighborhood of a data point and a threshold value to compare the difference between the median and the observed data value. Both variations of the method are fast and can be used for data streams that occur in quick succession such as sensor data on an airplane. --- paper_title: In pursuit of outliers in multi-dimensional data streams paper_content: Among many Big Data applications are those that deal with data streams. A data stream is a sequence of data points with timestamps that possesses the properties of transiency, infiniteness, uncertainty, concept drift, and multi-dimensionality. In this paper we propose an outlier detection technique called Orion that addresses all the characteristics of data streams. Orion looks for a projected dimension of multi-dimensional data points with the help of an evolutionary algorithm, and identifies a data point as an outlier if it resides in a low-density region in that dimension. Experiments comparing Orion with existing techniques using both real and synthetic datasets show that Orion achieves an average of 7X the precision, 5X the recall, and a competitive execution time compared to existing techniques. --- paper_title: Contextual anomaly detection framework for big sensor data paper_content: The ability to detect and process anomalies for Big Data in real-time is a difficult task. The volume and velocity of the data within many systems makes it difficult for typical algorithms to scale and retain their real-time characteristics. The pervasiveness of data, combined with the problem that many existing algorithms only consider the content of the data source, e.g. a sensor reading itself, without concern for its context, leaves room for potential improvement. The proposed work defines a contextual anomaly detection framework. It is composed of two distinct steps: content detection and context detection. The content detector is used to determine anomalies in real-time, while possibly, and likely, identifying false positives. The context detector is used to prune the output of the content detector, identifying those anomalies which are considered both content and contextually anomalous. The context detector utilizes the concept of profiles, which are groups of similarly grouped data points generated by a multivariate clustering algorithm. The research has been evaluated against two real-world sensor datasets provided by a local company in Brampton, Canada. Additionally, the framework has been evaluated against the open-source Dodgers dataset, available at the UCI machine learning repository, and against the R statistical toolbox. --- paper_title: Ensemble stream model for data-cleaning in sensor networks paper_content: Ensemble stream modeling and data-cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism which seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events to eliminate the noises that are uncorrelated, and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble which has the desired quality.
The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction such as a bush or natural forest-fire event, we assume the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing. The reasons for this are twofold: one, the histogram of fire activity is highly skewed; two, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and accuracy, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use information purity of the target leaf-nodes to learn higher order features. A sensitive variance measure such as the F-test is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. The ensemble framework for data-cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensors led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today. --- paper_title: Contextual Anomaly Detection in Big Sensor Data paper_content: Performing predictive modelling, such as anomaly detection, in Big Data is a difficult task. This problem is compounded as more and more sources of Big Data are generated from environmental sensors, logging applications, and the Internet of Things. Further, most current techniques for anomaly detection only consider the content of the data source, i.e. the data itself, without concern for the context of the data. As data becomes more complex it is increasingly important to bias anomaly detection techniques for the context, whether it is spatial, temporal, or semantic. The work proposed in this paper outlines a contextual anomaly detection technique for use in streaming sensor networks. The technique uses a well-defined content anomaly detection algorithm for real-time point anomaly detection. Additionally, we present a post-processing context-aware anomaly detection algorithm based on sensor profiles, which are groups of contextually similar sensors generated by a multivariate clustering algorithm. Our proposed research has been implemented and evaluated with real-world data provided by Powersmiths, located in Brampton, Ontario, Canada. --- paper_title: Cleaning Environmental Sensing Data Streams Based on Individual Sensor Reliability paper_content: Environmental sensing is becoming a significant way for understanding and transforming the environment, given recent technology advances in the Internet of Things (IoT). Current environmental sensing projects typically deploy commodity sensors, which are known to be unreliable and prone to produce noisy and erroneous data. Unfortunately, the accuracy of current cleaning techniques based on mean or median prediction is unsatisfactory.
In this paper, we propose a cleaning method based on incrementally adjusted individual sensor reliabilities, called influence mean cleaning (IMC). By incrementally adjusting sensor reliabilities, our approach can properly discover latent sensor reliability values in a data stream, and improve reliability-weighted prediction even in a sensor network with changing conditions. The experimental results based on both synthetic and real datasets show that our approach achieves higher accuracy than the mean and median-based approaches after some initial adjustment iterations. --- paper_title: Data Stream Quality Evaluation for the Generation of Alarms in the Health Domain paper_content: Abstract The use of sensors has had an enormous increment in the last years, becoming a valuable tool in many different areas. In this kind of scenario, the quality of data becomes an extremely important issue; however, not much attention has been paid to this specific topic, with only a few existing works that focus on it. In this paper, we present a proposal for managing data streams from sensors that are installed in patients’ homes in order to monitor their health. It focuses on processing the sensors’ data streams, taking into account data quality. In order to achieve this, a data quality model for this kind of data streams and an architecture for the monitoring system are proposed. Moreover, our work introduces a mechanism for avoiding false alarms generated by data quality problems. --- paper_title: Computing data quality indicators on Big Data streams using a CEP paper_content: Big Data is often referred to as the 3Vs: Volume, Velocity and Variety. A 4th V (validity) was introduced to address the quality dimension. Poor data quality can be costly, lead to breaks in processes and invalidate the company's efforts on regulatory compliance. In order to process data streams in real time, a new technology called CEP (complex event processing) was developed. In France, the current deployment of smart meters will generate massive electricity consumption data. In this work, we developed a diagnostic approach to compute generic quality indicators of smart meter data streams on the fly. This solution is based on Tibco StreamBase CEP. Visualization tools were also developed in order to give a better understanding of the inter-relation between quality issues and geographical/temporal dimensions. According to the application purpose, two visualization methods can be loaded: (1) StreamBase LiveView is used to visualize quality indicators in real time; and (2) a Web application provides a posteriori and geographical analysis of the quality indicators which are plotted on a map within a color scale (lighter colors indicate good quality and darker colors indicate poor quality). In future works, new quality indicators could be added to the solution which can be applied in an operational context in order to monitor data quality from smart meters. --- paper_title: Representing Data Quality in Sensor Data Streaming Environments paper_content: Sensors in smart-item environments capture data about product conditions and usage to support business decisions as well as production automation processes. A challenging issue in this application area is the restricted quality of sensor data due to limited sensor precision and sensor failures. Moreover, data stream processing to meet resource constraints in streaming environments introduces additional noise and decreases the data quality. 
In order to avoid wrong business decisions due to dirty data, quality characteristics have to be captured, processed, and provided to the respective business task. However, the issue of how to efficiently provide applications with information about data quality is still an open research problem. In this article, we address this problem by presenting a flexible model for the propagation and processing of data quality. The comprehensive analysis of common data stream processing operators and their impact on data quality allows a fruitful data evaluation and diminishes incorrect business decisions. Further, we propose the data quality model control to adapt the data quality granularity to the data stream interestingness. --- paper_title: Representing Data Quality for Streaming and Static Data paper_content: In smart item environments, multitude of sensors are applied to capture data about product conditions and usage to guide business decisions as well as production automation processes. A big issue in this application area is posed by the restricted quality of sensor data due to limited sensor precision as well as sensor failures and malfunctions. Decisions derived on incorrect or misleading sensor data are likely to be faulty. The issue of how to efficiently provide applications with information about data quality (DQ) is still an open research problem. In this paper, we present a flexible model for the efficient transfer and management of data quality for streaming as well as static data. We propose a data stream metamodel to allow for the propagation of data quality from the sensors up to the respective business application without a significant overhead of data. Furthermore, we present the extension of the traditional RDBMS metamodel to permit the persistent storage of data quality information in a relational database. Finally, we demonstrate a data quality metadata mapping to close the gap between the streaming environment and the target database. Our solution maintains a flexible number of DQ dimensions and supports applications directly consuming streaming data or processing data filed in a persistent database. --- paper_title: ONTOLOGY-BASED DATA QUALITY FRAMEWORK FOR DATA STREAM APPLICATIONS paper_content: Data Stream Management Systems (DSMS) have been proposed to address the challenges of applications which produce continuous, rapid streams of data that have to be processed in real-time. Data quality (DQ) plays an important role in DSMS as there is usually a trade-off between accuracy and consistency on the one hand, and timeliness and completeness on the other hand. Previous work on data quality in DSMS has focused only on specific aspects of DQ. In this paper, we present a flexible, holistic ontology-based data quality framework for data stream applications. Our DQ model is based on a threefold notion of DQ. First, content-based evaluation of DQ uses semantic rules which can be user- defined in an extensible ontology. Second, query-based evaluation adds DQ information to the query results and updates it while queries are being processed. Third, the application-based evaluation can use any kind of function which computes an application-specific DQ value. The whole DQ process is driven by the metadata managed in an ontology which provides a semantically clear definition of the DQ features of the DSMS. 
The evaluation of our approach in two case studies in the domain of traffic information systems has shown that our framework provides the required flexibility, extensibility, and performance for DQ management in DSMS. --- paper_title: Ontology-Based Data Quality Management for Data Streams paper_content: Data Stream Management Systems (DSMS) provide real-time data processing in an effective way, but there is always a tradeoff between data quality (DQ) and performance. We propose an ontology-based data quality framework for relational DSMS that includes DQ measurement and monitoring in a transparent, modular, and flexible way. We follow a threefold approach that takes the characteristics of relational data stream management for DQ metrics into account. While (1) Query Metrics respect changes in data quality due to query operations, (2) Content Metrics allow the semantic evaluation of data in the streams. Finally, (3) Application Metrics allow easy user-defined computation of data quality values to account for application specifics. Additionally, a quality monitor allows us to observe data quality values and take counteractions to balance data quality and performance. The framework has been designed along a DQ management methodology suited for data streams. It has been evaluated in the domains of transportation systems and health monitoring. --- paper_title: A model-driven framework for data quality management in the Internet of Things paper_content: The internet of Things (IoT) is a data stream environment where a large scale deployment of smart things continuously report readings. These data streams are then consumed by pervasive applications, i.e. data consumers, to offer ubiquitous services. The data quality (DQ) is a key criteria for IoT data consumers especially when considering the inherent uncertainty of sensor-enabled data. However, DQ is a highly subjective concept and there is no standard agreement on how to determine “good” data. Moreover, the combinations of considered measured attributes and associated DQ information are as diverse as the needs of data consumers. This introduces expensive overheads for developers tasked with building DQ-aware IoT software systems which are capable of managing their own DQ information. To effectively handle these various perceptions of DQ, we propose a Model-Driven Architecture-based approach that allows each developer to easily and efficiently express, through models and other provided resources, the data consumer’s vision of DQ and its requirements using an easy-to-use graphical model editor. The defined DQ specifications are then automatically transformed to generate an entire infrastructure for DQ management that fits perfectly the data consumer’s requirements. We demonstrate the flexibility and the efficiency of our approach by generating two DQ management infrastructures built on top of different platforms and testing them through a real life data stream environment scenario. --- paper_title: Data quality assessment for on-line monitoring and measuring system of power quality based on big data and data provenance theory paper_content: Currently, on-line monitoring and measuring system of power quality has accumulated a huge amount of data. In the age of big data, those data integrated from various systems will face big data application problems. 
This paper proposes a data quality assessment method for on-line monitoring and measuring systems of power quality, based on big data and data provenance, to assess the integrity, redundancy, accuracy, timeliness, intelligence and consistency of data sets and single data items. A specific assessment rule that conforms to the situation of the on-line monitoring and measuring system of power quality will be devised to find data quality problems. Thus it will provide strong data support for big data applications of power quality. --- paper_title: Incorporating quality aspects in sensor data streams paper_content: Sensors are increasingly embedded into physical products in order to capture data about their conditions and usage for decision making in business applications. However, a major issue for such applications is the limited quality of the captured data due to inherently restricted precision and performance of the sensors. Moreover, the data quality is further decreased by data processing to meet resource constraints in streaming environments and ultimately influences business decisions. The issue of how to efficiently provide applications with information about data quality (DQ) is still an open research problem. In my Ph.D. thesis, I address this problem by developing a system to provide business applications with accurate information on data quality. Furthermore, the system will be able to incorporate and guarantee user-defined data quality levels. In this paper, I will present the major results from my research so far. This includes a novel jumping-window-based approach for the efficient transfer of data quality information as well as a flexible metamodel for storage and propagation of data quality. The comprehensive analysis of common data processing operators w.r.t. their impact on data quality allows a fruitful knowledge evaluation and thus diminishes incorrect business decisions. --- paper_title: An Hybrid Approach to Quality Evaluation across Big Data Value Chain paper_content: While the potential benefits of Big Data adoption are significant, and some initial successes have already been realized, there remain many research and technical challenges that must be addressed to fully realize this potential. Big Data processing, storage and analytics, of course, are major challenges that are most easily recognized. However, there are additional challenges related, for instance, to Big Data collection, integration, and quality enforcement. This paper proposes a hybrid approach to Big Data quality evaluation across the Big Data value chain. It consists of first assessing the quality of Big Data itself, which involves processes such as cleansing, filtering and approximation, and then assessing the quality of the processes handling this Big Data, for example the processing and analytics processes. We conduct a set of experiments to evaluate the quality of data prior to and after its pre-processing, and the quality of the pre-processing and processing on a large dataset. Quality metrics have been measured to assess three Big Data quality dimensions: accuracy, completeness, and consistency. The results proved that the combination of data-driven and process-driven quality evaluation leads to improved quality enforcement across the Big Data value chain. Hence, we recorded high prediction accuracy and low processing time after evaluating 6 well-known classification algorithms as part of the processing and analytics phase of the Big Data value chain.
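The three quality dimensions measured in the hybrid approach above (accuracy, completeness, consistency) can be given simple operational definitions in code. The sketch below uses illustrative definitions, not the paper's exact metrics: completeness as the non-missing ratio, an accuracy proxy as conformance to per-field domain rules, and consistency as satisfaction of cross-field rules.

```python
# Minimal sketch (illustrative definitions, not the paper's exact metrics) of
# three record-level quality indicators: completeness (non-missing ratio),
# an accuracy proxy (values satisfying per-field domain rules), and
# consistency (cross-field rules that must hold simultaneously).
from typing import Any, Callable, Dict, List

Record = Dict[str, Any]

def completeness(records: List[Record], fields: List[str]) -> float:
    total = len(records) * len(fields)
    filled = sum(1 for r in records for f in fields if r.get(f) not in (None, ""))
    return filled / total if total else 1.0

def accuracy(records: List[Record], domain_rules: Dict[str, Callable[[Any], bool]]) -> float:
    checks = ok = 0
    for r in records:
        for field, rule in domain_rules.items():
            if r.get(field) is not None:
                checks += 1
                ok += rule(r[field])
    return ok / checks if checks else 1.0

def consistency(records: List[Record], cross_rules: List[Callable[[Record], bool]]) -> float:
    checks = len(records) * len(cross_rules)
    ok = sum(1 for r in records for rule in cross_rules if rule(r))
    return ok / checks if checks else 1.0

if __name__ == "__main__":
    recs = [{"temp": 21.5, "unit": "C", "max_temp": 30.0},
            {"temp": None, "unit": "C", "max_temp": 25.0},
            {"temp": 80.0, "unit": "C", "max_temp": 60.0}]
    print(completeness(recs, ["temp", "unit"]))
    print(accuracy(recs, {"temp": lambda v: -40.0 <= v <= 60.0}))
    print(consistency(recs, [lambda r: r["temp"] is None or r["temp"] <= r["max_temp"]]))
```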
--- paper_title: A Big Data Framework for Electric Power Data Quality Assessment paper_content: Since low-quality data may influence the effectiveness and reliability of applications, data quality is required to be guaranteed. Data quality assessment is considered as the foundation of the promotion of data quality, so it is essential to assess the data quality before any other data related activities. In the electric power industry, more and more electric power data is continuously accumulated, and many electric power applications have been developed based on these data. In China, the power grid has many special characteristics, so traditional big data assessment frameworks cannot be directly applied. Therefore, a big data framework for electric power data quality assessment is proposed. Based on big data techniques, the framework can accumulate both the real-time data and the history data, provide an integrated computation environment for electric power big data assessment, and support the storage of different types of data. --- paper_title: An in-network data cleaning approach for wireless sensor networks paper_content: Abstract Wireless Sensor Networks (WSNs) are widely used for monitoring physical happenings of the environment. However, the data gathered by the WSNs may be inaccurate and unreliable due to power exhaustion, noise and other reasons. Unnecessary data such as erroneous data and redundant data transmission causes a lot of extra energy consumption. To improve the data reliability and reduce the energy consumption, we proposed an in-network processing architecture for data cleaning, which divides the task into four stages implemented in different nodes respectively. This strategy guaranteed that the cleaning algorithms were computationally lightweight in local nodes and energy-efficient due to almost no communication overhead. In addition, we presented the detection algorithms for data faults and event outliers, which were conducted by utilizing the related attributes from the local sensor node and the cooperation with its relaying neighbor. Experimental results show that our proposed approach is accurate and energy-... --- paper_title: Adaptive Pre-processing and Regression of Weather Data paper_content: With the evolution of data and increasing popularity of IoT (Internet of Things), stream data mining has gained immense popularity. Researchers and developers are trying to analyze data patterns obtained from various devices. Stream data have several characteristics, the most important being its huge volume and high velocity. Although a lot of research is being conducted in order to develop more efficient stream data mining techniques, pre-processing of stream data is an area that is under-studied. Real time applications generate data which is rather noisy and contains missing values. Apart from this, there is the issue of data evolution, which is a concern when dealing with stream data. To deal with the evolution of data, the proposed solution offers a hybrid of preprocessing techniques which are adaptive in nature. As a result of the study, an adaptive preprocessing and learning approach is implemented. The case study with sensor weather data demonstrates the results and accuracy of the proposed solution. --- paper_title: Bleach: A Distributed Stream Data Cleaning System paper_content: Existing scalable data cleaning approaches have focused on batch data cleaning.
However, batch data cleaning is not suitable for streaming big data systems, in which dynamic data is generated continuously. Despite the increasing popularity of stream-processing systems, few stream data cleaning techniques have been proposed so far. In this paper, we bridge this gap by addressing the problem of rule-based stream data cleaning, which sets stringent requirements on latency, rule dynamics and ability to cope with the continuous nature of data streams. We design a system, called Bleach, which achieves real-time violation detection and data repair on a dirty data stream. Bleach relies on efficient, compact and distributed data structures to maintain the necessary state to repair data. Additionally, it supports rule dynamics and uses a "cumulative" sliding window operation to improve cleaning accuracy. We evaluate a prototype of Bleach using both synthetic and real data streams and experimentally validate its high throughput, low latency and high cleaning accuracy, which are preserved even with rule dynamics. --- paper_title: Missing value imputation using a fuzzy clustering-based EM approach paper_content: Data preprocessing and cleansing play a vital role in data mining by ensuring good quality of data. Data-cleansing tasks include imputation of missing values, identification of outliers, and identification and correction of noisy data. In this paper, we present a novel technique called A Fuzzy Expectation Maximization and Fuzzy Clustering-based Missing Value Imputation Framework for Data Pre-processing (FEMI). It imputes numerical and categorical missing values by making an educated guess based on records that are similar to the record having a missing value. While identifying a group of similar records and making a guess based on the group, it applies a fuzzy clustering approach and our novel fuzzy expectation maximization algorithm. We evaluate FEMI on eight publicly available natural data sets by comparing its performance with the performance of five high-quality existing techniques, namely EMI, GkNN, FKMI, SVR and IBLLS. We use thirty-two types (patterns) of missing values for each data set. Two evaluation criteria namely root mean squared error and mean absolute error are used. Our experimental results indicate (according to a confidence interval and t-test analysis) that FEMI performs significantly better than EMI, GkNN, FKMI, SVR, and IBLLS. --- paper_title: Schema Extraction and Structural Outlier Detection for JSON-based NoSQL Data Stores. paper_content: Although most NoSQL Data Stores are schema-less, information on the structural properties of the persisted data is nevertheless essential during application development. Otherwise, accessing the data becomes simply impractical. In this paper, we introduce an algorithm for schema extraction that is operating outside of the NoSQL data store. Our method is specifically targeted at semi-structured data persisted in NoSQL stores, e.g., in JSON format. Rather than designing the schema up front, extracting a schema in hindsight can be seen as a reverse-engineering step. Based on the extracted schema information, we propose a set of similarity measures that capture the degree of heterogeneity of JSON data and which reveal structural outliers in the data. We evaluate our implementation on two real-life datasets: a database from the Wendelstein 7-X project and Web Performance Data.
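The schema-extraction idea in the entry above can be illustrated by reverse-engineering a structural signature from JSON documents and flagging documents whose field-path set diverges from the common paths. The sketch below is not the authors' algorithm; the path encoding, the majority-based "schema", and the Jaccard threshold are all assumptions.

```python
# Minimal sketch (not the cited paper's algorithm) of reverse-engineering a
# structural "schema signature" from JSON documents and flagging structural
# outliers via a Jaccard distance between a document's field paths and the
# paths shared by the majority of documents.
import json
from typing import Any, Dict, Set

def paths(doc: Any, prefix: str = "") -> Set[str]:
    """Collect dotted field paths with their JSON types, e.g. 'user.age:int'."""
    if isinstance(doc, dict):
        out = set()
        for k, v in doc.items():
            out |= paths(v, f"{prefix}{k}." if isinstance(v, dict) else f"{prefix}{k}")
        return out
    return {f"{prefix}:{type(doc).__name__}"}

def structural_outliers(docs, threshold=0.5):
    sigs = [paths(d) for d in docs]
    # "schema" = paths present in at least half of the documents
    counts: Dict[str, int] = {}
    for s in sigs:
        for p in s:
            counts[p] = counts.get(p, 0) + 1
    common = {p for p, c in counts.items() if c >= len(docs) / 2}
    outliers = []
    for d, s in zip(docs, sigs):
        jaccard = len(s & common) / len(s | common) if s | common else 1.0
        if jaccard < threshold:
            outliers.append(d)
    return common, outliers

if __name__ == "__main__":
    raw = ['{"id": 1, "user": {"name": "a", "age": 30}}',
           '{"id": 2, "user": {"name": "b", "age": 41}}',
           '{"id": 3, "payload": [1, 2, 3]}']
    docs = [json.loads(r) for r in raw]
    schema, odd = structural_outliers(docs)
    print(schema)
    print(odd)
```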
--- paper_title: Distributed Top-N local outlier detection in big data paper_content: The concept of Top-N local outlier that focuses on the detection of the N points with the largest Local Outlier Factor (LOF) score has been shown to be very effective for identifying outliers in big datasets. However, detecting Top-N local outliers is computationally expensive, since the computation of LOF scores for all data points requires a huge number of high complexity k-nearest neighbor (kNN) searches. In this work, we thus present the first distributed solution to tackle this problem of Top-N local outlier detection (DTOLF). First, DTOLF features an innovative safe elimination strategy that efficiently identifies dually-safe points, namely those that are guaranteed to (1) not be classified as Top-N outliers and (2) not be needed as neighbors of points residing on other machines. Therefore, it effectively minimizes both the processing and communication costs of the Top-N outlier detection process. Further, based on the well-accepted observation that strong correlations among attributes are prevalent in real world datasets, we propose correlation-aware optimization strategies that ensure the effectiveness of grid-based partitioning and of the safe elimination strategy in multi-dimensional datasets. Our extensive experimental evaluation on OpenStreetMap, SDSS, and TIGER datasets demonstrates the effectiveness of DTOLF — up to 10 times faster than the alternative methods and scaling to terabyte level datasets. --- paper_title: A Data Cleaning Model for Electric Power Big Data Based on Spark Framework paper_content: The data cleaning of electrical power big data can improve the correctness, the completeness, the consistency and the reliability of the data. Aiming at the difficulties of the extracting of the unified anomaly detection pattern and the low accuracy and continuity of the anomaly data correction in the process of the electrical power big data cleaning, the data cleaning model of the electrical power big data based on Spark is proposed. Firstly, the normal clusters and the corresponding boundary samples are obtained by the improved CURE clustering algorithm. Then, the anomaly data identification algorithm based on boundary samples is designed. Finally, the anomaly data modification is realized by using exponential weighting moving mean value. The high efficiency and accuracy is proved by the experiment of the data cleaning of the wind power generation monitoring data from the wind power station. --- paper_title: Context-aware data quality assessment for big data paper_content: Abstract Big data changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such amount of data can create a real value only if combined with quality: good decisions and actions are the results of correct, reliable and complete data. In such a scenario, methods and techniques for the Data Quality assessment can support the identification of suitable data to process. If for traditional database numerous assessment methods are proposed, in the Big Data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume and velocity issues. 
In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and context in which data have to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest to focus the Data Quality assessment only on a portion of the dataset and to take into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a Data Quality adapter module, which selects the best configuration for the Data Quality assessment based on the user main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study. --- paper_title: Enhancing data quality by cleaning inconsistent big RDF data paper_content: We address the problem of dealing with inconsistencies in fusion of big data sources using Resource Description Framework (RDF) and ontologies. We propose a scalable approach ensuring data quality for query answering over big RDF data in a distributed way on a Spark ecosystem. In so doing, the cleaning inconsistent big RDF data approach is built on the following steps (1) modeling consistency rules to detect the inconsistency triples even if it is implicitly hidden including inference and inconsistent rules (2) detecting inconsistency through rule evaluation based on Apache Spark framework to discover the minimally sub-set of inconsistent triples (3) cleaning the inconsistency through finding the best repair for consistent query answering. --- paper_title: Quality awareness for a Successful Big Data Exploitation paper_content: The combination of data and technology is having a high impact on the way we live. The world is getting smarter thanks to the quantity of collected and analyzed data. However, it is necessary to consider that such amount of data is continuously increasing and it is necessary to deal with novel requirements related to variety, volume, velocity, and veracity issues. In this paper we focus on veracity that is related to the presence of uncertain or imprecise data: errors, missing or invalid data can compromise the usefulness of the collected values. In such a scenario, new methods and techniques able to evaluate the quality of the available data are needed. In fact, the literature provides many data quality assessment and improvement techniques, especially for structured data, but in the Big Data era new algorithms have to be designed. We aim to provide an overview of the issues and challenges related to Data Quality assessment in the Big Data scenario. We also propose a possible solution developed by considering a smart city case study and we describe the lessons learned in the design and implementation phases. --- paper_title: Big Data Pre-Processing: Closing the Data Quality Enforcement Loop paper_content: In the Big Data Era, data is the core for any governmental, institutional, and private organization. Efforts were geared towards extracting highly valuable insights that cannot happen if data is of poor quality. Therefore, data quality (DQ) is considered as a key element in Big data processing phase. In this stage, low quality data is not penetrated to the Big Data value chain. 
This paper addresses the data quality rules discovery (DQR) after the evaluation of quality and prior to Big Data pre-processing. We propose a DQR discovery model to enhance and accurately target the pre-processing activities based on quality requirements. We defined a set of pre-processing activities associated with data quality dimensions (DQD's) to automatize the DQR generation process. Rule optimization is applied to validated rules to avoid multi-pass pre-processing activities and to eliminate duplicate rules. Conducted experiments showed increased quality scores after applying the discovered and optimized DQR's on data.
--- paper_title: Evaluating the Quality of Social Media Data in Big Data Architecture paper_content: The use of freely available online data is rapidly increasing, as companies have detected the possibilities and the value of these data in their businesses. In particular, data from social media are seen as interesting as they can, when properly treated, assist in achieving customer insight into business decision making. However, the unstructured and uncertain nature of this kind of big data presents a new kind of challenge: how to evaluate the quality of data and manage the value of data within a big data architecture? This paper contributes to addressing this challenge by introducing a new architectural solution to evaluate and manage the quality of social media data in each processing phase of the big data pipeline. The proposed solution improves business decision making by providing real-time, validated data for the user. The solution is validated with an industrial case example, in which the customer insight is extracted from social media data in order to determine the customer satisfaction regarding the quality of a product.
--- paper_title: Quality management architecture for social media data paper_content: Social media data has provided various insights into the behaviour of consumers and businesses. However, extracted data may be erroneous, or could have originated from a malicious source. Thus, quality of social media should be managed. Also, it should be understood how data quality can be managed across a big data pipeline, which may consist of several processing and analysis phases. The contribution of this paper is evaluation of data quality management architecture for social media data. The theoretical concepts based on previous work have been implemented for data quality evaluation of Twitter-based data sets. Particularly, reference architecture for quality management in social media data has been extended and evaluated based on the implementation architecture. Experiments indicate that 150–800 tweets/s can be evaluated with two cloud nodes depending on the configuration.
--- paper_title: Big Data Pre-processing: A Quality Framework paper_content: With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation that happened in the pre-processing phase. We evaluate the data quality selection module using a large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of the Big Data processing lifecycle since it significantly saves on costs and enables accurate data analysis.
--- paper_title: Data quality analysis and cleaning strategy for wireless sensor networks paper_content: The quality of data in wireless sensor networks has a significant impact on decision support, and data cleaning is an effective way to improve data quality. However, if the data cleaning strategies are not correctly designed, it might result in an unsatisfactory cleaning effect with increased system cleaning costs. Initially, data quality evaluation indicators and their measurement methods in wireless sensor networks were introduced. We then explored the relationships between the different indicators used in the quality assessment. Finally, a data cleaning strategy for wireless sensor networks based on the relationship between data quality indicators was proposed by comparing and analyzing data cleaning schemes with different orders. The experimental results showed that the proposed data cleaning strategy can effectively improve data availability and achieve a better cleaning effect in wireless sensor networks for the same cleaning cost.
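As a purely illustrative companion to the quality-dimension-driven pre-processing described in the entries above (it is not code from any of the cited frameworks), the short Python sketch below computes three common indicators — completeness, validity and uniqueness — for a batch of records; the field names, the valid range and the trigger thresholds mentioned in the comments are hypothetical.

def quality_profile(records, required=("id", "timestamp", "value"), valid_range=(0.0, 100.0)):
    # records: list of dicts; returns indicator scores in [0, 1]
    n = len(records) or 1
    complete = sum(all(r.get(f) is not None for f in required) for r in records)
    lo, hi = valid_range
    valid = sum(1 for r in records
                if isinstance(r.get("value"), (int, float)) and lo <= r["value"] <= hi)
    unique = len({r["id"] for r in records if r.get("id") is not None})
    return {"completeness": complete / n, "validity": valid / n, "uniqueness": unique / n}

A caller could, for example, trigger imputation when completeness drops below 0.95 or range repair when validity degrades, mirroring the mapping from quality dimensions to cleaning activities that these frameworks describe.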
--- paper_title: Data Stream Quality Evaluation for the Generation of Alarms in the Health Domain paper_content: The use of sensors has increased enormously in recent years, becoming a valuable tool in many different areas. In this kind of scenario, the quality of data becomes an extremely important issue; however, not much attention has been paid to this specific topic, with only a few existing works that focus on it. In this paper, we present a proposal for managing data streams from sensors that are installed in patients' homes in order to monitor their health. It focuses on processing the sensors' data streams, taking into account data quality. In order to achieve this, a data quality model for this kind of data streams and an architecture for the monitoring system are proposed. Moreover, our work introduces a mechanism for avoiding false alarms generated by data quality problems.
--- paper_title: Computing data quality indicators on Big Data streams using a CEP paper_content: Big Data is often referred to as the 3Vs: Volume, Velocity and Variety. A 4th V (validity) was introduced to address the quality dimension. Poor data quality can be costly, lead to breaks in processes and invalidate the company's efforts on regulatory compliance. In order to process data streams in real time, a new technology called CEP (complex event processing) was developed. In France, the current deployment of smart meters will generate massive electricity consumption data. In this work, we developed a diagnostic approach to compute generic quality indicators of smart meter data streams on the fly. This solution is based on Tibco StreamBase CEP. Visualization tools were also developed in order to give a better understanding of the inter-relation between quality issues and geographical/temporal dimensions. According to the application purpose, two visualization methods can be loaded: (1) StreamBase LiveView is used to visualize quality indicators in real time; and (2) a Web application provides a posteriori and geographical analysis of the quality indicators which are plotted on a map within a color scale (lighter colors indicate good quality and darker colors indicate poor quality). In future works, new quality indicators could be added to the solution which can be applied in an operational context in order to monitor data quality from smart meters.
--- paper_title: An in-network data cleaning approach for wireless sensor networks paper_content: Wireless Sensor Networks (WSNs) are widely used for monitoring physical happenings of the environment.
However, the data gathered by the WSNs may be inaccurate and unreliable due to power exhaustion, noise and other reasons. Unnecessary data such as erroneous data and redundant data transmission causes a lot of extra energy consumption. To improve the data reliability and reduce the energy consumption, we proposed an in-network processing architecture for data cleaning, which divides the task into four stages implemented in different nodes respectively. This strategy guaranteed the cleaning algorithms were computationally lightweight in local nodes and energy-efficient due to almost no communication overhead. In addition, we presented the detection algorithms for data faults and event outliers, which were conducted by utilizing the related attributes from the local sensor node and the cooperation with its relaying neighbor. Experiment results show that our proposed approach is accurate and energy-... --- paper_title: Adaptive Pre-processing and Regression of Weather Data paper_content: With the evolution of data and increasing popularity of IoT (Internet of Things), stream data mining has gained immense popularity. Researchers and developers are trying to analyze data patterns obtained from various devices. Stream data have several characteristics, the most important being its huge volume and high velocity. Although, a lot of research is being conducted in order to develop more efficient stream data mining techniques, pre-processing of stream data is an area that is under-studied. Real time applications generate data which is rather noisy and contain missing values. Apart from this, there is the issue of data evolution, which is a concern when dealing with stream data. To deal with the evolution of data, the proposed solution offers a hybrid of preprocessing techniques which are adaptive in nature. As a result of the study, an adaptive preprocessing and learning approach is implemented. The case study with sensor weather data demonstrates the results and accuracy of the proposed solution. --- paper_title: A software reference architecture for semantic-aware big data systems paper_content: Abstract Context: Big Data systems are a class of software systems that ingest, store, process and serve massive amounts of heterogeneous data, from multiple sources. Despite their undisputed impact in current society, their engineering is still in its infancy and companies find it difficult to adopt them due to their inherent complexity. Existing attempts to provide architectural guidelines for their engineering fail to take into account important Big Data characteristics, such as the management, evolution and quality of the data. Objective: In this paper, we follow software engineering principles to refine the λ -architecture, a reference model for Big Data systems, and use it as seed to create Bolster , a software reference architecture (SRA) for semantic-aware Big Data systems. Method: By including a new layer into the λ -architecture, the Semantic Layer, Bolster is capable of handling the most representative Big Data characteristics (i.e., Volume, Velocity, Variety, Variability and Veracity). Results: We present the successful implementation of Bolster in three industrial projects, involving five organizations. The validation results show high level of agreement among practitioners from all organizations with respect to standard quality factors. Conclusion: As an SRA, Bolster allows organizations to design concrete architectures tailored to their specific needs. 
A distinguishing feature is that it provides semantic-awareness in Big Data Systems. These are Big Data system implementations that have components to simplify data definition and exploitation. In particular, they leverage metadata (i.e., data describing data) to enable (partial) automation of data exploitation and to aid the user in their decision making processes. This simplification supports the differentiation of responsibilities into cohesive roles enhancing data governance. --- paper_title: Research on real-time outlier detection over big data streams paper_content: ABSTRACTNowadays technological advances have promoted big data streams common in many applications, including mobile internet applications, internet of things, and industry production process. Outl... --- paper_title: A Data Quality in Use Model for Big Data paper_content: Organizations are nowadays immersed in the Big Data Era. Beyond the hype of the concept of Big Data, it is true that something in the way of doing business is really changing. Although some challenges keep being the same as for regular data, with big data, the focus has changed. The reason is due to Big Data is not only data, but also a complete framework including data themselves, storage, formats, and ways of provisioning, processing and analytics. A challenge that becomes even trickier is the one concerning to the management of the quality of big data. More than ever the need for assessing the quality-in-use of big datasets gains importance since the real contribution – business value- of a dataset to a business can be only estimated in its context of use. Although there exists different data quality models to assess the quality of data there still lacks of a quality-in-use model adapted to big data. To fill this gap, and based on ISO 25012 and ISO 25024, we propose the 3Cs model, which is composed of three data quality dimensions for assessing the quality-in-use of big datasets: Contextual Consistency, Operational Consistency and Temporal Consistency. --- paper_title: From Data Quality to Big Data Quality paper_content: This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensor & sensor networks and official statistics. Consequently a set of structural characteristics is identified and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable to map the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. Thus, the framework allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon, through an integrative and theoretical literature review. --- paper_title: Anomaly Pattern Detection on Data Streams paper_content: A data stream is a sequence of data generated continuously over time. A data stream is too big to be saved in memory and its underlying data distribution may change over time. 
Outlier detection aims to find data instances which significantly deviate from the underlying data distribution. While outlier detection is performed at an individual instance level, anomalous pattern detection involves detecting a point in time where the behavior of the data becomes unusual and differs from normal behavior. Most outlier detection methods work in unsupervised mode, where the class labels of the data samples are not known. Alternatively, concept drift detection methods find a drifting point in the streaming data and try to adapt the model to the new emerging pattern. In this paper, we provide a review of outlier detection, anomaly pattern detection and concept drift detection approaches for streaming data.
--- paper_title: Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues paper_content: Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept "Internet of Things" has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed.
--- paper_title: Anomaly Detection Guidelines for Data Streams in Big Data paper_content: Real time data analysis in data streams is a highly challenging area in big data. The surge in big data techniques has recently attracted considerable interest to the detection of significant changes or anomalies in data streams. There is a variety of literature across a number of fields relevant to anomaly detection. The growing number of techniques, from seemingly disconnected areas, prevents a comprehensive review. Many interesting techniques may therefore remain largely unknown to the anomaly detection community at large. The survey presents a compact, but comprehensive overview of diverse strategies for anomaly detection in evolving data streams. A number of recommendations based performance and applicability to use cases are provided. We expect that our classification and recommendations will provide useful guidelines to practitioners in this rapidly evolving field.
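The following minimal Python sketch illustrates the simplest flavor of the streaming point-outlier detection these surveys cover — flagging a reading that deviates from the mean of a sliding window by more than k standard deviations; it is a generic illustration, not an algorithm taken from any of the cited papers, and the window size, warm-up length and threshold k are assumed parameters.

from collections import deque
import math

def rolling_zscore_outliers(values, window_size=100, k=3.0, warmup=10):
    window = deque(maxlen=window_size)        # most recent readings only
    for x in values:
        if len(window) >= warmup:             # need a minimal history before scoring
            mean = sum(window) / len(window)
            var = sum((v - mean) ** 2 for v in window) / len(window)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) > k * std:
                yield x                       # candidate point outlier
        window.append(x)

Density-based, clustering-based and model-based detectors discussed in these surveys follow the same overall pattern of maintaining a bounded summary of recent data and scoring each new arrival against it.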
--- paper_title: Outlier Detection for Temporal Data: A Survey paper_content: In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used. --- paper_title: Analysis and evaluation of outlier detection algorithms in data streams paper_content: Data mining is one of the most exciting fields of research for the researcher. As data is getting digitized, systems are getting connected and integrated, scope of data generation and analytics has increased exponentially. Today, most of the systems generate non-stationary data of huge, size, volume, occurrence speed, fast changing etc. these kinds of data are called data streams. One of the most recent trend i.e. IOT (Internet Of Things) is also promising lots of expectation of people which will ease the use of day to day activities and it could also connect systems and people together. This situation will also lead to generation of data streams, thus present and future scope of data stream mining is highly promising. Characteristics of data stream possess many challenges for the researcher; this makes analytics of such data difficult and also acts as source of inspiration for researcher. Outlier detection plays important role in any application. In this paper we reviewed different techniques of outlier detection for stream data and their issues in detail and presented results of the same. --- paper_title: Outlier Detection Techniques for Wireless Sensor Networks: A Survey paper_content: In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and specific requirements and limitations of the wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for the wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree. 
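As an example of one detector family covered by these surveys (density-based local outlier factors), the snippet below scores a small synthetic dataset with scikit-learn's LocalOutlierFactor and extracts the Top-N most outlying points; it assumes scikit-learn and NumPy are available and is shown only for illustration, not as code from the cited papers.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),    # dense inlier cluster
               rng.uniform(-6, 6, size=(5, 2))])   # a few scattered points

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit_predict(X)                                  # -1 marks predicted outliers, 1 marks inliers
scores = -lof.negative_outlier_factor_              # larger score = more outlying
top_n = np.argsort(scores)[-5:][::-1]               # indices of the Top-5 outliers by LOF score

Distributed variants such as the Top-N approach summarized earlier mainly differ in how the k-nearest-neighbor searches behind these scores are partitioned and pruned across machines.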
--- paper_title: Big data, big data quality problem paper_content: A USAF sponsored MITRE research team undertook four separate, domain-specific case studies about Big Data applications. Those case studies were initial investigations into the question of whether or not data quality issues encountered in Big Data collections are substantially different in cause, manifestation, or detection than those data quality issues encountered in more traditionally sized data collections. The study addresses several factors affecting Big Data Quality at multiple levels, including collection, processing, and storage. Though not unexpected, the key findings of this study reinforce that the primary factors affecting Big Data reside in the limitations and complexities involved with handling Big Data while maintaining its integrity. These concerns are of a higher magnitude than the provenance of the data, the processing, and the tools used to prepare, manipulate, and store the data. Data quality is extremely important for all data analytics problems. From the study's findings, the "truth about Big Data" is there are no fundamentally new DQ issues in Big Data analytics projects. Some DQ issues exhibit return-s-to-scale effects, and become more or less pronounced in Big Data analytics, though. Big Data Quality varies from one type of Big Data to another and from one Big Data technology to another. --- paper_title: Big data preprocessing: methods and prospects paper_content: The massive growth in the scale of data has been observed in recent years being a key factor of the Big Data scenario. Big Data can be defined as high volume, velocity and variety of data that require a new high-performance processing. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The presence of data preprocessing methods for data mining in big data is reviewed in this paper. The definition, characteristics, and categorization of data preprocessing approaches in big data are introduced. The connection between big data and data preprocessing throughout all families of methods and big data technologies are also examined, including a review of the state-of-the-art. In addition, research challenges are discussed, with focus on developments on different big data framework, such as Hadoop, Spark and Flink and the encouragement in devoting substantial research efforts in some families of data preprocessing methods and applications on new big data learning paradigms. --- paper_title: Big Data Validation and Quality Assurance -- Issuses, Challenges, and Needs paper_content: With the fast advance of big data technology and analytics solutions, big data computing and service is becoming a very hot research and application subject in academic research, industry community, and government services. Nevertheless, there are increasing data quality problems resulting in erroneous data costs in enterprises and businesses. Current research seldom discusses how to effectively validate big data to ensure data quality. This paper provides informative discussions for big data validation and quality assurance, including the essential concepts, focuses, and validation process. Moreover, the paper presents a comparison among big data validation tools and several major players in industry are discussed. Furthermore, the primary issues, challenges, and needs are discussed. 
--- paper_title: Big Data Quality: A Survey paper_content: With the advances in communication technologies and the high amount of data generated, collected, and stored, it becomes crucial to manage the quality of this data deluge in an efficient and cost-effective way. The storage, processing, privacy and analytics are the main keys challenging aspects of Big Data that require quality evaluation and monitoring. Quality has been recognized by the Big Data community as an essential facet of its maturity. Yet, it is a crucial practice that should be implemented at the earlier stages of its lifecycle and progressively applied across the other key processes. The earlier we incorporate quality the full benefit we can get from insights. In this paper, we first identify the key challenges that necessitates quality evaluation. We then survey, classify and discuss the most recent work on Big Data management. Consequently, we propose an across-the-board quality management framework describing the key quality evaluation practices to be conducted through the different Big Data stages. The framework can be used to leverage the quality management and to provide a roadmap for Data scientists to better understand quality practices and highlight the importance of managing the quality. We finally, conclude the paper and point to some future research directions on quality of Big Data. --- paper_title: Data quality in big data processing: Issues, solutions and open problems paper_content: With the rapid development of social networks, Internet of things, Cloud computing as well as other technologies, big data age is arriving. The increasing number of data has brought great value to the public and enterprises. Meanwhile how to manage and use big data better has become the focus of all walks of life. The 4V characteristics of big data have brought a lot of issues to the big data processing. The key to big data processing is to solve data quality issue, and to ensure data quality is a prerequisite for the successful application of big data technique. In this paper, we use recommendation systems and prediction systems as typical big data applications, and try to find out the data quality issues during data collection, data preprocessing, data storage and data analysis stages of big data processing. According to the elaboration and analysis of the proposed issues, the corresponding solutions are also put forward. Finally, some open problems to be solved in the future are also raised. --- paper_title: Data quality issues in big data paper_content: Though the issues of data quality trace back their origin to the early days of computing, the recent emergence of Big Data has added more dimensions. Furthermore, given the range of Big Data applications, potential consequences of bad data quality can be for more disastrous and widespread. This paper provides a perspective on data quality issues in the Big Data context. it also discusses data integration issues that arise in biological databases and attendant data quality issues. --- paper_title: A Data Quality in Use Model for Big Data paper_content: Organizations are nowadays immersed in the Big Data Era. Beyond the hype of the concept of Big Data, it is true that something in the way of doing business is really changing. Although some challenges keep being the same as for regular data, with big data, the focus has changed. 
The reason is due to Big Data is not only data, but also a complete framework including data themselves, storage, formats, and ways of provisioning, processing and analytics. A challenge that becomes even trickier is the one concerning to the management of the quality of big data. More than ever the need for assessing the quality-in-use of big datasets gains importance since the real contribution – business value- of a dataset to a business can be only estimated in its context of use. Although there exists different data quality models to assess the quality of data there still lacks of a quality-in-use model adapted to big data. To fill this gap, and based on ISO 25012 and ISO 25024, we propose the 3Cs model, which is composed of three data quality dimensions for assessing the quality-in-use of big datasets: Contextual Consistency, Operational Consistency and Temporal Consistency. --- paper_title: Big data and quality: A literature review paper_content: Big Data refers to data volumes in the range of Exabyte (1018) and beyond. Such volumes exceed the capacity of current on-line storage and processing systems. With characteristics like volume, velocity and variety big data throws challenges to the traditional IT establishments. Computer assisted innovation, real time data analytics, customer-centric business intelligence, industry wide decision making and transparency are possible advantages, to mention few, of Big Data. There are many issues with Big Data that warrant quality assessment methods. The issues are pertaining to storage and transport, management, and processing. This paper throws light into the present state of quality issues related to Big Data. It provides valuable insights that can be used to leverage Big Data science activities. --- paper_title: Data quality: The other face of Big Data paper_content: In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth ‘V’ of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three ‘V’s, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community. --- paper_title: Overview of data quality challenges in the context of Big Data paper_content: Data quality management systems are thoroughly researched topics and have resulted in many tools and techniques developed by both academia and industry. However, the advent of Big Data might pose some serious questions pertaining to the applicability of existing data quality concepts. 
There is a debate concerning the importance of data quality for Big Data; one school of thought argues that high data quality methods are essential for deriving higher level analytics while another school of thought argues that data quality level will not be so important as the volume of Big Data would be used to produce patterns and some amount of dirty data will not mask the analytic results which might be derived. This paper aims to investigate various components and activities forming part of data quality management such as dimensions, metrics, data quality rules, data profiling and data cleansing. The result list existing challenges and future research areas associated with Big Data for data quality management. --- paper_title: The Challenges of Data Quality and Data Quality Assessment in the Big Data Era paper_content: High-quality data are the precondition for analyzing and using big data and for guaranteeing the value of the data. Currently, comprehensive analysis and research of quality standards and quality assessment methods for big data are lacking. First, this paper summarizes reviews of data quality research. Second, this paper analyzes the data characteristics of the big data environment, presents quality challenges faced by big data, and formulates a hierarchical data quality framework from the perspective of data users. This framework consists of big data quality dimensions, quality characteristics, and quality indexes. Finally, on the basis of this framework, this paper constructs a dynamic assessment process for data quality. This process has good expansibility and adaptability and can meet the needs of big data quality assessment. The research results enrich the theoretical scope of big data and lay a solid foundation for the future by establishing an assessment model and studying evaluation algorithms. --- paper_title: From Data Quality to Big Data Quality paper_content: This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensor & sensor networks and official statistics. Consequently a set of structural characteristics is identified and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable to map the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. Thus, the framework allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon, through an integrative and theoretical literature review. --- paper_title: Big data, big data quality problem paper_content: A USAF sponsored MITRE research team undertook four separate, domain-specific case studies about Big Data applications. Those case studies were initial investigations into the question of whether or not data quality issues encountered in Big Data collections are substantially different in cause, manifestation, or detection than those data quality issues encountered in more traditionally sized data collections. 
The study addresses several factors affecting Big Data Quality at multiple levels, including collection, processing, and storage. Though not unexpected, the key findings of this study reinforce that the primary factors affecting Big Data reside in the limitations and complexities involved with handling Big Data while maintaining its integrity. These concerns are of a higher magnitude than the provenance of the data, the processing, and the tools used to prepare, manipulate, and store the data. Data quality is extremely important for all data analytics problems. From the study's findings, the "truth about Big Data" is there are no fundamentally new DQ issues in Big Data analytics projects. Some DQ issues exhibit return-s-to-scale effects, and become more or less pronounced in Big Data analytics, though. Big Data Quality varies from one type of Big Data to another and from one Big Data technology to another. --- paper_title: Big data preprocessing: methods and prospects paper_content: The massive growth in the scale of data has been observed in recent years being a key factor of the Big Data scenario. Big Data can be defined as high volume, velocity and variety of data that require a new high-performance processing. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The presence of data preprocessing methods for data mining in big data is reviewed in this paper. The definition, characteristics, and categorization of data preprocessing approaches in big data are introduced. The connection between big data and data preprocessing throughout all families of methods and big data technologies are also examined, including a review of the state-of-the-art. In addition, research challenges are discussed, with focus on developments on different big data framework, such as Hadoop, Spark and Flink and the encouragement in devoting substantial research efforts in some families of data preprocessing methods and applications on new big data learning paradigms. --- paper_title: Anomaly Pattern Detection on Data Streams paper_content: A data stream is a sequence of data generated continuously over time. A data stream is too big to be saved in memory and its underlying data distribution may change over time. Outlier detection aims to find data instances which significantly deviate from the underlying data distribution. While outlier detection is performed at an individual instance level, anomalous pattern detection involves detecting a point in time where the behavior of the data becomes unusual and differs from normal behavior. Most outlier detection methods work in unsupervised mode, where the class labels of the data samples are not known. Alternatively, concept drift detection methods find a drifting point in the streaming data and try to adapt the model to the new emerging pattern. In this paper, we provide a review of outlier detection, anomaly pattern detection and concept drift detection approaches for streaming data. --- paper_title: Big Data Validation and Quality Assurance -- Issuses, Challenges, and Needs paper_content: With the fast advance of big data technology and analytics solutions, big data computing and service is becoming a very hot research and application subject in academic research, industry community, and government services. Nevertheless, there are increasing data quality problems resulting in erroneous data costs in enterprises and businesses. 
Current research seldom discusses how to effectively validate big data to ensure data quality. This paper provides informative discussions for big data validation and quality assurance, including the essential concepts, focuses, and validation process. Moreover, the paper presents a comparison among big data validation tools and several major players in industry are discussed. Furthermore, the primary issues, challenges, and needs are discussed. --- paper_title: Big Data Quality: A Survey paper_content: With the advances in communication technologies and the high amount of data generated, collected, and stored, it becomes crucial to manage the quality of this data deluge in an efficient and cost-effective way. The storage, processing, privacy and analytics are the main keys challenging aspects of Big Data that require quality evaluation and monitoring. Quality has been recognized by the Big Data community as an essential facet of its maturity. Yet, it is a crucial practice that should be implemented at the earlier stages of its lifecycle and progressively applied across the other key processes. The earlier we incorporate quality the full benefit we can get from insights. In this paper, we first identify the key challenges that necessitates quality evaluation. We then survey, classify and discuss the most recent work on Big Data management. Consequently, we propose an across-the-board quality management framework describing the key quality evaluation practices to be conducted through the different Big Data stages. The framework can be used to leverage the quality management and to provide a roadmap for Data scientists to better understand quality practices and highlight the importance of managing the quality. We finally, conclude the paper and point to some future research directions on quality of Big Data. --- paper_title: Data quality in big data processing: Issues, solutions and open problems paper_content: With the rapid development of social networks, Internet of things, Cloud computing as well as other technologies, big data age is arriving. The increasing number of data has brought great value to the public and enterprises. Meanwhile how to manage and use big data better has become the focus of all walks of life. The 4V characteristics of big data have brought a lot of issues to the big data processing. The key to big data processing is to solve data quality issue, and to ensure data quality is a prerequisite for the successful application of big data technique. In this paper, we use recommendation systems and prediction systems as typical big data applications, and try to find out the data quality issues during data collection, data preprocessing, data storage and data analysis stages of big data processing. According to the elaboration and analysis of the proposed issues, the corresponding solutions are also put forward. Finally, some open problems to be solved in the future are also raised. --- paper_title: Data quality issues in big data paper_content: Though the issues of data quality trace back their origin to the early days of computing, the recent emergence of Big Data has added more dimensions. Furthermore, given the range of Big Data applications, potential consequences of bad data quality can be for more disastrous and widespread. This paper provides a perspective on data quality issues in the Big Data context. it also discusses data integration issues that arise in biological databases and attendant data quality issues. 
--- paper_title: A Data Quality in Use Model for Big Data paper_content: Organizations are nowadays immersed in the Big Data Era. Beyond the hype of the concept of Big Data, it is true that something in the way of doing business is really changing. Although some challenges keep being the same as for regular data, with big data, the focus has changed. The reason is due to Big Data is not only data, but also a complete framework including data themselves, storage, formats, and ways of provisioning, processing and analytics. A challenge that becomes even trickier is the one concerning to the management of the quality of big data. More than ever the need for assessing the quality-in-use of big datasets gains importance since the real contribution – business value- of a dataset to a business can be only estimated in its context of use. Although there exists different data quality models to assess the quality of data there still lacks of a quality-in-use model adapted to big data. To fill this gap, and based on ISO 25012 and ISO 25024, we propose the 3Cs model, which is composed of three data quality dimensions for assessing the quality-in-use of big datasets: Contextual Consistency, Operational Consistency and Temporal Consistency. --- paper_title: Ontology-Based Data Quality Management for Data Streams paper_content: Data Stream Management Systems (DSMS) provide real-time data processing in an effective way, but there is always a tradeoff between data quality (DQ) and performance. We propose an ontology-based data quality framework for relational DSMS that includes DQ measurement and monitoring in a transparent, modular, and flexible way. We follow a threefold approach that takes the characteristics of relational data stream management for DQ metrics into account. While (1) Query Metrics respect changes in data quality due to query operations, (2) Content Metrics allow the semantic evaluation of data in the streams. Finally, (3) Application Metrics allow easy user-defined computation of data quality values to account for application specifics. Additionally, a quality monitor allows us to observe data quality values and take counteractions to balance data quality and performance. The framework has been designed along a DQ management methodology suited for data streams. It has been evaluated in the domains of transportation systems and health monitoring. --- paper_title: Big data and quality: A literature review paper_content: Big Data refers to data volumes in the range of Exabyte (1018) and beyond. Such volumes exceed the capacity of current on-line storage and processing systems. With characteristics like volume, velocity and variety big data throws challenges to the traditional IT establishments. Computer assisted innovation, real time data analytics, customer-centric business intelligence, industry wide decision making and transparency are possible advantages, to mention few, of Big Data. There are many issues with Big Data that warrant quality assessment methods. The issues are pertaining to storage and transport, management, and processing. This paper throws light into the present state of quality issues related to Big Data. It provides valuable insights that can be used to leverage Big Data science activities. --- paper_title: Data quality: The other face of Big Data paper_content: In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. 
Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth ‘V’ of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three ‘V’s, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community. --- paper_title: Outlier Detection for Temporal Data: A Survey paper_content: In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used. --- paper_title: Overview of data quality challenges in the context of Big Data paper_content: Data quality management systems are thoroughly researched topics and have resulted in many tools and techniques developed by both academia and industry. However, the advent of Big Data might pose some serious questions pertaining to the applicability of existing data quality concepts. There is a debate concerning the importance of data quality for Big Data; one school of thought argues that high data quality methods are essential for deriving higher level analytics while another school of thought argues that data quality level will not be so important as the volume of Big Data would be used to produce patterns and some amount of dirty data will not mask the analytic results which might be derived. This paper aims to investigate various components and activities forming part of data quality management such as dimensions, metrics, data quality rules, data profiling and data cleansing. The result list existing challenges and future research areas associated with Big Data for data quality management. 
--- paper_title: The Challenges of Data Quality and Data Quality Assessment in the Big Data Era paper_content: High-quality data are the precondition for analyzing and using big data and for guaranteeing the value of the data. Currently, comprehensive analysis and research of quality standards and quality assessment methods for big data are lacking. First, this paper summarizes reviews of data quality research. Second, this paper analyzes the data characteristics of the big data environment, presents quality challenges faced by big data, and formulates a hierarchical data quality framework from the perspective of data users. This framework consists of big data quality dimensions, quality characteristics, and quality indexes. Finally, on the basis of this framework, this paper constructs a dynamic assessment process for data quality. This process has good expansibility and adaptability and can meet the needs of big data quality assessment. The research results enrich the theoretical scope of big data and lay a solid foundation for the future by establishing an assessment model and studying evaluation algorithms. --- paper_title: Outlier Detection Techniques for Wireless Sensor Networks: A Survey paper_content: In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and specific requirements and limitations of the wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for the wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree. --- paper_title: From Data Quality to Big Data Quality paper_content: This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensor & sensor networks and official statistics. Consequently a set of structural characteristics is identified and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable to map the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. Thus, the framework allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon, through an integrative and theoretical literature review. --- paper_title: Big data, big data quality problem paper_content: A USAF sponsored MITRE research team undertook four separate, domain-specific case studies about Big Data applications. 
Those case studies were initial investigations into the question of whether or not data quality issues encountered in Big Data collections are substantially different in cause, manifestation, or detection than those data quality issues encountered in more traditionally sized data collections. The study addresses several factors affecting Big Data Quality at multiple levels, including collection, processing, and storage. Though not unexpected, the key findings of this study reinforce that the primary factors affecting Big Data reside in the limitations and complexities involved with handling Big Data while maintaining its integrity. These concerns are of a higher magnitude than the provenance of the data, the processing, and the tools used to prepare, manipulate, and store the data. Data quality is extremely important for all data analytics problems. From the study's findings, the "truth about Big Data" is that there are no fundamentally new DQ issues in Big Data analytics projects. Some DQ issues exhibit returns-to-scale effects, and become more or less pronounced in Big Data analytics, though. Big Data Quality varies from one type of Big Data to another and from one Big Data technology to another. --- paper_title: Quality awareness for a Successful Big Data Exploitation paper_content: The combination of data and technology is having a high impact on the way we live. The world is getting smarter thanks to the quantity of collected and analyzed data. However, it is necessary to consider that such amount of data is continuously increasing and it is necessary to deal with novel requirements related to variety, volume, velocity, and veracity issues. In this paper we focus on veracity that is related to the presence of uncertain or imprecise data: errors, missing or invalid data can compromise the usefulness of the collected values. In such a scenario, new methods and techniques able to evaluate the quality of the available data are needed. In fact, the literature provides many data quality assessment and improvement techniques, especially for structured data, but in the Big Data era new algorithms have to be designed. We aim to provide an overview of the issues and challenges related to Data Quality assessment in the Big Data scenario. We also propose a possible solution developed by considering a smart city case study and we describe the lessons learned in the design and implementation phases. --- paper_title: A model-based approach for RFID data stream cleansing paper_content: In recent years, RFID technologies have been used in many applications, such as inventory checking and object tracking. However, raw RFID data are inherently unreliable due to physical device limitations and different kinds of environmental noise. Currently, existing work mainly focuses on RFID data cleansing in a static environment (e.g. inventory checking). It is therefore difficult to cleanse RFID data streams in a mobile environment (e.g. object tracking) using the existing solutions, which do not address the data missing issue effectively. In this paper, we study how to cleanse RFID data streams for object tracking, which is a challenging problem, since a significant percentage of readings are routinely dropped. We propose a probabilistic model for object tracking in a mobile environment. We develop a Bayesian inference based approach for cleansing RFID data using the model. In order to sample data from the movement distribution, we devise a sequential sampler that cleans RFID data with high accuracy and efficiency.
We validate the effectiveness and robustness of our solution through extensive simulations and demonstrate its performance by using two real RFID applications of human tracking and conveyor belt monitoring. --- paper_title: A Big Data Framework for Electric Power Data Quality Assessment paper_content: Since a low-quality data may influence the effectiveness and reliability of applications, data quality is required to be guaranteed. Data quality assessment is considered as the foundation of the promotion of data quality, so it is essential to access the data quality before any other data related activities. In the electric power industry, more and more electric power data is continuously accumulated, and many electric power applications have been developed based on these data. In China, the power grid has many special characteristic, traditional big data assessment frameworks cannot be directly applied. Therefore, a big data framework for electric power data quality assessment is proposed. Based on big data techniques, the framework can accumulate both the real-time data and the history data, provide an integrated computation environment for electric power big data assessment, and support the storage of different types of data. --- paper_title: Representing Data Quality in Sensor Data Streaming Environments paper_content: Sensors in smart-item environments capture data about product conditions and usage to support business decisions as well as production automation processes. A challenging issue in this application area is the restricted quality of sensor data due to limited sensor precision and sensor failures. Moreover, data stream processing to meet resource constraints in streaming environments introduces additional noise and decreases the data quality. In order to avoid wrong business decisions due to dirty data, quality characteristics have to be captured, processed, and provided to the respective business task. However, the issue of how to efficiently provide applications with information about data quality is still an open research problem. In this article, we address this problem by presenting a flexible model for the propagation and processing of data quality. The comprehensive analysis of common data stream processing operators and their impact on data quality allows a fruitful data evaluation and diminishes incorrect business decisions. Further, we propose the data quality model control to adapt the data quality granularity to the data stream interestingness. --- paper_title: Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues paper_content: Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept “Internet of Things” has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. 
We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. --- paper_title: Big Data Validation and Quality Assurance -- Issuses, Challenges, and Needs paper_content: With the fast advance of big data technology and analytics solutions, big data computing and service is becoming a very hot research and application subject in academic research, industry community, and government services. Nevertheless, there are increasing data quality problems resulting in erroneous data costs in enterprises and businesses. Current research seldom discusses how to effectively validate big data to ensure data quality. This paper provides informative discussions for big data validation and quality assurance, including the essential concepts, focuses, and validation process. Moreover, the paper presents a comparison among big data validation tools and several major players in industry are discussed. Furthermore, the primary issues, challenges, and needs are discussed. --- paper_title: A Big Data Online Cleaning Algorithm Based on Dynamic Outlier Detection paper_content: To effectively clean the large-scale, mixed and inaccurate monitoring or collective data, reduce the cost of data cache and ensure the consistent deviation detection on timing data of each cycle, a big data online cleaning algorithm based on dynamic outlier detection has been proposed. The data cleaning method is improved by density-based local outlier detection, sampling cluster uniformly dilution Euclidean distance matrix retaining some corrections into the next cycle of cleaning, which avoids a sampling causing overall cleaning deviation and reduces the amount of calculation within the data cleaning stable time, enhancing the speed greatly. Finally, distributed solutions for the online cleaning algorithm are presented based on the Hadoop platform. --- paper_title: Distributed online outlier detection in wireless sensor networks using ellipsoidal support vector machine paper_content: Low quality sensor data limits WSN capabilities for providing reliable real-time situation-awareness. Outlier detection is a solution to ensure the quality of sensor data. An effective and efficient outlier detection technique for WSNs not only identifies outliers in a distributed and online manner with high detection accuracy and low false alarm, but also satisfies WSN constraints in terms of communication, computational and memory complexity. In this paper, we take into account the correlation between sensor data attributes and propose two distributed and online outlier detection techniques based on a hyperellipsoidal one-class support vector machine (SVM). We also take advantage of the theory of spatio-temporal correlation to identify outliers and update the ellipsoidal SVM-based model representing the changed normal behavior of sensor data for further outlier identification.
Simulation results show that our adaptive ellipsoidal SVM-based outlier detection technique achieves better detection accuracy and lower false alarm as compared to existing SVM-based techniques designed for WSNs. --- paper_title: Big Data Quality: A Survey paper_content: With the advances in communication technologies and the high amount of data generated, collected, and stored, it becomes crucial to manage the quality of this data deluge in an efficient and cost-effective way. The storage, processing, privacy and analytics are the main keys challenging aspects of Big Data that require quality evaluation and monitoring. Quality has been recognized by the Big Data community as an essential facet of its maturity. Yet, it is a crucial practice that should be implemented at the earlier stages of its lifecycle and progressively applied across the other key processes. The earlier we incorporate quality the full benefit we can get from insights. In this paper, we first identify the key challenges that necessitates quality evaluation. We then survey, classify and discuss the most recent work on Big Data management. Consequently, we propose an across-the-board quality management framework describing the key quality evaluation practices to be conducted through the different Big Data stages. The framework can be used to leverage the quality management and to provide a roadmap for Data scientists to better understand quality practices and highlight the importance of managing the quality. We finally, conclude the paper and point to some future research directions on quality of Big Data. --- paper_title: An in-network data cleaning approach for wireless sensor networks paper_content: AbstractWireless Sensor Networks (WSNs) are widely used for monitoring physical happenings of the environment. However, the data gathered by the WSNs may be inaccurate and unreliable due to power exhaustion, noise and other reasons. Unnecessary data such as erroneous data and redundant data transmission causes a lot of extra energy consumption. To improve the data reliability and reduce the energy consumption, we proposed an in-network processing architecture for data cleaning, which divides the task into four stages implemented in different nodes respectively. This strategy guaranteed the cleaning algorithms were computationally lightweight in local nodes and energy-efficient due to almost no communication overhead. In addition, we presented the detection algorithms for data faults and event outliers, which were conducted by utilizing the related attributes from the local sensor node and the cooperation with its relaying neighbor. Experiment results show that our proposed approach is accurate and energy-... --- paper_title: Data quality in big data processing: Issues, solutions and open problems paper_content: With the rapid development of social networks, Internet of things, Cloud computing as well as other technologies, big data age is arriving. The increasing number of data has brought great value to the public and enterprises. Meanwhile how to manage and use big data better has become the focus of all walks of life. The 4V characteristics of big data have brought a lot of issues to the big data processing. The key to big data processing is to solve data quality issue, and to ensure data quality is a prerequisite for the successful application of big data technique. 
In this paper, we use recommendation systems and prediction systems as typical big data applications, and try to find out the data quality issues during data collection, data preprocessing, data storage and data analysis stages of big data processing. According to the elaboration and analysis of the proposed issues, the corresponding solutions are also put forward. Finally, some open problems to be solved in the future are also raised. --- paper_title: Anomaly Detection and Redundancy Elimination of Big Sensor Data in Internet of Things paper_content: In the era of big data and Internet of things, massive sensor data are gathered with Internet of things. Quantity of data captured by sensor networks are considered to contain highly useful and valuable information. However, for a variety of reasons, received sensor data often appear abnormal. Therefore, effective anomaly detection methods are required to guarantee the quality of data collected by those sensor nodes. Since sensor data are usually correlated in time and space, not all the gathered data are valuable for further data processing and analysis. Preprocessing is necessary for eliminating the redundancy in gathered massive sensor data. In this paper, the proposed work defines a sensor data preprocessing framework. It is mainly composed of two parts, i.e., sensor data anomaly detection and sensor data redundancy elimination. In the first part, methods based on principal statistic analysis and Bayesian network is proposed for sensor data anomaly detection. Then, approaches based on static Bayesian network (SBN) and dynamic Bayesian networks (DBNs) are proposed for sensor data redundancy elimination. Static sensor data redundancy detection algorithm (SSDRDA) for eliminating redundant data in static datasets and real-time sensor data redundancy detection algorithm (RSDRDA) for eliminating redundant sensor data in real-time are proposed. The efficiency and effectiveness of the proposed methods are validated using real-world gathered sensor datasets. --- paper_title: Distributed Top-N local outlier detection in big data paper_content: The concept of Top-N local outlier that focuses on the detection of the N points with the largest Local Outlier Factor (LOF) score has been shown to be very effective for identifying outliers in big datasets. However, detecting Top-N local outliers is computationally expensive, since the computation of LOF scores for all data points requires a huge number of high complexity k-nearest neighbor (kNN) searches. In this work, we thus present the first distributed solution to tackle this problem of Top-N local outlier detection (DTOLF). First, DTOLF features an innovative safe elimination strategy that efficiently identifies dually-safe points, namely those that are guaranteed to (1) not be classified as Top-N outliers and (2) not be needed as neighbors of points residing on other machines. Therefore, it effectively minimizes both the processing and communication costs of the Top-N outlier detection process. Further, based on the well-accepted observation that strong correlations among attributes are prevalent in real world datasets, we propose correlation-aware optimization strategies that ensure the effectiveness of grid-based partitioning and of the safe elimination strategy in multi-dimensional datasets. 
Our extensive experimental evaluation on OpenStreetMap, SDSS, and TIGER datasets demonstrates the effectiveness of DTOLF — up to 10 times faster than the alternative methods and scaling to terabyte level datasets. --- paper_title: A software reference architecture for semantic-aware big data systems paper_content: Abstract Context: Big Data systems are a class of software systems that ingest, store, process and serve massive amounts of heterogeneous data, from multiple sources. Despite their undisputed impact in current society, their engineering is still in its infancy and companies find it difficult to adopt them due to their inherent complexity. Existing attempts to provide architectural guidelines for their engineering fail to take into account important Big Data characteristics, such as the management, evolution and quality of the data. Objective: In this paper, we follow software engineering principles to refine the λ -architecture, a reference model for Big Data systems, and use it as seed to create Bolster , a software reference architecture (SRA) for semantic-aware Big Data systems. Method: By including a new layer into the λ -architecture, the Semantic Layer, Bolster is capable of handling the most representative Big Data characteristics (i.e., Volume, Velocity, Variety, Variability and Veracity). Results: We present the successful implementation of Bolster in three industrial projects, involving five organizations. The validation results show high level of agreement among practitioners from all organizations with respect to standard quality factors. Conclusion: As an SRA, Bolster allows organizations to design concrete architectures tailored to their specific needs. A distinguishing feature is that it provides semantic-awareness in Big Data Systems. These are Big Data system implementations that have components to simplify data definition and exploitation. In particular, they leverage metadata (i.e., data describing data) to enable (partial) automation of data exploitation and to aid the user in their decision making processes. This simplification supports the differentiation of responsibilities into cohesive roles enhancing data governance. --- paper_title: An Electric Power Sensor Data Oriented Data Cleaning Solution paper_content: With the development of Smart Grid Technology, more and more electric power sensor data are utilized in various electric power systems. To guarantee the effectiveness of such systems, it is necessary to ensure the quality of electric power sensor data, especially when the scale of electric power sensor data is large. In the field of large-scale electric power sensor data cleaning, the computational efficiency and accuracy of data cleaning are two vital requirements. In order to satisfy these requirements, this paper presents an electric power sensor data oriented data cleaning solution, which is composed of a data cleaning framework and a data cleaning method. Based on Hadoop, the given framework is able to support large-scale electric power sensor data acquisition, storage and processing. Meanwhile, the proposed method which achieves outlier detection and reparation is implemented on the basis of a time-relevant k-means clustering algorithm in Spark. The feasibility and effectiveness of the proposed method is evaluated on a data set which originates from charging piles. Experimental results show that the proposed data cleaning method is able to improve the data quality of electric power sensor data by finding and repairing most outliers. 
For large-scale electric power sensor data, the proposed data cleaning method has high parallel performance and strong scalability. --- paper_title: A trust assessment framework for streaming data in WSNs using iterative filtering paper_content: Trust and reputation systems are widely employed in WSNs to help decision making processes by assessing trustworthiness of sensors as well as the reliability of the reported data. Iterative filtering (IF) algorithms hold great promise for such a purpose; they simultaneously estimate the aggregate value of the readings and assess the trustworthiness of the nodes. Such algorithms, however, operate by batch processing over a window of data reported by the nodes, which represents a difficulty in applications involving streaming data. In this paper, we propose STRIF (Streaming IF) which extends IF algorithms to data streaming by leveraging a novel method for updating the sensors' variances. We compare the performance of the STRIF algorithm to several batch processing IF algorithms through extensive experiments across a wide variety of configurations over both real-world and synthetic datasets. Our experimental results demonstrate that STRIF can process data streams much more efficiently than the batch algorithms while keeping the accuracy of the data aggregation close to that of the batch IF algorithm. --- paper_title: Ensemble stream model for data-cleaning in sensor networks paper_content: Ensemble Stream Modeling and Data-cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism, which seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events to eliminate the noises that are uncorrelated, and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble which has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction such as a bush or natural forest-fire event we make an assumption of the burnt area (BA*), sensed ground truth as our target variable obtained from logs. Even though this is an obvious model choice the results are disappointing. The reasons for this are two: One, the histogram of fire activity is highly skewed. Two, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and accuracy to determine the false alarm rate of fire events. The multi-target data-cleaning trees use information purity of the target leaf-nodes to learn higher order features. A sensitive variance measure such as the F-test is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier.
The ensemble framework for data-cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensors led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today. --- paper_title: A model-driven framework for data quality management in the Internet of Things paper_content: The Internet of Things (IoT) is a data stream environment where a large scale deployment of smart things continuously report readings. These data streams are then consumed by pervasive applications, i.e. data consumers, to offer ubiquitous services. Data quality (DQ) is a key criterion for IoT data consumers, especially when considering the inherent uncertainty of sensor-enabled data. However, DQ is a highly subjective concept and there is no standard agreement on how to determine “good” data. Moreover, the combinations of considered measured attributes and associated DQ information are as diverse as the needs of data consumers. This introduces expensive overheads for developers tasked with building DQ-aware IoT software systems which are capable of managing their own DQ information. To effectively handle these various perceptions of DQ, we propose a Model-Driven Architecture-based approach that allows each developer to easily and efficiently express, through models and other provided resources, the data consumer’s vision of DQ and its requirements using an easy-to-use graphical model editor. The defined DQ specifications are then automatically transformed to generate an entire infrastructure for DQ management that fits perfectly the data consumer’s requirements. We demonstrate the flexibility and the efficiency of our approach by generating two DQ management infrastructures built on top of different platforms and testing them through a real life data stream environment scenario. --- paper_title: Data quality: The other face of Big Data paper_content: In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth ‘V’ of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three ‘V’s, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community. --- paper_title: Contextual Anomaly Detection in Big Sensor Data paper_content: Performing predictive modelling, such as anomaly detection, in Big Data is a difficult task.
This problem is compounded as more and more sources of Big Data are generated from environmental sensors, logging applications, and the Internet of Things. Further, most current techniques for anomaly detection only consider the content of the data source, i.e. the data itself, without concern for the context of the data. As data becomes more complex it is increasingly important to bias anomaly detection techniques for the context, whether it is spatial, temporal, or semantic. The work proposed in this paper outlines a contextual anomaly detection technique for use in streaming sensor networks. The technique uses a well-defined content anomaly detection algorithm for real-time point anomaly detection. Additionally, we present a post-processing context-aware anomaly detection algorithm based on sensor profiles, which are groups of contextually similar sensors generated by a multivariate clustering algorithm. Our proposed research has been implemented and evaluated with real-world data provided by Powersmiths, located in Brampton, Ontario, Canada. --- paper_title: Outlier Detection for Temporal Data: A Survey paper_content: In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used. --- paper_title: Analysis and evaluation of outlier detection algorithms in data streams paper_content: Data mining is one of the most exciting fields of research for the researcher. As data is getting digitized, systems are getting connected and integrated, scope of data generation and analytics has increased exponentially. Today, most of the systems generate non-stationary data of huge, size, volume, occurrence speed, fast changing etc. these kinds of data are called data streams. One of the most recent trend i.e. IOT (Internet Of Things) is also promising lots of expectation of people which will ease the use of day to day activities and it could also connect systems and people together. This situation will also lead to generation of data streams, thus present and future scope of data stream mining is highly promising. Characteristics of data stream possess many challenges for the researcher; this makes analytics of such data difficult and also acts as source of inspiration for researcher. Outlier detection plays important role in any application. In this paper we reviewed different techniques of outlier detection for stream data and their issues in detail and presented results of the same. 
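The contextual technique described above builds "sensor profiles" by clustering contextually similar sensors and then checks each reading against the profile of the sensor that produced it. Below is a simplified sketch of that two-stage idea; the per-sensor mean/standard-deviation features, the use of k-means, and the k-sigma test are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_profiles(history, n_profiles=3):
    # history: dict sensor_id -> 1-D array of past readings.
    # Cluster sensors by simple summary statistics; each sensor inherits
    # the centroid of its cluster as its "profile".
    ids = list(history)
    feats = np.array([[np.mean(history[s]), np.std(history[s])] for s in ids])
    km = KMeans(n_clusters=n_profiles, n_init=10, random_state=0).fit(feats)
    return {s: km.cluster_centers_[label] for s, label in zip(ids, km.labels_)}

def is_contextual_anomaly(sensor_id, value, profiles, k=3.0):
    # A reading is contextually anomalous if it is far from its own profile,
    # even if it would be a perfectly normal value for some other profile.
    mean, std = profiles[sensor_id]
    return abs(value - mean) > k * max(std, 1e-6)

rng = np.random.default_rng(0)
history = {f"s{i}": rng.normal(loc=20 + 5 * (i % 3), scale=1.0, size=500) for i in range(9)}
profiles = build_profiles(history)
print(is_contextual_anomaly("s0", 20.5, profiles))  # typical for s0's profile -> False
print(is_contextual_anomaly("s0", 29.5, profiles))  # normal for another profile, anomalous for s0 -> True
```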
--- paper_title: Context-aware data quality assessment for big data paper_content: Abstract Big data changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such amount of data can create a real value only if combined with quality: good decisions and actions are the results of correct, reliable and complete data. In such a scenario, methods and techniques for the Data Quality assessment can support the identification of suitable data to process. If for traditional database numerous assessment methods are proposed, in the Big Data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume and velocity issues. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and context in which data have to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest to focus the Data Quality assessment only on a portion of the dataset and to take into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a Data Quality adapter module, which selects the best configuration for the Data Quality assessment based on the user main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study. --- paper_title: Outlier Detection Techniques for Wireless Sensor Networks: A Survey paper_content: In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and specific requirements and limitations of the wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for the wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree. --- paper_title: Incorporating quality aspects in sensor data streams paper_content: Sensors are increasingly embedded into physical products in order to capture data about their conditions and usage for decision making in business applications. However, a major issue for such applications is the limited quality of the captured data due to inherently restricted precision and performance of the sensors. Moreover, the data quality is further decreased by data processing to meet resource constraints in streaming environments and ultimately influences business decisions. The issue of how to efficiently provide applications with information about data quality (DQ) is still an open research problem. In my Ph.D. 
thesis, I address this problem by developing a system to provide business applications with accurate information on data quality. Furthermore, the system will be able to incorporate and guarantee user-defined data quality levels. In this paper, I will present the major results from my research so far. This includes a novel jumping-window-based approach for the efficient transfer of data quality information as well as a flexible metamodel for storage and propagation of data quality. The comprehensive analysis of common data processing operators w.r.t. their impact on data quality allows a fruitful knowledge evaluation and thus diminishes incorrect business decisions. --- paper_title: Adaptive and online data anomaly detection for wireless sensor systems paper_content: Wireless sensor networks (WSNs) are increasingly used as platforms for collecting data from unattended environments and monitoring important events in phenomena. However, sensor data is affected by anomalies that occur due to various reasons, such as, node software or hardware failures, reading errors, unusual events, and malicious attacks. Therefore, effective, efficient, and real time detection of anomalous measurement is required to guarantee the quality of data collected by these networks. In this paper, two efficient and effective anomaly detection models PCCAD and APCCAD are proposed for static and dynamic environments, respectively. Both models utilize the One-Class Principal Component Classifier (OCPCC) to measure the dissimilarity between sensor measurements in the feature space. The proposed APCCAD model incorporates an incremental learning method that is able to track the dynamic normal changes of data streams in the monitored environment. The efficiency and effectiveness of the proposed models are demonstrated using real life datasets collected by real sensor network projects. Experimental results show that the proposed models have advantages over existing models in terms of efficient utilization of sensor limited resources. The results further reveal that the proposed models achieve better detection effectiveness in terms of high detection accuracy with low false alarms especially for dynamic environmental data streams compared to some existing models. --- paper_title: From Data Quality to Big Data Quality paper_content: This article investigates the evolution of data quality issues from traditional structured data managed in relational databases to Big Data. In particular, the paper examines the nature of the relationship between Data Quality and several research coordinates that are relevant in Big Data, such as the variety of data types, data sources and application domains, focusing on maps, semi-structured texts, linked open data, sensor & sensor networks and official statistics. Consequently a set of structural characteristics is identified and a systematization of the a posteriori correlation between them and quality dimensions is provided. Finally, Big Data quality issues are considered in a conceptual framework suitable to map the evolution of the quality paradigm according to three core coordinates that are significant in the context of the Big Data phenomenon: the data type considered, the source of data, and the application domain. Thus, the framework allows ascertaining the relevant changes in data quality emerging with the Big Data phenomenon, through an integrative and theoretical literature review. 
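The adaptive detectors above share a common loop: score each new reading against a model of "normal" behaviour, and update that model only with readings judged normal, so the model follows drift without being contaminated by faults. A toy version of this loop is sketched below, with exponentially weighted mean/variance estimates standing in for the PCA- and SVM-based models used in the cited work; the smoothing factor, threshold, and warm-up length are arbitrary choices.

```python
class AdaptiveDetector:
    """Track a drifting 'normal' range with exponentially weighted mean and
    variance estimates, updating only on non-anomalous points so that genuine
    outliers do not contaminate the model."""
    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha, self.threshold = alpha, threshold
        self.mean, self.var, self.n = 0.0, 1.0, 0

    def update(self, x):
        if self.n < 30:           # warm-up: accept everything while the model forms
            anomalous = False
        else:
            z = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
            anomalous = z > self.threshold
        if not anomalous:
            delta = x - self.mean
            self.mean += self.alpha * delta
            self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
            self.n += 1
        return anomalous

det = AdaptiveDetector()
stream = [10 + 0.01 * t for t in range(1000)]   # slowly drifting signal
stream[500] = 60                                 # injected fault
flags = [det.update(x) for x in stream]
print([i for i, f in enumerate(flags) if f])     # only the injected fault is flagged
```

Updating only on accepted points is the key design choice: it keeps the estimate of normal behaviour from drifting toward the very faults it is meant to expose.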
--- paper_title: Online outlier detection for data streams paper_content: Outlier detection is a well established area of statistics but most of the existing outlier detection techniques are designed for applications where the entire dataset is available for random access. A typical outlier detection technique constructs a standard data distribution or model and identifies the deviated data points from the model as outliers. Evidently these techniques are not suitable for online data streams where the entire dataset, due to its unbounded volume, is not available for random access. Moreover, the data distribution in data streams change over time which challenges the existing outlier detection techniques that assume a constant standard data distribution for the entire dataset. In addition, data streams are characterized by uncertainty which imposes further complexity. In this paper we propose an adaptive, online outlier detection technique addressing the aforementioned characteristics of data streams, called Adaptive Outlier Detection for Data Streams (A-ODDS), which identifies outliers with respect to all the received data points as well as temporally close data points. The temporally close data points are selected based on time and change of data distribution. We also present an efficient and online implementation of the technique and a performance study showing the superiority of A-ODDS over existing techniques in terms of accuracy and execution time on a real-life dataset collected from meteorological applications. --- paper_title: Big Data Pre-processing: A Quality Framework paper_content: With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis. --- paper_title: Anomaly Pattern Detection on Data Streams paper_content: A data stream is a sequence of data generated continuously over time. A data stream is too big to be saved in memory and its underlying data distribution may change over time. Outlier detection aims to find data instances which significantly deviate from the underlying data distribution. While outlier detection is performed at an individual instance level, anomalous pattern detection involves detecting a point in time where the behavior of the data becomes unusual and differs from normal behavior. Most outlier detection methods work in unsupervised mode, where the class labels of the data samples are not known. 
Alternatively, concept drift detection methods find a drifting point in the streaming data and try to adapt the model to the new emerging pattern. In this paper, we provide a review of outlier detection, anomaly pattern detection and concept drift detection approaches for streaming data. --- paper_title: A Big Data Framework for Electric Power Data Quality Assessment paper_content: Since a low-quality data may influence the effectiveness and reliability of applications, data quality is required to be guaranteed. Data quality assessment is considered as the foundation of the promotion of data quality, so it is essential to access the data quality before any other data related activities. In the electric power industry, more and more electric power data is continuously accumulated, and many electric power applications have been developed based on these data. In China, the power grid has many special characteristic, traditional big data assessment frameworks cannot be directly applied. Therefore, a big data framework for electric power data quality assessment is proposed. Based on big data techniques, the framework can accumulate both the real-time data and the history data, provide an integrated computation environment for electric power big data assessment, and support the storage of different types of data. --- paper_title: Big Data Validation and Quality Assurance -- Issuses, Challenges, and Needs paper_content: With the fast advance of big data technology and analytics solutions, big data computing and service is becoming a very hot research and application subject in academic research, industry community, and government services. Nevertheless, there are increasing data quality problems resulting in erroneous data costs in enterprises and businesses. Current research seldom discusses how to effectively validate big data to ensure data quality. This paper provides informative discussions for big data validation and quality assurance, including the essential concepts, focuses, and validation process. Moreover, the paper presents a comparison among big data validation tools and several major players in industry are discussed. Furthermore, the primary issues, challenges, and needs are discussed. --- paper_title: Anomaly Detection and Redundancy Elimination of Big Sensor Data in Internet of Things paper_content: In the era of big data and Internet of things, massive sensor data are gathered with Internet of things. Quantity of data captured by sensor networks are considered to contain highly useful and valuable information. However, for a variety of reasons, received sensor data often appear abnormal. Therefore, effective anomaly detection methods are required to guarantee the quality of data collected by those sensor nodes. Since sensor data are usually correlated in time and space, not all the gathered data are valuable for further data processing and analysis. Preprocessing is necessary for eliminating the redundancy in gathered massive sensor data. In this paper, the proposed work defines a sensor data preprocessing framework. It is mainly composed of two parts, i.e., sensor data anomaly detection and sensor data redundancy elimination. In the first part, methods based on principal statistic analysis and Bayesian network is proposed for sensor data anomaly detection. Then, approaches based on static Bayesian network (SBN) and dynamic Bayesian networks (DBNs) are proposed for sensor data redundancy elimination. 
Static sensor data redundancy detection algorithm (SSDRDA) for eliminating redundant data in static datasets and real-time sensor data redundancy detection algorithm (RSDRDA) for eliminating redundant sensor data in real-time are proposed. The efficiency and effectiveness of the proposed methods are validated using real-world gathered sensor datasets. --- paper_title: Schema Extraction and Structural Outlier Detection for JSON-based NoSQL Data Stores. paper_content: Although most NoSQL Data Stores are schema-less, information on the structural properties of the persisted data is nevertheless essential during application development. Otherwise, accessing the data becomes simply impractical. In this paper, we introduce an algorithm for schema extraction that is operating outside of the NoSQL data store. Our method is specifically targeted at semi-structured data persisted in NoSQL stores, e.g., in JSON format. Rather than designing the schema up front, extracting a schema in hindsight can be seen as a reverse-engineering step. Based on the extracted schema information, we propose set of similarity measures that capture the degree of heterogeneity of JSON data and which reveal structural outliers in the data. We evaluate our implementation on two real-life datasets: a database from the Wendelstein 7-X project and Web Performance Data. --- paper_title: A software reference architecture for semantic-aware big data systems paper_content: Abstract Context: Big Data systems are a class of software systems that ingest, store, process and serve massive amounts of heterogeneous data, from multiple sources. Despite their undisputed impact in current society, their engineering is still in its infancy and companies find it difficult to adopt them due to their inherent complexity. Existing attempts to provide architectural guidelines for their engineering fail to take into account important Big Data characteristics, such as the management, evolution and quality of the data. Objective: In this paper, we follow software engineering principles to refine the λ -architecture, a reference model for Big Data systems, and use it as seed to create Bolster , a software reference architecture (SRA) for semantic-aware Big Data systems. Method: By including a new layer into the λ -architecture, the Semantic Layer, Bolster is capable of handling the most representative Big Data characteristics (i.e., Volume, Velocity, Variety, Variability and Veracity). Results: We present the successful implementation of Bolster in three industrial projects, involving five organizations. The validation results show high level of agreement among practitioners from all organizations with respect to standard quality factors. Conclusion: As an SRA, Bolster allows organizations to design concrete architectures tailored to their specific needs. A distinguishing feature is that it provides semantic-awareness in Big Data Systems. These are Big Data system implementations that have components to simplify data definition and exploitation. In particular, they leverage metadata (i.e., data describing data) to enable (partial) automation of data exploitation and to aid the user in their decision making processes. This simplification supports the differentiation of responsibilities into cohesive roles enhancing data governance. --- paper_title: Data quality issues in big data paper_content: Though the issues of data quality trace back their origin to the early days of computing, the recent emergence of Big Data has added more dimensions. 
Furthermore, given the range of Big Data applications, potential consequences of bad data quality can be for more disastrous and widespread. This paper provides a perspective on data quality issues in the Big Data context. it also discusses data integration issues that arise in biological databases and attendant data quality issues. --- paper_title: A Framework for Distributed Cleaning of Data Streams paper_content: Abstract Vast and ever increasing quantities of data are produced by sensors in the Internet of Things (IoT). The quality of this data can be very variable due to problems with sensors, incorrect calibration etc. Data quality can be greatly enhanced by cleaning the data before it reaches its end user. This paper reports on the construction of a distributed cleaning system (DCS) to clean data streams in real-time for an environmental case-study. A combination of declarative and statistical model based cleaning methods are applied and initial results are reported. --- paper_title: Big data and quality: A literature review paper_content: Big Data refers to data volumes in the range of Exabyte (1018) and beyond. Such volumes exceed the capacity of current on-line storage and processing systems. With characteristics like volume, velocity and variety big data throws challenges to the traditional IT establishments. Computer assisted innovation, real time data analytics, customer-centric business intelligence, industry wide decision making and transparency are possible advantages, to mention few, of Big Data. There are many issues with Big Data that warrant quality assessment methods. The issues are pertaining to storage and transport, management, and processing. This paper throws light into the present state of quality issues related to Big Data. It provides valuable insights that can be used to leverage Big Data science activities. --- paper_title: Data quality: The other face of Big Data paper_content: In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth ‘V’ of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three ‘V’s, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community. --- paper_title: A data stream outlier detection algorithm based on grid paper_content: The main aim of data stream outlier detection is to find the data stream outliers in rational time accurately. 
The existing outlier detection algorithms can find outliers in static data sets efficiently, but they are inapplicable for the dynamic data stream, and cannot find the abnormal data effectively. Due to the requirements of real-time detection, dynamic adjustment and the inapplicability of existing algorithms on data stream outlier detection, we propose a new data stream outlier detection algorithm, ODGrid, which can find the abnormal data in data stream in real time and adjust the detection results dynamically. According to the experiments on real datasets and synthetic datasets, ODGrid is superior to the existing data stream outlier detection algorithms, and it has good scalability to the dimensionality of data space. --- paper_title: Analysis and evaluation of outlier detection algorithms in data streams paper_content: Data mining is one of the most exciting fields of research for the researcher. As data is getting digitized, systems are getting connected and integrated, scope of data generation and analytics has increased exponentially. Today, most of the systems generate non-stationary data of huge, size, volume, occurrence speed, fast changing etc. these kinds of data are called data streams. One of the most recent trend i.e. IOT (Internet Of Things) is also promising lots of expectation of people which will ease the use of day to day activities and it could also connect systems and people together. This situation will also lead to generation of data streams, thus present and future scope of data stream mining is highly promising. Characteristics of data stream possess many challenges for the researcher; this makes analytics of such data difficult and also acts as source of inspiration for researcher. Outlier detection plays important role in any application. In this paper we reviewed different techniques of outlier detection for stream data and their issues in detail and presented results of the same. --- paper_title: In pursuit of outliers in multi-dimensional data streams paper_content: Among many Big Data applications are those that deal with data streams. A data stream is a sequence of data points with timestamps that possesses the properties of transiency, infiniteness, uncertainty, concept drift, and multi-dimensionality. In this paper we propose an outlier detection technique called Orion that addresses all the characteristics of data streams. Orion looks for a projected dimension of multi-dimensional data points with the help of an evolutionary algorithm, and identifies a data point as an outlier if it resides in a low-density region in that dimension. Experiments comparing Orion with existing techniques using both real and synthetic datasets show that Orion achieves an average of 7X the precision, 5X the recall, and a competitive execution time compared to existing techniques. --- paper_title: Quality awareness for a Successful Big Data Exploitation paper_content: The combination of data and technology is having a high impact on the way we live. The world is getting smarter thanks to the quantity of collected and analyzed data. However, it is necessary to consider that such amount of data is continuously increasing and it is necessary to deal with novel requirements related to variety, volume, velocity, and veracity issues. In this paper we focus on veracity that is related to the presence of uncertain or imprecise data: errors, missing or invalid data can compromise the usefulness of the collected values. 
In such a scenario, new methods and techniques able to evaluate the quality of the available data are needed. In fact, the literature provides many data quality assessment and improvement techniques, especially for structured data, but in the Big Data era new algorithms have to be designed. We aim to provide an overview of the issues and challenges related to Data Quality assessment in the Big Data scenario. We also propose a possible solution developed by considering a smart city case study and we describe the lessons learned in the design and implementation phases. --- paper_title: Representing Data Quality in Sensor Data Streaming Environments paper_content: Sensors in smart-item environments capture data about product conditions and usage to support business decisions as well as production automation processes. A challenging issue in this application area is the restricted quality of sensor data due to limited sensor precision and sensor failures. Moreover, data stream processing to meet resource constraints in streaming environments introduces additional noise and decreases the data quality. In order to avoid wrong business decisions due to dirty data, quality characteristics have to be captured, processed, and provided to the respective business task. However, the issue of how to efficiently provide applications with information about data quality is still an open research problem. In this article, we address this problem by presenting a flexible model for the propagation and processing of data quality. The comprehensive analysis of common data stream processing operators and their impact on data quality allows a fruitful data evaluation and diminishes incorrect business decisions. Further, we propose the data quality model control to adapt the data quality granularity to the data stream interestingness. --- paper_title: Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues paper_content: Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept “Internet of Things” has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. --- paper_title: Anomaly Detection Guidelines for Data Streams in Big Data paper_content: Real time data analysis in data streams is a highly challenging area in big data. 
The surge in big data techniques has recently attracted considerable interest in the detection of significant changes or anomalies in data streams. There is a variety of literature across a number of fields relevant to anomaly detection. The growing number of techniques, from seemingly disconnected areas, prevents a comprehensive review. Many interesting techniques may therefore remain largely unknown to the anomaly detection community at large. The survey presents a compact, but comprehensive overview of diverse strategies for anomaly detection in evolving data streams. A number of recommendations based on performance and applicability to use cases are provided. We expect that our classification and recommendations will provide useful guidelines to practitioners in this rapidly evolving field. --- paper_title: Big Data Pre-Processing: Closing the Data Quality Enforcement Loop paper_content: In the Big Data era, data is at the core of any governmental, institutional, and private organization. Efforts are geared towards extracting highly valuable insights, which cannot be obtained if the data is of poor quality. Therefore, data quality (DQ) is considered a key element in the Big Data processing phase. In this stage, low-quality data is prevented from penetrating the Big Data value chain. This paper addresses data quality rules discovery (DQR) after the evaluation of quality and prior to Big Data pre-processing. We propose a DQR discovery model to enhance and accurately target the pre-processing activities based on quality requirements. We define a set of pre-processing activities associated with data quality dimensions (DQDs) to automate the DQR generation process. Rule optimization is applied to the validated rules to avoid multi-pass pre-processing activities and to eliminate duplicate rules. The conducted experiments showed increased quality scores after applying the discovered and optimized DQRs to the data. --- paper_title: Evaluating the Quality of Social Media Data in Big Data Architecture paper_content: The use of freely available online data is rapidly increasing, as companies have detected the possibilities and the value of these data in their businesses. In particular, data from social media are seen as interesting as they can, when properly treated, assist in achieving customer insight into business decision making. However, the unstructured and uncertain nature of this kind of big data presents a new kind of challenge: how to evaluate the quality of data and manage the value of data within a big data architecture? This paper contributes to addressing this challenge by introducing a new architectural solution to evaluate and manage the quality of social media data in each processing phase of the big data pipeline. The proposed solution improves business decision making by providing real-time, validated data for the user. The solution is validated with an industrial case example, in which the customer insight is extracted from social media data in order to determine the customer satisfaction regarding the quality of a product. --- paper_title: Data quality in big data processing: Issues, solutions and open problems paper_content: With the rapid development of social networks, Internet of things, Cloud computing as well as other technologies, big data age is arriving. The increasing number of data has brought great value to the public and enterprises. Meanwhile how to manage and use big data better has become the focus of all walks of life.
The 4V characteristics of big data have brought a lot of issues to the big data processing. The key to big data processing is to solve data quality issue, and to ensure data quality is a prerequisite for the successful application of big data technique. In this paper, we use recommendation systems and prediction systems as typical big data applications, and try to find out the data quality issues during data collection, data preprocessing, data storage and data analysis stages of big data processing. According to the elaboration and analysis of the proposed issues, the corresponding solutions are also put forward. Finally, some open problems to be solved in the future are also raised. --- paper_title: Quality management architecture for social media data paper_content: Social media data has provided various insights into the behaviour of consumers and businesses. However, extracted data may be erroneous, or could have originated from a malicious source. Thus, quality of social media should be managed. Also, it should be understood how data quality can be managed across a big data pipeline, which may consist of several processing and analysis phases. The contribution of this paper is evaluation of data quality management architecture for social media data. The theoretical concepts based on previous work have been implemented for data quality evaluation of Twitter-based data sets. Particularly, reference architecture for quality management in social media data has been extended and evaluated based on the implementation architecture. Experiments indicate that 150–800 tweets/s can be evaluated with two cloud nodes depending on the configuration. --- paper_title: Research on real-time outlier detection over big data streams paper_content: ABSTRACTNowadays technological advances have promoted big data streams common in many applications, including mobile internet applications, internet of things, and industry production process. Outl... --- paper_title: A Data Cleaning Model for Electric Power Big Data Based on Spark Framework paper_content: The data cleaning of electrical power big data can improve the correctness, the completeness, the consistency and the reliability of the data. Aiming at the difficulties of the extracting of the unified anomaly detection pattern and the low accuracy and continuity of the anomaly data correction in the process of the electrical power big data cleaning, the data cleaning model of the electrical power big data based on Spark is proposed. Firstly, the normal clusters and the corresponding boundary samples are obtained by the improved CURE clustering algorithm. Then, the anomaly data identification algorithm based on boundary samples is designed. Finally, the anomaly data modification is realized by using exponential weighting moving mean value. The high efficiency and accuracy is proved by the experiment of the data cleaning of the wind power generation monitoring data from the wind power station. --- paper_title: Contextual anomaly detection framework for big sensor data paper_content: The ability to detect and process anomalies for Big Data in real-time is a difficult task. The volume and velocity of the data within many systems makes it difficult for typical algorithms to scale and retain their real-time characteristics. The pervasiveness of data combined with the problem that many existing algorithms only consider the content of the data source; e.g. a sensor reading itself without concern for its context, leaves room for potential improvement. 
The proposed work defines a contextual anomaly detection framework. It is composed of two distinct steps: content detection and context detection. The content detector is used to determine anomalies in real-time, while possibly, and likely, identifying false positives. The context detector is used to prune the output of the content detector, identifying those anomalies which are considered both content and contextually anomalous. The context detector utilizes the concept of profiles, which are groups of similarly grouped data points generated by a multivariate clustering algorithm. The research has been evaluated against two real-world sensor datasets provided by a local company in Brampton, Canada. Additionally, the framework has been evaluated against the open-source Dodgers dataset, available at the UCI machine learning repository, and against the R statistical toolbox. --- paper_title: A model-driven framework for data quality management in the Internet of Things paper_content: The internet of Things (IoT) is a data stream environment where a large scale deployment of smart things continuously report readings. These data streams are then consumed by pervasive applications, i.e. data consumers, to offer ubiquitous services. The data quality (DQ) is a key criteria for IoT data consumers especially when considering the inherent uncertainty of sensor-enabled data. However, DQ is a highly subjective concept and there is no standard agreement on how to determine “good” data. Moreover, the combinations of considered measured attributes and associated DQ information are as diverse as the needs of data consumers. This introduces expensive overheads for developers tasked with building DQ-aware IoT software systems which are capable of managing their own DQ information. To effectively handle these various perceptions of DQ, we propose a Model-Driven Architecture-based approach that allows each developer to easily and efficiently express, through models and other provided resources, the data consumer’s vision of DQ and its requirements using an easy-to-use graphical model editor. The defined DQ specifications are then automatically transformed to generate an entire infrastructure for DQ management that fits perfectly the data consumer’s requirements. We demonstrate the flexibility and the efficiency of our approach by generating two DQ management infrastructures built on top of different platforms and testing them through a real life data stream environment scenario. --- paper_title: Contextual Anomaly Detection in Big Sensor Data paper_content: Performing predictive modelling, such as anomaly detection, in Big Data is a difficult task. This problem is compounded as more and more sources of Big Data are generated from environmental sensors, logging applications, and the Internet of Things. Further, most current techniques for anomaly detection only consider the content of the data source, i.e. the data itself, without concern for the context of the data. As data becomes more complex it is increasingly important to bias anomaly detection techniques for the context, whether it is spatial, temporal, or semantic. The work proposed in this paper outlines a contextual anomaly detection technique for use in streaming sensor networks. The technique uses a well-defined content anomaly detection algorithm for real-time point anomaly detection. 
Additionally, we present a post-processing context-aware anomaly detection algorithm based on sensor profiles, which are groups of contextually similar sensors generated by a multivariate clustering algorithm. Our proposed research has been implemented and evaluated with real-world data provided by Powersmiths, located in Brampton, Ontario, Canada. --- paper_title: Outlier Detection for Temporal Data: A Survey paper_content: In the statistics community, outlier detection for time series data has been studied for decades. Recently, with advances in hardware and software technology, there has been a large body of work on temporal outlier detection from a computational perspective within the computer science community. In particular, advances in hardware technology have enabled the availability of various forms of temporal data collection mechanisms, and advances in software technology have enabled a variety of data management mechanisms. This has fueled the growth of different kinds of data sets such as data streams, spatio-temporal data, distributed streams, temporal networks, and time series data, generated by a multitude of applications. There arises a need for an organized and detailed study of the work done in the area of outlier detection with respect to such temporal datasets. In this survey, we provide a comprehensive and structured overview of a large set of interesting outlier definitions for various forms of temporal data, novel techniques, and application scenarios in which specific definitions and techniques have been widely used. --- paper_title: Context-aware data quality assessment for big data paper_content: Abstract Big data changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such amount of data can create a real value only if combined with quality: good decisions and actions are the results of correct, reliable and complete data. In such a scenario, methods and techniques for the Data Quality assessment can support the identification of suitable data to process. If for traditional database numerous assessment methods are proposed, in the Big Data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume and velocity issues. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and context in which data have to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest to focus the Data Quality assessment only on a portion of the dataset and to take into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a Data Quality adapter module, which selects the best configuration for the Data Quality assessment based on the user main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study. 
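The context-aware assessment approach summarised directly above evaluates quality on only a portion of a dataset and attaches a confidence factor to the result. Purely as a hedged illustration of that general idea (not the authors' actual module), the following Python sketch computes simple completeness and range-consistency scores on a random sample and reports a confidence value tied to the sampled fraction; the metric definitions, the linear confidence heuristic, the [-40, 60] plausibility bounds and the record layout are all assumptions introduced for this example.

```python
import random

def assess_quality(records, sample_fraction=0.2, seed=42):
    """Estimate completeness and a simple range-consistency score on a sample,
    returning the scores together with a confidence factor for the estimate."""
    random.seed(seed)
    n = max(1, int(len(records) * sample_fraction))
    sample = random.sample(records, n)

    # Completeness: fraction of non-missing values across all fields in the sample.
    total = sum(len(r) for r in sample)
    present = sum(1 for r in sample for v in r.values() if v is not None)
    completeness = present / total if total else 0.0

    # Consistency: fraction of temperature readings inside a plausible range
    # (the [-40, 60] bounds are an assumption made for this sketch).
    temps = [r["temperature"] for r in sample if r.get("temperature") is not None]
    consistent = sum(1 for t in temps if -40.0 <= t <= 60.0)
    consistency = consistent / len(temps) if temps else 1.0

    # Confidence grows with the sampled fraction (a deliberately naive heuristic).
    confidence = min(1.0, sample_fraction)

    return {"completeness": completeness,
            "consistency": consistency,
            "confidence": confidence}

if __name__ == "__main__":
    data = [{"temperature": 21.5, "humidity": 40.0},
            {"temperature": None, "humidity": 38.2},
            {"temperature": 120.0, "humidity": 41.1},   # implausible reading
            {"temperature": 19.8, "humidity": None}] * 50
    print(assess_quality(data, sample_fraction=0.25))
```

In a real pipeline the sample fraction would be chosen by the adapter module described in the abstract, trading assessment time and budget against confidence.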
--- paper_title: Outlier Detection Techniques for Wireless Sensor Networks: A Survey paper_content: In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and specific requirements and limitations of the wireless sensor networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for the wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline to select a technique suitable for the application at hand based on characteristics such as data type, outlier type, outlier identity, and outlier degree. --- paper_title: Incorporating quality aspects in sensor data streams paper_content: Sensors are increasingly embedded into physical products in order to capture data about their conditions and usage for decision making in business applications. However, a major issue for such applications is the limited quality of the captured data due to inherently restricted precision and performance of the sensors. Moreover, the data quality is further decreased by data processing to meet resource constraints in streaming environments and ultimately influences business decisions. The issue of how to efficiently provide applications with information about data quality (DQ) is still an open research problem. In my Ph.D. thesis, I address this problem by developing a system to provide business applications with accurate information on data quality. Furthermore, the system will be able to incorporate and guarantee user-defined data quality levels. In this paper, I will present the major results from my research so far. This includes a novel jumping-window-based approach for the efficient transfer of data quality information as well as a flexible metamodel for storage and propagation of data quality. The comprehensive analysis of common data processing operators w.r.t. their impact on data quality allows a fruitful knowledge evaluation and thus diminishes incorrect business decisions. --- paper_title: Online outlier detection for data streams paper_content: Outlier detection is a well established area of statistics but most of the existing outlier detection techniques are designed for applications where the entire dataset is available for random access. A typical outlier detection technique constructs a standard data distribution or model and identifies the deviated data points from the model as outliers. Evidently these techniques are not suitable for online data streams where the entire dataset, due to its unbounded volume, is not available for random access. Moreover, the data distribution in data streams change over time which challenges the existing outlier detection techniques that assume a constant standard data distribution for the entire dataset. In addition, data streams are characterized by uncertainty which imposes further complexity. In this paper we propose an adaptive, online outlier detection technique addressing the aforementioned characteristics of data streams, called Adaptive Outlier Detection for Data Streams (A-ODDS), which identifies outliers with respect to all the received data points as well as temporally close data points. 
The temporally close data points are selected based on time and change of data distribution. We also present an efficient and online implementation of the technique and a performance study showing the superiority of A-ODDS over existing techniques in terms of accuracy and execution time on a real-life dataset collected from meteorological applications. --- paper_title: Quality awareness for a Successful Big Data Exploitation paper_content: The combination of data and technology is having a high impact on the way we live. The world is getting smarter thanks to the quantity of collected and analyzed data. However, it is necessary to consider that such amount of data is continuously increasing and it is necessary to deal with novel requirements related to variety, volume, velocity, and veracity issues. In this paper we focus on veracity that is related to the presence of uncertain or imprecise data: errors, missing or invalid data can compromise the usefulness of the collected values. In such a scenario, new methods and techniques able to evaluate the quality of the available data are needed. In fact, the literature provides many data quality assessment and improvement techniques, especially for structured data, but in the Big Data era new algorithms have to be designed. We aim to provide an overview of the issues and challenges related to Data Quality assessment in the Big Data scenario. We also propose a possible solution developed by considering a smart city case study and we describe the lessons learned in the design and implementation phases. --- paper_title: Big data preprocessing: methods and prospects paper_content: The massive growth in the scale of data has been observed in recent years being a key factor of the Big Data scenario. Big Data can be defined as high volume, velocity and variety of data that require a new high-performance processing. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The presence of data preprocessing methods for data mining in big data is reviewed in this paper. The definition, characteristics, and categorization of data preprocessing approaches in big data are introduced. The connection between big data and data preprocessing throughout all families of methods and big data technologies are also examined, including a review of the state-of-the-art. In addition, research challenges are discussed, with focus on developments on different big data framework, such as Hadoop, Spark and Flink and the encouragement in devoting substantial research efforts in some families of data preprocessing methods and applications on new big data learning paradigms. --- paper_title: Big Data Pre-processing: A Quality Framework paper_content: With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. 
In addition, it tracks and registers on a data provenance repository the effect of every data transformation that happens in the pre-processing phase. We evaluate the data quality selection module using a large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of the Big Data processing lifecycle, since it significantly saves on costs and supports accurate data analysis. --- paper_title: A model-based approach for RFID data stream cleansing paper_content: In recent years, RFID technologies have been used in many applications, such as inventory checking and object tracking. However, raw RFID data are inherently unreliable due to physical device limitations and different kinds of environmental noise. Currently, existing work mainly focuses on RFID data cleansing in a static environment (e.g. inventory checking). It is therefore difficult to cleanse RFID data streams in a mobile environment (e.g. object tracking) using the existing solutions, which do not address the data missing issue effectively. In this paper, we study how to cleanse RFID data streams for object tracking, which is a challenging problem, since a significant percentage of readings are routinely dropped. We propose a probabilistic model for object tracking in a mobile environment. We develop a Bayesian inference based approach for cleansing RFID data using the model. In order to sample data from the movement distribution, we devise a sequential sampler that cleans RFID data with high accuracy and efficiency. We validate the effectiveness and robustness of our solution through extensive simulations and demonstrate its performance by using two real RFID applications of human tracking and conveyor belt monitoring. --- paper_title: A Big Data Framework for Electric Power Data Quality Assessment paper_content: Since low-quality data may influence the effectiveness and reliability of applications, data quality must be guaranteed. Data quality assessment is considered the foundation for improving data quality, so it is essential to assess data quality before any other data-related activities. In the electric power industry, more and more electric power data is continuously accumulated, and many electric power applications have been developed based on these data. In China, the power grid has many special characteristics, so traditional big data assessment frameworks cannot be directly applied. Therefore, a big data framework for electric power data quality assessment is proposed. Based on big data techniques, the framework can accumulate both real-time data and historical data, provide an integrated computation environment for electric power big data assessment, and support the storage of different types of data. --- paper_title: Advancements of Data Anomaly Detection Research in Wireless Sensor Networks: A Survey and Open Issues paper_content: Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept “Internet of Things” has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions.
In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed. --- paper_title: Big Data Validation and Quality Assurance -- Issuses, Challenges, and Needs paper_content: With the fast advance of big data technology and analytics solutions, big data computing and service is becoming a very hot research and application subject in academic research, industry community, and government services. Nevertheless, there are increasing data quality problems resulting in erroneous data costs in enterprises and businesses. Current research seldom discusses how to effectively validate big data to ensure data quality. This paper provides informative discussions for big data validation and quality assurance, including the essential concepts, focuses, and validation process. Moreover, the paper presents a comparison among big data validation tools and several major players in industry are discussed. Furthermore, the primary issues, challenges, and needs are discussed. --- paper_title: A Big Data Online Cleaning Algorithm Based on Dynamic Outlier Detection paper_content: To effectively clean large-scale, mixed and inaccurate monitoring or collected data, reduce the cost of the data cache and ensure consistent deviation detection on the timing data of each cycle, a big data online cleaning algorithm based on dynamic outlier detection is proposed. The data cleaning method is improved by density-based local outlier detection and uniform cluster sampling that dilutes the Euclidean distance matrix, while retaining some corrections for the next cleaning cycle; this avoids the overall cleaning deviation that sampling can cause and reduces the amount of calculation within the stable period of data cleaning, greatly enhancing speed. Finally, distributed solutions for the online cleaning algorithm are presented based on the Hadoop platform. --- paper_title: Distributed online outlier detection in wireless sensor networks using ellipsoidal support vector machine paper_content: Low quality sensor data limits WSN capabilities for providing reliable real-time situation-awareness. Outlier detection is a solution to ensure the quality of sensor data. An effective and efficient outlier detection technique for WSNs not only identifies outliers in a distributed and online manner with high detection accuracy and low false alarm, but also satisfies WSN constraints in terms of communication, computational and memory complexity. In this paper, we take into account the correlation between sensor data attributes and propose two distributed and online outlier detection techniques based on a hyperellipsoidal one-class support vector machine (SVM). We also take advantage of the theory of spatio-temporal correlation to identify outliers and update the ellipsoidal SVM-based model representing the changed normal behavior of sensor data for further outlier identification.
Simulation results show that our adaptive ellipsoidal SVM-based outlier detection technique achieves better detection accuracy and lower false alarm as compared to existing SVM-based techniques designed for WSNs. --- paper_title: An in-network data cleaning approach for wireless sensor networks paper_content: Wireless Sensor Networks (WSNs) are widely used for monitoring physical happenings of the environment. However, the data gathered by the WSNs may be inaccurate and unreliable due to power exhaustion, noise and other reasons. Unnecessary data such as erroneous data and redundant data transmission causes a lot of extra energy consumption. To improve the data reliability and reduce the energy consumption, we proposed an in-network processing architecture for data cleaning, which divides the task into four stages implemented in different nodes respectively. This strategy guaranteed the cleaning algorithms were computationally lightweight in local nodes and energy-efficient due to almost no communication overhead. In addition, we presented the detection algorithms for data faults and event outliers, which were conducted by utilizing the related attributes from the local sensor node and the cooperation with its relaying neighbor. Experimental results show that our proposed approach is accurate and energy-... --- paper_title: Data quality in big data processing: Issues, solutions and open problems paper_content: With the rapid development of social networks, Internet of things, Cloud computing as well as other technologies, big data age is arriving. The increasing number of data has brought great value to the public and enterprises. Meanwhile how to manage and use big data better has become the focus of all walks of life. The 4V characteristics of big data have brought a lot of issues to the big data processing. The key to big data processing is to solve data quality issue, and to ensure data quality is a prerequisite for the successful application of big data technique. In this paper, we use recommendation systems and prediction systems as typical big data applications, and try to find out the data quality issues during data collection, data preprocessing, data storage and data analysis stages of big data processing. According to the elaboration and analysis of the proposed issues, the corresponding solutions are also put forward. Finally, some open problems to be solved in the future are also raised. --- paper_title: Adaptive Pre-processing and Regression of Weather Data paper_content: With the evolution of data and increasing popularity of IoT (Internet of Things), stream data mining has gained immense popularity. Researchers and developers are trying to analyze data patterns obtained from various devices. Stream data have several characteristics, the most important being its huge volume and high velocity. Although a lot of research is being conducted in order to develop more efficient stream data mining techniques, pre-processing of stream data is an area that is under-studied. Real-time applications generate data which is rather noisy and contains missing values. Apart from this, there is the issue of data evolution, which is a concern when dealing with stream data. To deal with the evolution of data, the proposed solution offers a hybrid of preprocessing techniques which are adaptive in nature. As a result of the study, an adaptive preprocessing and learning approach is implemented.
The case study with sensor weather data demonstrates the results and accuracy of the proposed solution. --- paper_title: Research on real-time outlier detection over big data streams paper_content: ABSTRACTNowadays technological advances have promoted big data streams common in many applications, including mobile internet applications, internet of things, and industry production process. Outl... --- paper_title: Distributed Top-N local outlier detection in big data paper_content: The concept of Top-N local outlier that focuses on the detection of the N points with the largest Local Outlier Factor (LOF) score has been shown to be very effective for identifying outliers in big datasets. However, detecting Top-N local outliers is computationally expensive, since the computation of LOF scores for all data points requires a huge number of high complexity k-nearest neighbor (kNN) searches. In this work, we thus present the first distributed solution to tackle this problem of Top-N local outlier detection (DTOLF). First, DTOLF features an innovative safe elimination strategy that efficiently identifies dually-safe points, namely those that are guaranteed to (1) not be classified as Top-N outliers and (2) not be needed as neighbors of points residing on other machines. Therefore, it effectively minimizes both the processing and communication costs of the Top-N outlier detection process. Further, based on the well-accepted observation that strong correlations among attributes are prevalent in real world datasets, we propose correlation-aware optimization strategies that ensure the effectiveness of grid-based partitioning and of the safe elimination strategy in multi-dimensional datasets. Our extensive experimental evaluation on OpenStreetMap, SDSS, and TIGER datasets demonstrates the effectiveness of DTOLF — up to 10 times faster than the alternative methods and scaling to terabyte level datasets. --- paper_title: An efficient algorithm for distributed density-based outlier detection on big data paper_content: The outlier detection is a popular issue in the area of data management and multimedia analysis, and it can be used in many applications such as detection of noisy images, credit card fraud detection, network intrusion detection. The density-based outlier is an important definition of outlier, whose target is to compute a Local Outlier Factor (LOF) for each tuple in a data set to represent the degree of this tuple to be an outlier. It shows several significant advantages comparing with other existing definitions. This paper focuses on the problem of distributed density-based outlier detection for large-scale data. First, we propose a Gird-Based Partition algorithm (GBP) as a data preparation method. GBP first splits the data set into several grids, and then allocates these grids to the datanodes in a distributed environment. Second, we propose a Distributed LOF Computing method (DLC) for detecting density-based outliers in parallel, which only needs a small amount of network communications. At last, the efficiency and effectiveness of the proposed approaches are verified through a series of simulation experiments. --- paper_title: A Data Cleaning Model for Electric Power Big Data Based on Spark Framework paper_content: The data cleaning of electrical power big data can improve the correctness, the completeness, the consistency and the reliability of the data. 
Aiming at the difficulties of the extracting of the unified anomaly detection pattern and the low accuracy and continuity of the anomaly data correction in the process of the electrical power big data cleaning, the data cleaning model of the electrical power big data based on Spark is proposed. Firstly, the normal clusters and the corresponding boundary samples are obtained by the improved CURE clustering algorithm. Then, the anomaly data identification algorithm based on boundary samples is designed. Finally, the anomaly data modification is realized by using exponential weighting moving mean value. The high efficiency and accuracy is proved by the experiment of the data cleaning of the wind power generation monitoring data from the wind power station. --- paper_title: Contextual anomaly detection framework for big sensor data paper_content: The ability to detect and process anomalies for Big Data in real-time is a difficult task. The volume and velocity of the data within many systems makes it difficult for typical algorithms to scale and retain their real-time characteristics. The pervasiveness of data combined with the problem that many existing algorithms only consider the content of the data source; e.g. a sensor reading itself without concern for its context, leaves room for potential improvement. The proposed work defines a contextual anomaly detection framework. It is composed of two distinct steps: content detection and context detection. The content detector is used to determine anomalies in real-time, while possibly, and likely, identifying false positives. The context detector is used to prune the output of the content detector, identifying those anomalies which are considered both content and contextually anomalous. The context detector utilizes the concept of profiles, which are groups of similarly grouped data points generated by a multivariate clustering algorithm. The research has been evaluated against two real-world sensor datasets provided by a local company in Brampton, Canada. Additionally, the framework has been evaluated against the open-source Dodgers dataset, available at the UCI machine learning repository, and against the R statistical toolbox. --- paper_title: An Electric Power Sensor Data Oriented Data Cleaning Solution paper_content: With the development of Smart Grid Technology, more and more electric power sensor data are utilized in various electric power systems. To guarantee the effectiveness of such systems, it is necessary to ensure the quality of electric power sensor data, especially when the scale of electric power sensor data is large. In the field of large-scale electric power sensor data cleaning, the computational efficiency and accuracy of data cleaning are two vital requirements. In order to satisfy these requirements, this paper presents an electric power sensor data oriented data cleaning solution, which is composed of a data cleaning framework and a data cleaning method. Based on Hadoop, the given framework is able to support large-scale electric power sensor data acquisition, storage and processing. Meanwhile, the proposed method which achieves outlier detection and reparation is implemented on the basis of a time-relevant k-means clustering algorithm in Spark. The feasibility and effectiveness of the proposed method is evaluated on a data set which originates from charging piles. 
Experimental results show that the proposed data cleaning method is able to improve the data quality of electric power sensor data by finding and repairing most outliers. For large-scale electric power sensor data, the proposed data cleaning method has high parallel performance and strong scalability. --- paper_title: A Framework for Distributed Cleaning of Data Streams paper_content: Abstract Vast and ever increasing quantities of data are produced by sensors in the Internet of Things (IoT). The quality of this data can be very variable due to problems with sensors, incorrect calibration etc. Data quality can be greatly enhanced by cleaning the data before it reaches its end user. This paper reports on the construction of a distributed cleaning system (DCS) to clean data streams in real-time for an environmental case-study. A combination of declarative and statistical model based cleaning methods are applied and initial results are reported. --- paper_title: Data quality: The other face of Big Data paper_content: In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth ‘V’ of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three ‘V’s, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community. --- paper_title: Contextual Anomaly Detection in Big Sensor Data paper_content: Performing predictive modelling, such as anomaly detection, in Big Data is a difficult task. This problem is compounded as more and more sources of Big Data are generated from environmental sensors, logging applications, and the Internet of Things. Further, most current techniques for anomaly detection only consider the content of the data source, i.e. the data itself, without concern for the context of the data. As data becomes more complex it is increasingly important to bias anomaly detection techniques for the context, whether it is spatial, temporal, or semantic. The work proposed in this paper outlines a contextual anomaly detection technique for use in streaming sensor networks. The technique uses a well-defined content anomaly detection algorithm for real-time point anomaly detection. Additionally, we present a post-processing context-aware anomaly detection algorithm based on sensor profiles, which are groups of contextually similar sensors generated by a multivariate clustering algorithm. Our proposed research has been implemented and evaluated with real-world data provided by Powersmiths, located in Brampton, Ontario, Canada. 
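The contextual anomaly detection work summarised immediately above combines a fast content detector with a context detector that prunes false positives using profiles of similar sensors. The Python sketch below is a simplified, hypothetical rendering of that two-stage pipeline, using a z-score test as the content detector and the mean of profile peers as the context check; the thresholds, the fixed profile assignment and the data layout are illustrative assumptions rather than the published algorithm.

```python
from statistics import mean, pstdev

def content_detector(series, z_threshold=2.0):
    """Flag indices whose value deviates from the series mean by more than
    z_threshold standard deviations (a simple point-anomaly check; the
    threshold is illustrative and only suits short series like the demo)."""
    mu, sigma = mean(series), pstdev(series)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(series) if abs(v - mu) / sigma > z_threshold]

def context_detector(value, peer_values, tolerance=5.0):
    """Confirm an anomaly only if it also deviates from the mean of the
    contextually similar sensors by more than the tolerance."""
    return abs(value - mean(peer_values)) > tolerance

def contextual_anomalies(readings, profiles):
    """readings: {sensor_id: [values]}; profiles: {sensor_id: [peer ids]}."""
    confirmed = {}
    for sensor, series in readings.items():
        candidates = content_detector(series)        # step 1: content detection
        peers = profiles.get(sensor, [])
        kept = []
        for i in candidates:                         # step 2: context pruning
            peer_values = [readings[p][i] for p in peers if p in readings]
            # Without peer data we conservatively keep the content-level anomaly.
            if not peer_values or context_detector(series[i], peer_values):
                kept.append(i)
        confirmed[sensor] = kept
    return confirmed

if __name__ == "__main__":
    readings = {
        "s1": [20, 21, 20, 55, 20, 21],   # isolated spike at index 3
        "s2": [20, 20, 21, 21, 21, 20],   # peers stay flat
        "s3": [19, 20, 20, 20, 20, 19],
    }
    profiles = {"s1": ["s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"]}
    print(contextual_anomalies(readings, profiles))  # expected: {'s1': [3], 's2': [], 's3': []}
```

In the cited framework the profiles come from multivariate clustering of contextually similar sensors rather than the fixed peer lists assumed here.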
--- paper_title: Context-aware data quality assessment for big data paper_content: Abstract Big data changed the way in which we collect and analyze data. In particular, the amount of available information is constantly growing and organizations rely more and more on data analysis in order to achieve their competitive advantage. However, such amount of data can create a real value only if combined with quality: good decisions and actions are the results of correct, reliable and complete data. In such a scenario, methods and techniques for the Data Quality assessment can support the identification of suitable data to process. If for traditional database numerous assessment methods are proposed, in the Big Data scenario new algorithms have to be designed in order to deal with novel requirements related to variety, volume and velocity issues. In particular, in this paper we highlight that dealing with heterogeneous sources requires an adaptive approach able to trigger the suitable quality assessment methods on the basis of the data type and context in which data have to be used. Furthermore, we show that in some situations it is not possible to evaluate the quality of the entire dataset due to performance and time constraints. For this reason, we suggest to focus the Data Quality assessment only on a portion of the dataset and to take into account the consequent loss of accuracy by introducing a confidence factor as a measure of the reliability of the quality assessment procedure. We propose a methodology to build a Data Quality adapter module, which selects the best configuration for the Data Quality assessment based on the user main requirements: time minimization, confidence maximization, and budget minimization. Experiments are performed by considering real data gathered from a smart city case study. --- paper_title: Context aware model-based cleaning of data streams paper_content: Despite advances in sensor technology, there are a number of problems that continue to require attention. Sensors fail due to low battery power, poor calibration, exposure to the elements and interference to name but a few factors. This can have a negative effect on data quality, which can however be improved by data cleaning. In particular, models can learn characteristics of data to detect and replace incorrect values. The research presented in this paper focuses on the building of models of environmental sensor data that can incorporate context awareness about the sampling locations. These models have been tested and validated both for static and streaming data. We show that contextual models demonstrate favourable outcomes when used to clean streaming data. --- paper_title: Adaptive and online data anomaly detection for wireless sensor systems paper_content: Wireless sensor networks (WSNs) are increasingly used as platforms for collecting data from unattended environments and monitoring important events in phenomena. However, sensor data is affected by anomalies that occur due to various reasons, such as, node software or hardware failures, reading errors, unusual events, and malicious attacks. Therefore, effective, efficient, and real time detection of anomalous measurement is required to guarantee the quality of data collected by these networks. In this paper, two efficient and effective anomaly detection models PCCAD and APCCAD are proposed for static and dynamic environments, respectively. 
Both models utilize the One-Class Principal Component Classifier (OCPCC) to measure the dissimilarity between sensor measurements in the feature space. The proposed APCCAD model incorporates an incremental learning method that is able to track the dynamic normal changes of data streams in the monitored environment. The efficiency and effectiveness of the proposed models are demonstrated using real life datasets collected by real sensor network projects. Experimental results show that the proposed models have advantages over existing models in terms of efficient utilization of sensor limited resources. The results further reveal that the proposed models achieve better detection effectiveness in terms of high detection accuracy with low false alarms especially for dynamic environmental data streams compared to some existing models. --- paper_title: Bleach: A Distributed Stream Data Cleaning System paper_content: Existing scalable data cleaning approaches have focused on batch data cleaning. However, batch data cleaning is not suitable for streaming big data systems, in which dynamic data is generated continuously. Despite the increasing popularity of stream-processing systems, few stream data cleaning techniques have been proposed so far. In this paper, we bridge this gap by addressing the problem of rule-based stream data cleaning, which sets stringent requirements on latency, rule dynamics and ability to cope with the continuous nature of data streams. We design a system, called Bleach, which achieves real-time violation detection and data repair on a dirty data stream. Bleach relies on efficient, compact and distributed data structures to maintain the necessary state to repair data. Additionally, it supports rule dynamics and uses a "cumulative" sliding window operation to improve cleaning accuracy. We evaluate a prototype of Bleach using both synthetic and real data streams and experimentally validate its high throughput, low latency and high cleaning accuracy, which are preserved even with rule dynamics. ---
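Bleach, as summarised above, detects rule violations and repairs records over a sliding window of the stream. The sketch below is a much-simplified, hypothetical illustration of that idea: a single functional-dependency-style rule (one zip code should map to one city) is checked inside a fixed-size window, and the majority value is used as the repair. The window size, the rule and the repair policy are all assumptions made for this example, not Bleach's actual design.

```python
from collections import Counter, deque

class WindowedRuleCleaner:
    """Check a zip -> city dependency over a sliding window and repair
    violating records with the majority city seen for that zip."""

    def __init__(self, window_size=100):
        self.window = deque(maxlen=window_size)

    def process(self, record):
        """record: dict with 'zip' and 'city'; returns a possibly repaired copy."""
        self.window.append(record)
        cities = Counter(r["city"] for r in self.window if r["zip"] == record["zip"])
        majority_city, _ = cities.most_common(1)[0]
        if record["city"] != majority_city:
            return dict(record, city=majority_city)   # repair the violation
        return record

if __name__ == "__main__":
    cleaner = WindowedRuleCleaner(window_size=5)
    stream = [
        {"zip": "10001", "city": "New York"},
        {"zip": "10001", "city": "New York"},
        {"zip": "10001", "city": "Newark"},          # violates the dependency
        {"zip": "94105", "city": "San Francisco"},
    ]
    for rec in stream:
        print(cleaner.process(rec))
```

Bleach itself additionally maintains compact distributed state, supports rule dynamics and uses a cumulative sliding window to improve accuracy, none of which this toy example attempts.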
Title: Big Data Quality: A systematic literature review and future research directions Section 1: Introduction Description 1: Write about the advances in ICT, definition of Big Data, its impact on business effectiveness, the need for data quality assessment, and the organization of the paper. Section 2: Search Methodology Description 2: Describe the systematic literature review (SLR) methods used, including planning and conducting phases, specifying related conferences and journals, and inclusion and exclusion criteria for studies. Section 3: Research Tree Description 3: Explain the research tree obtained from the study, dividing the studies based on their processing model, task, technique used, and the categorization of the methods. Section 4: Stream Processing Methods Description 4: Discuss the methods for stream processing, categorized into outlier detection, evaluation, cleaning, and review papers, along with the techniques used in outlier detection. Section 5: Batch Processing Methods Description 5: Describe the methods that use batch processing models to improve the quality of big data, categorized into outlier detection, evaluation, cleaning, and review papers. Include the techniques used in these methods. Section 6: Hybrid Methods Description 6: Present methods that use both batch and stream data, discuss how static data can be used to build models for data streams, and categorize the works into outlier detection, evaluation, and cleaning. Section 7: Comparative Evaluation Description 7: Provide a comparative evaluation of review papers, including those focused on stream and batch processing, and compare them based on various criteria such as publication date, number of studies, processing type, and tasks. Section 8: Results Description 8: Present the results of the systematic review, including tag cloud analysis, trends in the number of papers published per year, distribution of study types, analysis of tasks and techniques used, and the application domains of the studies. Section 9: Challenges and Future Works Description 9: Discuss the challenges related to big data quality, categorizing them into source dependent, inherent, and technique dependent. Also, outline future research directions to address unsolved problems and improve data quality assessment methods. Section 10: Conclusion Description 10: Summarize the key points of the systematic literature review, highlighting the importance of big data quality, active regions and venues, research trends, and recommendations for future work in this area.
Molecular simulations have boosted knowledge of CRISPR/Cas9: A Review
8
--- paper_title: Initial sequencing and analysis of the human genome. paper_content: The human genome holds an extraordinary trove of information about human development, physiology, medicine and evolution. Here we report the results of an international collaboration to produce and make freely available a draft sequence of the human genome. We also present an initial analysis of the data, describing some of the insights that can be gleaned from the sequence. --- paper_title: Human Genome-Edited Babies: First Responder with Concerns Regarding Possible Neurological Deficits! paper_content: The ultimate outcome in genome-editing research stepped into unknown territories last month when two babies were brought into the world with clustered regularly interspaced short palindromic repeats (CRISPR)–CRISPR-associated protein 9 (Cas9) facilitated knockdown of chemokine receptor 5 (CCR5). An immediate outcry by the public and the scientific community followed, which is still ongoing with much apprehensions and criticism of the ethical and scientific aspects of the procedure and its effects on the future of genome editing needed in other stubborn inheritable diseases for which there is no cure at present. With the debate on the consequences of this particular receptor knockdown still going on and the after-shocks in the form of queries expected to continue for some time in the future, we enter the arena of this particular genome editing as first responders with concerns regarding the neurological aftermath of CCR5 knockout in the babies born. --- paper_title: The CRISPR-Cas immune system : Biology, mechanisms and applications paper_content: Viruses are a common threat to cellular life, not the least to bacteria and archaea who constitute the majority of life on Earth. Consequently, a variety of mechanisms to resist virus infection has ... --- paper_title: Molecular biology at the cutting edge: A review on CRISPR/CAS9 gene editing for undergraduates paper_content: Disrupting a gene to determine its effect on an organism's phenotype is an indispensable tool in molecular biology. Such techniques are critical for understanding how a gene product contributes to the development and cellular identity of organisms. The explosion of genomic sequencing technologies combined with recent advances in genome-editing techniques has elevated the possibilities of genetic manipulations in numerous organisms in which these experiments were previously not readily accessible or possible. Introducing the next generation of molecular biologists to these emerging techniques is key in the modern biology classroom. This comprehensive review introduces undergraduates to CRISPR/Cas9 editing and its uses in genetic studies. The goals of this review are to explain how CRISPR functions as a prokaryotic immune system, describe how researchers generate mutations with CRISPR/Cas9, highlight how Cas9 has been adapted for new functions, and discuss ethical considerations of genome editing. Additionally, anticipatory guides and questions for discussion are posed throughout the review to encourage active exploration of these topics in the classroom. Finally, the supplement includes a study guide and practical suggestions to incorporate CRISPR/Cas9 experiments into lab courses at the undergraduate level. © 2018 The Authors Biochemistry and Molecular Biology Education published by Wiley Periodicals, Inc. on behalf of International Union of Biochemistry and Molecular Biology, 46(2):195-205, 2018. 
--- paper_title: Diversity and evolution of class 2 CRISPR–Cas systems paper_content: Class 2 CRISPR-Cas systems are characterized by effector modules that consist of a single multidomain protein, such as Cas9 or Cpf1. We designed a computational pipeline for the discovery of novel class 2 variants and used it to identify six new CRISPR-Cas subtypes. The diverse properties of these new systems provide potential for the development of versatile tools for genome editing and regulation. In this Analysis article, we present a comprehensive census of class 2 types and class 2 subtypes in complete and draft bacterial and archaeal genomes, outline evolutionary scenarios for the independent origin of different class 2 CRISPR-Cas systems from mobile genetic elements, and propose an amended classification and nomenclature of CRISPR-Cas. --- paper_title: Classification and Nomenclature of CRISPR-Cas Systems: Where from Here? paper_content: Abstract As befits an immune mechanism, CRISPR-Cas systems are highly variable with respect to Cas protein sequences, gene composition, and organization of the genomic loci. Optimal classification ... --- paper_title: Cas9 versus Cas12a/Cpf1: Structure-function comparisons and implications for genome editing. paper_content: Cas9 and Cas12a are multidomain CRISPR-associated nucleases that can be programmed with a guide RNA to bind and cleave complementary DNA targets. The guide RNA sequence can be varied, making these effector enzymes versatile tools for genome editing and gene regulation applications. While Cas9 is currently the best-characterized and most widely used nuclease for such purposes, Cas12a (previously named Cpf1) has recently emerged as an alternative for Cas9. Cas9 and Cas12a have distinct evolutionary origins and exhibit different structural architectures, resulting in distinct molecular mechanisms. Here we compare the structural and mechanistic features that distinguish Cas9 and Cas12a, and describe how these features modulate their activity. We discuss implications for genome editing, and how they may influence the choice of Cas9 or Cas12a for specific applications. Finally, we review recent studies in which Cas12a has been utilized as a genome editing tool. This article is categorized under: RNA Interactions with Proteins and Other Molecules > Protein-RNA Interactions: Functional Implications Regulatory RNAs/RNAi/Riboswitches > Biogenesis of Effector Small RNAs RNA Interactions with Proteins and Other Molecules > RNA-Protein Complexes. --- paper_title: Evolution and classification of the CRISPR–Cas systems paper_content: The CRISPR-Cas (clustered regularly interspaced short palindromic repeats-CRISPR-associated proteins) modules are adaptive immunity systems that are present in many archaea and bacteria. These defence systems are encoded by operons that have an extraordinarily diverse architecture and a high rate of evolution for both the cas genes and the unique spacer content. Here, we provide an updated analysis of the evolutionary relationships between CRISPR-Cas systems and Cas proteins. Three major types of CRISPR-Cas system are delineated, with a further division into several subtypes and a few chimeric variants. Given the complexity of the genomic architectures and the extremely dynamic evolution of the CRISPR-Cas systems, a unified classification of these systems should be based on multiple criteria. 
Accordingly, we propose a 'polythetic' classification that integrates the phylogenies of the most common cas genes, the sequence and organization of the CRISPR repeats and the architecture of the CRISPR-cas loci. --- paper_title: Diversity, classification and evolution of CRISPR-Cas systems paper_content: The bacterial and archaeal CRISPR-Cas systems of adaptive immunity show remarkable diversity of protein composition, effector complex structure, genome locus architecture and mechanisms of adaptation, pre-CRISPR (cr)RNA processing and interference. The CRISPR-Cas systems belong to two classes, with multi-subunit effector complexes in Class 1 and single-protein effector modules in Class 2. Concerted genomic and experimental efforts on comprehensive characterization of Class 2 CRISPR-Cas systems led to the identification of two new types and several subtypes. The newly characterized type VI systems are the first among the CRISPR-Cas variants to exclusively target RNA. Unexpectedly, in some of the class 2 systems, the effector protein is additionally responsible for the pre-crRNA processing. Comparative analysis of the effector complexes indicates that Class 2 systems evolved from mobile genetic elements on multiple, independent occasions. --- paper_title: CRISPR–Cas9 Mediated DNA Unwinding Detected Using Site-Directed Spin Labeling paper_content: The RNA-guided CRISPR–Cas9 nuclease has revolutionized genome engineering, yet its mechanism for DNA target selection is not fully understood. A crucial step in Cas9 target recognition involves unwinding of the DNA duplex to form a three-stranded R-loop structure. Work reported here demonstrates direct detection of Cas9-mediated DNA unwinding by a combination of site-directed spin labeling and molecular dynamics simulations. The results support a model in which the unwound nontarget strand is stabilized by a positively charged patch located between the two nuclease domains of Cas9 and reveal uneven increases in flexibility along the unwound nontarget strand upon scissions of the DNA backbone. This work establishes the synergistic combination of spin-labeling and molecular dynamics to directly monitor Cas9-mediated DNA conformational changes and yields information on the target DNA in different stages of Cas9 function, thus advancing mechanistic understanding of CRISPR–Cas9 and aiding future technological ... --- paper_title: Structures of a CRISPR-Cas9 R-loop complex primed for DNA cleavage paper_content: Bacterial adaptive immunity and genome engineering involving the CRISPR (clustered regularly interspaced short palindromic repeats)–associated (Cas) protein Cas9 begin with RNA-guided DNA unwinding to form an RNA-DNA hybrid and a displaced DNA strand inside the protein. The role of this R-loop structure in positioning each DNA strand for cleavage by the two Cas9 nuclease domains is unknown. We determine molecular structures of the catalytically active Streptococcus pyogenes Cas9 R-loop that show the displaced DNA strand located near the RuvC nuclease domain active site. These protein-DNA interactions, in turn, position the HNH nuclease domain adjacent to the target DNA strand cleavage site in a conformation essential for concerted DNA cutting. Cas9 bends the DNA helix by 30°, providing the structural distortion needed for R-loop formation. --- paper_title: CRISPR-Cas9 Structures and Mechanisms. 
paper_content: Many bacterial clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated (Cas) systems employ the dual RNA-guided DNA endonuclease Cas9 to defend against invading phages and conjugative plasmids by introducing site-specific double-stranded breaks in target DNA. Target recognition strictly requires the presence of a short protospacer adjacent motif (PAM) flanking the target site, and subsequent R-loop formation and strand scission are driven by complementary base pairing between the guide RNA and target DNA, Cas9-DNA interactions, and associated conformational changes. The use of CRISPR-Cas9 as an RNA-programmable DNA targeting and editing platform is simplified by a synthetic single-guide RNA (sgRNA) mimicking the natural dual trans-activating CRISPR RNA (tracrRNA)-CRISPR RNA (crRNA) structure. This review aims to provide an in-depth mechanistic and structural understanding of Cas9-mediated RNA-guided DNA targeting and cleavage. Molecular insights from biochemical and structural studies provide a framework for rational engineering aimed at altering catalytic function, guide RNA specificity, and PAM requirements and reducing off-target activity for the development of Cas9-based therapies against genetic diseases. --- paper_title: Mapping the sugar dependency for rational generation of a DNA-RNA hybrid-guided Cas9 endonuclease paper_content: The CRISPR–Cas9 RNA-guided endonuclease system allows precise and efficient modification of complex genomes and is continuously developed to enhance specificity, alter targeting and add new functional moieties. However, one area yet to be explored is the base chemistry of the associated RNA molecules. Here we show the design and optimisation of hybrid DNA–RNA CRISPR and tracr molecules based on structure-guided approaches. Through careful mapping of the ribose requirements of Cas9, we develop hybrid versions possessing minimal RNA residues, which are sufficient to direct specific nuclease activity in vitro and in vivo with reduced off-target activity. We identify critical regions within these molecules that require ribose nucleotides and show a direct correlation between binding affinity/stability and cellular activity. This is the first demonstration of a non-RNA-guided Cas9 endonuclease and first step towards eliminating the ribose dependency of Cas9 to develop a XNA-programmable endonuclease. CRISPR-Cas9 systems are being continually improved to enhance specificity and improve functionality. Here the authors design hybrid DNA-RNA guide and tracr molecules to direct Cas9 nuclease activity with reduced off-target effects. --- paper_title: VMD: VISUAL MOLECULAR DYNAMICS paper_content: Abstract VMD is a molecular graphics program designed for the display and analysis of molecular assemblies, in particular biopolymers such as proteins and nucleic acids. VMD can simultaneously display any number of structures using a wide variety of rendering styles and coloring methods. Molecules are displayed as one or more “representations,” in which each representation embodies a particular rendering method and coloring scheme for a selected subset of atoms. The atoms displayed in each representation are chosen using an extensive atom selection syntax, which includes Boolean operators and regular expressions. VMD provides a complete graphical user interface for program control, as well as a text interface using the Tcl embeddable parser to allow for complex scripts with variable substitution, control loops, and function calls. 
Full session logging is supported, which produces a VMD command script for later playback. High-resolution raster images of displayed molecules may be produced by generating input scripts for use by a number of photorealistic image-rendering applications. VMD has also been expressly designed with the ability to animate molecular dynamics (MD) simulation trajectories, imported either from files or from a direct connection to a running MD simulation. VMD is the visualization component of MDScope, a set of tools for interactive problem solving in structural biology, which also includes the parallel MD program NAMD, and the MDCOMM software used to connect the visualization and simulation programs. VMD is written in C++, using an object-oriented design; the program, including source code and extensive documentation, is freely available via anonymous ftp and through the World Wide Web. --- paper_title: Crystal Structure of Cas9 in Complex with Guide RNA and Target DNA paper_content: The CRISPR-associated endonuclease Cas9 can be targeted to specific genomic loci by single guide RNAs (sgRNAs). Here, we report the crystal structure of Streptococcus pyogenes Cas9 in complex with sgRNA and its target DNA at 2.5 Å resolution. The structure revealed a bilobed architecture composed of target recognition and nuclease lobes, accommodating the sgRNA:DNA heteroduplex in a positively charged groove at their interface. Whereas the recognition lobe is essential for binding sgRNA and DNA, the nuclease lobe contains the HNH and RuvC nuclease domains, which are properly positioned for cleavage of the complementary and noncomplementary strands of the target DNA, respectively. The nuclease lobe also contains a carboxyl-terminal domain responsible for the interaction with the protospacer adjacent motif (PAM). This high-resolution structure and accompanying functional analyses have revealed the molecular mechanism of RNA-guided DNA targeting by Cas9, thus paving the way for the rational design of new, versatile genome-editing technologies. --- paper_title: CRISPR-Cas: biology, mechanisms and relevance paper_content: Prokaryotes have evolved several defence mechanisms to protect themselves from viral predators. Clustered regularly interspaced short palindromic repeats (CRISPR) and their associated proteins (Cas ... --- paper_title: Designed nucleases for targeted genome editing paper_content: Summary ::: Targeted genome-editing technology using designed nucleases has been evolving rapidly, and its applications are widely expanding in research, medicine and biotechnology. Using this genome-modifying technology, researchers can precisely and efficiently insert, remove or change specific sequences in various cultured cells, micro-organisms, animals and plants. This genome editing is based on the generation of double-strand breaks (DSBs), repair of which modifies the genome through nonhomologous end-joining (NHEJ) or homology-directed repair (HDR). In addition, designed nickase-induced generation of single-strand breaks can also lead to precise genome editing through HDR, albeit at relatively lower efficiencies than that induced by nucleases. Three kinds of designed nucleases have been used for targeted DSB formation: zinc-finger nucleases, transcription activator-like effector nucleases, and RNA-guided engineered nucleases derived from the bacterial clustered regularly interspaced short palindromic repeat (CRISPR)–Cas (CRISPR-associated) system. 
A growing number of researchers are using genome-editing technologies, which have become more accessible and affordable since the discovery and adaptation of CRISPR-Cas9. Here, the repair mechanism and outcomes of DSBs are reviewed and the three types of designed nucleases are discussed with the hope that such understanding will facilitate applications to genome editing. --- paper_title: A Programmable Dual-RNA-Guided DNA Endonuclease in Adaptive Bacterial Immunity paper_content: Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) systems provide bacteria and archaea with adaptive immunity against viruses and plasmids by using CRISPR RNAs (crRNAs) to guide the silencing of invading nucleic acids. We show here that in a subset of these systems, the mature crRNA that is base-paired to trans-activating crRNA (tracrRNA) forms a two-RNA structure that directs the CRISPR-associated protein Cas9 to introduce double-stranded (ds) breaks in target DNA. At sites complementary to the crRNA-guide sequence, the Cas9 HNH nuclease domain cleaves the complementary strand, whereas the Cas9 RuvC-like domain cleaves the noncomplementary strand. The dual-tracrRNA:crRNA, when engineered as a single RNA chimera, also directs sequence-specific Cas9 dsDNA cleavage. Our study reveals a family of endonucleases that use dual-RNAs for site-specific DNA cleavage and highlights the potential to exploit the system for RNA-programmable genome editing. --- paper_title: CRISPR RNA maturation by trans-encoded small RNA and host factor RNase III paper_content: CRISPR is a microbial RNA-based immune system protecting against viral and plasmid invasions. The CRISPR system is thought to rely on cleavage of a precursor RNA transcript by Cas endonucleases, but not all species with CRISPR-type immunity encode Cas proteins. A new study reveals an alternative pathway for CRISPR activation in the human pathogen Streptococcus pyogenes, in which a trans-encoded small RNA directs processing of precursor RNA into crRNAs through endogenous RNase III and the CRISPR-associated Csn1 protein. --- paper_title: DNA Unwinding Is the Primary Determinant of CRISPR-Cas9 Activity paper_content: Summary Bacterial adaptive immunity utilizes RNA-guided surveillance complexes comprising Cas proteins together with CRISPR RNAs (crRNAs) to target foreign nucleic acids for destruction. Cas9, a type II CRISPR-Cas effector complex, can be programed with a single-guide RNA that base pairs with the target strand of dsDNA, displacing the non-target strand to create an R-loop, where the HNH and the RuvC nuclease domains cleave opposing strands. While many structural and biochemical studies have shed light on the mechanism of Cas9 cleavage, a clear unifying model has yet to emerge. Our detailed kinetic characterization of the enzyme reveals that DNA binding is reversible, and R-loop formation is rate-limiting, occurring in two steps, one for each of the nuclease domains. The specificity constant for cleavage is determined through an induced-fit mechanism as the product of the equilibrium binding affinity for DNA and the rate of R-loop formation. --- paper_title: Rationally engineered Cas9 nucleases with improved specificity paper_content: The RNA-guided endonuclease Cas9 is a versatile genome-editing tool with a broad range of applications from therapeutics to functional annotation of genes. Cas9 creates double-strand breaks (DSBs) at targeted genomic loci complementary to a short RNA guide. 
However, Cas9 can cleave off-target sites that are not fully complementary to the guide, which poses a major challenge for genome editing. Here, we use structure-guided protein engineering to improve the specificity of Streptococcus pyogenes Cas9 (SpCas9). Using targeted deep sequencing and unbiased whole-genome off-target analysis to assess Cas9-mediated DNA cleavage in human cells, we demonstrate that "enhanced specificity" SpCas9 (eSpCas9) variants reduce off-target effects and maintain robust on-target cleavage. Thus, eSpCas9 could be broadly useful for genome-editing applications requiring a high level of specificity. --- paper_title: Structural basis of PAM-dependent target DNA recognition by the Cas9 endonuclease paper_content: The CRISPR-associated protein Cas9 is an RNA-guided endonuclease that cleaves double-stranded DNA bearing sequences complementary to a 20-nucleotide segment in the guide RNA. Cas9 has emerged as a versatile molecular tool for genome editing and gene expression control. RNA-guided DNA recognition and cleavage strictly require the presence of a protospacer adjacent motif (PAM) in the target DNA. Here we report a crystal structure of Streptococcus pyogenes Cas9 in complex with a single-molecule guide RNA and a target DNA containing a canonical 5'-NGG-3' PAM. The structure reveals that the PAM motif resides in a base-paired DNA duplex. The non-complementary strand GG dinucleotide is read out via major-groove interactions with conserved arginine residues from the carboxy-terminal domain of Cas9. Interactions with the minor groove of the PAM duplex and the phosphodiester group at the +1 position in the target DNA strand contribute to local strand separation immediately upstream of the PAM. These observations suggest a mechanism for PAM-dependent target DNA melting and RNA-DNA hybrid formation. Furthermore, this study establishes a framework for the rational engineering of Cas9 enzymes with novel PAM specificities. --- paper_title: CRISPR-Cas9 conformational activation as elucidated from enhanced molecular simulations paper_content: CRISPR-Cas9 has become a facile genome editing technology, yet the structural and mechanistic features underlying its function are unclear. Here, we perform extensive molecular simulations in an enhanced sampling regime, using a Gaussian-accelerated molecular dynamics (GaMD) methodology, which probes displacements over hundreds of microseconds to milliseconds, to reveal the conformational dynamics of the endonuclease Cas9 during its activation toward catalysis. We disclose the conformational transition of Cas9 from its apo form to the RNA-bound form, suggesting a mechanism for RNA recruitment in which the domain relocations cause the formation of a positively charged cavity for nucleic acid binding. GaMD also reveals the conformation of a catalytically competent Cas9, which is prone for catalysis and whose experimental characterization is still limited. We show that, upon DNA binding, the conformational dynamics of the HNH domain triggers the formation of the active state, explaining how the HNH domain exerts a conformational control domain over DNA cleavage [Sternberg SH et al. (2015) Nature , 527 , 110–113]. These results provide atomic-level information on the molecular mechanism of CRISPR-Cas9 that will inspire future experimental investigations aimed at fully clarifying the biophysics of this unique genome editing machinery and at developing new tools for nucleic acid manipulation based on CRISPR-Cas9. 
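The PAM-recognition work summarized just above states that SpCas9 cleaves double-stranded DNA complementary to a 20-nucleotide guide segment only when the target is flanked by a canonical 5'-NGG-3' protospacer adjacent motif. A minimal Python sketch of how candidate on-target sites can be enumerated from a DNA string follows; the sequence, the function name, and the sense-strand-only simplification are illustrative assumptions rather than a procedure taken from the cited papers.

```python
import re

def find_spcas9_sites(dna, guide_len=20):
    """Enumerate candidate SpCas9 sites: a guide_len-nt protospacer immediately
    followed by a 5'-NGG-3' PAM (sense strand only, for brevity)."""
    dna = dna.upper()
    sites = []
    # Lookahead so that overlapping NGG motifs are all reported.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start()
        if pam_start >= guide_len:                      # enough upstream sequence
            protospacer = dna[pam_start - guide_len:pam_start]
            sites.append((pam_start - guide_len, protospacer, m.group(1)))
    return sites

# Toy usage on a made-up sequence:
for start, protospacer, pam in find_spcas9_sites("ATGCTGACCTTGAACCGTACGATCAGGTTACGGATCCA"):
    print(start, protospacer, pam)
```
A full search would also scan the reverse complement, and, as the engineering papers above emphasize, ranking the resulting sites still requires an off-target assessment.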
--- paper_title: DNA interrogation by the CRISPR RNA-guided endonuclease Cas9 paper_content: This study defines how a short DNA sequence, known as the PAM, is critical for target DNA interrogation by the CRISPR-associated enzyme Cas9 — DNA melting and heteroduplex formation initiate near the PAM and extend directionally through the remaining target sequence, and the PAM is also required to activate the catalytic activity of Cas9. --- paper_title: Structures of a CRISPR-Cas9 R-loop complex primed for DNA cleavage paper_content: Bacterial adaptive immunity and genome engineering involving the CRISPR (clustered regularly interspaced short palindromic repeats)–associated (Cas) protein Cas9 begin with RNA-guided DNA unwinding to form an RNA-DNA hybrid and a displaced DNA strand inside the protein. The role of this R-loop structure in positioning each DNA strand for cleavage by the two Cas9 nuclease domains is unknown. We determine molecular structures of the catalytically active Streptococcus pyogenes Cas9 R-loop that show the displaced DNA strand located near the RuvC nuclease domain active site. These protein-DNA interactions, in turn, position the HNH nuclease domain adjacent to the target DNA strand cleavage site in a conformation essential for concerted DNA cutting. Cas9 bends the DNA helix by 30°, providing the structural distortion needed for R-loop formation. --- paper_title: CRISPR-Cas9 Structures and Mechanisms. paper_content: Many bacterial clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated (Cas) systems employ the dual RNA-guided DNA endonuclease Cas9 to defend against invading phages and conjugative plasmids by introducing site-specific double-stranded breaks in target DNA. Target recognition strictly requires the presence of a short protospacer adjacent motif (PAM) flanking the target site, and subsequent R-loop formation and strand scission are driven by complementary base pairing between the guide RNA and target DNA, Cas9-DNA interactions, and associated conformational changes. The use of CRISPR-Cas9 as an RNA-programmable DNA targeting and editing platform is simplified by a synthetic single-guide RNA (sgRNA) mimicking the natural dual trans-activating CRISPR RNA (tracrRNA)-CRISPR RNA (crRNA) structure. This review aims to provide an in-depth mechanistic and structural understanding of Cas9-mediated RNA-guided DNA targeting and cleavage. Molecular insights from biochemical and structural studies provide a framework for rational engineering aimed at altering catalytic function, guide RNA specificity, and PAM requirements and reducing off-target activity for the development of Cas9-based therapies against genetic diseases. --- paper_title: Crystal Structure of Cas9 in Complex with Guide RNA and Target DNA paper_content: The CRISPR-associated endonuclease Cas9 can be targeted to specific genomic loci by single guide RNAs (sgRNAs). Here, we report the crystal structure of Streptococcus pyogenes Cas9 in complex with sgRNA and its target DNA at 2.5 Å resolution. The structure revealed a bilobed architecture composed of target recognition and nuclease lobes, accommodating the sgRNA:DNA heteroduplex in a positively charged groove at their interface. Whereas the recognition lobe is essential for binding sgRNA and DNA, the nuclease lobe contains the HNH and RuvC nuclease domains, which are properly positioned for cleavage of the complementary and noncomplementary strands of the target DNA, respectively. 
The nuclease lobe also contains a carboxyl-terminal domain responsible for the interaction with the protospacer adjacent motif (PAM). This high-resolution structure and accompanying functional analyses have revealed the molecular mechanism of RNA-guided DNA targeting by Cas9, thus paving the way for the rational design of new, versatile genome-editing technologies. --- paper_title: Making and breaking nucleic acids: two-Mg2+-ion catalysis and substrate specificity. paper_content: DNA and a large proportion of RNA are antiparallel duplexes composed of an unvarying phosphosugar backbone surrounding uniformly stacked and highly similar base pairs. How do the myriad of enzymes (including ribozymes) that perform catalysis on nucleic acids achieve exquisite structure or sequence specificity? In all DNA and RNA polymerases and many nucleases and transposases, two Mg2+ ions are jointly coordinated by the nucleic acid substrate and catalytic residues of the enzyme. Based on the exquisite sensitivity of Mg2+ ions to the ligand geometry and electrostatic environment, we propose that two-metal-ion catalysis greatly enhances substrate recognition and catalytic specificity. --- paper_title: Cas9-catalyzed DNA Cleavage Generates Staggered Ends: Evidence from Molecular Dynamics Simulations paper_content: The CRISPR-associated endonuclease Cas9 from Streptococcus pyogenes (spCas9) along with a single guide RNA (sgRNA) has emerged as a versatile toolbox for genome editing. Despite recent advances in the mechanism studies on spCas9-sgRNA-mediated double-stranded DNA (dsDNA) recognition and cleavage, it is still unclear how the catalytic Mg2+ ions induce the conformation changes toward the catalytic active state. It also remains controversial whether Cas9 generates blunt-ended or staggered-ended breaks with overhangs in the DNA. To investigate these issues, here we performed the first all-atom molecular dynamics simulations of the spCas9-sgRNA-dsDNA system with and without Mg2+ bound. The simulation results showed that binding of two Mg2+ ions at the RuvC domain active site could lead to structurally and energetically favorable coordination ready for the non-target DNA strand cleavage. Importantly, we demonstrated with our simulations that Cas9-catalyzed DNA cleavage produces 1-bp staggered ends rather than generally assumed blunt ends. --- paper_title: Striking Plasticity of CRISPR-Cas9 and Key Role of Non-target DNA, as Revealed by Molecular Simulations paper_content: The CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 system recently emerged as a transformative genome-editing technology that is innovating basic bioscience and applied medicine and biotechnology. The endonuclease Cas9 associates with a guide RNA to match and cleave complementary sequences in double stranded DNA, forming an RNA:DNA hybrid and a displaced non-target DNA strand. Although extensive structural studies are ongoing, the conformational dynamics of Cas9 and its interplay with the nucleic acids during association and DNA cleavage are largely unclear. Here, by employing multi-microsecond time scale molecular dynamics, we reveal the conformational plasticity of Cas9 and identify key determinants that allow its large-scale conformational changes during nucleic acid binding and processing. 
We show how the "closure" of the protein, which accompanies nucleic acid binding, fundamentally relies on highly coupled and specific motions of the protein domains, collectively initiating the prominent conformational changes needed for nucleic acid association. We further reveal a key role of the non-target DNA during the process of activation of the nuclease HNH domain, showing how the nontarget DNA positioning triggers local conformational changes that favor the formation of a catalytically competent Cas9. Finally, a remarkable conformational plasticity is identified as an intrinsic property of the HNH domain, constituting a necessary element that allows for the HNH repositioning. These novel findings constitute a reference for future experimental studies aimed at a full characterization of the dynamic features of the CRISPR-Cas9 system, and-more importantly-call for novel structure engineering efforts that are of fundamental importance for the rational design of new genome-engineering applications. --- paper_title: An equivalent metal ion in one- and two-metal-ion catalysis paper_content: Nucleotidyl-transfer enzymes, which synthesize, degrade and rearrange DNA and RNA, often depend on metal ions for catalysis. All DNA and RNA polymerases, MutH-like or RNase H-like nucleases and recombinases, and group I introns seem to require two divalent cations to form a complete active site. The two-metal-ion mechanism has been proposed to orient the substrate, facilitate acid-base catalysis and allow catalytic specificity to exceed substrate binding specificity attributable to the stringent metal-ion (Mg2+ in particular) coordination. Not all nucleotidyl-transfer enzymes use two metal ions for catalysis, however. The betabetaalpha-Me and HUH nucleases depend on a single metal ion in the active site for the catalysis. All of these one- and two metal ion-dependent enzymes generate 5'-phosphate and 3'-OH products. Structural and mechanistic comparisons show that these seemingly unrelated nucleotidyl-transferases share a functionally equivalent metal ion. --- paper_title: RNA-programmed genome editing in human cells paper_content: The ability to make specific changes to DNA—such as changing, inserting or deleting sequences that encode proteins—allows researchers to engineer cells, tissues and organisms for therapeutic and practical applications. Until now, such genome engineering has required the design and production of proteins with the ability to recognize a specific DNA sequence. The bacterial protein, Cas9, has the potential to enable a simpler approach to genome engineering because it is a DNA-cleaving enzyme that can be programmed with short RNA molecules to recognize specific DNA sequences, thus dispensing with the need to engineer a new protein for each new DNA target sequence. Now Jinek et al. demonstrate the capability of RNA-programmed Cas9 to introduce targeted double-strand breaks into human chromosomal DNA, thereby inducing site-specific genome editing reactions. Cas9 assembles with engineered single-guide RNAs in human cells and the resulting Cas9-RNA complex can induce the formation of double-strand breaks in genomic DNA at a site complementary to the guide RNA sequence. Experiments using extracts from transfected cells show that RNA expression and/or assembly into Cas9 is the limiting factor for the DNA cleavage, and that extension of the RNA sequence at the 3′ end enhances DNA targeting activity in vivo. 
These results show that RNA-programmed genome editing is a straightforward strategy for introducing site-specific genetic changes in human cells, and the ease with which it can programmed means that it is likely to become competitive with existing approaches based on zinc finger nucleases and transcription activator-like effector nucleases, and could lead to a new generation of experiments in the field of genome engineering for humans and other species with complex genomes. --- paper_title: DNA Unwinding Is the Primary Determinant of CRISPR-Cas9 Activity paper_content: Summary Bacterial adaptive immunity utilizes RNA-guided surveillance complexes comprising Cas proteins together with CRISPR RNAs (crRNAs) to target foreign nucleic acids for destruction. Cas9, a type II CRISPR-Cas effector complex, can be programed with a single-guide RNA that base pairs with the target strand of dsDNA, displacing the non-target strand to create an R-loop, where the HNH and the RuvC nuclease domains cleave opposing strands. While many structural and biochemical studies have shed light on the mechanism of Cas9 cleavage, a clear unifying model has yet to emerge. Our detailed kinetic characterization of the enzyme reveals that DNA binding is reversible, and R-loop formation is rate-limiting, occurring in two steps, one for each of the nuclease domains. The specificity constant for cleavage is determined through an induced-fit mechanism as the product of the equilibrium binding affinity for DNA and the rate of R-loop formation. --- paper_title: Structural basis of PAM-dependent target DNA recognition by the Cas9 endonuclease paper_content: The CRISPR-associated protein Cas9 is an RNA-guided endonuclease that cleaves double-stranded DNA bearing sequences complementary to a 20-nucleotide segment in the guide RNA. Cas9 has emerged as a versatile molecular tool for genome editing and gene expression control. RNA-guided DNA recognition and cleavage strictly require the presence of a protospacer adjacent motif (PAM) in the target DNA. Here we report a crystal structure of Streptococcus pyogenes Cas9 in complex with a single-molecule guide RNA and a target DNA containing a canonical 5'-NGG-3' PAM. The structure reveals that the PAM motif resides in a base-paired DNA duplex. The non-complementary strand GG dinucleotide is read out via major-groove interactions with conserved arginine residues from the carboxy-terminal domain of Cas9. Interactions with the minor groove of the PAM duplex and the phosphodiester group at the +1 position in the target DNA strand contribute to local strand separation immediately upstream of the PAM. These observations suggest a mechanism for PAM-dependent target DNA melting and RNA-DNA hybrid formation. Furthermore, this study establishes a framework for the rational engineering of Cas9 enzymes with novel PAM specificities. --- paper_title: CRISPR-Cas9 conformational activation as elucidated from enhanced molecular simulations paper_content: CRISPR-Cas9 has become a facile genome editing technology, yet the structural and mechanistic features underlying its function are unclear. Here, we perform extensive molecular simulations in an enhanced sampling regime, using a Gaussian-accelerated molecular dynamics (GaMD) methodology, which probes displacements over hundreds of microseconds to milliseconds, to reveal the conformational dynamics of the endonuclease Cas9 during its activation toward catalysis. 
We disclose the conformational transition of Cas9 from its apo form to the RNA-bound form, suggesting a mechanism for RNA recruitment in which the domain relocations cause the formation of a positively charged cavity for nucleic acid binding. GaMD also reveals the conformation of a catalytically competent Cas9, which is prone for catalysis and whose experimental characterization is still limited. We show that, upon DNA binding, the conformational dynamics of the HNH domain triggers the formation of the active state, explaining how the HNH domain exerts a conformational control domain over DNA cleavage [Sternberg SH et al. (2015) Nature , 527 , 110–113]. These results provide atomic-level information on the molecular mechanism of CRISPR-Cas9 that will inspire future experimental investigations aimed at fully clarifying the biophysics of this unique genome editing machinery and at developing new tools for nucleic acid manipulation based on CRISPR-Cas9. --- paper_title: Nucleases: diversity of structure, function and mechanism. paper_content: Nucleases cleave the phosphodiester bonds of nucleic acids and may be endo or exo, DNase or RNase, topoisomerases, recombinases, ribozymes, or RNA splicing enzymes. In this review, I survey nuclease activities with known structures and catalytic machinery and classify them by reaction mechanism and metal-ion dependence and by their biological function ranging from DNA replication, recombination, repair, RNA maturation, processing, interference, to defense, nutrient regeneration or cell death. Several general principles emerge from this analysis. There is little correlation between catalytic mechanism and biological function. A single catalytic mechanism can be adapted in a variety of reactions and biological pathways. Conversely, a single biological process can often be accomplished by multiple tertiary and quaternary folds and by more than one catalytic mechanism. Two-metal-ion-dependent nucleases comprise the largest number of different tertiary folds and mediate the most diverse set of biological functions. Metal-ion-dependent cleavage is exclusively associated with exonucleases producing mononucleotides and endonucleases that cleave double- or single-stranded substrates in helical and base-stacked conformations. All metal-ion-independent RNases generate 2',3'-cyclic phosphate products, and all metal-ion-independent DNases form phospho-protein intermediates. I also find several previously unnoted relationships between different nucleases and shared catalytic configurations. --- paper_title: DNA interrogation by the CRISPR RNA-guided endonuclease Cas9 paper_content: This study defines how a short DNA sequence, known as the PAM, is critical for target DNA interrogation by the CRISPR-associated enzyme Cas9 — DNA melting and heteroduplex formation initiate near the PAM and extend directionally through the remaining target sequence, and the PAM is also required to activate the catalytic activity of Cas9. --- paper_title: High-frequency off-target mutagenesis induced by CRISPR-Cas nucleases in human cells paper_content: CRISPR RNA-guided endonucleases (RGENs) have rapidly emerged as a facile and efficient platform for genome editing. Here, we use a human cell-based reporter assay to characterize off-target cleavage of Cas9-based RGENs. We find that single and double mismatches are tolerated to varying degrees depending on their position along the guide RNA (gRNA)-DNA interface. 
We readily detected off-target alterations induced by four out of six RGENs targeted to endogenous loci in human cells by examination of partially mismatched sites. The off-target sites we identified harbor up to five mismatches and many are mutagenized with frequencies comparable to (or higher than) those observed at the intended on-target site. Our work demonstrates that RGENs are highly active even with imperfectly matched RNA-DNA interfaces in human cells, a finding that might confound their use in research and therapeutic applications. --- paper_title: Structures of a CRISPR-Cas9 R-loop complex primed for DNA cleavage paper_content: Bacterial adaptive immunity and genome engineering involving the CRISPR (clustered regularly interspaced short palindromic repeats)–associated (Cas) protein Cas9 begin with RNA-guided DNA unwinding to form an RNA-DNA hybrid and a displaced DNA strand inside the protein. The role of this R-loop structure in positioning each DNA strand for cleavage by the two Cas9 nuclease domains is unknown. We determine molecular structures of the catalytically active Streptococcus pyogenes Cas9 R-loop that show the displaced DNA strand located near the RuvC nuclease domain active site. These protein-DNA interactions, in turn, position the HNH nuclease domain adjacent to the target DNA strand cleavage site in a conformation essential for concerted DNA cutting. Cas9 bends the DNA helix by 30°, providing the structural distortion needed for R-loop formation. --- paper_title: Exploring the Catalytic Mechanism of Cas9 Using Information Inferred from Endonuclease VII paper_content: Elucidating the nature of the gene editing mechanism of CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is an important task in view of the role of this breakthrough to the advancement of human medicine. In particular, it is crucial to understand the catalytic mechanism of Cas9 (one of the CRISPR associated proteins) and its role in confirming accurate editing. Thus, we focus in this work on an attempt to analyze the catalytic mechanism of Cas9. Considering the absence of detailed structural information on the active form of Cas9, we use an empirical valence bond (EVB) which is calibrated on the closely related mechanism of T4 endonuclease VII. The calibrated EVB is then used in studying the reaction of Cas9, while trying several structural models. It is found that the catalytic activation requires a large conformational change, where K848 or other positively charged group moves from a relatively large distance toward the scissile phosphate. This conformational change leads to the chang... --- paper_title: High-fidelity CRISPR–Cas9 nucleases with no detectable genome-wide off-target effects paper_content: CRISPR-Cas9 nucleases are widely used for genome editing but can induce unwanted off-target mutations. Existing strategies for reducing genome-wide off-target effects of the widely used Streptococcus pyogenes Cas9 (SpCas9) are imperfect, possessing only partial or unproven efficacies and other limitations that constrain their use. Here we describe SpCas9-HF1, a high-fidelity variant harbouring alterations designed to reduce non-specific DNA contacts. SpCas9-HF1 retains on-target activities comparable to wild-type SpCas9 with >85% of single-guide RNAs (sgRNAs) tested in human cells. Notably, with sgRNAs targeted to standard non-repetitive sequences, SpCas9-HF1 rendered all or nearly all off-target events undetectable by genome-wide break capture and targeted sequencing methods. 
Even for atypical, repetitive target sites, the vast majority of off-target mutations induced by wild-type SpCas9 were not detected with SpCas9-HF1. With its exceptional precision, SpCas9-HF1 provides an alternative to wild-type SpCas9 for research and therapeutic applications. More broadly, our results suggest a general strategy for optimizing genome-wide specificities of other CRISPR-RNA-guided nucleases. --- paper_title: Conformational control of DNA target cleavage by CRISPR–Cas9 paper_content: Cas9 is an RNA-guided DNA endonuclease that targets foreign DNA for destruction as part of a bacterial adaptive immune system mediated by clustered regularly interspaced short palindromic repeats (CRISPR). Together with single-guide RNAs, Cas9 also functions as a powerful genome engineering tool in plants and animals, and efforts are underway to increase the efficiency and specificity of DNA targeting for potential therapeutic applications. Studies of off-target effects have shown that DNA binding is far more promiscuous than DNA cleavage, yet the molecular cues that govern strand scission have not been elucidated. Here we show that the conformational state of the HNH nuclease domain directly controls DNA cleavage activity. Using intramolecular Förster resonance energy transfer experiments to detect relative orientations of the Cas9 catalytic domains when associated with on- and off-target DNA, we find that DNA cleavage efficiencies scale with the extent to which the HNH domain samples an activated conformation. We furthermore uncover a surprising mode of allosteric communication that ensures concerted firing of both Cas9 nuclease domains. Our results highlight a proofreading mechanism beyond initial protospacer adjacent motif (PAM) recognition and RNA–DNA base-pairing that serves as a final specificity checkpoint before DNA double-strand break formation. --- paper_title: Rationally engineered Cas9 nucleases with improved specificity paper_content: The RNA-guided endonuclease Cas9 is a versatile genome-editing tool with a broad range of applications from therapeutics to functional annotation of genes. Cas9 creates double-strand breaks (DSBs) at targeted genomic loci complementary to a short RNA guide. However, Cas9 can cleave off-target sites that are not fully complementary to the guide, which poses a major challenge for genome editing. Here, we use structure-guided protein engineering to improve the specificity of Streptococcus pyogenes Cas9 (SpCas9). Using targeted deep sequencing and unbiased whole-genome off-target analysis to assess Cas9-mediated DNA cleavage in human cells, we demonstrate that "enhanced specificity" SpCas9 (eSpCas9) variants reduce off-target effects and maintain robust on-target cleavage. Thus, eSpCas9 could be broadly useful for genome-editing applications requiring a high level of specificity. --- paper_title: High-frequency off-target mutagenesis induced by CRISPR-Cas nucleases in human cells paper_content: CRISPR RNA-guided endonucleases (RGENs) have rapidly emerged as a facile and efficient platform for genome editing. Here, we use a human cell-based reporter assay to characterize off-target cleavage of Cas9-based RGENs. We find that single and double mismatches are tolerated to varying degrees depending on their position along the guide RNA (gRNA)-DNA interface. We readily detected off-target alterations induced by four out of six RGENs targeted to endogenous loci in human cells by examination of partially mismatched sites. 
The off-target sites we identified harbor up to five mismatches and many are mutagenized with frequencies comparable to (or higher than) those observed at the intended on-target site. Our work demonstrates that RGENs are highly active even with imperfectly matched RNA-DNA interfaces in human cells, a finding that might confound their use in research and therapeutic applications. --- paper_title: DNA targeting specificity of RNA-guided Cas9 nucleases paper_content: Analyses of the determinants of the specificity of Cas9 nuclease provide rules for selecting optimal target sites. --- paper_title: Structural basis of PAM-dependent target DNA recognition by the Cas9 endonuclease paper_content: The CRISPR-associated protein Cas9 is an RNA-guided endonuclease that cleaves double-stranded DNA bearing sequences complementary to a 20-nucleotide segment in the guide RNA. Cas9 has emerged as a versatile molecular tool for genome editing and gene expression control. RNA-guided DNA recognition and cleavage strictly require the presence of a protospacer adjacent motif (PAM) in the target DNA. Here we report a crystal structure of Streptococcus pyogenes Cas9 in complex with a single-molecule guide RNA and a target DNA containing a canonical 5'-NGG-3' PAM. The structure reveals that the PAM motif resides in a base-paired DNA duplex. The non-complementary strand GG dinucleotide is read out via major-groove interactions with conserved arginine residues from the carboxy-terminal domain of Cas9. Interactions with the minor groove of the PAM duplex and the phosphodiester group at the +1 position in the target DNA strand contribute to local strand separation immediately upstream of the PAM. These observations suggest a mechanism for PAM-dependent target DNA melting and RNA-DNA hybrid formation. Furthermore, this study establishes a framework for the rational engineering of Cas9 enzymes with novel PAM specificities. --- paper_title: Rationally engineered Cas9 nucleases with improved specificity paper_content: The RNA-guided endonuclease Cas9 is a versatile genome-editing tool with a broad range of applications from therapeutics to functional annotation of genes. Cas9 creates double-strand breaks (DSBs) at targeted genomic loci complementary to a short RNA guide. However, Cas9 can cleave off-target sites that are not fully complementary to the guide, which poses a major challenge for genome editing. Here, we use structure-guided protein engineering to improve the specificity of Streptococcus pyogenes Cas9 (SpCas9). Using targeted deep sequencing and unbiased whole-genome off-target analysis to assess Cas9-mediated DNA cleavage in human cells, we demonstrate that "enhanced specificity" SpCas9 (eSpCas9) variants reduce off-target effects and maintain robust on-target cleavage. Thus, eSpCas9 could be broadly useful for genome-editing applications requiring a high level of specificity. ---
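Several of the off-target studies above report that mismatches between the guide RNA and genomic DNA are tolerated to varying degrees depending on their position along the gRNA-DNA interface, which is what makes unintended cleavage possible. The sketch below only illustrates that idea: it lists mismatch positions and applies a heavier penalty to PAM-proximal ("seed") mismatches. The weights are arbitrary assumptions chosen for illustration and are not the scoring model used in any of the cited studies.

```python
def mismatch_positions(guide, site):
    """Positions (0 = PAM-distal end) at which a 20-nt guide and a candidate
    genomic protospacer differ."""
    assert len(guide) == len(site)
    return [i for i, (g, s) in enumerate(zip(guide.upper(), site.upper())) if g != s]

def toy_offtarget_penalty(guide, site, seed_len=10):
    """Illustrative only: PAM-proximal (seed) mismatches count double."""
    return sum(2.0 if i >= len(guide) - seed_len else 1.0
               for i in mismatch_positions(guide, site))

guide = "GACGCATAAAGATGAGACGC"   # hypothetical 20-nt guide
site  = "GACGCATTAAGATGAGACGA"   # hypothetical off-target site with two mismatches
print(mismatch_positions(guide, site), toy_offtarget_penalty(guide, site))
```
In practice the cited groups used targeted deep sequencing and genome-wide break-capture methods, not a simple mismatch count, to measure off-target activity.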
Title: Molecular Simulations Have Boosted Knowledge of CRISPR/Cas9: A Review Section 1: Introduction Description 1: This section introduces the topic and sets the stage for the review, highlighting the significance of CRISPR/Cas9 in genome editing and its impact on biomedical research. Section 2: Historical Background Description 2: This section covers the history of genome editing techniques, focusing on the development and advantages of CRISPR/Cas compared to earlier methods. Section 3: The Natural Origin of the CRISPR/Cas Technique Description 3: This section explains the natural origins of the CRISPR/Cas system as a bacterial defense mechanism against viruses and outlines its stages: adaptation, expression, and interference. Section 4: CRISPR/Cas Classification Description 4: This section describes the classification of CRISPR/Cas systems into different classes, types, and subtypes, detailing their structural and functional diversity. Section 5: CRISPR/Cas9 Structure Description 5: This section delves into the structural components of the CRISPR/Cas9 system, including the sgRNA, Cas9 protein, and their interactions within the complex. Section 6: Molecular Mechanism of CRISPR/Cas9 Description 6: This section explores the molecular mechanism behind DNA recognition and cleavage by the CRISPR/Cas9 system, emphasizing the roles of specific domains and catalytic residues. Section 7: MD Simulations of CRISPR/Cas9 Complexes Description 7: This section discusses the use of molecular dynamics (MD) simulations in studying the dynamics and interactions within the CRISPR/Cas9 complex, providing insights into on-target and off-target activities. Section 8: Limitations and Future Directions Description 8: This section addresses the limitations of the current CRISPR/Cas9 technology, particularly off-target effects, and proposes future directions for improving specificity and efficiency through computational and experimental approaches.
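One quantitative claim from the kinetic study cited above ("DNA Unwinding Is the Primary Determinant of CRISPR-Cas9 Activity") is that the specificity constant for cleavage is set, through an induced-fit mechanism, by the product of the equilibrium DNA-binding affinity and the rate of R-loop formation. Written compactly, with notation chosen here for illustration rather than taken from the paper:

```latex
% K_A: equilibrium association constant for DNA binding (1/K_D)
% k_loop: rate constant of R-loop formation (the rate-limiting step)
\[
  \frac{k_{\mathrm{cat}}}{K_{M}} \;\approx\; K_{A}\cdot k_{\mathrm{loop}},
  \qquad K_{A} = \frac{1}{K_{D}} .
\]
```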
A Survey of Security Threats and Authentication Schemes in WiMAX
8
--- paper_title: Security concerns in WiMAX paper_content: WiMAX (Worldwide Interoperability for Microwave Access)/IEEE 802.16 is a very promising 4G technology. WiMAX is now at the testing and implementation stage at various locations such as Taiwan [12] and India [13]. The results have so far been very promising. With the growing popularity of WiMAX, the security risks have increased manyfold. In this paper we will give an overview of the security architecture of WiMAX. Then we will give an overview of the various kinds of threats, viz. Physical Layer and MAC Layer threats. Finally, we will look at improvements reported in multi-hop WiMAX networks. --- paper_title: Verification and research of a Wimax authentication protocol based on SSM paper_content: WiMAX is a new technology providing broadband data access to mobile as well as stationary users. Standards for Mobile WiMAX (IEEE 802.16e-2005) have already been finalized. In IEEE 802.16, security has been considered the main issue during the design of the protocol. In this paper, a new WiMAX authentication protocol is designed to better satisfy the security goals under the WiMAX security architecture, and a formal correctness proof of the protocol is then presented based on SSM. Finally, its performance is analyzed and improved. --- paper_title: WiMAX subscriber and mobile station authentication challenges paper_content: This article examines the subject of authentication within WiMAX (IEEE 802.16-2009) based wireless metropolitan networks. The two WiMAX authentication mechanisms (PKM versions 1 and 2) are discussed and a number of aspects affecting their authentication capabilities are presented. Of particular note is the handling of digital certificates and the lack of multiple certificate authority support. This lack essentially prevents the interoperability of WiMAX devices produced by different manufacturers. Proposed recommendations are presented that should improve how WiMAX authentication operates and allow for mixed-manufacturer device interoperability. ---
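The abstracts above revolve around WiMAX's Privacy Key Management (PKM) authentication, in which the subscriber station presents an X.509 certificate and, in PKM version 1, the base station returns an authorization key protected with the subscriber's RSA public key. The toy sketch below shows only the bare RSA encrypt/decrypt arithmetic behind that exchange, with tiny textbook numbers, no padding, and no certificate handling; it is a conceptual illustration, not an implementation of PKM.

```python
# Toy textbook RSA with small primes; purely illustrative, never use in practice.

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q, e = 61, 53, 17                  # toy primes and public exponent
n = p * q                             # public modulus
d = modinv(e, (p - 1) * (q - 1))      # private exponent

ak = 42                               # stand-in for an authorization key
cipher = pow(ak, e, n)                # "base station" encrypts with SS public key (e, n)
recovered = pow(cipher, d, n)         # "subscriber station" decrypts with d
assert recovered == ak
```
Real deployments use full-length keys, proper padding, and certificate-chain validation, and it is this certificate handling where the third abstract locates the interoperability problems.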
Title: A Survey of Security Threats and Authentication Schemes in WiMAX Section 1: INTRODUCTION Description 1: Provide an overview of wireless networks, highlight the security concerns in WiMAX, and introduce key security features like authentication, authorization, and encryption. Section 2: Protocol Layer Description 2: Describe the protocol layers in the IEEE 802.16 standard, including the physical layer and MAC layer, and explain their importance in the WiMAX framework. Section 3: Security Scheme Description 3: Explain the security sub-layer in WiMAX, outlining its roles in authentication, authorization, and encryption. Section 4: Authentication Schemes Description 4: Discuss various authentication schemes in WiMAX, including RSA and HMAC authentication, and highlight their strengths and potential weaknesses. Section 5: SECURITY THREATS IN WiMAX Description 5: Enumerate the different security threats in WiMAX, detailing vulnerabilities at both the physical and MAC layers, and discuss possible attack scenarios and their countermeasures. Section 6: AUTHENTICATION SCHEMES Description 6: Provide a deeper insight into authentication and authorization techniques in WiMAX, comparing different EAP methods and examining the PKM protocols used in IEEE 802.16e. Section 7: CONCLUSION Description 7: Summarize the key points discussed in the paper, stressing the importance of continued research and improvement in authentication schemes to enhance WiMAX security. Section 8: ACKNOWLEDGEMENTS Description 8: Acknowledge contributions, assistance, and support received during the study.
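The outline above lists HMAC alongside RSA among the WiMAX authentication schemes; in IEEE 802.16, keyed-hash message authentication codes are attached to management messages so the receiver can check their integrity and origin. The fragment below shows the generic HMAC computation with Python's standard library; the key and message bytes are placeholders, not values or formats mandated by the standard.

```python
import hashlib
import hmac

# Placeholder key and management-message payload (illustrative values only).
hmac_key = b"shared-key-derived-from-the-authorization-key"
message = b"example management message payload"

digest = hmac.new(hmac_key, message, hashlib.sha1).hexdigest()

def verify(key, msg, received_digest):
    """Receiver side: recompute the digest and compare in constant time."""
    expected = hmac.new(key, msg, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, received_digest)

assert verify(hmac_key, message, digest)
print(digest)
```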
Overview of Micro- and Nano-Technology Tools for Stem Cell Applications: Micropatterned and Microelectronic Devices
13
--- paper_title: Revisiting lab-on-a-chip technology for drug discovery paper_content: Manz and colleagues discuss recent progress in the development of microfluidic techniques (lab-on-a-chip technology) and their applications in drug discovery. Highlights include high-throughput droplet technology and applications such as 'organs on a chip', which could help reduce reliance on animal testing. --- paper_title: Making a New Technology Work: The Standardization and Regulation of Microarrays paper_content: The translation of laboratory innovations into clinical tools is dependent upon the development of regulatory arrangements designed to ensure that the new technology will be used reliably and consistently. A case study of a key post-genomic technology, gene chips or microarrays, exemplifies this claim. The number of microarray publications and patents has increased exponentially during the last decade and diagnostic microarray tests already are making their way into the clinic. Yet starting in the mid-1990s, scientific journals were overrun with criticism concerning the ambiguities involved in interpreting most of the assumptions of a microarray experiment. Questions concerning platform comparability and statistical calculations were and continue to be raised, in spite of the emergence by 2001 of an initial set of standards concerning several components of a microarray experiment. This article probes the history and ongoing efforts aimed at turning microarray experimentation into a viable, meaningful, and consensual technology by focusing on two related elements:1) The history of the development of the Microarray Gene Expression Data Society (MGED), a remarkable bottom-up initiative that brings together different kinds of specialists from academic, commercial, and hybrid settings to produce, maintain, and update microarray standards; and 2) The unusual mix of skills and expertise involved in the development and use of microarrays. The production, accumulation, storage, and mining of microarray data remain multi-skilled endeavors bridging together different types of scientists who embody a diversity of scientific traditions. Beyond standardization, the interfacing of these different skills has become a key issue for further development of the field. --- paper_title: Microfluidic Large-Scale Integration: The Evolution of Design Rules for Biological Automation paper_content: Microfluidic large-scale integration (mLSI) refers to the development of microfluidic chips with thousands of integrated micromechanical valves and control components. This technology is utilized in many areas of biology and chemistry and is a candidate to replace today’s conventional automation paradigm, which consists of fluid-handling robots. We review the basic development of mLSI and then discuss design principles of mLSI to assess the capabilities and limitations of the current state of the art and to facilitate the application of mLSI to areas of biology. Many design and practical issues, including economies of scale, parallelization strategies, multiplexing, and multistep biochemical processing, are discussed. Several microfluidic components used as building blocks to create effective, complex, and highly integrated microfluidic networks are also highlighted. 
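The microfluidic large-scale integration abstract above names multiplexing as one of the design principles that make thousands of on-chip valves addressable. A commonly cited rule of thumb for binary microfluidic multiplexers is that roughly 2*log2(N) control lines can address N flow channels; the short calculation below only illustrates that scaling, and the rule should be read as general background rather than a result reproduced from the cited paper.

```python
import math

def control_lines_needed(n_flow_channels):
    """Rule-of-thumb binary multiplexer: ~2*ceil(log2(N)) control lines for N flow channels."""
    return 2 * math.ceil(math.log2(n_flow_channels))

for n in (16, 256, 1024):
    print(f"{n} flow channels -> {control_lines_needed(n)} control lines")
```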
--- paper_title: Arrayed cellular microenvironments for identifying culture and differentiation conditions for stem, primary and rare cell populations paper_content: Arrayed cellular microenvironments for identifying culture and differentiation conditions for stem, primary and rare cell populations --- paper_title: Clinical Utility of Microarrays: Current Status, Existing Challenges and Future Outlook paper_content: Microarray-based clinical tests have become powerful tools in the diagnosis and treatment of diseases. In contrast to traditional DNA-based tests that largely focus on single genes associated with rare conditions, microarray-based tests are ideal for the study of diseases with underlying complex genetic causes. Several microarray based tests have been translated into clinical practice such as MammaPrint and AmpliChip CYP450. Additional cancer-related microarray-based tests are either in the process of FDA review or under active development, including Tissue of Tumor Origin and AmpliChip p53. All diagnostic microarray testing is ordered by physicians and tested by a Clinical Laboratories Improvement Amendment-certified (CLIA) reference laboratory. Recently, companies offering consumer based microarray testing have emerged. Individuals can order tests online and service providers deliver the results directly to the clients via a password-protected secure website. Navigenics, 23andMe and deCODE Genetics represent pioneering companies in this field. Although the progress of these microarray-based tests is extremely encouraging with the potential to revolutionize the recognition and treatment of common diseases, these tests are still in their infancy and face technical, clinical and marketing challenges. In this article, we review microarray-based tests which are currently approved or under review by the FDA, as well as the consumer-based testing. We also provide a summary of the challenges and strategic solutions in the development and clinical use of the microarray-based tests. Finally, we present a brief outlook for the future of microarray-based clinical applications. --- paper_title: From 3D cell culture to organs-on-chips. paper_content: 3D cell-culture models have recently garnered great attention because they often promote levels of cell differentiation and tissue organization not possible in conventional 2D culture systems. We review new advances in 3D culture that leverage microfabrication technologies from the microchip industry and microfluidics approaches to create cell-culture microenvironments that both support tissue differentiation and recapitulate the tissue-tissue interfaces, spatiotemporal chemical gradients, and mechanical microenvironments of living organs. These 'organs-on-chips' permit the study of human physiology in an organ-specific context, enable development of novel in vitro disease models, and could potentially serve as replacements for animals used in drug development and toxin testing. --- paper_title: Stem cell biology and drug discovery paper_content: There are many reasons to be interested in stem cells, one of the most prominent being their potential use in finding better drugs to treat human disease. This article focuses on how this may be implemented. Recent advances in the production of reprogrammed adult cells and their regulated differentiation to disease-relevant cells are presented, and diseases that have been modeled using these methods are discussed. 
Remaining difficulties are highlighted, as are new therapeutic insights that have emerged. --- paper_title: Transmembrane crosstalk between the extracellular matrix--cytoskeleton crosstalk. paper_content: Integrin-mediated cell adhesions provide dynamic, bidirectional links between the extracellular matrix and the cytoskeleton. Besides having central roles in cell migration and morphogenesis, focal adhesions and related structures convey information across the cell membrane, to regulate extracellular-matrix assembly, cell proliferation, differentiation, and death. This review describes integrin functions, mechanosensors, molecular switches and signal-transduction pathways activated and integrated by adhesion, with a unifying theme being the importance of local physical forces. --- paper_title: Human Pluripotent Stem Cells: Applications and Challenges in Neurological Diseases paper_content: The ability to generate human pluripotent stem cells (hPSCs) holds great promise for the understanding and the treatment of human neurological diseases in modern medicine. The hPSCs are considered for their in vitro use as research tools to provide relevant cellular model for human diseases, drug discovery and toxicity assays and for their in vivo use in regenerative medicine applications. In this review, we highlight recent progress, promises and challenges of hPSC applications in human neurological disease modelling and therapies. --- paper_title: High-throughput cellular microarray platforms: applications in drug discovery, toxicology and stem cell research paper_content: Cellular microarrays are powerful experimental tools for high-throughput screening of large numbers of test samples. Miniaturization increases assay throughput while reducing reagent consumption and the number of cells required, making these systems attractive for a wide range of assays in drug discovery, toxicology, stem cell research and potentially therapy. Here, we provide an overview of the emerging technologies that can be used to generate cellular microarrays, and we highlight recent significant advances in the field. This emerging and multidisciplinary approach offers new opportunities for the design and control of stem cells in tissue engineering and cellular therapies and promises to expedite drug discovery in the biotechnology and pharmaceutical industries. --- paper_title: Reverse transfected cell microarrays in infectious disease research. paper_content: Several human pathogenic viruses encode large genomes with often more than 100 genes. Viral pathogenicity is determined by carefully orchestrated co-operative activities of several different viral genes which trigger the phenotypic functions of the infected cells. Systematic analyses of these complex interactions require high-throughput transfection technology. Here we have provided a laboratory manual for the reverse transfected cell microarray (RTCM; alternative name: cell chip) as a high-throughput transfection procedure, which has been successfully applied for the systematic analyses of single and combination effects of genes encoded by the human herpesvirus-8 on the NF-kappaB signal transduction pathway. In order to quantitatively determine the effects of viral genes in transfected cells, protocols for the use of GFP as an indicator gene and for indirect immunofluorescence staining of cellular target proteins have been included. RTCM provides a useful methodological approach to investigate systematically combination effects of viral genes on cellular functions. 
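The reverse transfected cell microarray abstract above describes systematically testing single genes and gene combinations on one arrayed slide, and the extracellular matrix microarray work cited next takes the same combinatorial approach with matrix proteins. The sketch below only enumerates such a combinatorial spotting layout; the gene names are hypothetical placeholders and the layout logic is an illustration of the design idea, not the published protocol.

```python
from itertools import combinations

# Hypothetical gene set; in an RTCM-style experiment each spot carries a plasmid
# or plasmid mixture that is reverse-transfected into the overlaid cells.
genes = ["geneA", "geneB", "geneC", "geneD"]

layout = [("empty-vector control",)]
layout += [(g,) for g in genes]                # single-gene spots
layout += list(combinations(genes, 2))         # all pairwise combinations

for spot_id, contents in enumerate(layout):
    print(spot_id, " + ".join(contents))
```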
--- paper_title: An extracellular matrix microarray for probing cellular differentiation paper_content: We present an extracellular matrix (ECM) microarray platform for the culture of patterned cells atop combinatorial matrix mixtures. This platform enables the study of differentiation in response to a multitude of microenvironments in parallel. The fabrication process required only access to a standard robotic DNA spotter, off-the-shelf materials and 1,000 times less protein than conventional means of investigating cell-ECM interactions. To demonstrate its utility, we applied this platform to study the effects of 32 different combinations of five extracellular matrix molecules (collagen I, collagen III, collagen IV, laminin and fibronectin) on cellular differentiation in two contexts: maintenance of primary rat hepatocyte phenotype indicated by intracellular albumin staining and differentiation of mouse embryonic stem (ES) cells toward an early hepatic fate, indicated by expression of a beta-galactosidase reporter fused to the fetal liver-specific gene, Ankrd17 (also known as gtar). Using this technique, we identified combinations of ECM that synergistically impacted both hepatocyte function and ES cell differentiation. This versatile technique can be easily adapted to other applications, as it is amenable to studying almost any insoluble microenvironmental cue in a combinatorial fashion and is compatible with several cell types. --- paper_title: Variable behavior and complications of autologous bone marrow mesenchymal stem cells transplanted in experimental autoimmune encephalomyelitis paper_content: Autologous bone marrow stromal cells (BMSCs) offer significant practical advantages for potential clinical applications in multiple sclerosis (MS). Based on recent experimental data, a number of clinical trials have been designed for the intravenous (IV) and/or intrathecal (ITH) administration of BMSCs in MS patients. Delivery of BMSCs in the cerebrospinal fluid via intracerebroventricular (ICV) transplantation is a useful tool to identify mechanisms underlying the migration and function of these cells. In the current study, BMSCs were ICV administered in severe and mild EAE, as well as naive animals; neural precursor cells (NPCs) served as cellular controls. Our data indicated that ICV-transplanted BMSCs significantly ameliorated mild though not severe EAE. Moreover, BMSCs exerted significant anti-inflammatory effect on spinal cord with concomitant reduced axonopathy only in the mild EAE model. BMSCs migrated into the brain parenchyma and, depending on their cellular density, within brain parenchyma formed cellular masses characterized by focal inflammation, demyelination, axonal loss and increased collagen-fibronectin deposition. These masses were present in 64% of ICV BMASC-transplanted severe EAE animals whereas neither BMSCs transplanted in mild EAE cases nor the NPCs exhibited similar behavior. BMSCs possibly exerted their fibrogenic effect via both paracrine and autocrine manner, at least partly due to up-regulation of connective tissue growth factor (CTGF) under the trigger of TGFb1. Our findings are of substantial relevance for clinical trials in MS, particularly regarding the possibility that ICV transplanted BMSCs entering the inflamed central nervous system may exhibit - under conditions - a local pathology of yet unknown consequences. --- paper_title: Single-cell profiling of developing and mature retinal neurons. 
paper_content: Highly specialized, but exceedingly small populations of cells play important roles in many tissues. The identification of cell-type specific markers and gene expression programs for extremely rare cell subsets has been a challenge using standard whole-tissue approaches. Gene expression profiling of individual cells allows for unprecedented access to cell types that comprise only a small percentage of the total tissue(1-7). In addition, this technique can be used to examine the gene expression programs that are transiently expressed in small numbers of cells during dynamic developmental transitions(8). This issue of cellular diversity arises repeatedly in the central nervous system (CNS) where neuronal connections can occur between quite diverse cells(9). The exact number of distinct cell types is not precisely known, but it has been estimated that there may be as many as 1000 different types in the cortex itself(10). The function(s) of complex neural circuits may rely on some of the rare neuronal types and the genes they express. By identifying new markers and helping to molecularly classify different neurons, the single-cell approach is particularly useful in the analysis of cell types in the nervous system. It may also help to elucidate mechanisms of neural development by identifying differentially expressed genes and gene pathways during early stages of neuronal progenitor development. As a simple, easily accessed tissue with considerable neuronal diversity, the vertebrate retina is an excellent model system for studying the processes of cellular development, neuronal differentiation and neuronal diversification. However, as in other parts of the CNS, this cellular diversity can present a problem for determining the genetic pathways that drive retinal progenitors to adopt a specific cell fate, especially given that rod photoreceptors make up the majority of the total retinal cell population(11). Here we report a method for the identification of the transcripts expressed in single retinal cells (Figure 1). The single-cell profiling technique allows for the assessment of the amount of heterogeneity present within different cellular populations of the retina(2,4,5,12). In addition, this method has revealed a host of new candidate genes that may play role(s) in the cell fate decision-making processes that occur in subsets of retinal progenitor cells(8). With some simple adjustments to the protocol, this technique can be utilized for many different tissues and cell types. --- paper_title: Telomere length changes after umbilical cord blood transplant paper_content: BACKGROUND: The establishment of donor-derived hematopoiesis in the recipients of hematopoietic stem cell (HSC) transplants involves extensive proliferation and differentiation of HSCs. Data from long-term survivors of HSC transplants suggest that these transplanted HSCs may experience a debilitating replicative senescence. A significant posttransplant shortening of peripheral blood mononuclear cell (PBMNC) telomeres has been observed in both marrow transplant and peripheral blood progenitor cell transplant recipients. Similar studies have not been performed for umbilical cord blood (UCB) HSC transplants, which might be expected to exhibit increased posttransplant replicative potential due to their inherently greater telomere length. 
STUDY DESIGN AND METHODS: Blood was obtained from donor-recipient pairs of allogeneic PBHSC transplant and UCB HSC transplant, both before transplant and at follow-up treatments (minimum 1 year after transplant) after engraftment. Telomere restriction fragment length (TRFL) analysis was performed on the blood samples. The mean TRFL and posttransplant changes in the mean TRFL were analyzed. RESULTS: Measurements of telomere lengths in the PBMNCs of transplant patients revealed a significant net decrease in telomere length in all transplant recipients compared with their respective donors. Our results also revealed that the PBMNCs of umbilical cord stem cell transplant patients retain a significantly longer posttransplant telomere length. CONCLUSION: The significantly longer telomeres observed in the allogeneic UCB HSC transplant recipients compared to the allogeneic PBHSC transplant recipients in our study may be indicative of a replicative advantage inherent in the use of UCB HSC for transplant. --- paper_title: Comparison of reverse transcription–quantitative polymerase chain reaction methods and platforms for single cell gene expression analysis paper_content: Single cell gene expression analysis can provide insights into development and disease progression by profiling individual cellular responses as opposed to reporting the global average of a population. Reverse transcription–quantitative polymerase chain reaction (RT–qPCR) is the “gold standard” for the quantification of gene expression levels; however, the technical performance of kits and platforms aimed at single cell analysis has not been fully defined in terms of sensitivity and assay comparability. We compared three kits using purification columns (PicoPure) or direct lysis (CellsDirect and Cells-to-CT) combined with a one- or two-step RT–qPCR approach using dilutions of cells and RNA standards to the single cell level. Single cell-level messenger RNA (mRNA) analysis was possible using all three methods, although the precision, linearity, and effect of lysis buffer and cell background differed depending on the approach used. The impact of using a microfluidic qPCR platform versus a standard instrument was investigated for potential variability introduced by preamplification of template or scaling down of the qPCR to nanoliter volumes using laser-dissected single cell samples. The two approaches were found to be comparable. These studies show that accurate gene expression analysis is achievable at the single cell level and highlight the importance of well-validated experimental procedures for low-level mRNA analysis. --- paper_title: Microarray analysis of normal and dystrophic skeletal muscle paper_content: The development and increasingly common use of DNA microarrays for comprehensive RNA expression analysis has had a substantial impact on the study of molecular pathology. DNA microarrays are orderly, high-density arrangements of nucleic acid spots that can be used as substrates for global gene expression analysis. Prior to their development, technical limitations necessitated that the molecular mechanisms underlying biological processes be broken down into their component parts and each gene or protein studied individually. This approach, focused as it is on a single aspect of a scientific phenomenon, does not allow appreciation or understanding of the fact that biological pathways do not exist in isolation, but are influenced by numerous factors.
Enormous technological advances have been made over the past decade and now high-density DNA microarrays can provide rapid measurement of thousands of distinct transcripts simultaneously. These experiments raise the exciting opportunity to examine biological pathways in all their complexity and to compare the hypotheses deduced from the study of histological pathology with the findings of molecular pathology. This review focuses on how microarray technology has been used to interrogate muscular gene expression and, in particular, on how data generated from differential expression analysis of dystrophic and normal skeletal muscle has contributed to understanding the molecular pathophysiological pathways of muscular dystrophy. --- paper_title: Lifetime probabilities of hematopoietic stem cell transplantation in the U.S. paper_content: Healthcare policies regarding hematopoietic stem cell transplantation (HSCT) must address the need for the procedure as well as the availability of stem cell sources: bone marrow, peripheral blood, or umbilical cord blood (UCB). However, data with respect to the lifetime probability of undergoing HSCT are lacking. This study was undertaken to estimate the latter probability in the United States (U.S.), depending on age, sex, and race. We used data from the Center for International Blood and Marrow Transplant Research, the U.S. Surveillance, Epidemiology and End Results Program, and the U.S. Census Bureau and calculated probabilities as cumulative incidences. Several scenarios were considered: assuming current indications for autologous and allogeneic HSCT, assuming universal donor availability, and assuming broadening of HSCT use in hematologic malignancies. Incidences of diseases treated with HSCT and of HSCTs performed increase with age, rising strongly after age 40. Among individuals older than 40, incidences are higher for men than for women. The lifetime probabilities of undergoing HSCT range from 0.23% to 0.98% under the various scenarios. We conclude that, given current indications, the lifetime probability of undergoing autologous or allogeneic HSCT is much higher than previously reported by others and could rise even higher with increases in donor availability and HSCT applicability. --- paper_title: Integrin-ECM interactions regulate the changes in cell shape driving the morphogenesis of the Drosophila wing epithelium. paper_content: During development, morphogenesis involves migration and changes in the shape of epithelial sheets, both of which require coordination of cell adhesion. Thus, while modulation of integrin-mediated adhesion to the ECM regulates epithelial motility, cell-cell adhesion via cadherins controls the remodelling of epithelial sheets. We have used the Drosophila wing epithelium to demonstrate that cell-ECM interactions mediated by integrins also regulate the changes in cell shape that underly epithelial morphogenesis. We show that integrins control the transitions from columnar to cuboidal cell shape underlying wing formation, and we demonstrate that eliminating the ECM has the same effect on cell shape as inhibiting integrin function. Furthermore, lack of integrin activity also induces detachment of the basal lamina and failure to assemble the basal matrix. Hence, we propose that integrins control epithelial cell shape by mediating adherence of these cells to the ECM. 
Finally, we show that the ECM has an instructive rather than a structural role, because inhibition of Raf reverses the cell shape changes caused by perturbing integrins. --- paper_title: A planar interdigitated ring electrode array via dielectrophoresis for uniform patterning of cells. paper_content: Uniform patterning of cells is highly desirable for most cellular studies involving cell-cell interactions but is often difficult in an in vitro environment. This paper presents the development of a collagen-coated planar interdigitated ring electrode (PIRE) array utilizing positive dielectrophoresis to pattern cells uniformly. Key features of the PIRE design include: (1) maximizing length along the edges where the localized maximum in the electric field exists; (2) making the inner gap slightly smaller than the outer gap in causing the electric field strength near the center of a PIRE being generally stronger than that near the outer edge of the same PIRE. Results of human hepatocellular carcinoma cells, HepG2, adhered on a 6x6 PIRE array show that cells patterned within minutes with good uniformity (48+/-6 cells per PIRE). Cell viability test revealed healthy patterned cells after 24h that were still confined to the collagen-coated PIREs. Furthermore, quantification of fluorescence intensity of living cells shows an acceptable reproducibility of cell viability among PIREs (mean normalized intensity per PIRE was 1+/-0.138). The results suggest that the PIRE array would benefit applications that desire uniform cellular patterning, and improve both response and reproducibility of cell-based biosensors. --- paper_title: Stem cell-based treatments for Type 1 diabetes mellitus: bone marrow, embryonic, hepatic, pancreatic and induced pluripotent stem cells paper_content: Type 1 diabetes mellitus--characterized by the permanent destruction of insulin-secreting β-cells--is responsive to cell-based treatments that replace lost β-cell populations. The current gold standard of pancreas transplantation provides only temporary independence from exogenous insulin and is fraught with complications, including increased mortality. Stem cells offer a number of theoretical advantages over current therapies. Our review will focus on the development of treatments involving tissue stem cells from bone marrow, liver and pancreatic cells, as well as the potential use of embryonic and induced pluripotent stem cells for Type 1 diabetes therapy. While the body of research involving stem cells is at once promising and inconsistent, bone marrow-derived mesenchymal stem cell transplantation seems to offer the most compelling evidence of efficacy. These cells have been demonstrated to increase endogenous insulin production, while partially mitigating the autoimmune destruction of newly formed β-cells. However, recently successful experiments involving induced pluripotent stem cells could quickly move them into the foreground of therapeutic research. We address the limitations encountered by present research and look toward the future of stem cell treatments for Type 1 diabetes. --- paper_title: Induction of Pluripotent Stem Cells from Mouse Embryonic and Adult Fibroblast Cultures by Defined Factors paper_content: Differentiated cells can be reprogrammed to an embryonic-like state by transfer of nuclear contents into oocytes or by fusion with embryonic stem (ES) cells. Little is known about factors that induce this reprogramming. 
Here, we demonstrate induction of pluripotent stem cells from mouse embryonic or adult fibroblasts by introducing four factors, Oct3/4, Sox2, c-Myc, and Klf4, under ES cell culture conditions. Unexpectedly, Nanog was dispensable. These cells, which we designated iPS (induced pluripotent stem) cells, exhibit the morphology and growth properties of ES cells and express ES cell marker genes. Subcutaneous transplantation of iPS cells into nude mice resulted in tumors containing a variety of tissues from all three germ layers. Following injection into blastocysts, iPS cells contributed to mouse embryonic development. These data demonstrate that pluripotent stem cells can be directly generated from fibroblast cultures by the addition of only a few defined factors. --- paper_title: Using a single fluorescent reporter gene to infer half-life of extrinsic noise and other parameters of gene expression. paper_content: Fluorescent and luminescent proteins are often used as reporters of transcriptional activity. Given the prevalence of noise in biochemical systems, the time-series data arising from these is of significant interest in efforts to calibrate stochastic models of gene expression and obtain information about sources of nongenetic variability. We present a statistical inference framework that can be used to estimate kinetic parameters of gene expression, as well as the strength and half-life of extrinsic noise from single fluorescent-reporter-gene time-series data. The method takes into account stochastic variability in a fluorescent signal resulting from intrinsic noise of gene expression, kinetics of fluorescent protein maturation, and extrinsic noise, which is assumed to arise at transcriptional level. We use the linear noise approximation and derive an explicit formula for the likelihood of observed fluorescent data. The method is embedded in a Bayesian paradigm, so that certain parameters can be informed from other experiments allowing portability of results across different studies. Inference is performed using Markov chain Monte Carlo. Fluorescent reporters are primary tools to observe dynamics of gene expression and the correct interpretation of fluorescent data is crucial to investigating these fundamental processes of cellular life. As both magnitude and frequency of the noise may have a dramatic effect on the cell fitness, the quantification of stochastic fluctuation is essential to the understanding of how genes are regulated. Our method provides a framework that addresses this important question. --- paper_title: Neuronal Classification and Marker Gene Identification via Single-Cell Expression Profiling of Brainstem Vestibular Neurons Subserving Cerebellar Learning paper_content: Identification of marker genes expressed in specific cell types is essential for the genetic dissection of neural circuits. Here we report a new strategy for classifying heterogeneous populations of neurons into functionally distinct types and for identifying associated marker genes. Quantitative single-cell expression profiling of genes related to neurotransmitters and ion channels enables functional classification of neurons; transcript profiles for marker gene candidates identify molecular handles for manipulating each cell type. We apply this strategy to the mouse medial vestibular nucleus (MVN), which comprises several types of neurons subserving cerebellar-dependent learning in the vestibulo-ocular reflex. 
Ion channel gene expression differed both qualitatively and quantitatively across cell types and could distinguish subtle differences in intrinsic electrophysiology. Single-cell transcript profiling of MVN neurons established six functionally distinct cell types and associated marker genes. This strategy is applicable throughout the nervous system and could facilitate the use of molecular genetic tools to examine the behavioral roles of distinct neuronal populations. --- paper_title: Transistor Probes Local Potassium Conductances in the Adhesion Region of Cultured Rat Hippocampal Neurons paper_content: Adhesion interactions of neurons in a tissue may affect the ion conductance of the plasma membrane, inducing selective localization and modulation of channels. We studied the adhesion region of cultured neurons from rat hippocampus as a defined model where such effects could be observed electrophysiologically, taking advantage of extracellular recording by a transistor integrated in the substrate. We observed the K(+) current through the region of soma adhesion under voltage-clamp and compared it with the current through the whole cell. We found that the specific A-type conductance was depleted, even completely, in the region of adhesion, whereas the specific K-type conductance was enhanced up to a factor of 12. The electrophysiological approach opens a new way to investigate targeting of ion channels in the cell membrane as a function of adhesion processes. --- paper_title: Fluorescent proteins as a toolkit for in vivo imaging. paper_content: Green fluorescent protein (GFP) from the jellyfish Aequorea victoria, and its mutant variants, are the only fully genetically encoded fluorescent probes available and they have proved to be excellent tools for labeling living specimens. Since 1999, numerous GFP homologues have been discovered in Anthozoa, Hydrozoa and Copepoda species, demonstrating the broad evolutionary and spectral diversity of this protein family. Mutagenic studies gave rise to diversified and optimized variants of fluorescent proteins, which have never been encountered in nature. This article gives an overview of the GFP-like proteins developed to date and their most common applications to study living specimens using fluorescence microscopy. --- paper_title: The modulation of myogenic cells differentiation using a semiconductor-muscle junction. paper_content: The present study is aimed to design a prototype of hybrid silicon-muscle cell junction, analog to an artificial neuromuscular junction prototype and relevant to the development of advanced neuro-prostheses and bionic systems. The device achieves focal Electric Capacitive Stimulation (ECS) by coupling of single cells and semiconductors, without electrochemical reaction with the substrate. A voltage change applied to a stimulation spot beneath an electrogenic cell leads to a capacitive current (charge accumulation) that opens voltage-gated ion channels in the membrane and generates an action potential. The myo-electronic junction was employed to chronically stimulate muscle cells via ECS and to induce cytosolic calcium transients in myotubes, fibers isolated from mouse FDB (fast [Ca2+]i transients) and surprisingly also in undifferentiated myoblasts (slow [Ca2+]i waves). The hybrid junction elicited, via chronic ECS, a differential reprogramming of single muscle cells by inducing early muscle contraction maturation and plasticity effects, such as NFAT-C3 nuclear translocation.
In addition, in the presence of agrin, chronic ECS induced a modulation of AChR clustering which simulates in vitro synaptogenesis. This methodology can coordinate the myogenic differentiation, thus offering direct but non-invasive single cell/wiring, providing a platform for regenerative medicine strategies. --- paper_title: CEL-Seq: Single-Cell RNA-Seq by Multiplexed Linear Amplification paper_content: High-throughput sequencing has allowed for unprecedented detail in gene expression analyses, yet its efficient application to single cells is challenged by the small starting amounts of RNA. We have developed CEL-Seq, a method for overcoming this limitation by barcoding and pooling samples before linearly amplifying mRNA with the use of one round of in vitro transcription. We show that CEL-Seq gives more reproducible, linear, and sensitive results than a PCR-based amplification method. We demonstrate the power of this method by studying early C. elegans embryonic development at single-cell resolution. Differential distribution of transcripts between sister cells is seen as early as the two-cell stage embryo, and zygotic expression in the somatic cell lineages is enriched for transcription factors. The robust transcriptome quantifications enabled by CEL-Seq will be useful for transcriptomic analyses of complex tissues containing populations of diverse cell types. --- paper_title: Single Cell Profiling of Circulating Tumor Cells: Transcriptional Heterogeneity and Diversity from Breast Cancer Cell Lines paper_content: Background: To improve cancer therapy, it is critical to target metastasizing cells. Circulating tumor cells (CTCs) are rare cells found in the blood of patients with solid tumors and may play a key role in cancer dissemination. Uncovering CTC phenotypes offers a potential avenue to inform treatment. However, CTC transcriptional profiling is limited by leukocyte contamination; an approach to surmount this problem is single cell analysis. Here we demonstrate feasibility of performing high dimensional single CTC profiling, providing early insight into CTC heterogeneity and allowing comparisons to breast cancer cell lines widely used for drug discovery. Methodology/Principal Findings: We purified CTCs using the MagSweeper, an immunomagnetic enrichment device that isolates live tumor cells from unfractionated blood. CTCs that met stringent criteria for further analysis were obtained from 70% (14/20) of primary and 70% (21/30) of metastatic breast cancer patients; none were captured from patients with nonepithelial cancer (n=20) or healthy subjects (n=25). Microfluidic-based single cell transcriptional profiling of 87 cancer-associated and reference genes showed heterogeneity among individual CTCs, separating them into two major subgroups, based on 31 highly expressed genes. In contrast, single cells from seven breast cancer cell lines were tightly clustered together by sample ID and ER status. CTC profiles were distinct from those of cancer cell lines, questioning the suitability of such lines for drug discovery efforts for late stage cancer therapy. Conclusions/Significance: For the first time, we directly measured high dimensional gene expression in individual CTCs without the common practice of pooling such cells. Elevated transcript levels of genes associated with metastasis: NPTN, S100A4, S100A9, and with epithelial mesenchymal transition: VIM, TGFβ1, ZEB2, FOXC1, CXCR4, were striking compared to cell lines.
Our findings demonstrate that profiling CTCs on a cell-by-cell basis is possible and may facilitate the application of ‘liquid biopsies’ to better model drug discovery. --- paper_title: Mesenchymal stem cells and progenitor cells in connective tissue engineering and regenerative medicine: is there a future for transplantation? paper_content: PURPOSE: Transplantation surgery suffers from a shortage of donor organs worldwide. Cell injection and tissue engineering (TE), thus emerge as alternative therapy options. The purpose of this article is to review the progress of TE technology, focusing on mesenchymal stem cells (MSC) as a cell source for artificial functional tissue. RESULTS: MSC from many different sources can be minimally invasively harvested: peripheral blood, fat tissue, bone marrow, amniotic fluid, cord blood. In comparison to embryonic stem cells (ESC), there are no ethical concerns; MSC can be extracted from autologous or allogenic tissue and cause an immune modulatory effect by suppressing the graft-versus-host reaction (GvHD). Furthermore, MSC do not develop into teratomas when transplanted, a consequence observed with ESC and iPS cells. CONCLUSION: MSC as multipotent cells are capable of differentiating into mesodermal and non-mesodermal lineages. However, further studies must be performed to elucidate the differentiation capacity of MSC from different sources, and to understand the involved pathways and processes. Already, MSC have been successfully applied in clinical trials, e.g., to heal large bone defects, cartilage lesions, spinal cord injuries, cardiovascular diseases, hematological pathologies, osteogenesis imperfecta, and GvHD. A detailed understanding of the behavior and homing of MSC is desirable to enlarge the clinical application spectrum of MSC towards the in vitro generation of functional tissue for implantation, for example, resilient cartilage, contractile myocardial replacement tissue, and bioartificial heart valves. --- paper_title: Generation of induced pluripotent stem cells without Myc from mouse and human fibroblasts paper_content: Direct reprogramming of somatic cells provides an opportunity to generate patient- or disease-specific pluripotent stem cells. Such induced pluripotent stem (iPS) cells were generated from mouse fibroblasts by retroviral transduction of four transcription factors: Oct3/4, Sox2, Klf4 and c-Myc. Mouse iPS cells are indistinguishable from embryonic stem (ES) cells in many respects and produce germline-competent chimeras. Reactivation of the c-Myc retrovirus, however, increases tumorigenicity in the chimeras and progeny mice, hindering clinical applications. Here we describe a modified protocol for the generation of iPS cells that does not require the Myc retrovirus. With this protocol, we obtained significantly fewer non-iPS background cells, and the iPS cells generated were consistently of high quality. Mice derived from Myc(-) iPS cells did not develop tumors during the study period. The protocol also enabled efficient isolation of iPS cells without drug selection. Furthermore, we generated human iPS cells from adult dermal fibroblasts without MYC. --- paper_title: Mesenchymal stem cells: therapeutic outlook for stroke. paper_content: Adult bone marrow-derived mesenchymal stem cells (MSCs) display a spectrum of functional properties.
Transplantation of these cells improves clinical outcome in models of cerebral ischemia and spinal cord injury via mechanisms that may include replacement of damaged cells, neuroprotective effects, induction of axonal sprouting, and neovascularization. Therapeutic effects have been reported in animal models of stroke after intravenous delivery of MSCs, including those derived from adult human bone marrow. Initial clinical studies on intravenously delivered MSCs have now been completed in human subjects with stroke. Here, we review the reparative and protective properties of transplanted MSCs in stroke models, describe initial human studies on intravenous MSC delivery in stroke, and provide a perspective on prospects for future progress with MSCs. --- paper_title: High-throughput analysis of signals regulating stem cell fate and function. paper_content: Stem cells exhibit promise in numerous areas of regenerative medicine. Their fate and function are governed by a combination of intrinsic determinants and signals from the local microenvironment, or niche. An understanding of the mechanisms underlying both embryonic and adult stem cell functions has been greatly enhanced by the recent development of several high-throughput technologies: microfabricated platforms, including cellular microarrays, to investigate the combinatorial effects of microenvironmental stimuli and large-scale screens utilizing small molecules and short interfering RNAs to identify crucial genetic and signaling elements. Furthermore, the integration of these systems with other versatile platforms, such as microfluidics and lentiviral microarrays, will continue to enable the detailed elucidation of stem cell processes, and thus, greatly contribute to the development of stem cell based therapies. --- paper_title: Fluorescent Proteins as Biomarkers and Biosensors: Throwing Color Lights on Molecular and Cellular Processes paper_content: Green fluorescent protein (GFP) from jellyfish Aequorea victoria is the most extensively studied and widely used in cell biology protein. GFP-like proteins constitute a fast growing family as several naturally occurring GFP-like proteins have been discovered and enhanced mutants of Aequorea GFP have been created. These mutants differ from wild-type GFP by conformational stability, quantum yield, spectroscopic properties (positions of absorption and fluorescence spectra) and by photochemical properties. GFP-like proteins are very diverse, as they can be not only green, but also blue, orange-red, far-red, cyan, and yellow. They also can have dual-color fluorescence (e.g., green and red) or be non-fluorescent. Some of them possess kindling property, some are photoactivatable, and some are photoswitchable. This review is an attempt to characterize the main color groups of GFP-like proteins, describe their structure and mechanisms of chromophore formation, systemize data on their conformational stability and summarize the main trends of their utilization as markers and biosensors in cell and molecular biology. --- paper_title: Adult Bone Marrow: Which Stem Cells for Cellular Therapy Protocols in Neurodegenerative Disorders? paper_content: The generation of neuronal cells from stem cells obtained from adult bone marrow is of significant clinical interest in order to design new cell therapy protocols for several neurological disorders. 
The recent identification in adult bone marrow of stem cells derived from the neural crests (NCSCs) might explain the neuronal phenotypic plasticity shown by bone marrow cells. However, little information is available about the nature of these cells compared to mesenchymal stem cells (MSCs). In this paper, we will review all information available concerning NCSC from adult tissues and their possible use in regenerative medicine. Moreover, as multiple recent studies showed the beneficial effect of bone marrow stromal cells in neurodegenerative diseases, we will discuss which stem cells isolated from adult bone marrow should be more suitable for cell replacement therapy. --- paper_title: Concise review: Adipose-derived stem cells as a novel tool for future regenerative medicine. paper_content: The potential use of stem cell-based therapies for the repair and regeneration of various tissues and organs offers a paradigm shift that may provide alternative therapeutic solutions for a number of diseases. The use of either embryonic stem cells (ESCs) or induced pluripotent stem cells in clinical situations is limited due to cell regulations and to technical and ethical considerations involved in the genetic manipulation of human ESCs, even though these cells are, theoretically, highly beneficial. Mesenchymal stem cells seem to be an ideal population of stem cells for practical regenerative medicine, because they are not subjected to the same restrictions. In particular, large number of adipose-derived stem cells (ASCs) can be easily harvested from adipose tissue. Furthermore, recent basic research and preclinical studies have revealed that the use of ASCs in regenerative medicine is not limited to mesodermal tissue but extends to both ectodermal and endodermal tissues and organs, although ASCs originate from mesodermal lineages. Based on this background knowledge, the primary purpose of this concise review is to summarize and describe the underlying biology of ASCs and their proliferation and differentiation capacities, together with current preclinical and clinical data from a variety of medical fields regarding the use of ASCs in regenerative medicine. In addition, future directions for ASCs in terms of cell-based therapies and regenerative medicine are discussed. --- paper_title: New Array Approaches to Explore Single Cells Genomes paper_content: Microarray analysis enables the genome wide detection of copy number variations and the investigation of chromosomal instability. Whereas array techniques have been well established for the analysis of unamplified DNA derived from many cells, it has been more challenging to enable the accurate analysis of single cell genomes. In this review, we provide an overview of single cell DNA amplification techniques, the different array approaches and discuss their potential applications to study human embryos. --- paper_title: Methods for Synchronizing Cells at Specific Stages of the Cell Cycle paper_content: Exponentially growing cells are asynchronous with respect to the cell cycle stage. Detection of cell cycle-related events is improved by enriching the culture for cells at the stage during which the particular event occurs. Methods for synchronizing cells are provided here, including those based on morphological features of the cell (mitotic shake-off), cellular metabolism (thymidine inhibition, isoleucine depravation), and chemical inhibitors of cell progression in G1 (lovastatin), S (aphidicolin, mimosine), and G2/M (nocodazole). 
Applications of these methods and the advantages and disadvantages of each are described. --- paper_title: Cells on chips paper_content: Microsystems create new opportunities for the spatial and temporal control of cell growth and stimuli by combining surfaces that mimic complex biochemistries and geometries of the extracellular matrix with microfluidic channels that regulate transport of fluids and soluble factors. Further integration with bioanalytic microsystems results in multifunctional platforms for basic biological insights into cells and tissues, as well as for cell-based sensors with biochemical, biomedical and environmental functions. Highly integrated microdevices show great promise for basic biomedical and pharmaceutical research, and robust and portable point-of-care devices could be used in clinical settings, in both the developed and the developing world. --- paper_title: Gene expression analysis on a single cell level in Purkinje cells of Huntington's disease transgenic mice paper_content: Ataxia is a clinical feature of most polyglutamine disorders. Cerebellar neurodegeneration of Purkinje cells (PCs) in Huntington's Disease (HD) brain was described in the 1980s. PC death in the R6/2 transgenic model for HD was published by Turmaine et al. [27]. So far, PCs have not been examined on a single cell level. In order to begin to understand PC dysfunction and degeneration in HD we performed a gene expression study on laser-dissected PC based on a DNA microarray screening and quantitative real time PCR (Q-PCR). We demonstrate downregulation of the retinoid acid receptor-related orphan receptor alpha (ROR alpha) mRNA and ROR alpha-mediated mRNAs, also seen by immunofluorescent staining. As ROR alpha and ROR alpha-dependent transcriptional dysregulation is not only found in the R6/2 model for HD but also in a model for spinocerebellar ataxia type 1 (SCA1) (Serra et al. [24]) the data suggest common pathogenic mechanisms for both polyglutamine diseases. --- paper_title: Fluorescence microscopy today paper_content: Fluorescence microscopy has undergone a renaissance in the last decade. The introduction of green fluorescent protein (GFP) and two-photon microscopy has allowed systematic imaging studies of protein localization in living cells and of the structure and function of living tissues. The impact of these and other new imaging methods in biophysics, neuroscience, and developmental and cell biology has been remarkable. Further advances in fluorophore design, molecular biological tools and nonlinear and hyper-resolution microscopies are poised to profoundly transform many fields of biological research. --- paper_title: Dynamic culture of droplet-confined cell arrays. paper_content: Responding to the need of creating an accurate and controlled microenvironment surrounding the cell while meeting the requirements for biological processes or pharmacological screening tests, we aimed at designing and developing a microscaled culture system suitable for analyzing the synergic effects of extracellular matrix proteins and soluble environments on cell phenotype in a high-throughput fashion. We produced cell arrays depositing micrometer-scale protein islands on hydrogels using a robotic DNA microarrayer, constrained the culture media in a droplet-like volume and developed a suitable perfusion system.
The droplet-confined cell arrays were used either with conventional culture methods (batch operating system) or with automated stable and constant perfusion (steady-state operating system). Mathematical modeling assisted the experimental design and assessed efficient mass transport and proper fluidodynamic regimes. Cells cultured on arrayed islands (500 μm diameter) maintained the correct phenotype both after static and perfused conditions, confirmed by immunostaining and gene expression analyses through total RNA extraction. The mathematical model, validated using a particle tracking experiment, predicted the constant value of velocities over the cell arrays (less than 10% variation) ensuring the same mass transport regime. BrdU analysis on an average of 96 cell spots for each experimental condition showed uniform expression inside each cell island and low variability in the data (average of 13%). Perfused arrays showed longer doubling times when compared with static cultures. In addition, perfused cultures showed a reduced variability in the collected data, allowing detection of statistically significant differences in cell behavior depending on the spotted ECM protein. --- paper_title: Derivation, propagation and differentiation of human embryonic stem cells. paper_content: Embryonic stem (ES) cells are in vitro cultivated pluripotent cells derived from the inner cell mass (ICM) of the embryonic blastocyst. Attesting to their pluripotency, ES cells can be differentiated into representative derivatives of all three embryonic germ layers (endoderm, ectoderm and mesoderm) both in vitro and in vivo. Although mouse ES cells have been studied for many years, human ES cells have only more recently been derived and successfully propagated. Many biochemical differences and culture requirements between mouse and human ES cells have been described, yet despite these differences the study of murine ES cells has provided important insights into methodologies aimed at generating a greater and more in depth understanding of human ES cell biology. One common feature of both mouse and human ES cells is their capacity to undergo controlled differentiation into spheroid structures termed embryoid bodies (EBs). EBs recapitulate several aspects of early development, displaying regional-specific differentiation programs into derivatives of all three embryonic germ layers. For this reason, EB formation has been utilised as an initial step in a wide range of studies aimed at differentiating both mouse and human ES cells into a specific and desired cell type. Recent reports utilising specific growth factor combinations and cell-cell induction systems have provided alternative strategies for the directed differentiation of cells into a desired lineage. According to each one of these strategies, however, a relatively high cell lineage heterogeneity remains, necessitating subsequent purification steps including mechanical dissection, selective media or fluorescent or magnetic activated cell sorting (FACS and MACS, respectively). In the future, the ability to specifically direct differentiation of human ES cells at 100% efficiency into a desired lineage will allow us to fully explore the potential of these cells in the analysis of early human development, drug discovery, drug testing and repair of damaged or diseased tissues via transplantation.
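The "Dynamic culture of droplet-confined cell arrays" entry above reports per-spot BrdU readouts across roughly 96 spots per condition with about 13% average variability. A minimal sketch of how such spot-to-spot variability could be summarized per condition (mean and coefficient of variation); the condition names and values are hypothetical, and this is a generic summary rather than the authors' own analysis:

# Illustrative sketch: per-condition mean and CV of per-spot readouts
# (e.g., fraction of BrdU-positive nuclei) from an arrayed culture.
from statistics import mean, stdev
from collections import defaultdict

# (condition, per-spot value) pairs; values are made up for illustration.
spots = [
    ("collagen_static", 0.41), ("collagen_static", 0.38), ("collagen_static", 0.45),
    ("collagen_perfused", 0.29), ("collagen_perfused", 0.31), ("collagen_perfused", 0.30),
]

by_condition = defaultdict(list)
for condition, value in spots:
    by_condition[condition].append(value)

for condition, values in by_condition.items():
    m = mean(values)
    cv = stdev(values) / m * 100  # percent variability across replicate spots
    print(f"{condition}: mean={m:.2f}, CV={cv:.1f}% (n={len(values)})")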
--- paper_title: An extracellular matrix microarray for probing cellular differentiation paper_content: We present an extracellular matrix (ECM) microarray platform for the culture of patterned cells atop combinatorial matrix mixtures. This platform enables the study of differentiation in response to a multitude of microenvironments in parallel. The fabrication process required only access to a standard robotic DNA spotter, off-the-shelf materials and 1,000 times less protein than conventional means of investigating cell-ECM interactions. To demonstrate its utility, we applied this platform to study the effects of 32 different combinations of five extracellular matrix molecules (collagen I, collagen III, collagen IV, laminin and fibronectin) on cellular differentiation in two contexts: maintenance of primary rat hepatocyte phenotype indicated by intracellular albumin staining and differentiation of mouse embryonic stem (ES) cells toward an early hepatic fate, indicated by expression of a beta-galactosidase reporter fused to the fetal liver-specific gene, Ankrd17 (also known as gtar). Using this technique, we identified combinations of ECM that synergistically impacted both hepatocyte function and ES cell differentiation. This versatile technique can be easily adapted to other applications, as it is amenable to studying almost any insoluble microenvironmental cue in a combinatorial fashion and is compatible with several cell types. --- paper_title: Duplexes of 21-nucleotide RNAs mediate RNA interference in cultured mammalian cells paper_content: RNA interference (RNAi) is the process of sequence-specific, post-transcriptional gene silencing in animals and plants, initiated by double-stranded RNA (dsRNA) that is homologous in sequence to the silenced gene1,2,3,4. The mediators of sequence-specific messenger RNA degradation are 21- and 22-nucleotide small interfering RNAs (siRNAs) generated by ribonuclease III cleavage from longer dsRNAs5,6,7,8,9. Here we show that 21-nucleotide siRNA duplexes specifically suppress expression of endogenous and heterologous genes in different mammalian cell lines, including human embryonic kidney (293) and HeLa cells. Therefore, 21-nucleotide siRNA duplexes provide a new tool for studying gene function in mammalian cells and may eventually be used as gene-specific therapeutics. --- paper_title: A resource for large-scale RNA-interference-based screens in mammals paper_content: Gene silencing by RNA interference (RNAi) in mammalian cells using small interfering RNAs (siRNAs) and short hairpin RNAs (shRNAs) has become a valuable genetic tool1,2,3,4,5,6,7,8,9,10. Here, we report the construction and application of a shRNA expression library targeting 9,610 human and 5,563 mouse genes. This library is presently composed of about 28,000 sequence-verified shRNA expression cassettes contained within multi-functional vectors, which permit shRNA cassettes to be packaged in retroviruses, tracked in mixed cell populations by means of DNA ‘bar codes’, and shuttled to customized vectors by bacterial mating. In order to validate the library, we used a genetic screen designed to report defects in human proteasome function. Our results suggest that our large-scale RNAi library can be used in specific, genetic applications in mammals, and will become a valuable resource for gene analysis and discovery. 
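The extracellular matrix microarray entry above combines five ECM proteins into 32 mixtures, which matches the 2^5 presence/absence combinations of the five molecules. A brief sketch enumerating that combinatorial layout as a spotting worklist; whether a protein-free spot is included and how mixing volumes are balanced are assumptions of this sketch, not the published protocol:

# Illustrative sketch of the combinatorial layout behind a five-protein
# ECM microarray: 2**5 = 32 presence/absence combinations.
from itertools import product

ecm_proteins = ["collagen I", "collagen III", "collagen IV", "laminin", "fibronectin"]

worklist = []
for mask in product([0, 1], repeat=len(ecm_proteins)):
    mixture = [p for p, present in zip(ecm_proteins, mask) if present]
    worklist.append(mixture)

print(f"{len(worklist)} combinations")  # -> 32 combinations
for i, mixture in enumerate(worklist):
    print(f"spot {i:02d}: {', '.join(mixture) or 'buffer only'}")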
--- paper_title: Human Cell Chips: Adapting DNA Microarray Spotting Technology to Cell-Based Imaging Assays paper_content: Here we describe human spotted cell chips, a technology for determining cellular state across arrays of cells subjected to chemical or genetic perturbation. Cells are grown and treated under standard tissue culture conditions before being fixed and printed onto replicate glass slides, effectively decoupling the experimental conditions from the assay technique. Each slide is then probed using immunofluorescence or other optical reporter and assayed by automated microscopy. We show potential applications of the cell chip by assaying HeLa and A549 samples for changes in target protein abundance (of the dsRNA-activated protein kinase PKR), subcellular localization (nuclear translocation of NFkappaB) and activation state (phosphorylation of STAT1 and of the p38 and JNK stress kinases) in response to treatment by several chemical effectors (anisomycin, TNFalpha, and interferon), and we demonstrate scalability by printing a chip with approximately 4,700 discrete samples of HeLa cells. Coupling this technology to high-throughput methods for culturing and treating cell lines could enable researchers to examine the impact of exogenous effectors on the same population of experimentally treated cells across multiple reporter targets potentially representing a variety of molecular systems, thus producing a highly multiplexed dataset with minimized experimental variance and at reduced reagent cost compared to alternative techniques. The ability to prepare and store chips also allows researchers to follow up on observations gleaned from initial screens with maximal repeatability. --- paper_title: Overview of Electrochemical DNA Biosensors: New Approaches to Detect the Expression of Life paper_content: DNA microarrays are an important tool with a variety of applications in gene expression studies, genotyping, pharmacogenomics, pathogen classification, drug discovery, sequencing and molecular diagnostics. They are having a strong impact in medical diagnostics for cancer, toxicology and infectious disease applications. A series of papers have been published describing DNA biochips as alternative to conventional microarray platforms to facilitate and ameliorate the signal readout. In this review, we will consider the different methods proposed for biochip construction, focusing on electrochemical detection of DNA. We also introduce a novel single-stranded DNA platform performing high-throughput SNP detection and gene expression profiling. --- paper_title: High-Throughput Selection of Effective RNAi Probes for Gene Silencing paper_content: RNA interference (RNAi) is a process of sequence-specific posttranscriptional gene silencing mediated by double-stranded RNA. RNAi has recently emerged as a powerful genetic tool to analyze gene function in mammalian cells. The power of this method is limited however, by the uncertainty in predicting the efficacy of small interfering RNAs (siRNAs) in silencing a gene. This has imposed serious limitations not only for small-scale but also for high-throughput RNAi screening initiatives in mammalian systems. We have developed a reliable and quantitative approach for the rapid and efficient identification of the most effective siRNA against any gene. The efficacy of siRNA sequences is monitored by their ability to reduce the expression of cognate target-reporter fusions with easily quantified readouts. 
Finally, using microarray-based cell transfections, we demonstrate an unlimited potential of this approach in high-throughput screens for identifying effective siRNA probes for silencing genes in mammalian systems. This approach is likely to have implications in the use of RNAi as a reverse genetic tool for analyzing mammalian gene function on a genome-wide scale. --- paper_title: Role of YAP/TAZ in mechanotransduction paper_content: Cells perceive their microenvironment not only through soluble signals but also through physical and mechanical cues, such as extracellular matrix (ECM) stiffness or confined adhesiveness. By mechanotransduction systems, cells translate these stimuli into biochemical signals controlling multiple aspects of cell behaviour, including growth, differentiation and cancer malignant progression, but how rigidity mechanosensing is ultimately linked to activity of nuclear transcription factors remains poorly understood. Here we report the identification of the Yorkie-homologues YAP (Yes-associated protein) and TAZ (transcriptional coactivator with PDZ-binding motif, also known as WWTR1) as nuclear relays of mechanical signals exerted by ECM rigidity and cell shape. This regulation requires Rho GTPase activity and tension of the actomyosin cytoskeleton, but is independent of the Hippo/LATS cascade. Crucially, YAP/TAZ are functionally required for differentiation of mesenchymal stem cells induced by ECM stiffness and for survival of endothelial cells regulated by cell geometry; conversely, expression of activated YAP overrules physical constraints in dictating cell behaviour. These findings identify YAP/TAZ as sensors and mediators of mechanical cues instructed by the cellular microenvironment. --- paper_title: Microarray Transfection Analysis of Transcriptional Regulation by cAMP-dependent Protein Kinase paper_content: A wide variety of bioinformatic tools have been described to characterize potential transcriptional regulatory mechanisms based on genomic sequence analysis and microarray hybridization studies. However, these regulatory mechanisms are still experimentally verified using transient transfection methods. Current transfection methods are limited both by their large scale and by the low level of efficiency for certain cell types. Our goals were to develop a microarray-based transfection method that could be optimized for different cell types and that would be useful in reporter assays of transcriptional regulation. Here we describe a novel transfection method, termed STEP (surface transfection and expression protocol), which employs microarray-based DNA transfection of adherent cells in the functional analysis of transcriptional regulation. In STEP, recombinant proteins with biological activities designed to enhance transfection are complexed with expression vector DNAs prior to spotting on microscope slides. The recombinant proteins used in STEP complexes can be varied to increase the efficiency for different cell types. We demonstrate that STEP efficiently transfects both supercoiled plasmids and PCR-generated linear expression cassettes. A co-transfection assay using effector expression vectors encoding the cAMP-dependent protein kinase (PKA), as well as reporter vectors containing PKA-regulated promoters, showed that STEP transfection allows detection and quantitation of transcriptional regulation by this protein kinase.
Because bioinformatic studies often result in the identification of many putative regulatory elements and signaling pathways, this approach should be of utility in high-throughput functional genomic studies of transcriptional regulation. --- paper_title: Arrayed cellular microenvironments for identifying culture and differentiation conditions for stem, primary and rare cell populations paper_content: Arrayed cellular microenvironments for identifying culture and differentiation conditions for stem, primary and rare cell populations --- paper_title: Microarrays of cells expressing defined cDNAs paper_content: Genome and expressed sequence tag projects are rapidly cataloguing and cloning the genes of higher organisms, including humans. An emerging challenge is to rapidly uncover the functions of genes and to identify gene products with desired properties. We have developed a microarray-driven gene expression system for the functional analysis of many gene products in parallel. Mammalian cells are cultured on a glass slide printed in defined locations with different DNAs. Cells growing on the printed areas take up the DNA, creating spots of localized transfection within a lawn of non-transfected cells. By printing sets of complementary DNAs cloned in expression vectors, we make microarrays whose features are clusters of live cells that express a defined cDNA at each location. Here we demonstrate two uses for our approach: as an alternative to protein microarrays for the identification of drug targets, and as an expression cloning system for the discovery of gene products that alter cellular physiology. By screening transfected cell microarrays expressing 192 different cDNAs, we identified proteins involved in tyrosine kinase signalling, apoptosis and cell adhesion, and with distinct subcellular distributions. --- paper_title: Sequencing technologies — the next generation paper_content: Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest. --- paper_title: The Extracellular Matrix: Not Just Pretty Fibrils paper_content: The extracellular matrix (ECM) and ECM proteins are important in phenomena as diverse as developmental patterning, stem cell niches, cancer, and genetic diseases. The ECM has many effects beyond providing structural support. ECM proteins typically include multiple, independently folded domains whose sequences and arrangement are highly conserved. Some of these domains bind adhesion receptors such as integrins that mediate cell-matrix adhesion and also transduce signals into cells. However, ECM proteins also bind soluble growth factors and regulate their distribution, activation, and presentation to cells. As organized, solid-phase ligands, ECM proteins can integrate complex, multivalent signals to cells in a spatially patterned and regulated fashion. 
These properties need to be incorporated into considerations of the functions of the ECM. --- paper_title: Strategies for Engineering the Adhesive Microenvironment paper_content: Cells exist within a complex tissue microenvironment, which includes soluble factors, extracellular matrix molecules, and neighboring cells. In the breast, the adhesive microenvironment plays a crucial role in driving both normal mammary gland development as well tumor initiation and progression. Researchers are designing increasingly more complex ways to mimic the in vivo microenvironment in an in vitro setting, so that cells in culture may serve as model systems for tissue structures. Here, we explore the use of microfabrication technologies to engineer the adhesive microenvironment of cells in culture. These new tools permit the culture of cells on well-defined surface chemistries, patterning of cells into defined geometries either alone or in coculture scenarios, and measurement of forces associated with cell-ECM interactions. When applied to questions in mammary gland development and neoplasia, these new tools will enable a better understanding of how adhesive, structural, and mechanical cues regulate mammary epithelial biology. --- paper_title: Soft substrates drive optimal differentiation of human healthy and dystrophic myotubes. paper_content: The in vitro development of human myotubes carrying genetic diseases, such as Duchenne Muscular Dystrophy, will open new perspectives in the identification of innovative therapeutic strategies. Through the proper design of the substrate, we guided the differentiation of human healthy and dystrophic myoblasts into myotubes exhibiting marked functional differentiation and highly defined sarcomeric organization. A thin film of photo cross-linkable elastic poly-acrylamide hydrogel with physiological-like and tunable mechanical properties (elastic moduli, E: 12, 15, 18 and 21 kPa) was used as substrate. The functionalization of its surface by micro-patterning in parallel lanes (75 microm wide, 100 microm spaced) of three adhesion proteins (laminin, fibronectin and matrigel) was meant to maximize human myoblasts fusion. Myotubes formed onto the hydrogel showed a remarkable sarcomere formation, with the highest percentage (60.0% +/- 3.8) of myotubes exhibiting sarcomeric organization, of myosin heavy chain II and alpha-actinin, after 7 days of culture onto an elastic (15 kPa) hydrogel and a matrigel patterning. In addition, healthy myotubes cultured in these conditions showed a significant membrane-localized dystrophin expression. In this study, the culture substrate has been adapted to human myoblasts differentiation, through an easy and rapid methodology, and has led to the development of in vitro human functional skeletal muscle myotubes useful for clinical purposes and in vitro physiological study, where to carry out a broad range of studies on human muscle physiopathology. --- paper_title: Generation of dopaminergic neurons and pigmented epithelia from primate ES cells by stromal cell-derived inducing activity paper_content: Abstract ::: We previously identified a stromal cell-derived inducing activity (SDIA), which induces differentiation of neural cells, including midbrain tyrosine hydroxylase-positive (TH+) dopaminergic neurons, from mouse embryonic stem cells. We report here that SDIA induces efficient neural differentiation also in primate embryonic stem cells. Induced neurons contain TH+ neurons at a frequency of 35% and produce a significant amount of dopamine. 
Interestingly, differentiation of TH+ neurons from undifferentiated embryonic cells occurs much faster in vitro (10 days) than it does in the embryo (≈5 weeks). In addition, 8% of the colonies contain large patches of Pax6+-pigmented epithelium of the retina. The SDIA method provides an unlimited source of primate cells for the study of pathogenesis, drug development, and transplantation in degenerative diseases such as Parkinson's disease and retinitis pigmentosa. --- paper_title: Microarrays of small molecules embedded in biodegradable polymers for use in mammalian cell-based screens. paper_content: We developed a microarray-based system for screening small molecules in mammalian cells. This system is compatible with image-based screens and requires fewer than 100 cells per compound. Each compound is impregnated in a 200-microm-diameter disc composed of biodegradable poly-(D),(L)-lactide/glycolide copolymer. Cells are seeded on top of these discs, and compounds slowly diffuse out, affecting proximal cells. In contrast with microtiter-based screening, this system does not involve the use of wells or walls between each compound-treated group of cells. We demonstrate detection of the effects of a single compound in a large microarray, that diverse compounds can be released in this format, and that extended release over several days is feasible. We performed a small synthetic lethal screen and identified a compound (macbecin II) that has reduced activity in cells with RNA interference-mediated decrease in the expression of tuberous sclerosis 2. Thus, we have developed a microarray-based screening system for testing the effects of small molecules on mammalian cells by using an imaging-based readout. This method will be useful to those performing small-molecule screens to discover new chemical tools and potential therapeutic agents. --- paper_title: Nanoliter-scale synthesis of arrayed biomaterials and application to human embryonic stem cells paper_content: Nanoliter-scale synthesis of arrayed biomaterials and application to human embryonic stem cells --- paper_title: The Sequence of the Human Genome paper_content: A 2.91-billion base pair (bp) consensus sequence of the euchromatic portion of the human genome was generated by the whole-genome shotgun sequencing method. The 14.8-billion bp DNA sequence was generated over 9 months from 27,271,853 high-quality sequence reads (5.11-fold coverage of the genome) from both ends of plasmid clones made from the DNA of five individuals. Two assembly strategies—a whole-genome assembly and a regional chromosome assembly—were used, each combining sequence data from Celera and the publicly funded genome effort. The public data were shredded into 550-bp segments to create a 2.9-fold coverage of those genome regions that had been sequenced, without including biases inherent in the cloning and assembly procedure used by the publicly funded group. This brought the effective coverage in the assemblies to eightfold, reducing the number and size of gaps in the final assembly over what would be obtained with 5.11-fold coverage. The two assembly strategies yielded very similar results that largely agree with independent mapping data. The assemblies effectively cover the euchromatic regions of the human chromosomes. More than 90% of the genome is in scaffold assemblies of 100,000 bp or more, and 25% of the genome is in scaffolds of 10 million bp or larger. 
Analysis of the genome sequence revealed 26,588 protein-encoding transcripts for which there was strong corroborating evidence and an additional ∼12,000 computationally derived genes with mouse matches or other weak supporting evidence. Although gene-dense clusters are obvious, almost half the genes are dispersed in low G+C sequence separated by large tracts of apparently noncoding sequence. Only 1.1% of the genome is spanned by exons, whereas 24% is in introns, with 75% of the genome being intergenic DNA. Duplications of segmental blocks, ranging in size up to chromosomal lengths, are abundant throughout the genome and reveal a complex evolutionary history. Comparative genomic analysis indicates vertebrate expansions of genes associated with neuronal function, with tissue-specific developmental regulation, and with the hemostasis and immune systems. DNA sequence comparisons between the consensus sequence and publicly funded genome data provided locations of 2.1 million single-nucleotide polymorphisms (SNPs). A random pair of human haploid genomes differed at a rate of 1 bp per 1250 on average, but there was marked heterogeneity in the level of polymorphism across the genome. Less than 1% of all SNPs resulted in variation in proteins, but the task of determining which SNPs have functional consequences remains an open challenge. --- paper_title: Secreted protein prediction system combining CJ-SPHMM, TMHMM, and PSORT paper_content: To increase the coverage of secreted protein prediction, we describe a combination strategy. Instead of using a single method, we combine Hidden Markov Model (HMM)-based methods CJ-SPHMM and TMHMM with PSORT in secreted protein prediction. CJ-SPHMM is an HMM-based signal peptide prediction method, while TMHMM is an HMM-based transmembrane (TM) protein prediction algorithm. With CJ-SPHMM and TMHMM, proteins with predicted signal peptide and without predicted TM regions are taken as putative secreted proteins. This HMM-based approach predicts secreted protein with Ac (Accuracy) at 0.82 and Cc (Correlation coefficient) at 0.75, which are similar to PSORT with Ac at 0.82 and Cc at 0.76. When we further complement the HMM-based method, i.e., CJ-SPHMM + TMHMM with PSORT in secreted protein prediction, the Ac value is increased to 0.86 and the Cc value is increased to 0.81. Taking this combination strategy to search putative secreted proteins from the International Protein Index (IPI) maintained at the European Bioinformatics Institute (EBI), we constructed a putative human secretome with 5235 proteins. The prediction system described here can also be applied to predicting secreted proteins from other vertebrate proteomes. --- paper_title: Cell microarrays and RNA interference chip away at gene function paper_content: The recent development of cell microarrays offers the potential to accelerate high-throughput functional genetic studies. The widespread use of RNA interference (RNAi) has prompted several groups to fabricate RNAi cell microarrays that make possible discrete, in-parallel transfection with thousands of RNAi reagents on a microarray slide. Though still a budding technology, RNAi cell microarrays promise to increase the efficiency, economy and ease of genome-wide RNAi screens in metazoan cells. --- paper_title: Automatic Identification of Subcellular Phenotypes on Human Cell Arrays paper_content: Light microscopic analysis of cell morphology provides a high-content readout of cell function and protein localization. 
Cell arrays and microwell transfection assays on cultured cells have made cell phenotype analysis accessible to high-throughput experiments. Both the localization of each protein in the proteome and the effect of RNAi knock-down of individual genes on cell morphology can be assayed by manual inspection of microscopic images. However, the use of morphological readouts for functional genomics requires fast and automatic identification of complex cellular phenotypes. Here, we present a fully automated platform for high-throughput cell phenotype screening combining human live cell arrays, screening microscopy, and machine-learning-based classification methods. Efficiency of this platform is demonstrated by classification of eleven subcellular patterns marked by GFP-tagged proteins. Our classification method can be adapted to virtually any microscopic assay based on cell morphology, opening a wide range of applications including large-scale RNAi screening in human cells. --- paper_title: A large-scale RNAi screen in human cells identifies new components of the p53 pathway paper_content: RNA interference (RNAi) is a powerful new tool with which to perform loss-of-function genetic screens in lower organisms and can greatly facilitate the identification of components of cellular signalling pathways1,2,3. In mammalian cells, such screens have been hampered by a lack of suitable tools that can be used on a large scale. We and others have recently developed expression vectors to direct the synthesis of short hairpin RNAs (shRNAs) that act as short interfering RNA (siRNA)-like molecules to stably suppress gene expression4,5. Here we report the construction of a set of retroviral vectors encoding 23,742 distinct shRNAs, which target 7,914 different human genes for suppression. We use this RNAi library in human cells to identify one known and five new modulators of p53-dependent proliferation arrest. Suppression of these genes confers resistance to both p53-dependent and p19ARF-dependent proliferation arrest, and abolishes a DNA-damage-induced G1 cell-cycle arrest. Furthermore, we describe siRNA bar-code screens to rapidly identify individual siRNA vectors associated with a specific phenotype. These new tools will greatly facilitate large-scale loss-of-function genetic screens in mammalian cells. --- paper_title: Transfection microarray of human mesenchymal stem cells and on-chip siRNA gene knockdown. paper_content: The transfection efficiency of primary cells is the bottleneck for their use with miniaturized formats for gene validation assays. We have found that when formulations containing various reporter plasmids were microarrayed on glass slides (chips), hMSCs cultivated on the chip incorporated and expressed the microarrayed plasmid DNAs with high efficiency and virtually total spatial resolution. Fibronectin, as the key formulation component, was found to significantly increase the on-chip transfection efficiency in hMSCs as well as many other cells. Further, we have conclusively proven that when siRNA was co-arrayed with the target plasmid DNA, a concentration-dependent gene knockdown was observed. Thus, massively miniaturized RNAi gene knockdown experiments can now be performed in primary cells, previously unusable with transfection microarrays (TMA). 
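The transfection-microarray entry above closes by reporting a concentration-dependent gene knockdown when siRNA is co-arrayed with the target plasmid. The cited work does not provide an analysis script; the sketch below is only a minimal, hypothetical illustration of how such dose-response data are commonly summarized with a four-parameter Hill fit. The concentration values, the signal values, the `ic50`/`slope` parameter names, and the use of SciPy are all assumptions made for the example, not details taken from the paper.

```python
# Minimal sketch (not from the cited work): summarizing concentration-dependent
# knockdown with a Hill-type dose-response fit. All data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: residual reporter signal vs. arrayed siRNA dose."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical spotted siRNA amounts (ng per spot) and normalized reporter signal.
conc = np.array([0.5, 1, 2, 5, 10, 20, 50, 100], dtype=float)
signal = np.array([0.98, 0.95, 0.88, 0.70, 0.45, 0.28, 0.15, 0.12])

# Initial guesses: floor at high dose, full expression at low dose, mid-range IC50.
p0 = [0.1, 1.0, 5.0, 1.0]
params, _ = curve_fit(hill, conc, signal, p0=p0, maxfev=10000)
bottom, top, ic50, slope = params
print(f"Estimated IC50 ~ {ic50:.1f} ng/spot, Hill slope ~ {slope:.2f}, "
      f"maximal knockdown ~ {100 * (1 - bottom / top):.0f}%")
```

A per-spot mass is used as the dose axis here purely for illustration; on a reverse-transfection array the effective intracellular dose is not directly measured, so fitted IC50 values should be read as relative rather than absolute potencies.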
--- paper_title: Cell microarray for screening feeder cells for differentiation of embryonic stem cells paper_content: Microarrays are currently recognized as one of major tools in the assessment of gene expression via cDNA or RNA analysis and are now accepted as a powerful experimental tool for high-throughput screening of a large number of samples, such as cDNA and siRNAs. In this study, we examined the potential of the microarray methodology for high-throughput screening of candidate cells as feeder cells which effectively differentiate embryonic stem (ES) cells to the specific lineage. Cell arrays were prepared by applying three kinds of cells, PA6, human umbilical vein endothelial, and COS-1 cells, to circular spots, 2 mm in diameter, on a glass plate, followed by the application of mouse ES cells to the cell microarray. After 8 d in culture, TuJ1 (neuron-specific class III β-tubulin) immunocytochemical staining clearly demonstrated that only PA6 cell spots had the capability to induce ES cells to neuronal differentiation. Although this is a model experiment, these findings clearly indicate that the cell microarray will become a powerful tool for high-throughput screening large numbers of candidate feeder cells for specific differentiation. --- paper_title: Induction of the Differentiation of Lentoids from Primate Embryonic Stem Cells paper_content: PURPOSE ::: To produce lens cells from primate embryonic stem (ES) cells in a reproducible, controlled manner. ::: ::: ::: METHODS ::: Cynomologus monkey ES cells were induced to differentiate by stromal cell-derived inducing activity (SDIA). The lentoids produced by this treatment were processed for immunohistochemical and immunoblotting analysis. The effect of varying the concentration of fibroblast growth factor (FGF)-2 and the density of the ES colonies plated during the differentiation process were also examined. ::: ::: ::: RESULTS ::: After a 2- to 3-week induction period, lentoids were produced by a subpopulation of ES colonies. Western blot analysis and immunohistochemistry revealed that these lentoids expressed alphaA-crystallin and Pax6. The number of lentoids resulting from treatment increased with increasing FGF-2 concentration and plated colony density. ::: ::: ::: CONCLUSIONS ::: The differentiation of primate ES cells into lentoids can be achieved by treatment with SIDA. ES cells can be used to facilitate a greater understanding of the mechanisms functioning in differentiation in vivo and in vitro. --- paper_title: An approach to genomewide screens of expressed small interfering RNAs in mammalian cells paper_content: To facilitate the construction of large genomewide libraries of small interfering RNAs (siRNAs), we have developed a dual promoter system (pDual) in which a synthetic DNA encoding a gene-specific siRNA sequence is inserted between two different opposing polymerase III promoters, the mouse U6 and human H1 promoters. Upon transfection into mammalian cells, the sense and antisense strands of the duplex are transcribed by these two opposing promoters from the same template, resulting in a siRNA duplex with a uridine overhang on each 3′ terminus. A single-step PCR protocol has been developed by using this dual promoter system that allows the production of siRNA expression cassettes in a high-throughput manner. 
We have shown that siRNAs transcribed by either the dual promoter vector or siRNA expression cassettes can induce strong and gene-specific suppression of both endogenous genes and ectopically expressed genes in mammalian cells. Furthermore, we have constructed an arrayed siRNA expression cassette library that targets >8,000 genes with two siRNA sequences per gene. A high-throughput screen of this library has revealed both known and unique genes involved in the NF-κB signaling pathway. --- paper_title: Differentiation of Monkey Embryonic Stem Cells into Neural Lineages paper_content: Embryonic stem (ES) cells are self-renewing, pluripotent, and capable of differentiating into all of the cell types found in the adult body. Therefore, they have the potential to replace degenerated or damaged cells, including those in the central nervous system. For ES cell-based therapy to become a clinical reality, translational research involving nonhuman primates is essential. Here, we report monkey ES cell differentiation into embryoid bodies (EBs), neural progenitor cells (NPCs), and committed neural phenotypes. The ES cells were aggregated in hanging drops to form EBs. The EBs were then plated onto adhesive surfaces in a serum-free medium to form NPCs and expanded in serum-free medium containing fibroblast growth factor (FGF)-2 before neural differentiation was induced. Cells were characterized at each step by immunocytochemistry for the presence of specific markers. The majority of cells in complex/cystic EBs expressed antigens (α-fetal protein, cardiac troponin I, and vimentin) representative of all three embryonic germ layers. Greater than 70% of the expanded cell populations expressed antigenic markers (nestin and musashi1) for NPCs. After removal of FGF-2, approximately 70% of the NPCs differentiated into neuronal phenotypes expressing either microtubule-associated protein-2C (MAP2C) or neuronal nuclear antigen (NeuN), and approximately 28% differentiated into glial cell types expressing glial fibrillary acidic protein. Small populations of MAP2C/NeuN-positive cells also expressed tyrosine hydroxylase (∼4%) or choline acetyltransferase (∼13%). These results suggest that monkey ES cells spontaneously differentiate into cells of all three germ layers, can be induced and maintained as NPCs, and can be further differentiated into committed neural lineages, including putative neurons and glial cells. --- paper_title: Induction of Midbrain Dopaminergic Neurons from ES Cells by Stromal Cell–Derived Inducing Activity paper_content: Summary: We have identified a stromal cell–derived inducing activity (SDIA) that promotes neural differentiation of mouse ES cells. SDIA accumulates on the surface of PA6 stromal cells and induces efficient neuronal differentiation of cocultured ES cells in serum-free conditions without use of either retinoic acid or embryoid bodies. BMP4, which acts as an antineuralizing morphogen in Xenopus, suppresses SDIA-induced neuralization and promotes epidermal differentiation. Precise specification of a particular neuronal characteristic, such as neurotransmitter choice, is crucial when induced neurons are to be used for therapeutic applications or basic neuroscience research. It is therefore preferable to avoid RA treatment unless RA induces the particular type of neurons of one's interest. In this report, we introduce an efficient system for in vitro neural differentiation of mouse ES cells in a serum-free condition that requires neither EBs nor RA treatment. We also discuss the possibilities for therapeutic
--- paper_title: An endoribonuclease-prepared siRNA screen in human cells identifies genes essential for cell division paper_content: RNA interference (RNAi) is an evolutionarily conserved defence mechanism whereby genes are specifically silenced through degradation of messenger RNAs; this process is mediated by homologous double-stranded (ds)RNA molecules. In invertebrates, long dsRNAs have been used for genome-wide screens and have provided insights into gene functions. Because long dsRNA triggers a nonspecific interferon response in many vertebrates, short interfering (si)RNA or short hairpin (sh)RNAs must be used for these organisms to ensure specific gene silencing. Here we report the generation of a genome-scale library of endoribonuclease-prepared short interfering (esi)RNAs from a sequence-verified complementary DNA collection representing 15,497 human genes. We used 5,305 esiRNAs from this library to screen for genes required for cell division in HeLa cells. Using a primary high-throughput cell viability screen followed by a secondary high-content videomicroscopy assay, we identified 37 genes required for cell division. These include several splicing factors for which knockdown generates mitotic spindle defects. In addition, a putative nuclear-export terminator was found to speed up cell proliferation and mitotic progression after knockdown. Thus, our study uncovers new aspects of cell division and establishes esiRNA as a versatile approach for genomic RNAi screens in mammalian cells. --- paper_title: Identification of Hedgehog pathway components by RNAi in Drosophila cultured cells. paper_content: Classical genetic screens can be limited by the selectivity of mutational targeting, the complexities of anatomically based phenotypic analysis, or difficulties in subsequent gene identification. Focusing on signaling response to the secreted morphogen Hedgehog (Hh), we used RNA interference (RNAi) and a quantitative cultured cell assay to systematically screen functional roles of all kinases and phosphatases, and subsequently 43% of predicted Drosophila genes. Two gene products reported to function in Wingless (Wg) signaling were identified as Hh pathway components: a cell surface protein (Dally-like protein) required for Hh signal reception, and casein kinase 1alpha, a candidate tumor suppressor that regulates basal activities of both Hh and Wg pathways. This type of cultured cell-based functional genomics approach may be useful in the systematic analysis of other biological processes. --- paper_title: The emergence and diffusion of DNA microarray technology paper_content: The network model of innovation widely adopted among researchers in the economics of science and technology posits relatively porous boundaries between firms and academic research programs and a bi-directional flow of inventions, personnel, and tacit knowledge between sites of university and industry innovation. Moreover, the model suggests that these bi-directional flows should be considered as mutual stimulation of research and invention in both industry and academe, operating as a positive feedback loop.
One side of this bi-directional flow – namely, the flow of inventions into industry through the licensing of university-based technologies – has been well studied, but the reverse phenomenon of the stimulation of university research through the absorption of new directions emanating from industry has yet to be investigated in much detail. We discuss the role of federal funding of academic research in the microarray field, and the multiple pathways through which federally supported development of commercial microarray technologies has transformed core academic research fields. Results and conclusion: Our study confirms the picture put forward by several scholars that the open character of networked economies is what makes them truly innovative. In an open system innovations emerge from the network. The emergence and diffusion of microarray technologies we have traced here provides an excellent example of an open system of innovation in action. Whether they originated in a startup company environment that operated like a think-tank, such as Affymax, the research labs of a large firm, such as Agilent, or within a research university, the inventors we have followed drew heavily on knowledge resources from all parts of the network in bringing microarray platforms to light. Federal funding for high-tech startups and new industrial development was important at several phases in the early history of microarrays, and federal funding of academic researchers using microarrays was fundamental to transforming the research agendas of several fields within academe. The typical story told about the role of federal funding emphasizes the spillovers from federally funded academic research to industry. Our study shows that the knowledge spillovers worked both ways, with federal funding of non-university research providing the impetus for reshaping the research agendas of several academic fields. --- paper_title: Hepatic maturation in differentiating embryonic stem cells in vitro paper_content: We investigated the potential of mouse embryonic stem (ES) cells to differentiate into hepatocytes in vitro. Differentiating ES cells expressed endodermal-specific genes, such as α-fetoprotein, transthyretin, α1-antitrypsin and albumin, when cultured without additional growth factors, and late differential markers of hepatic development, such as tyrosine aminotransferase (TAT) and glucose-6-phosphatase (G6P), when cultured in the presence of growth factors critical for late embryonic liver development. Further, TAT and G6P expression was induced regardless of expression of the functional SEK1 gene, which is thought to provide a survival signal for hepatocytes during an early stage of liver morphogenesis. The data indicate that the in vitro ES differentiation system has the potential to generate mature hepatocytes. The system has also been found useful in analyzing the role of growth factors and intracellular signaling molecules in hepatic development. --- paper_title: Multiplex GPCR Assay in Reverse Transfection Cell Microarrays paper_content: G protein-coupled receptors (GPCRs) are a superfamily of proteins that include some of the most important drug targets in the pharmaceutical industry. Despite the success of this group of drugs, there remains a need to identify GPCR-targeted drugs with greater selectivity, to develop screening assays for validated targets, and to identify ligands for orphan receptors.
To address these challenges, the authors have created a multiplexed GPCR assay that measures greater than 3000 receptor: ligand interactions in a single microplate. The multiplexed assay is generated by combining reverse transfection in a 96-well plate format with a calcium flux readout. This assay quantitatively measures receptor activation and inhibition and permits the determination of compound potency and selectivity for entire families of GPCRs in parallel. To expand the number of GPCR targets that may be screened in this system, receptors are cotransfected with plasmids encoding a promiscuous G protein, permitting the analysis of recep... --- paper_title: Electroactive Self-Assembled Monolayers that Permit Orthogonal Control over the Adhesion of Cells to Patterned Substrates† paper_content: This article describes an electroactive substrate that displays two independent dynamic functions for controlling the adhesion of cells. The approach is based on self-assembled monolayers on gold that are patterned into regions presenting the Arg-Gly-Asp peptide cell adhesion ligand. The patterned regions differ in the electrochemical properties of the linkers that tether the peptides to the monolayer. In this work, three distinct chemistries are employed that provide for release of the ligand on application of a negative potential, release of the ligand on application of a positive potential, and no change in response to a potential. Cells were allowed to attach to a monolayer patterned into circular regions comprising the three chemistries. Treatment with electric potentials of 650 or -650 mV resulted in the selective release of adherent cells only from regions that display the relevant electroactive groups. This example establishes the preparation of dynamic substrates with multiple functions and will be important to preparing model cultures derived from multiple cell types, with control over the temporal interactions of each cell population. --- paper_title: Satellite cells delivered by micro-patterned scaffolds: a new strategy for cell transplantation in muscle diseases. paper_content: Myoblast transplantation is a potentially useful therapeutic tool in muscle diseases, but the lack of an efficient delivery system has hampered its application. Here we have combined cell biology and polymer processing to create an appropriate microenvironment for in vivo transplantation of murine satellite cells (mSCs). Cells were prepared from single muscle fibers derived from C57BL/6-Tgn enhanced green fluorescent protein (GFP) transgenic mice. mSCs were expanded and seeded within micro-patterned polyglycolic acid 3-dimensional scaffolds fabricated using soft lithography and thermal membrane lamination. Myogenicity was then evaluated in vitro using immunostaining, flow cytometry, and reverse transcription polymerase chain reaction analyses. Scaffolds containing mSCs were implanted in pre-damaged tibialis anterior muscles of GFP-negative syngenic mice. Cells detached from culture dishes were directly injected into contra-lateral limbs as controls. In both cases, delivered cells participated in muscle re... --- paper_title: Fiber Types in Mammalian Skeletal Muscles paper_content: Mammalian skeletal muscle comprises different fiber types, whose identity is first established during embryonic development by intrinsic myogenic control mechanisms and is later modulated by neural and hormonal factors. 
The relative proportion of the different fiber types varies strikingly between species, and in humans shows significant variability between individuals. Myosin heavy chain isoforms, whose complete inventory and expression pattern are now available, provide a useful marker for fiber types, both for the four major forms present in trunk and limb muscles and the minor forms present in head and neck muscles. However, muscle fiber diversity involves all functional muscle cell compartments, including membrane excitation, excitation-contraction coupling, contractile machinery, cytoskeleton scaffold, and energy supply systems. Variations within each compartment are limited by the need of matching fiber type properties between different compartments. Nerve activity is a major control mechanism of the fiber type profile, and multiple signaling pathways are implicated in activity-dependent changes of muscle fibers. The characterization of these pathways is raising increasing interest in clinical medicine, given the potentially beneficial effects of muscle fiber type switching in the prevention and treatment of metabolic diseases. --- paper_title: Affinity capture of proteins from solution and their dissociation by contact printing paper_content: Biological experiments at the solid/liquid interface, in general, require surfaces with a thin layer of purified molecules, which often represent precious material. Here, we have devised a method to extract proteins with high selectivity from crude biological sample solutions and place them on a surface in a functional, arbitrary pattern. This method, called affinity-contact printing (αCP), uses a structured elastomer derivatized with ligands against the target molecules. After the target molecules have been captured, they are printed from the elastomer onto a variety of surfaces. The ligand remains on the stamp for reuse. In contrast with conventional affinity chromatography, here dissociation and release of captured molecules to the substrate are achieved mechanically. We demonstrate this technique by extracting the cell adhesion molecule neuron-glia cell adhesion molecule (NgCAM) from tissue homogenates and cell culture lysates and patterning affinity-purified NgCAM on polystyrene to stimulate the attachment of neuronal cells and guide axon outgrowth. --- paper_title: Selective adhesion of hepatocytes on patterned surfaces. paper_content: Successful development of cell-biased bioartificial liver devices necessitates the establishment of techniques and designs for long-term, stable hepatocellular function and efficient transport of nutrients and wastes within the device. Given the relatively large cell mass that one must consider, one possible solution involves the use of micropatterning technology to sandwich hepatocytes aligned in rows between two micropatterned surfaces. Rows of cells would alternate with hepatocyte-free areas, creating efficient transport channels for fluid flow and nutrient exchange. Ultimately, this type of device could also be used as a three-dimensional construct for investigating a variety of cell-surface, cell-extracellular matrix, and cell-cell interactions. To achieve this goal, one must develop techniques for selectively adhering hepatocytes to solid substrates. In this study, reproducible, selective adhesion of hepatocytes on a glass substrate with large regions of adhesive (AS) and nonadhesive (NAS) surfaces was obtained. 
The AS had hydrophilic characteristics, enhancing deposition of collagen molecules from an aqueous solution, and subsequent hepatocyte adhesion, whereas the NAS had hydrophobic properties and remained collagen-free and hepatocyte-free. In addition, a reproducible processing technique for obtaining patterns of hepatocytes was developed and optimized, using a surface with a single AS band as a first approximation to a micropatterned device. This was achieved by spincoating an aqueous collagen type I solution (0.1 mg/mL) on a banded surface at 500 rpm for 25 seconds. The morphology and long-term function of the hepatocytes attached to AS in nonbanded and banded surface configurations was assessed by mimicking sandwich culture and was shown to be similar to stable, differentiated sandwich cultures. Mathematical modeling was used to determine critical design criteria for the hypothetical micropatterned device. The oxygen distribution and viscous pressure drop were modeled along a typical microchannel and limited to in vivo values. An optimal channel length of 0.6 cm and a flow rate of 2.0 x 10(-6) mL/s were obtained for a channel of 100 microns in width and 10 microns in height. These values were reasonable in terms of practical implementation. --- paper_title: Soft substrates drive optimal differentiation of human healthy and dystrophic myotubes. paper_content: The in vitro development of human myotubes carrying genetic diseases, such as Duchenne Muscular Dystrophy, will open new perspectives in the identification of innovative therapeutic strategies. Through the proper design of the substrate, we guided the differentiation of human healthy and dystrophic myoblasts into myotubes exhibiting marked functional differentiation and highly defined sarcomeric organization. A thin film of photo cross-linkable elastic poly-acrylamide hydrogel with physiological-like and tunable mechanical properties (elastic moduli, E: 12, 15, 18 and 21 kPa) was used as substrate. The functionalization of its surface by micro-patterning in parallel lanes (75 microm wide, 100 microm spaced) of three adhesion proteins (laminin, fibronectin and matrigel) was meant to maximize human myoblasts fusion. Myotubes formed onto the hydrogel showed a remarkable sarcomere formation, with the highest percentage (60.0% +/- 3.8) of myotubes exhibiting sarcomeric organization, of myosin heavy chain II and alpha-actinin, after 7 days of culture onto an elastic (15 kPa) hydrogel and a matrigel patterning. In addition, healthy myotubes cultured in these conditions showed a significant membrane-localized dystrophin expression. In this study, the culture substrate has been adapted to human myoblasts differentiation, through an easy and rapid methodology, and has led to the development of in vitro human functional skeletal muscle myotubes useful for clinical purposes and in vitro physiological study, where to carry out a broad range of studies on human muscle physiopathology. --- paper_title: Muscle Differentiation and Myotubes Alignment Is Influenced by Micropatterned Surfaces and Exogenous Electrical Stimulation paper_content: An in vitro muscle-like structure with parallel-oriented contractile myotubes is needed as a model of muscle tissue regeneration. For this purpose, it is necessary to reproduce a controllable microscale environment mimicking the in vivo cues. 
In this work we focused on the application of topological and electrical stimuli on muscle precursor cell (MPC) culture to influence MPC orientation and induce myotube alignment. The two stimulations were tested both independently and together. A structural and topological template was achieved using micropatterned poly-(L-lactic acid) membranes. Electrical stimulation, consisting of square pulses of 70 mV/cm amplitude each 30 s, was applied to the MPC culture. The effect of different pulse durations on cultures was evaluated by galvanotaxis analysis. The highest cell displacement rate toward the cathode was observed for 3 ms pulse stimulation, which was then applied in combination with topological stimuli. Topological and electrical stimuli had an additive effect in... --- paper_title: Change in cell shape is required for matrix metalloproteinase-induced epithelial-mesenchymal transition of mammary epithelial cells paper_content: Cell morphology dictates response to a wide variety of stimuli, controlling cell metabolism, differentiation, proliferation, and death. Epithelial-mesenchymal transition (EMT) is a developmental process in which epithelial cells acquire migratory characteristics, and in the process convert from a “cuboidal” epithelial structure into an elongated mesenchymal shape. We had shown previously that matrix metalloproteinase-3 (MMP3) can stimulate EMT of cultured mouse mammary epithelial cells through a process that involves increased expression of Rac1b, a protein that stimulates alterations in cytoskeletal structure. We show here that cells treated with MMP-3 or induced to express Rac1b spread to cover a larger surface, and that this induction of cell spreading is a requirement of MMP-3/Rac1b-induced EMT. We find that limiting cell spreading, either by increasing cell density or by culturing cells on precisely defined micropatterned substrata, blocks expression of characteristic markers of EMT in cells treated with MMP-3. These effects are not caused by general disruptions in cell signaling pathways, as TGF-β-induced EMT is not affected by similar limitations on cell spreading. Our data reveal a previously unanticipated cell shape-dependent mechanism that controls this key phenotypic alteration and provide insight into the distinct mechanisms activated by different EMT-inducing agents. --- paper_title: Production of arrays of cardiac and skeletal muscle myofibers by micropatterning techniques on a soft substrate paper_content: Micropatterning and microfabrication techniques have been widely used to pattern cells on surfaces and to have a deeper insight into many processes in cell biology such as cell adhesion and interactions with the surrounding environment. The aim of this study was the development of an easy and versatile technique for the in vitro production of arrays of functional cardiac and skeletal muscle myofibers using micropatterning techniques on soft substrates. Cardiomyocytes were used for the production of oriented cardiac myofibers whereas mouse muscle satellite cells for that of differentiated parallel myotubes. We performed micro-contact printing of extracellular matrix proteins on soft polyacrylamide-based hydrogels photopolymerized onto functionalized glass slides. Our methods proved to be simple, repeatable and effective in obtaining an extremely selective adhesion of both cardiomyocytes and satellite cells onto patterned soft hydrogel surfaces. 
Cardiomyocytes resulted in aligned cardiac myofibers able to exhibit a synchronous contractile activity after 2 days of culture. We demonstrated for the first time that murine satellite cells, cultured on a soft hydrogel substrate, fuse and form aligned myotubes after 7 days of culture. Immunofluorescence analyses confirmed correct expression of cell phenotype, differentiation markers and sarcomeric organization. These results were obtained in myotubes derived from satellite cells from both wild type and MDX mice which are research models for the study of muscle dystrophy. These arrays of both cardiac and skeletal muscle myofibers could be used as in vitro models for pharmacological screening tests or biological studies at the single fiber level. --- paper_title: Cell-cell signaling by direct contact increases cell proliferation via a PI3K-dependent signal paper_content: We report a novel mechanism of cellular growth control. Increasing the density of endothelial or smooth muscle cells in culture increased cell-cell contact and decreased cell spreading, leading to growth arrest. Using a new method to independently control cell-cell contact and cell spreading, we found that introducing cell-cell contact positively regulates proliferation, but that contact-mediated proliferation can be masked by changes in cell spreading: Round cells with many contacts proliferated less than spread cells with none. Physically blocking cell-cell contact or inhibiting PI3K signaling abrogated cell-cell induced proliferation, but inhibiting diffusible paracrine signaling did not. Thus, direct cell-cell contact induces proliferation in these cells. --- paper_title: Protein and cell micropatterning and its integration with micro/nanoparticles assembly. paper_content: Micropatterning of proteins and cells has become very popular over the past decade due to its importance in the development of biosensors, microarrays, tissue engineering and cellular studies. This article reviews the techniques developed for protein and cell micropatterning and its biomedical applications. The prospect of integrating micro and nanoparticles with protein and cell micropatterning is discussed. The micro/nanoparticles are assembled into patterns and form the substrate for proteins and cell attachment. The assembled particles create a micro or nanotopography, depending on the size of the particles employed. The nonplanar structure can increase the surface area for biomolecules attachment and therefore enhance the sensitivity for detection in biosensors. Furthermore, a nanostructured substrate can influence the conformation and functionality of protein attached to it, while cellular response in terms of morphology, adhesion, proliferation, differentiation, etc. can be affected by a surface expressing micro or nanoscale structures. Proteins and cells tend to lose their normal functions upon attachment to substrate. By recognizing the types of topography that are favourable for preserving proteins and cell behaviour, and integrating it with micropattering will lead to the development of functional protein and cell patterns. --- paper_title: Geometric control of human stem cell morphology and differentiation paper_content: During tissue morphogenesis, stem cells and progenitor cells migrate, proliferate, and differentiate, with striking changes in cell shape, size, and acting mechanical stresses. 
The local cellular function depends on the spatial distribution of cytokines as well as local mechanical microenvironments in which the cells reside. In this study, we controlled the organization of human adipose derived stem cells using micro-patterning technologies, to investigate the influence of multi-cellular form on spatial distribution of cellular function at an early stage of cell differentiation. The underlying role of cytoskeletal tension was probed through drug treatment. Our results show that the cultivation of stem cells on geometric patterns resulted in pattern- and position-specific cell morphology, proliferation and differentiation. The highest cell proliferation occurred in the regions with large, spreading cells (such as the outer edge of a ring and the short edges of rectangles). In contrast, stem cell differentiation co-localized with the regions containing small, elongated cells (such as the inner edge of a ring and the regions next to the short edges of rectangles). The application of drugs that inhibit the formation of actomyosin resulted in the lack of geometrically specific differentiation patterns. This study confirms the role of substrate geometry on stem cell differentiation, through associated physical forces, and provides a simple and controllable system for studying biophysical regulation of cell function. --- paper_title: Biological role of connexin intercellular channels and hemichannels. paper_content: Gap junctions (GJ) and hemichannels (HC) formed from the protein subunits called connexins are transmembrane conduits for the exchange of small molecules and ions. Connexins and another group of HC-forming proteins, pannexins comprise the two families of transmembrane proteins ubiquitously distributed in vertebrates. Most cell types express more than one connexin or pannexin. While connexin expression and channel activity may vary as a function of physiological and pathological states of the cell and tissue, only a few studies suggest the involvement of pannexin HC in acquired pathological conditions. Importantly, genetic mutations in connexin appear to interfere with GJ and HC function which results in several diseases. Thus connexins could serve as potential drug target for therapeutic intervention. Growing evidence suggests that diseases resulting from HC dysfunction might open a new direction for development of specific HC reagents. This review provides a comprehensive overview of the current studies of GJ and HC formed by connexins and pannexins in various tissue and organ systems including heart, central nervous system, kidney, mammary glands, ovary, testis, lens, retina, inner ear, bone, cartilage, lung and liver. In addition, present knowledge of the role of GJ and HC in cell cycle progression, carcinogenesis and stem cell development is also discussed. --- paper_title: Micropatterned Surfaces for Control of Cell Shape, Position, and Function paper_content: The control of cell position and function is a fundamental focus in the development of applications ranging from cellular biosensors to tissue engineering. Using microcontact printing of self-assembled monolayers (SAMs) of alkanethiolates on gold, we manufactured substrates that contained micrometer-scale islands of extracellular matrix (ECM) separated by nonadhesive regions such that the pattern of islands determined the distribution and position of bovine and human endothelial cells. In addition, the size and geometry of the islands were shown to control cell shape. 
Traditional approaches to modulate cell shape, either by attaching suspended cells to microbeads of different sizes or by plating cells on substrates coated with different densities of ECM, suggested that cell shape may play an important role in control of apoptosis as well as growth. Data are presented which show how micropatterned substrates were used to definitively test this hypothesis. Progressively restricting bovine and human endothelial cell extension by culturing cells on smaller and smaller micropatterned adhesive islands regulated a transition from growth to apoptosis on a single continuum of cell spreading, thus confirming the central role of cell shape in cell function. The micropatterning technology is therefore essential not only for construction of biosurface devices but also for the investigation of the fundamental biology of cell-ECM interactions. --- paper_title: Degradation of Micropatterned Surfaces by Cell-Dependent and -Independent Processes† paper_content: This paper describes a study to determine the role of active cellular processes in the initial patterning and eventual degradation of different micropatterned substrates. We compared the effects of serum and cell type on the ability of cells to crawl onto the nonadhesive regions of a variety of patterned substrates. Cells initially patterned in the presence of serum onto substrates manufactured using agarose, pluronics, hexa(ethylene glycol), or polyacrylamide as the nonadhesive. While polyacrylamide remained inert and patterned cells for at least 28 days, agarose and pluronics degraded by gradual desorption of the nonadhesive from the surface independently of the presence of cells. Hexa(ethylene glycol) degraded by a time-dependent mechanism that could be accelerated by cell-dependent oxidative processes. In contrast to the other substrates studied, bovine serum albumin (BSA) patterned cells only under serum-free conditions. The serum did not displace BSA from the surface but instead activated cell-secreted proteases that led to degradation of the substrate. These findings illustrate the importance of specific cellular and noncellular processes in the failure of different nonadhesive chemistries commonly used to pattern cells. --- paper_title: Effect of cell – cell interactions in preservation of cellular phenotype : cocultivation of hepatocytes and nonparenchymal cells paper_content: Heterotypic cell interaction between parenchymal cells and nonparenchymal neighbors has been reported to modulate cell growth, migration, and/or differentiation. In both the developing and adult liver, cell–cell interactions are imperative for coordinated organ function. In vitro, cocultivation of hepatocytes and nonparenchymal cells has been used to preserve and modulate the hepatocyte phenotype. We summarize previous studies in this area as well as recent advances in microfabrication that have allowed for more precise control over cell–cell interactions through ‘cellular patterning’ or ‘micropatterning’. Although the precise mechanisms by which nonparenchymal cells modulate the hepatocyte phenotype remain unelucidated, some new insights on the modes of cell signaling, the extent of cell–cell interaction, and the ratio of cell populations are noted. Proposed clinical applications of hepatocyte cocultures, typically extracorporeal bioartificial liver support systems, are reviewed in the context of these n... 
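Several of the micropatterning entries above (geometric control of stem cell differentiation, shape-dependent growth versus apoptosis on adhesive islands) rest on measuring how far cells spread on islands of a given size. None of the cited papers publish code for this; the following is a generic, hypothetical sketch of how per-cell projected area is typically extracted from a binary segmentation mask, with an assumed pixel calibration and SciPy's connected-component labeling.

```python
# Generic sketch (not from the cited papers): per-cell projected area from a
# binary segmentation mask, e.g. of cells adhering to micropatterned islands.
import numpy as np
from scipy import ndimage

def projected_areas(mask: np.ndarray, um_per_px: float) -> np.ndarray:
    """Return the projected area (um^2) of each connected object in a boolean mask."""
    labels, n = ndimage.label(mask)                      # connected-component labeling
    counts = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    return np.asarray(counts) * um_per_px ** 2           # pixels -> square micrometres

# Hypothetical mask (in practice: a thresholded fluorescence image) and calibration.
rng = np.random.default_rng(0)
mask = rng.random((256, 256)) > 0.995
mask = ndimage.binary_dilation(mask, iterations=4)       # fake blob-like "cells"
areas = projected_areas(mask, um_per_px=0.65)
print(f"{areas.size} objects, median projected area {np.median(areas):.0f} um^2")
```

In practice the mask would come from segmenting an image of the patterned cells rather than from the random blobs generated here, and per-island statistics would then be compared across island sizes.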
--- paper_title: Haptotactic islands: a method of confining single cells to study individual cell reactions and clone formation. paper_content: Abstract A method is described for confining individual cells in culture to restricted areas so that they can be repeatedly identified and followed over long periods. The technique is also of value in studying clone formation and in time-lapse cinematography. --- paper_title: Micropatterning Topology on Soft Substrates Affects Myoblast Proliferation and Differentiation paper_content: Micropatterning techniques and substrate engineering are becoming useful tools to investigate several aspects of cell–cell interaction biology. In this work, we rationally study how different micropatterning geometries can affect myoblast behavior in the early stage of in vitro myogenesis. Soft hydrogels with physiological elastic modulus (E = 15 kPa) were micropatterned in parallel lanes (100, 300, and 500 μm width) resulting in different local and global myoblast densities. Proliferation and differentiation into multinucleated myotubes were evaluated for murine and human myoblasts. Wider lanes showed a decrease in murine myoblast proliferation: (69 ± 8)% in 100 μm wide lanes compared to (39 ± 7)% in 500 μm lanes. Conversely, fusion index increased in wider lanes: from (46 ± 7)% to (66 ± 7)% for murine myoblasts, and from (15 ± 3)% to (36 ± 2)% for human primary myoblasts, using a patterning width of 100 and 500 μm, respectively. These results are consistent with both computational modeling data and cond... --- paper_title: Tissue geometry patterns epithelial-mesenchymal transition via intercellular mechanotransduction paper_content: Epithelial-mesenchymal transition (EMT) is a phenotypic change in which epithelial cells detach from their neighbors and become motile. Whereas soluble signals such as growth factors and cytokines are responsible for stimulating EMT, here we show that gradients of mechanical stress define the spatial locations at which EMT occurs. When treated with transforming growth factor (TGF)-beta, cells at the corners and edges of square mammary epithelial sheets expressed EMT markers, whereas those in the center did not. Changing the shape of the epithelial sheet altered the spatial pattern of EMT. Traction force microscopy and finite element modeling demonstrated that EMT-permissive regions experienced the highest mechanical stress. Myocardin-related transcription factor (MRTF)-A was localized to the nuclei of cells located in high-stress regions, and inhibiting cytoskeletal tension or MRTF-A expression abrogated the spatial patterning of EMT. These data suggest a causal role for tissue geometry and endogenous mechanical stresses in the spatial patterning of EMT. --- paper_title: In situ collagen assembly for integrating microfabricated three-dimensional cell-seeded matrices paper_content: The contractile forces of cells can cause extracellular matrices to detach from their surroundings, which is problematic for biological studies and tissue engineering. Now, multiple phases of cell-seeded hydrogels can be integrated using a collagen-fibre-mediated method, resulting in the construction of well-defined and stable patterns of three-dimensional matrices. --- paper_title: Migration of tumor cells in 3D matrices is governed by matrix stiffness along with cell-matrix adhesion and proteolysis paper_content: Cell migration on 2D surfaces is governed by a balance between counteracting tractile and adhesion forces. 
Although biochemical factors such as adhesion receptor and ligand concentration and binding, signaling through cell adhesion complexes, and cytoskeletal structure assembly/disassembly have been studied in detail in a 2D context, the critical biochemical and biophysical parameters that affect cell migration in 3D matrices have not been quantitatively investigated. We demonstrate that, in addition to adhesion and tractile forces, matrix stiffness is a key factor that influences cell movement in 3D. Cell migration assays in which Matrigel density, fibronectin concentration, and β1 integrin binding are systematically varied show that at a specific Matrigel density the migration speed of DU-145 human prostate carcinoma cells is a balance between tractile and adhesion forces. However, when biochemical parameters such as matrix ligand and cell integrin receptor levels are held constant, maximal cell movement shifts to matrices exhibiting lesser stiffness. This behavior contradicts current 2D models but is predicted by a recent force-based computational model of cell movement in a 3D matrix. As expected, this 3D motility through an extracellular environment of pore size much smaller than cellular dimensions does depend on proteolytic activity as broad-spectrum matrix metalloproteinase (MMP) inhibitors limit the migration of DU-145 cells and also HT-1080 fibrosarcoma cells. Our experimental findings here represent, to our knowledge, discovery of a previously undescribed set of balances of cell and matrix properties that govern the ability of tumor cells to migration in 3D environments. --- paper_title: Satellite cells delivered by micro-patterned scaffolds: a new strategy for cell transplantation in muscle diseases. paper_content: Myoblast transplantation is a potentially useful therapeutic tool in muscle diseases, but the lack of an efficient delivery system has hampered its application. Here we have combined cell biology and polymer processing to create an appropriate microenvironment for in vivo transplantation of murine satellite cells (mSCs). Cells were prepared from single muscle fibers derived from C57BL/6-Tgn enhanced green fluorescent protein (GFP) transgenic mice. mSCs were expanded and seeded within micro-patterned polyglycolic acid 3-dimensional scaffolds fabricated using soft lithography and thermal membrane lamination. Myogenicity was then evaluated in vitro using immunostaining, flow cytometry, and reverse transcription polymerase chain reaction analyses. Scaffolds containing mSCs were implanted in pre-damaged tibialis anterior muscles of GFP-negative syngenic mice. Cells detached from culture dishes were directly injected into contra-lateral limbs as controls. In both cases, delivered cells participated in muscle re... --- paper_title: Fabrication of PLGA scaffolds using soft lithography and microsyringe deposition paper_content: Construction of biodegradable, three-dimensional scaffolds for tissue engineering has been previously described using a variety of molding and rapid prototyping techniques. In this study, we report and compare two methods for fabricating poly(Image -lactide-co-glycolide) (PLGA) scaffolds with feature sizes of approximately 10–30 μm. The first technique, the pressure assisted microsyringe, is based on the use of a microsyringe that utilizes a computer-controlled, three-axis micropositioner, which allows the control of motor speeds and position. 
A PLGA solution is deposited from the needle of a syringe by the application of a constant pressure of 20–300 mm Hg, resulting in a controlled polymer deposition. The second technique is based on ‘soft lithographic’ approaches that utilize a poly(dimethylsiloxane) mold. Three variations of the second technique are presented: polymer casting, microfluidic perfusion, and spin coating. Polymer concentration, solvent composition, and mold dimensions influenced the resulting scaffolds as evaluated by light and electron microscopy. As a proof-of-concept for scaffold utility in tissue engineering applications, multilayer structures were formed by thermal lamination, and scaffolds were rendered porous by particulate leaching. These simple methods for forming PLGA scaffolds with microscale features may serve as useful tools to explore structure/function relationships in tissue engineering. --- paper_title: Adult cell therapy for brain neuronal damages and the role of tissue engineering. paper_content: No long term effective treatments are currently available for brain neurological disorders such as stroke/cerebral ischemia, traumatic brain injury and neurodegenerative disorders. Cell therapy is a promising strategy, although alternatives to embryonic/foetal cells are required to overcome ethical, tissue availability and graft rejection concerns. Adult cells may be easily isolated from the patient body, therefore permitting autologous grafts to be performed. Here, we describe the use of adult neural stem cells, adrenal chromaffin cells and retinal pigment epithelium cells for brain therapy, with a special emphasis on mesenchymal stromal cells. However, major problems like cell survival, control of differentiation and engraftment remain and may be overcome using a tissue engineering strategy, which provides a 3D support to grafted cells improving their survival. New developments, such as the biomimetic approach which combines the use of scaffolds with extracellular matrix molecules, may improve the control of cell proliferation, survival, migration, differentiation and engraftment in vivo. Therefore, we later discuss scaffold properties required for brain cell therapy as well as new tissue engineering advances that may be implemented in combination with adult cells for brain therapy. Finally, we describe an approach developed in our laboratory to repair/protect lesioned tissues: the pharmacologically active microcarriers. --- paper_title: Temperature measurements in microfluidic systems: Heat dissipation of negative dielectrophoresis barriers paper_content: The manipulation of living biological cells in microfluidic channels by a combination of negative dielectrophoretic barriers and pressure-driven flows is widely employed in lab-on-a-chip systems. However, electric fields in conducting media induce Joule heating. This study investigates if the local temperatures reached under typical experimental conditions in miniaturized systems cause a potential risk for hyperthermic stress or cell damage. Two methods of optical in situ temperature detection have been tested and compared: (i) the exposure of the thermo-dependent fluorescent dye Rhodamine B to heat sources situated in microfluidic channels, and (ii) the use of thermoprecipitating N-alkyl-substituted acrylamide polymers as temperature threshold probes. 
Two-dimensional images of temperature distributions in the vicinity of active negative dielectrophoresis (nDEP) barriers have been obtained, and local temperature variations of more than 20 °C have been observed at the electrode edges. Heat propagation via both the buffer and the channel walls leads to significant temperature increases within a perimeter of 100 μm and more. These data indicate that power dissipation has to be taken into account when experiments at physiological temperatures are planned (an illustrative Joule-heating estimate is sketched after the next two entries). --- paper_title: Reconstruction and functional analysis of altered molecular pathways in human atherosclerotic arteries paper_content: Background: Atherosclerosis affects the aorta and the coronary, carotid, and iliac arteries more frequently than any other body vessel. There may be common molecular pathways sustaining this process. Plaque presence and diffusion are revealed by circulating factors that can mediate a systemic reaction leading to plaque rupture and thrombosis. Results: We used DNA microarrays and meta-analysis to study how the presence of calcified plaque modifies human coronary and carotid gene expression. We identified a series of potential human atherogenic genes that are integrated in functional networks involved in atherosclerosis. Caveolae and JAK/STAT pathways, and S100A9/S100A8 interacting proteins, are certainly involved in the development of vascular disease. We found that the system of caveolae is directly connected with genes that respond to hormone receptors, and indirectly with the apoptosis pathway. Cytokines, chemokines and growth factors released in the blood flux were investigated in parallel. High levels of RANTES, IL-1ra, MIP-1α, MIP-1β, IL-2, IL-4, IL-5, IL-6, IL-7, IL-17, PDGF-BB, VEGF and IFN-γ were found in the plasma of atherosclerotic patients and might also be integrated in the molecular networks underlying atherosclerotic modifications of these vessels. Conclusion: The pattern of cytokine and S100A9/S100A8 up-regulation characterizes atherosclerosis as a proinflammatory disorder. Activation of the JAK/STAT pathway is confirmed by the up-regulation of IL-6, STAT1, ISGF3G and IL10RA genes in coronary and carotid plaques. The functional network constructed in our research is evidence of the central role of STAT proteins and the caveolae system in preserving the plaque. Moreover, Cav-1 is involved in SMC differentiation and dyslipidemia, confirming the importance of lipid homeostasis in the atherosclerotic phenotype. --- paper_title: From 3D cell culture to organs-on-chips. paper_content: 3D cell-culture models have recently garnered great attention because they often promote levels of cell differentiation and tissue organization not possible in conventional 2D culture systems. We review new advances in 3D culture that leverage microfabrication technologies from the microchip industry and microfluidics approaches to create cell-culture microenvironments that both support tissue differentiation and recapitulate the tissue-tissue interfaces, spatiotemporal chemical gradients, and mechanical microenvironments of living organs. These 'organs-on-chips' permit the study of human physiology in an organ-specific context, enable development of novel in vitro disease models, and could potentially serve as replacements for animals used in drug development and toxin testing.
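As a purely illustrative aid (not taken from the temperature-measurement study above), the short Python sketch below evaluates the standard volumetric Joule-heating expression w = σ·E_rms² for assumed values of buffer conductivity, drive voltage and electrode gap, to show why dissipation becomes significant at microscale nDEP gaps; all numbers are assumptions chosen only to be plausible for a conductive buffer.

# Illustrative back-of-the-envelope estimate (not from the cited study):
# volumetric Joule heating in an electrolyte under an AC field is w = sigma * E_rms^2.
# The values below are assumptions, not parameters reported in the entry above.
sigma = 1.0          # buffer conductivity in S/m (assumed)
v_rms = 5.0          # applied RMS voltage in V (assumed)
gap = 20e-6          # electrode gap in m (assumed)

e_rms = v_rms / gap                  # crude uniform-field approximation, V/m
w = sigma * e_rms ** 2               # dissipated power density, W/m^3

# Express the same number per picolitre of heated buffer for intuition.
w_per_pl = w * 1e-15                 # 1 pL = 1e-15 m^3
print(f"E_rms = {e_rms:.2e} V/m, power density = {w:.2e} W/m^3 "
      f"({w_per_pl*1e6:.2f} uW per pL)")

With these assumed values the estimate is roughly 6 × 10^10 W/m³, i.e. tens of microwatts per picolitre of buffer, which is consistent with the entry's conclusion that dissipation must be accounted for in experiments at physiological temperatures.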
--- paper_title: Multiphase electropatterning of cells and biomaterials{ paper_content: Tissues formed by cells encapsulated in hydrogels have uses in biotechnology, cell-based assays, and tissue engineering. We have previously presented a 3D micropatterning technique that rapidly localizes live cells within hydrogels using dielectrophoretic (DEP) forces, and have demonstrated the ability to modulate tissue function through the control of microscale cell architecture. A limitation of this method is the requirement that a single biomaterial must simultaneously harbor biological properties that support cell survival and function and material properties that permit efficient dielectrophoretic patterning. Here, we resolve this issue by forming multiphase tissues consisting of microscale tissue sub-units in a 'local phase' biomaterial, which, in turn, are organized by DEP forces in a separate, mechanically supportive 'bulk phase' material. We first define the effects of medium conductivity on the speed and quality of DEP cell patterning. As a case study, we then produce multiphase tissues with microscale architecture that combine high local hydrogel conductivity for enhanced survival of sensitive liver progenitor cells with low bulk conductivity required for efficient DEP micropatterning. This approach enables an expanded range of studies examining the influence of 3D cellular architecture on diverse cell types, and in the future may improve the biological function of inhomogeneous tissues assembled from a variety of modular tissue sub-units. --- paper_title: Cardiac regeneration: stem cells and beyond. paper_content: After myocardial infarction, the lost healthy myocardium is replaced by non-contractile scar tissue which may lead to the development of heart failure and death. There is no curative therapy for the irreversible myocardial cell loss. This review will give an overview of the current options to restore the contractile force of the heart: the different stem cell sources as therapeutic agents in cardiac repair as well as more novel approaches like the activation of endogenous cell populations, the use of paracrine factors and engineered heart tissue. --- paper_title: Enhancement of Viability of Muscle Precursor Cells on 3D Scaffold in a Perfusion Bioreactor paper_content: The aim of this study was to develop a methodology for the in vitro expansion of skeletal-muscle precursor cells (SMPC) in a three-dimensional (3D) environment in order to fabricate a cellularized artificial graft characterized by high density of viable cells and uniform cell distribution over the entire 3D domain. Cell seeding and culture within 3D porous scaffolds by conventional static techniques can lead to a uniform cell distribution only on the scaffold surface, whereas dynamic culture systems have the potential of allowing a uniform growth of SMPCs within the entire scaffold structure. In this work, we designed and developed a perfusion bioreactor able to ensure long-term culture conditions and uniform flow of medium through 3D collagen sponges. A mathematical model to assist the design of the experimental setup and of the operative conditions was developed. The effects of dynamic vs static culture in terms of cell viability and spatial distribution within 3D collagen scaffolds were evaluated at 1, 4 and 7 days and for different flow rates of 1, 2, 3.5 and 4.5 ml/min using C2C12 muscle cell line and SMPCs derived from satellite cells. 
C2C12 cells, after 7 days of culture in our bioreactor, perfused applying a 3.5 ml/min flow rate, showed a higher viability resulting in a three-fold increase when compared with the same parameter evaluated for cultures kept under static conditions. In addition, dynamic culture resulted in a more uniform 3D cell distribution. The 3.5 ml/min flow rate in the bioreactor was also applied to satellite cell-derived SMPCs cultured on 3D collagen scaffolds. The dynamic culture conditions improved cell viability leading to higher cell density and uniform distribution throughout the entire 3D collagen sponge for both C2C12 and satellite cells. --- paper_title: Microfluidic fabrication of microengineered hydrogels and their application in tissue engineering paper_content: Microfluidic technologies are emerging as an enabling tool for various applications in tissue engineering and cell biology. One emerging use of microfluidic systems is the generation of shape-controlled hydrogels (i.e., microfibers, microparticles, and hydrogel building blocks) for various biological applications. Furthermore, the microfluidic fabrication of cell-laden hydrogels is of great benefit for creating artificial scaffolds. In this paper, we review the current development of microfluidic-based fabrication techniques for the creation of fibers, particles, and cell-laden hydrogels. We also highlight their emerging applications in tissue engineering and regenerative medicine. --- paper_title: Fabrication of microfluidic hydrogels using molded gelatin as a sacrificial element paper_content: This paper describes a general procedure for the formation of hydrogels that contain microfluidic networks. In this procedure, micromolded meshes of gelatin served as sacrificial materials. Encapsulation of gelatin meshes in a hydrogel and subsequent melting and flushing of the gelatin left behind interconnected channels in the hydrogel. The channels were as narrow as approximately 6 μm, and faithfully replicated the features in the original gelatin mesh. Fifty micrometre wide microfluidic networks in collagen and fibrin readily enabled delivery of macromolecules and particles into the channels and transport of macromolecules from channels into the bulk of the gels. Microfluidic gels were also suitable as scaffolds for cell culture, and could be seeded by human microvascular endothelial cells to form rudimentary endothelial networks for potential use in tissue engineering. --- paper_title: Development of a tissue-engineered vascular graft combining a biodegradable scaffold, muscle-derived stem cells and a rotational vacuum seeding technique. paper_content: Abstract There is a clinical need for a tissue-engineered vascular graft (TEVG), and combining stem cells with biodegradable tubular scaffolds appears to be a promising approach. The goal of this study was to characterize the incorporation of muscle-derived stem cells (MDSCs) within tubular poly(ester urethane) urea (PEUU) scaffolds in vitro to understand their interaction, and to evaluate the mechanical properties of the constructs for vascular applications. Porous PEUU scaffolds were seeded with MDSCs using our recently described rotational vacuum seeding device, and cultured inside a spinner flask for 3 or 7 days. Cell viability, number, distribution and phenotype were assessed along with the suture retention strength and uniaxial mechanical behavior of the TEVGs. The seeding device allowed rapid even distribution of cells within the scaffolds.
After 3 days, the constructs appeared completely populated with cells that were spread within the polymer. Cells underwent a population doubling of 2.1-fold, with a population doubling time of 35 h. Stem cell antigen-1 (Sca-1) expression by the cells remained high after 7 days in culture (77±20% vs. 66±6% at day 0) while CD34 expression was reduced (19±12% vs. 61±10% at day 0) and myosin heavy chain expression was scarce (not quantified). The estimated burst strength of the TEVG constructs was 2127±900 mmHg and suture retention strength was 1.3±0.3 N. We conclude from this study that MDSCs can be rapidly seeded within porous biodegradable tubular scaffolds while maintaining cell viability and high proliferation rates and without losing stem cell phenotype for up to 7 days of in-vitro culture. The successful integration of these steps is thought necessary to provide rapid availability of TEVGs, which is essential for clinical translation. --- paper_title: Matrix Elasticity Directs Stem Cell Lineage Specification paper_content: Microenvironments appear important in stem cell lineage specification but can be difficult to adequately characterize or control with soft tissues. Naive mesenchymal stem cells (MSCs) are shown here to specify lineage and commit to phenotypes with extreme sensitivity to tissue-level elasticity. Soft matrices that mimic brain are neurogenic, stiffer matrices that mimic muscle are myogenic, and comparatively rigid matrices that mimic collagenous bone prove osteogenic. During the initial week in culture, reprogramming of these lineages is possible with addition of soluble induction factors, but after several weeks in culture, the cells commit to the lineage specified by matrix elasticity, consistent with the elasticity-insensitive commitment of differentiated cell types. Inhibition of nonmuscle myosin II blocks all elasticity-directed lineage specification-without strongly perturbing many other aspects of cell function and shape. The results have significant implications for understanding physical effects of the in vivo microenvironment and also for therapeutic uses of stem cells. --- paper_title: Alginate encapsulation technology supports embryonic stem cells differentiation into insulin-producing cells. paper_content: This work investigates an application of the alginate encapsulation technology to the differentiation of embryonic stem (ES) cells into insulin-producing cells. It shows that the ES cells can efficiently be encapsulated within the alginate beads, retaining a high level of cell viability. The alginate encapsulation achieves approximately 10-fold increase in the cell density in the culture, in comparison to the two-dimensional conditions, opening a potential benefit of the technology in large-scale cell culture applications. Manipulations of encapsulation conditions, particularly of the initial alginate concentration, allow the control over both the diffusion of molecules into the alginate matrix (e.g. differentiation factors) as well as control over the matrix porosity/flexibility to permit the proliferation and growth of encapsulated ES aggregates within the bead. Post-differentiation analysis confirms the presence of insulin-positive cells, as judged from immunostaining, insulin ELISA and RT-PCR analysis. 
The functionality of the encapsulated and differentiated cells was confirmed by their insulin production capability, whereby on glucose challenge the insulin production by the cells differentiated within alginate beads was found to be statistically significantly higher than for the cells from conventional two-dimensional differentiation system. --- paper_title: Vascularized organoid engineered by modular assembly enables blood perfusion paper_content: Tissue engineering is one approach to address the donor-organ shortage, but to attain clinically significant viable cell densities in thick tissues, laboratory-constructed tissues must have an internal vascular supply. We have adopted a biomimetic approach and assembled microscale modular components, consisting of submillimeter-sized collagen gel rods seeded with endothelial cells (ECs) into a (micro)vascularized tissue; in some prototypes the gel contained HepG2 cells to illustrate the possibilities. The EC-covered modules then were assembled into a larger tube and perfused with medium or whole blood. The interstitial spaces among the modules formed interconnected channels that enabled this perfusion. Viable cell densities were high, within an order of magnitude of cell densities within tissues, and the percolating nature of the flow through the construct was evident in microcomputed tomography and Doppler ultrasound measurements. Most importantly, the ECs retained their nonthrombogenic phenotype and delayed clotting times and inhibited the loss of platelets associated with perfusion of whole blood through the construct. Unlike the conventional scaffold and cell-seeding paradigm of other tissue-engineering approaches, this modular construct has the potential to be scalable, uniform, and perfusable with whole blood, circumventing the limitations of other approaches. --- paper_title: Micromolding of shape-controlled, harvestable cell-laden hydrogels paper_content: Encapsulation of mammalian cells within hydrogels has great utility for a variety of applications ranging from tissue engineering to cell-based assays. In this work, we present a technique to encapsulate live cells in three-dimensional (3D) microscale hydrogels (microgels) of controlled shapes and sizes in the form of harvestable free standing units. Cells were suspended in methacrylated hyaluronic acid (MeHA) or poly(ethylene glycol) diacrylate (PEGDA) hydrogel precursor solution containing photoinitiator, micromolded using a hydrophilic poly(dimethylsiloxane) (PDMS) stamp, and crosslinked using ultraviolet (UV) radiation. By controlling the features on the PDMS stamp, the size and shape of the molded hydrogels were controlled. Cells within microgels were well distributed and remained viable. These shape-specific microgels could be easily retrieved, cultured and potentially assembled to generate structures with controlled spatial distribution of multiple cell types. Further development of this technique may lead to applications in 3D co-cultures for tissue/organ regeneration and cell-based assays in which it is important to mimic the architectural intricacies of physiological cell–cell interactions. --- paper_title: A planar interdigitated ring electrode array via dielectrophoresis for uniform patterning of cells. paper_content: Uniform patterning of cells is highly desirable for most cellular studies involving cell-cell interactions but is often difficult in an in vitro environment.
This paper presents the development of a collagen-coated planar interdigitated ring electrode (PIRE) array utilizing positive dielectrophoresis to pattern cells uniformly. Key features of the PIRE design include: (1) maximizing length along the edges where the localized maximum in the electric field exists; (2) making the inner gap slightly smaller than the outer gap in causing the electric field strength near the center of a PIRE being generally stronger than that near the outer edge of the same PIRE. Results of human hepatocellular carcinoma cells, HepG2, adhered on a 6x6 PIRE array show that cells patterned within minutes with good uniformity (48+/-6 cells per PIRE). Cell viability test revealed healthy patterned cells after 24h that were still confined to the collagen-coated PIREs. Furthermore, quantification of fluorescence intensity of living cells shows an acceptable reproducibility of cell viability among PIREs (mean normalized intensity per PIRE was 1+/-0.138). The results suggest that the PIRE array would benefit applications that desire uniform cellular patterning, and improve both response and reproducibility of cell-based biosensors. --- paper_title: Stimulation of Ca2+ signals in neurons by electrically coupled electrolyte-oxide-semiconductor capacitors paper_content: Electrolyte-oxide-semiconductor capacitors (EOSCs) are a class of microtransducers for extracellular electrical stimulation that have been successfully employed to activate voltage-dependent sodium channels at the neuronal soma to generate action potentials in vitro. In the present work, we report on their use to control Ca²+ signalling in cultured mammalian cells, including neurons. Evidence is provided that EOSC stimulation with voltage waveforms in the microsecond or nanosecond range activates two distinct Ca²+ pathways, either by triggering Ca²+ entry through the plasma membrane or its release from intracellular stores. Ca²+ signals were activated in non-neuronal and neuronal cell lines, CHO-K1 and SH-SY5Y. On this basis, stimulation was tailored to rat and bovine neurons to mimic physiological somatic Ca²+ transients evoked by glutamate. Being minimally invasive and easy to use, the new method represents a versatile complement to standard electrophysiology and imaging techniques for the investigation of Ca²+ signalling in dissociated primary neurons and cell lines. --- paper_title: Space and time-resolved gene expression experiments on cultured mammalian cells by a single-cell electroporation microarray paper_content: Single-cell experiments represent the next frontier for biochemical and gene expression research. Although bulk-scale methods averaging populations of cells have been traditionally used to investigate cellular behavior, they mask individual cell features and can lead to misleading or insufficient biological results. We report on a single-cell electroporation microarray enabling the transfection of pre-selected individual cells at different sites within the same culture (space-resolved), at arbitrarily chosen time points and even sequentially to the same cells (time-resolved). Delivery of impermeant molecules by single-cell electroporation was first proven to be finely tunable by acting on the electroporation protocol and then optimized for transfection of nucleic acids into Chinese Hamster Ovary (CHO-K1) cells. 
We focused on DNA oligonucleotides (ODNs), short interfering RNAs (siRNAs), and DNA plasmid vectors, thus providing a versatile and easy-to-use platform for time-resolved gene expression experiments in single mammalian cells. --- paper_title: A three-dimensional multi-electrode array for multi-site stimulation and recording in acute brain slices paper_content: Several multi-electrode array devices integrating planar metal electrodes were designed in the past 30 years for extracellular stimulation and recording from cultured neuronal cells and organotypic brain slices. However, these devices are not well suited for recordings from acute brain slice preparations due to a dead cell layer at the tissue slice border that appears during the cutting procedure. To overcome this problem, we propose the use of protruding 3D electrodes, i.e. tip-shaped electrodes, allowing tissue penetration in order to get closer to living neurons in the tissue slice. In this paper, we describe the design and fabrication of planar and 3D protruding multi-electrode arrays. The electrical differences between planar and 3D protruding electrode configuration were simulated and verified experimentally. Finally, a comparison between the planar and 3D protruding electrode configuration was realized by stimulation and recording from acute rat hippocampus slices. The results show that larger signal amplitudes in the millivolt range can be obtained with the 3D electrode devices. Spikes corresponding to single cell activity could be monitored in the hippocampus CA3 and CA1 region using 3D electrodes. --- paper_title: Electrical characterization of human mesenchymal stem cell growth on microelectrode paper_content: To support the development of stem cell therapies with in vitro assays, non-destructive methods are required for the quality control of stem cell culture. In this article, it was investigated whether an electrode based chip with electrical impedance spectroscopy can be used to characterize the growth of human mesenchymal stem cells on electrodes without any chemical marker. From finite element method simulations, the electrical characteristics of the cell layer under weak alternating electric fields were investigated with respect to the different cell/cell or cell/substrate gap, and modelled as an equivalent circuit. The impedance spectra were measured during the long-term cultivation of human mesenchymal stem cells on platinum electrodes. By fitting the equivalent circuit to the measured spectra, an extracellular resistance reflecting cell growth was extrapolated. --- paper_title: Adult neural progenitor cells reactivate superbursting in mature neural networks paper_content: Behavioral recovery in animal models of human CNS syndromes suggests that transplanted stem cell derivatives can augment damaged neural networks but the mechanisms behind potentiated recovery remain elusive. Here we use microelectrode array (MEA) technology to document neural activity and network integration as rat primary neurons and rat hippocampal neural progenitor cells (NPCs) differentiate and mature. The natural transition from neuroblast to functional excitatory neuron consists of intermediate phases of differentiation characterized by coupled activity. High-frequency network-wide bursting or "superbursting" is a hallmark of early plasticity that is ultimately refined into mature stable neural network activity.
Microelectrode array (MEA)-plated neurons transition through this stage of coupled superbursting before establishing mature neuronal phenotypes in vitro. When plated alone, adult rat hippocampal NPC-derived neurons fail to establish the synchronized bursting activity that neurons in primary and embryonic stem cell-derived cultures readily form. However, adult rat hippocampal NPCs evoke re-emergent superbursting in electrophysiologically mature rat primary neural cultures. Developmental superbursting is thought to accompany transient states of heightened plasticity both in culture preparations and across brain regions. Future work exploring whether NPCs can re-stimulate developmental states in injury models would be an interesting test of their regenerative potential. --- paper_title: Adult mesenchymal stem cells and cell-based tissue engineering paper_content: The identification of multipotential mesenchymal stem cells (MSCs) derived from adult human tissues, including bone marrow stroma and a number of connective tissues, has provided exciting prospects for cell-based tissue engineering and regeneration. This review focuses on the biology of MSCs, including their differentiation potentials in vitro and in vivo, and the application of MSCs in tissue engineering. Our current understanding of MSCs lags behind that of other stem cell types, such as hematopoietic stem cells. Future research should aim to define the cellular and molecular fingerprints of MSCs and elucidate their endogenous role(s) in normal and abnormal tissue functions. --- paper_title: Human Cell-Based Micro Electrode Array Platform for Studying Neurotoxicity paper_content: At present, most of the neurotoxicological analyses are based on in vitro and in vivo models utilizing animal cells or animal models. In addition, the used in vitro models are mostly based on molecular biological end-point analyses. Thus, for neurotoxicological screening, human cell-based analysis platforms in which the functional neuronal networks responses for various neurotoxicants can be also detected real-time are highly needed. Microelectrode array (MEA) is a method which enables the measurement of functional activity of neuronal cell networks in vitro for long periods of time. Here, we utilize MEA to study the neurotoxicity of methyl mercury chloride (MeHgCl, concentrations 0.5-500 nM) to human embryonic stem cell (hESC)-derived neuronal cell networks exhibiting spontaneous electrical activity. The neuronal cell cultures were matured on MEAs into networks expressing spontaneous spike train-like activity before exposing the cells to MeHgCl for 72 hours. MEA measurements were performed acutely and 24, 48, and 72 hours after the onset of the exposure. Finally, exposed cells were analyzed with traditional molecular biological methods for cell proliferation, cell survival, and gene and protein expression. Our results show that 500 nM MeHgCl decreases the electrical signaling and alters the pharmacologic response of hESC-derived neuronal networks in delayed manner whereas effects can not be detected with qRT-PCR, immunostainings, or proliferation measurements. Thus, we conclude that human cell-based MEA-platform is a sensitive online method for neurotoxicological screening. --- paper_title: Prediction of drug-induced cardiotoxicity using human embryonic stem cell-derived cardiomyocytes. 
paper_content: Recent withdrawals of prescription drugs from clinical use because of unexpected side effects on the heart have highlighted the need for more reliable cardiac safety pharmacology assays. Block of the human Ether-a-go-go Related Gene (hERG) ion channel in particular is associated with life-threatening arrhythmias, such as Torsade de Pointes (TdP). Here we investigated human cardiomyocytes derived from pluripotent (embryonic) stem cells (hESC) as a renewable, scalable, and reproducible system on which to base cardiac safety pharmacology assays. Analyses of extracellular field potentials in hESC-derived cardiomyocytes (hESC-CM) and generation of derivative field potential duration (FPD) values showed dose-dependent responses for 12 cardiac and noncardiac drugs. Serum levels in patients of drugs with known effects on QT interval overlapped with prolonged FPD values derived from hESC-CM, as predicted. We thus propose hESC-CM FPD prolongation as a safety criterion for preclinical evaluation of new drugs in development. This is the first study in which dose responses of such a wide range of compounds on hESC-CM have been generated and shown to be predictive of clinical effects. We propose that assays based on hESC-CM could complement or potentially replace some of the preclinical cardiac toxicity screening tests currently used for lead optimization and further development of new drugs. --- paper_title: Learning in human neural networks on microelectrode arrays paper_content: This paper describes experiments involving the growth of human neural networks of stem cells on a MEA (microelectrode array) support. The microelectrode arrays (MEAs) are constituted by a glass support in which a set of tungsten electrodes are inserted. The artificial neural network (ANN) paradigm was used by stimulating the neurons in parallel with digital patterns distributed on eight channels, then by analyzing a parallel multichannel output. In particular, the microelectrodes were connected following two different architectures, one inspired by the Kohonen's SOM, the other by the Hopfield network. The output signals have been analyzed in order to evaluate the possibility of organized reactions by the natural neurons. The results show that the network of human neurons reacts selectively to the administered digital signals, i.e., it produces similar output signals referred to identical or similar patterns, and clearly differentiates the outputs coming from different stimulations. Analyses performed with a special artificial neural network called ITSOM show the possibility to codify the neural responses to different patterns, thus to interpret the signals coming from the network of biological neurons, assigning a code to each output. It is straightforward to verify that identical codes are generated by the neural reactions to similar patterns. Further experiments are to be designed that improve the hybrid neural networks' capabilities and to test the possibility of utilizing the organized answers of the neurons in several ways. --- paper_title: Real-Time Monitoring of Neural Differentiation of Human Mesenchymal Stem Cells by Electric Cell-Substrate Impedance Sensing paper_content: Stem cells are useful for cell replacement therapy. Stem cell differentiation must be monitored thoroughly and precisely prior to transplantation.
In this study we evaluated the usefulness of electric cell-substrate impedance sensing (ECIS) for in vitro real-time monitoring of neural differentiation of human mesenchymal stem cells (hMSCs). We cultured hMSCs in neural differentiation media (NDM) for 6 days and examined the time-course of impedance changes with an ECIS array. We also monitored the expression of markers for neural differentiation, total cell count, and cell cycle profiles. Cellular expression of neuron and oligodendrocyte markers increased. The resistance value of cells cultured in NDM was automatically measured in real-time and found to increase much more slowly over time compared to cells cultured in non-differentiation media. The relatively slow resistance changes observed in differentiating MSCs were determined to be due to their lower growth capacity achieved by induction of cell cycle arrest in G0/G1. Overall results suggest that the relatively slow change in resistance values measured by the ECIS method can be used as a parameter for slowly growing neural-differentiating cells. However, to enhance the competence of ECIS for in vitro real-time monitoring of neural differentiation of MSCs, more elaborate studies are needed. --- paper_title: Paracrine signalling events in embryonic stem cell renewal mediated by affinity targeted nanoparticles paper_content: Abstract Stem cell growth and differentiation are controlled by intrinsic and extrinsic factors. The latter includes growth factors, which are conventionally supplied in vitro in media exchanged daily. Here, we illustrate the use of affinity targeted biodegradable nanoparticles to mediate paracrine stimulation as an alternative approach to sustain the growth and pluripotency of mouse embryonic stem cells. Leukaemia Inhibitory Factor (LIF) was encapsulated in biodegradable nanoparticles and targeted to the cell surface using an antibody to the oligosaccharide antigen SSEA-1. Sustained release of LIF from nanoparticles composed of a solid Poly(lactide-co-glycolic acid) polyester or a hydrogel-based liposomal system, we term Nanolipogel, replenished once after each cell passage, proved as effective as daily replenishment with soluble LIF for maintenance of pluripotency after 5 passages using 10⁴-fold less LIF. Our study constitutes an alternative paradigm for stem cell culture, providing dynamic microenvironmental control of extrinsic bioactive factors benefiting stem cell manufacturing. --- paper_title: Real-time label-free monitoring of adipose-derived stem cell differentiation with electric cell-substrate impedance sensing paper_content: Real-time monitoring of stem cell (SC) differentiation will be critical to scale up SC technologies, while label-free techniques will be desirable to quality-control SCs without precluding their therapeutic potential. We cultured adipose-derived stem cells (ADSCs) on top of multielectrode arrays and measured variations in the complex impedance Z* throughout induction of ADSCs toward osteoblasts and adipocytes. Z* was measured up to 17 d, every 180 s, over a 62.5 Hz–64 kHz frequency range with an ECIS Zθ instrument. We found that osteogenesis and adipogenesis were characterized by distinct Z* time-courses. Significant differences were found (P = 0.007) as soon as 12 h post-induction. An increase in the barrier resistance (Rb) up to 1.7 ohm·cm² was associated with early osteo-induction, whereas Rb peaked at 0.63 ohm·cm² for adipo-induced cells before falling to zero at t = 129 h.
Dissimilarities in Z* throughout early induction (<24 h) were essentially attributed to variations in the cell-substrate parameter α. Four days after induction, cell membrane capacitance (Cm) of osteoinduced cells (Cm = 1.72 ± 0.10 μF/cm²) was significantly different from that of adipo-induced cells (Cm = 2.25 ± 0.27 μF/cm²), indicating that Cm could be used as an early marker of differentiation. Finally, we demonstrated long-term monitoring and measured a shift in the complex plane in the middle frequency range (1 kHz to 8 kHz) between early (t = 100 h) and late induction (t = 380 h). This study demonstrated that the osteoblast and adipocyte lineages have distinct dielectric properties and that such differences can be used to perform real-time label-free quantitative monitoring of adult stem cell differentiation with impedance sensing. --- paper_title: The human adipose tissue is a source of multipotent stem cells. paper_content: Multipotent stem cells constitute an unlimited source of differentiated cells that could be used in pharmacological studies and in medicine. Recently, several publications have reported that adipose tissue contains a population of cells able to differentiate into different cell types including adipocytes, osteoblasts, myoblasts, and chondroblasts. More recently, stem cells with a multi-lineage potential at the single cell level have been isolated from human adipose tissue. These cells, called human Multipotent Adipose-Derived Stem (hMADS) cells, have been established in culture and interestingly, maintain their characteristics with long-term passaging. The adipocyte differentiation of hMADS cells has been thoroughly studied and differentiated cells exhibit the unique feature of human adipocytes. Finally, potential applications of stem cells isolated from adipose tissue in medicine will be discussed. --- paper_title: An integrated semiconductor device enabling non-optical genome sequencing paper_content: The seminal importance of DNA sequencing to the life sciences, biotechnology and medicine has driven the search for more scalable and lower-cost solutions. Here we describe a DNA sequencing technology in which scalable, low-cost semiconductor manufacturing techniques are used to make an integrated circuit able to directly perform non-optical DNA sequencing of genomes. Sequence data are obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides on this massively parallel semiconductor-sensing device or ion chip. The ion chip contains ion-sensitive, field-effect transistor-based sensors in perfect register with 1.2 million wells, which provide confinement and allow parallel, simultaneous detection of independent sequencing reactions. Use of the most widely used technology for constructing integrated circuits, the complementary metal-oxide semiconductor (CMOS) process, allows for low-cost, large-scale production and scaling of the device to higher densities and larger array sizes. We show the performance of the system by sequencing three bacterial genomes, its robustness and scalability by producing ion chips with up to 10 times as many sensors and sequencing a human genome. --- paper_title: Field-effect devices for detecting cellular signals. paper_content: The integration of living cells together with silicon field-effect devices challenges a new generation of biosensors and bioelectronic devices.
Cells are representing highly organised complex systems, optimised by millions of years of evolution and offering a broad spectrum of bioanalytical receptor "tools" such as enzymes, nucleic acids proteins, etc. Their combination with semiconductor-based electronic chips allows the construction of functional hybrid systems with unique functional and electronic properties for both fundamental studies and biosensoric applications. This review article summarises recent advances and trends in research and development of cell/transistor hybrids (cell-based field-effect transistors) as well as light-addressable potentiometric sensors. --- paper_title: A comparative study on fabrication techniques for on-chip microelectrodes paper_content: This paper presents an experimental study on different microelectrode fabrication techniques, with particular focus on the robustness of the surface insulation towards typical working conditions required in lab-on-a-chip applications. Pt microelectrodes with diameters of 50 μm, 100 μm and 200 μm are patterned on a Si substrate with SiO2 film. Sputtered SiO2, low-pressure chemical vapor deposition (LPCVD) low-temperature oxide (LTO), Parylene C, SU-8, and dry-film were deposited and patterned on top of the chips as the passivation layer. This paper provides the detailed fabrication processes, the adhesion enhancement strategies, and the major advantages and disadvantages of each fabrication technique. Firstly, the quality and adhesion strength of the passivations were investigated by means of hydrolysis tests, in which sputtered SiO2 and dry-film resist showed serious delamination issues and LTO showed minor defects. Secondly, the reliability of the microelectrodes was tested by impedance measurements after overnight ethanol incubation and self-assembled monolayer (SAM) formation. Thirty chips, representing a total of 300 electrodes, were measured, and statistical analyses of the results were conducted for each passivation technique. All of the electrodes passivated with these five techniques showed consistent impedance values after ethanol incubation. On the other hand, only LTO, Parylene C, and SU-8 ensured uniform electrical behavior after SAM formation. Having used both hydrolysis and impedance tests to verify the superior quality of the Parylene-based passivation, electrochemical experiments were performed to study the long-term stability of the passivation layer. Finally, the electrodes were incubated with electroactive alkanethiols functionalized with ferrocene. Square-wave voltammetry measurements demonstrated reproducible results on electrochemical label detection, which confirms the suitability of the Parylene passivation for charge-transfer-based measurements. --- paper_title: Development of an Ion-Sensitive Solid-State Device for Neurophysiological Measurements paper_content: The development of an ion-sensitive solid-state device is described. The device combines the principles of an MOS transistor and a glass electrode and can be used for measurements of ion activities in electrochemical and biological environments. Some preliminary results are given. --- paper_title: Macroporous nanowire nanoelectronic scaffolds for synthetic tissues paper_content: The development of three-dimensional (3D) synthetic biomaterials as structural and bioactive scaffolds is central to fields ranging from cellular biophysics to regenerative medicine. 
As of yet, these scaffolds cannot electrically probe the physicochemical and biological microenvironments throughout their 3D and macroporous interior, although this capability could have a marked impact in both electronics and biomaterials. Here, we address this challenge using macroporous, flexible and free-standing nanowire nanoelectronic scaffolds (nanoES), and their hybrids with synthetic or natural biomaterials. 3D macroporous nanoES mimic the structure of natural tissue scaffolds, and they were formed by self-organization of coplanar reticular networks with built-in strain and by manipulation of 2D mesh matrices. NanoES exhibited robust electronic properties and have been used alone or combined with other biomaterials as biocompatible extracellular scaffolds for 3D culture of neurons, cardiomyocytes and smooth muscle cells. Furthermore, we show the integrated sensory capability of the nanoES by real-time monitoring of the local electrical activity within 3D nanoES/cardiomyocyte constructs, the response of 3D-nanoES-based neural and cardiac tissue models to drugs, and distinct pH changes inside and outside tubular vascular smooth muscle constructs. --- paper_title: Integrating bio-sensing functions on CMOS chips paper_content: The paper discusses the recent achievements in the development of chips with integrated sensing of biomolecules. In particular, it focuses on integrated sensing electrodes on silicon and presents innovative solutions for the enhanced robustness of the electrodes towards cleaning processes and electrolytes. In this study, a microfabrication technology for 3D-integrated disposable chip layers that enables the reusability of the overall system for many times is presented. --- paper_title: Neuron–transistor coupling: interpretation of individual extracellular recorded signals paper_content: The electrical coupling of randomly migrating neurons from rat explant brain-stem slice cultures to the gates of non-metallized field-effect transistors (FETs) has been investigated. The objective of our work is the precise interpretation of extracellular recorded signal shapes in comparison to the usual patch-clamp protocols to evaluate the possible use of the extracellular recording technique in electrophysiology. The neurons from our explant cultures exhibited strong voltage-gated potassium currents through the plasma membrane. With an improved noise level of the FET set-up, it was possible to record individual extracellular responses without any signal averaging. Cells were attached by patch-clamp pipettes in voltage-clamp mode and stimulated by voltage step pulses. The point contact model, which is the basic model used to describe electrical contact between cell and transistor, has been implemented in the electrical simulation program PSpice. Voltage and current recordings and compensation values from the patch-clamp measurement have been used as input data for the simulation circuit. Extracellular responses were identified as composed of capacitive current and active potassium current inputs into the adhesion region between the cell and transistor gate. We evaluated the extracellular signal shapes by comparing the capacitive and the slower potassium signal amplitudes. Differences in amplitudes were found, which were interpreted in previous work as enhanced conductance of the attached membrane compared to the average value of the cellular membrane. 
Our results suggest rather that additional effects like electrodiffusion, ion sensitivity of the sensors or more detailed electronic models for the small cleft between the cell and transistor should be included in the coupling model. --- paper_title: Polarization-Controlled Differentiation of Human Neural Stem Cells Using Synergistic Cues from the Patterns of Carbon Nanotube Monolayer Coating paper_content: We report a method for selective growth and structural-polarization-controlled neuronal differentiation of human neural stem cells (hNSCs) into neurons using carbon nanotube network patterns. The CNT patterns provide synergistic cues for the differentiation of hNSCs in physiological solution and an optimal nanotopography at the same time with good biocompatibility. We demonstrated a polarization-controlled neuronal differentiation at the level of individual NSCs. This result should provide a stable and versatile platform for controlling the hNSC growth because CNT patterns are known to be stable in time unlike commonly used organic molecular patterns. --- paper_title: 3D integration technology for lab-on-a-chip applications paper_content: A review is presented of advances and challenges in fully integrated systems for personalised medicine applications. One key issue for the commercialisation of such systems is the disposability of the assay-substrate at a low cost. This work adds a new dimension to the integrated circuits technology for lab-on-a-chip systems by employing 3D integration for improved performance and functionality. It is proposed that a disposable biosensing layer can be aligned and temporarily attached to the 3D CMOS stack by the vertical interconnections, and can be replaced after each measurement. --- paper_title: Carbon Nanotube Monolayer Cues for Osteogenesis of Mesenchymal Stem Cells paper_content: Recent advances in nanotechnology present synthetic bio-inspired materials to create new controllable microenvironments for stem cell growth, which have allowed directed differentiation into specific lineages.[1,2] Carbon nanotubes (CNTs), one of the most extensively studied nanomaterials, can provide a favorable extracellular environment for intimate cell adhesion due to their similar dimension to collagen. It has been shown that CNTs support the attachment and growth of adult stem cells[3-6] and progenitor cells including osteoblasts and myoblasts.[7,8] In addition, surface-functionalized CNTs provide new opportunities in controlling cell growth. Surface functionalization improves the attachment of biomolecules, such as proteins, DNA, and aptamers, to CNTs.[9] Zanello et al. cultured osteoblasts on CNTs with various functional groups and showed reduced cell growth on positively charged CNTs.[10] Recent reports have shown that human mesenchymal stem cells (hMSCs) formed focal adhesions and grew well on single-walled CNTs (swCNTs).[5,6] However, the effect of naive swCNT substrates on the differentiation of stem cells has not been reported before. Herein, we report the osteogenic differentiation of hMSCs induced by swCNT monolayer cues without any chemical treatments. Interestingly, the surface treatment of swCNTs via oxygen plasma showed synergistic effects on the differentiation as well as the adhesion of hMSCs. The stress due to the enhanced cell spreading on swCNT layers was proposed as a possible explanation for the enhanced osteogenesis of hMSCs on the swCNT monolayers. 
Previous reports showed that the stress to stretch stem cells on microscale molecular patterns generated the tension on actin filaments, which eventually enhanced the osteogenesis.[11,12] Since our method relies on monolayer coating of swCNTs, it can be applied to a wide range of substrates including conventional scaffolds without any complicated fabrication processes. --- paper_title: Neurochip based on light-addressable potentiometric sensor with wavelet transform de-noising paper_content: Neurochip based on light-addressable potentiometric sensor (LAPS), whose sensing elements are excitable cells, can monitor electrophysiological properties of cultured neuron networks with cellular signals well analyzed. Here we report a kind of neurochip with rat pheochromocytoma (PC12) cells hybrid with LAPS and a method of de-noising signals based on wavelet transform. Cells were cultured on LAPS for several days to form networks, and we then used LAPS system to detect the extracellular potentials with signals de-noised according to decomposition in the time-frequency space. The signal was decomposed into various scales, and coefficients were processed based on the properties of each layer. At last, signal was reconstructed based on the new coefficients. The results show that after de-noising, baseline drift is removed and signal-to-noise ratio is increased. It suggests that the neurochip of PC12 cells coupled to LAPS is stable and suitable for long-term and non-invasive measurement of cell electrophysiological properties with wavelet transform, taking advantage of its time-frequency localization analysis to reduce noise. --- paper_title: Cell-based biosensors based on light-addressable potentiometric sensors for single cell monitoring paper_content: Abstract Cell-based biosensors incorporate cells as sensing elements that convert changes in immediate environment to signals for processing. This paper reports an investigation on light-addressable potentiometric sensor (LAPS) to be used as a possible cell-base biosensor that will enable us to monitor extracellular action potential of single living cell under stimulant. In order to modify chip surface and immobilize cells, we coat a layer of poly- l -ornithine and laminin on surface of LAPS chip on which rat cortical cells are grown well. When 10 μg/ml acetylcholine solution is administrated, the light pointer is focused on a single neuronal cell and the extracellular action potential of the targeted cell is recorded with cell-based biosensor based on LAPS. The results demonstrate that this kind of biosensor has potential to monitor electrophysiology of living cell non-invasive for a long term, and to evaluate drugs primarily. --- paper_title: Cytosensor techniques for examining signal transduction of neurohormones paper_content: This review describes the principles of microphysiometry and how they can be applied, using the Cytosensor, to the investigation of the signal transduction mechanisms activated by both G-protein an... --- paper_title: Detection of heavy metal toxicity using cardiac cell-based biosensor paper_content: Abstract Biosensors incorporating mammalian cells have a distinct advantage of responding in a manner which offers insight into the physiological effect of an analyte. To investigate the potential applications of cell-based biosensors on heavy metal toxicity detection, a novel biosensor for monitoring electrophysiological activity was developed by light-addressable potentiometric sensor (LAPS). 
Extracellular field potentials of spontaneously beating cardiomyocytes could be recorded by LAPS in the range of 20 μV to nearly 40 μV with frequency of 0.5–3 Hz. After exposure to different heavy metal ions (Hg2+, Pb2+, Cd2+, Fe3+, Cu2+, Zn2+; at a concentration of 10 μM), cardiomyocytes demonstrated characteristic changes in terms of beating frequency, amplitude and duration under the different toxic effects of ions in less than 15 min. This study suggests that, with physiological monitoring, it is possible to use the cardiac cell-based biosensor to study acute and eventually chronic toxicities induced by heavy metal ions in a long-term and non-invasive way. --- paper_title: Light-addressable potentiometric sensor for biochemical systems. paper_content: Numerous biochemical reactions can be measured potentiometrically through changes in pH, redox potential, or transmembrane potential. An alternating photocurrent through an electrolyte-insulator-semiconductor interface provides a highly sensitive means to measure such potential changes. A spatially selectable photoresponse permits the determination of a multiplicity of chemical events with a single semiconductor device. --- paper_title: In vitro assessing the risk of drug-induced cardiotoxicity by embryonic stem cell-based biosensor paper_content: Abstract Drug-induced prolongation of ventricular repolarization with arrhythmia is a major concern in clinical safety pharmacology, and has been a common reason for the withdrawal of several promising drugs from the market. Therefore, novel techniques should be developed to evaluate cardiotoxicity of new drugs in preclinical research. A cardiomyocyte based biosensor was developed using the light addressable potentiometric sensor (LAPS). Mouse embryonic stem cells cultured on the surface of LAPS were induced to differentiate into synchronized, spontaneously beating cardiomyocytes. Changes of extracellular potentials and of cell shape with mechanical beating could induce modulation of photocurrents in the LAPS system, and finally change the output of the sensor. With the characteristics of light addressability, LAPS can record cell clusters at any desired position. The sensor can be used to record the prolongation of ventricular action potentials associated with the cardiotoxicity induced by drugs such as amiodarone, levofloxacin, sparfloxacin, and noradrenaline. The quick and real-time characteristics of the sensor make it promising for establishing a high-throughput platform for pharmacological toxicity investigation. --- paper_title: Embryonic Stem Cells Biosensor and Its Application in Drug Analysis and Toxin Detection paper_content: To investigate the use of stem cells as biosensor elements, a novel cell-based light-addressable potentiometric sensor (LAPS) was developed for monitoring cellular beating. Mouse embryonic stem cells were induced to differentiate into cardiomyocytes in vitro. Extracellular field potentials of spontaneously beating cardiomyocytes induced from stem cells were recorded by LAPS in the potential and frequency ranges of 25–45 μV and 0.5–3 Hz, respectively. Due to its capability of monitoring important physiological parameters such as potential and frequency in vitro, the sensor can be used in drug analysis and toxin detection in a long-term and noninvasive way. This pharmacological and toxicological research makes it possible to use stem cell-based biosensors for biomedical assays.
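The LAPS entries above report beating signals of a few tens of μV at 0.5–3 Hz and track changes in beating frequency and amplitude under drug or heavy-metal exposure. As a minimal, hedged sketch (not the instrumentation or analysis pipeline used in those studies), the Python example below shows how such rate and amplitude figures might be estimated from a sampled extracellular trace with standard peak detection; the trace is synthetic and the sampling rate and thresholds are assumptions.

# Illustrative only: estimate beating rate and peak amplitude from a sampled
# extracellular trace of the kind reported (qualitatively) by the LAPS entries above.
# The trace below is synthetic; sampling rate and thresholds are assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)         # 30 s recording
beat_hz = 1.5                        # synthetic beat frequency within 0.5-3 Hz
trace_uv = 35 * np.exp(-((t % (1 / beat_hz)) / 0.05) ** 2)   # ~35 uV spikes
trace_uv += np.random.normal(0, 2, t.size)                   # additive noise

# Detect beats as peaks clearing a simple amplitude/refractory-distance criterion.
peaks, _ = find_peaks(trace_uv, height=15, distance=int(0.3 * fs))
beat_rate_hz = (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
amplitude_uv = trace_uv[peaks].mean() - np.median(trace_uv)

print(f"estimated rate: {beat_rate_hz:.2f} Hz, amplitude: {amplitude_uv:.1f} uV")

In practice the detection threshold and refractory distance would be tuned to the actual noise level and expected beat rate of the recorded cells.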
--- paper_title: Applying silicon micromachining to cellular metabolism: measuring the rate of acidification induced in the extracellular environment paper_content: Describes how a microphysiometer works and shows how silicon micromachining can be used to provide a multichannel capability. The authors consider extracellular acidification, metabolic rate detection, LAPS pH sensors, wells for the capture of nonadherent cells, and a multichannel flow-through microphysiometer chip. > --- paper_title: Development of a Surface Plasmon Resonance Biosensor for Real-Time Detection of Osteogenic Differentiation in Live Mesenchymal Stem Cells paper_content: Surface plasmon resonance (SPR) biosensors have been recognized as a useful tool and widely used for real-time dynamic analysis of molecular binding affinity because of its high sensitivity to the change of the refractive index of tested objects. The conventional methods in molecular biology to evaluate cell differentiation require cell lysis or fixation, which make investigation in live cells difficult. In addition, a certain amount of cells are needed in order to obtain adequate protein or messenger ribonucleic acid for various assays. To overcome this limitation, we developed a unique SPR-based biosensing apparatus for real-time detection of cell differentiation in live cells according to the differences of optical properties of the cell surface caused by specific antigen-antibody binding. In this study, we reported the application of this SPR-based system to evaluate the osteogenic differentiation of mesenchymal stem cells (MSCs). OB-cadherin expression, which is up-regulated during osteogenic differentiation, was targeted under our SPR system by conjugating antibodies against OB-cadherin on the surface of the object. A linear relationship between the duration of osteogenic induction and the difference in refractive angle shift with very high correlation coefficient was observed. To sum up, the SPR system and the protocol reported in this study can rapidly and accurately define osteogenic maturation of MSCs in a live cell and label-free manner with no need of cell breakage. This SPR biosensor will facilitate future advances in a vast array of fields in biomedical research and medical diagnosis. --- paper_title: Osteogenic differentiation of human mesenchymal stem cells on poly(ethylene glycol)-variant biomaterials paper_content: This study evaluated the osteogenic differentiation of human mesenchymal stem cells (MSCs), on tyrosine-derived polycarbonates copolymerized with poly(ethylene glycol) (PEG) to determine their potential as a scaffold for bone tissue engineering applications. The addition of PEG in the backbone of polycarbonates has been shown to alter mechanical properties, degradation rates, degree of protein adsorption, and subsequent cell adhesion and motility in mature cell phenotypes. Its effect on MSC behavior is unknown. MSC morphology, motility, proliferation, and osteogenic differentiation were evaluated on polycarbonates containing 0-5% PEG over a 14 day culture. MSCs on polycarbonates containing 0% or 3% PEG content upregulated the expression of osteogenic markers as demonstrated by alkaline phosphatase activity and osteocalcin expression although at different stages in the 14 day culture. Cells on polycarbonates containing no PEG were characterized as having early onset of cell spreading and osteogenic differentiation. 
Cells on 3% PEG surfaces were delayed in cell spreading and osteogenic differentiation, but had the highest motility as compared with cells on substrates containing no PEG and substrates containing 5% PEG at early time points. Throughout the culture, cells on polycarbonates containing 5% PEG had the lowest levels of osteogenic markers, displayed poor cell-substrate adhesion, and established cell-cell aggregates. Thus, designing substrates with minute variations in PEG may serve as a tool to guide MSC adhesion and motility accompanying osteogenic differentiation, and may be beneficial for abundant bone tissue formation in vivo. --- paper_title: Cell shape and spreading of stromal (mesenchymal) stem cells cultured on fibronectin coated gold and hydroxyapatite surfaces. paper_content: In order to identify the cellular mechanisms leading to the biocompatibility of hydroxyapatite implants, we studied the interaction of human bone marrow derived stromal (mesenchymal) stem cells (hMSCs) with fibronectin-coated gold (Au) and hydroxyapatite (HA) surfaces. The adsorption of fibronectin was monitored by Quartz Crystal Microbalance with Dissipation (QCM-D) at two different concentrations, 20 μg/ml and 200 μg/ml, and the fibronectin adsorption experiments were complemented with antibody measurements. The QCM-D results show that the surface mass uptake is largest on the Au surfaces, while the number of polyclonal and monoclonal antibodies directed against the cell-binding domain (CB-domain) on the fibronectin (Fn) is significantly larger on the (HA) surfaces. Moreover, a higher number of antibodies bound to the fibronectin coatings formed from the highest bulk fibronection concentration. In subsequent cell studies with hMSC's we studied the cell spreading, cytoskeletal organization and cell morphology on the respective surfaces. When the cells were adsorbed on the uncoated substrates, a diffuse cell actin cytoskeleton was revealed, and the cells had a highly elongated shape. On the fibronectin coated surfaces the cells adapted to a more polygonal shape with a well-defined actin cytoskeleton, while a larger cell area and roundness values were observed for cells cultured on the coated surfaces. Among the coated surfaces a slightly larger cell area and roundness values was observed on HA as compared to Au. Moreover, the results revealed that the morphology of cells cultured on fibronectin coated HA surfaces were less irregular. In summary we find that fibronectin adsorbs in a more activated state on the HA surfaces, resulting in a slightly different cellular response as compared to the fibronectin coated Au surfaces. --- paper_title: Surface plasmon resonance imaging for medical and biosensing paper_content: A novel surface plasmon resonance (SPR) microscope/imager which allows for both angular and wavelength scanning has been constructed. Images of mesenchymal stem cells were obtained using this versatile instrument and found to be superior in clarity to those captured by the angular scanning dependent SPR microscopy/imaging techniques. --- paper_title: Revisiting lab-on-a-chip technology for drug discovery paper_content: Manz and colleagues discuss recent progress in the development of microfluidic techniques (lab-on-a-chip technology) and their applications in drug discovery. Highlights include high-throughput droplet technology and applications such as 'organs on a chip', which could help reduce reliance on animal testing. 
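Several of the references above quantify cell spreading through cell area and "roundness" values measured from microscopy images. A minimal sketch of such a measurement is given below, assuming scikit-image and an already-segmented binary mask; the circularity formula 4πA/P² is used here as the roundness definition, which is an assumption since the cited work does not state its exact metric, and the pixel size is a placeholder value.

```python
import numpy as np
from skimage import measure

def cell_shape_stats(binary_mask, pixel_size_um=0.5):
    """Return (area_um2, roundness) for each labelled cell in a binary mask."""
    labels = measure.label(binary_mask)
    stats = []
    for region in measure.regionprops(labels):
        area_um2 = region.area * pixel_size_um ** 2
        perimeter_um = region.perimeter * pixel_size_um
        # Circularity: 1.0 for a perfect disc, smaller for elongated/irregular cells.
        roundness = 4.0 * np.pi * area_um2 / perimeter_um ** 2 if perimeter_um > 0 else 0.0
        stats.append((area_um2, roundness))
    return stats
```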
--- paper_title: High-throughput microfluidic single-cell RT-qPCR paper_content: A long-sought milestone in microfluidics research has been the development of integrated technology for scalable analysis of transcription in single cells. Here we present a fully integrated microfluidic device capable of performing high-precision RT-qPCR measurements of gene expression from hundreds of single cells per run. Our device executes all steps of single-cell processing, including cell capture, cell lysis, reverse transcription, and quanti- tative PCR. In addition to higher throughput and reduced cost, we show that nanoliter volume processing reduced measurement noise, increased sensitivity, and provided single nucleotide specifi- city. We apply this technology to 3,300 single-cell measurements of (i) miRNA expression in K562 cells, (ii) coregulation of a miRNA and one of its target transcripts during differentiation in embryonic stem cells, and (iii) single nucleotide variant detection in primary lobular breast cancer cells. The core functionality established here provides the foundation from which a variety of on-chip single-cell transcription analyses will be developed. --- paper_title: Microfluidic device generating stable concentration gradients for long term cell culture: application to Wnt3a regulation of β-catenin signaling paper_content: In developing tissues, proteins and signaling molecules present themselves in the form of concentration gradients, which determine the fate specification and behavior of the sensing cells. To mimic these conditions in vitro, we developed a microfluidic device designed to generate stable concentration gradients at low hydrodynamic shear and allowing long term culture of adhering cells. The gradient forms in a culture space between two parallel laminar flow streams of culture medium at two different concentrations of a given morphogen. The exact algorithm for defining the concentration gradients was established with the aid of mathematical modeling of flow and mass transport. Wnt3a regulation of β-catenin signaling was chosen as a case study. The highly conserved Wnt-activated β-catenin pathway plays major roles in embryonic development, stem cell proliferation and differentiation. Wnt3a stimulates the activity of β-catenin pathway, leading to translocation of β-catenin to the nucleus where it activates a series of target genes. We cultured A375 cells stably expressing a Wnt/β-catenin reporter driving the expression of Venus, pBARVS, inside the microfluidic device. The extent to which the β-catenin pathway was activated in response to a gradient of Wnt3a was assessed in real time using the BARVS reporter gene. On a single cell level, the β-catenin signaling was proportionate to the concentration gradient of Wnt3a; we thus propose that the modulation of Wnt3a gradients in real time can provide new insights into the dynamics of β-catenin pathway, under conditions that replicate some aspects of the actual cell-tissue milieu. Our device thus offers a highly controllable platform for exploring the effects of concentration gradients on cultured cells. --- paper_title: A microfluidic processor for gene expression profiling of single human embryonic stem cells. paper_content: The gene expression of human embryonic stem cells (hESC) is a critical aspect for understanding the normal and pathological development of human cells and tissues. Current bulk gene expression assays rely on RNA extracted from cell and tissue samples with various degree of cellular heterogeneity. 
These 'cell population averaging' data are difficult to interpret, especially for the purpose of understanding the regulatory relationship of genes in the earliest phases of development and differentiation of individual cells. Here, we report a microfluidic approach that can extract total mRNA from individual single-cells and synthesize cDNA on the same device with high mRNA-to-cDNA efficiency. This feature makes large-scale single-cell gene expression profiling possible. Using this microfluidic device, we measured the absolute numbers of mRNA molecules of three genes (B2M, Nodal and Fzd4) in a single hESC. Our results indicate that gene expression data measured from cDNA of a cell population is not a good representation of the expression levels in individual single cells. Within the G0/G1 phase pluripotent hESC population, some individual cells did not express all of the 3 interrogated genes in detectable levels. Consequently, the relative expression levels, which are broadly used in gene expression studies, are very different between measurements from population cDNA and single-cell cDNA. The results underscore the importance of discrete single-cell analysis, and the advantages of a microfluidic approach in stem cell gene expression studies. --- paper_title: On-chip differentiation of human mesenchymal stem cells into adipocytes paper_content: A microfluidic chip was designed to produce with four different cell densities after seeding has been fabricated by soft lithography. It consists of eight parallel chambers with a single inlet and a single outlet. Each chamber was divided into four compartments, separated by a row of pillars. Human mesenchymal stem cells (hMSCs) were first grown on-chip using a normal culture medium for one day. Then, the differentiation of hMSCs into adipocytes was successfully induced by the perfusion of a differentiation medium for 14days. The results showed that the differentiation rate of hMSCs in the chip critically depends on the initial cell density. Typically, with the increase of the cell density from 800 to 5000cells/cm2, the differentiation rate increases from 21% to 41%. --- paper_title: microRNAs in cancer management. paper_content: Since the identification of microRNAs (miRNAs) in 1993, and the subsequent discovery of their highly conserved nature in 2000, the amount of research into their function--particularly how they contribute to malignancy--has greatly increased. This class of small RNA molecules control gene expression and provide a previously unknown control mechanism for protein synthesis. As such, it is unsurprising that miRNAs are now known to play an essential part in malignancy, functioning as tumour suppressors and oncogenes. This Review summarises the present understanding of how miRNAs operate at the molecular level; how their dysregulation is a crucial part of tumour formation, maintenance, and metastasis; how they can be used as biomarkers for disease type and grade; and how miRNA-based treatments could be used for diverse types of malignancies. --- paper_title: Model of an Interdigitated Microsensor to Detect and Quantify Cells Flowing in a Test Chamber paper_content: A finite elements model of an interdigitated microsensor has been used to investigate the sensitivity of the sensor to the detection and the quantification of cells flowing in a test chamber. 
In particular the sensitivity of the sensor towards the geometry of sensors and the presence of a cell flowing through the channels was evaluated; several sensors topologies were considered in order to define proper guide-lines for the design of the microsensor. --- paper_title: Detecting particles flowing through interdigitated 3D microelectrodes paper_content: Counting cells in a large microchannel remains challenging and is particularly critical for in vitro assays, such as cell adhesion assays. This paper addresses this issue, by presenting the development of interdigitated three-dimensional electrodes, which are fabricated around passivated pillarshaped silicon microstructures, to detect particles in a flow. The arrays of micropillars occupy the entire channel height and detect the passage of the particle through their gaps by monitoring changes in the electrical resistance. Impedance measurements were employed in order to characterize the electrical equivalent model of the system and to detect the passage of particles in real-time. Three different geometrical micropillar configurations were evaluated and numerical simulations that supported the experimental activity were used to characterize the sensitive volume in the channel. Moreover, the signal-to-noise-ratio related to the passage of a single particle through an array was plotted as a function of the dimension and number of micropillars. --- paper_title: Automated multiparametric platform for high-content and high-Throughput Analytical screening on living cells paper_content: Addressing the increasing biomedical and pharmacological interest in multiparametric screening assays, a concept has been developed, which integrates multiparametric, bioelectric, and biochemical sensors for the analytical monitoring of intra- and extra cellular parameters and an automated imaging microscope for high-content screening into a single embedded platform. Utilizing a topology of distributed intelligences and hardware-based synchronization, the platform concept allows precisely timed and synchronized operation of all integrated platform components. The concept is highly modular, and its design-inherent versatility allows a multitude of platform configurations suiting widely differing user requirements. They include the future integration of probe-manipulation systems such as climate control, fluidic systems, and automated probe placement. Hardware-level synchronization is achieved with a newly developed digital signal processor based "Integration Control Unit," which runs a real-time environment and provides standard electrical interfaces to connect to other platform components. The platform can be operated "online" by user-interactive control or precisely timed and completely automated within high-throughput applications by executing experiment protocols that have been composed in advance. Possible applications of the integrated platform employ a parallel optical and sensory monitoring of extra- and intracellular parameters, yielding detailed insight into cellular functions and intercellular interrelations. Thus, the proposed automated platform may develop into an enabling technology for future screening assays, especially in the field of pharmacological drug screening. Note to Practitioners-One trend in screening applications is to measure an increasing number of cellular parameters in parallel (high-content screening) to obtain a comprehensive view on the investigated cellular process. 
This motivated us to develop a platform, which provides simultaneous acquisition of multisensor and fluorescence microscopic data. Since both techniques are well established in numerous screening applications, we hope that this new combination stimulates the development of new assays (e.g., in pharmacological drug screening, clinical diagnostics, or environmental monitoring). Since prototypes for high-content screening are available, we are interested in cooperation with scientists working on assay development. --- paper_title: Continuous perfusion microfluidic cell culture array for high-throughput cell-based assays paper_content: We present for the first time a microfluidic cell culture array for long-term cellular monitoring. The 10 × 10 array could potentially assay 100 different cell-based experiments in parallel. The device was designed to integrate the processes used in typical cell culture experiments on a single self-contained microfluidic system. Major functions include repeated cell growth/passage cycles, reagent introduction, and real-time optical analysis. The single unit of the array consists of a circular microfluidic chamber, multiple narrow perfusion channels surrounding the main chamber, and four ports for fluidic access. Human carcinoma (HeLa) cells were cultured inside the device with continuous perfusion of medium at 37°C. The observed doubling time was 1.4 ± 0.1 days with a peak cell density of ∼2.5*105 cells/cm2. Cell assay was demonstrated by monitoring the fluorescence localization of calcein AM from 1 min to 10 days after reagent introduction. Confluent cell cultures were passaged within the microfluidic chambers using trypsin and successfully regrown, suggesting a stable culture environment suitable for continuous operation. The cell culture array could offer a platform for a wide range of assays with applications in drug screening, bioinformatics, and quantitative cell biology. © 2004 Wiley Periodicals, Inc. --- paper_title: Functional cellular assays with multiparametric silicon sensor chips paper_content: Multiparametric silicon sensor chips mounted into biocompatible cell culture units have been used for investigations on cellular microphysiological patterns. Potentiometric, amperometric and impedimetric microsensors are combined on a common cell culture surface on the chip with an area of ∼29 mm2. Extracellular acidification rates (with pH-sensitive field effect transistors, ISFETs), cellular oxygen consumption rates (with amperometric electrode structures) and cell morphological alterations (with impedimetric electrode structures, IDES) are monitored on single chips simultaneously for up to several days. The corresponding test device accommodates six of such sensor chips in parallel, provides electronic circuitry and maintains the required cell culture conditions (temperature, fluid perfusion system). Sensor data are transformed into quantitative information about microphysiologic conditions. The outcome of this transformation as well as reliability and sensitivity in detection of drug effects is discussed. This is the first report on multiparametric cell based assays with data obtained solely with integrated sensors on silicon chips. Those assays are required in different fields of application such as pharmaceutical drug screening, tumor chemosensitivity tests and environmental monitoring. --- paper_title: Patch clamping by numbers. paper_content: Abstract Many ion channels are recognized as amenable targets for a range of disease states and conditions. 
However, the process of discovering drugs is highly influenced by the chemical doability, the biological confidence in rationale of the approach and the ‘screenability’. To date, the absence of informative high throughput technologies for ion channel screening has resulted in ion channels remaining a largely unexplored class of drug targets. This, however, is about to change – a large increase in the number of data points per day should be achieved by the introduction of automated ‘high throughput’ patch clamp machines. --- paper_title: Heterogeneity of Embryonic and Adult Stem Cells paper_content: New studies suggest that stem cells of embryonic, neural, and hematopoietic origin are heterogeneous, with cells moving between two or more metastable states. These cell states show a bias in their differentiation potential and correlate with specific patterns of transcription factor expression and chromatin modifications. --- paper_title: Microelectronic sensor system for microphysiological application on living cells paper_content: Abstract Living cells can be considered as complex biochemical plants. Biochemical and biophysical processes enable a cell to maintain itself, to grow, to reproduce and to communicate with the environment. Getting more information about the multifunctional cellular processing of input- and output-signals in different cellular plants is essential for basic research as well as for various fields of biomedical applications. For in-vitro investigations on living cells, the cellular environment differs from the native environment found in vivo. As a first approach for on-line monitoring of cellular reactions under well controlled experimental conditions we have developed the so called Cell Monitoring System (CMS ® ). It allows parallel and non-invasive measurement of different parameters from cellular systems by the use of microsensors. Microelectronic sensors are the adequate choice for the non-invasive measurement of environmental—as well as in- and output—parameters of cells. In this paper we present a measurement system with pH-sensitive ISFETs (ionsensitive fieldeffect transistors) for the measurement of extracellular pH-related signals on cells and tissues. --- paper_title: Microfabricated Platform for Studying Stem Cell Fates paper_content: Platforms that allow parallel, quantitative analysis of single cells will be integral to realizing the potential of postgenomic biology. In stem cell biology, the study of clonal stem cells in multiwell formats is currently both inefficient and time-consuming. Thus, to investigate low-frequency events of interest, large sample sizes must be interrogated. We report a simple, versatile, and efficient micropatterned arraying system conducive to the culture and dynamic monitoring of stem cell proliferation. This platform enables: 1) parallel, automated, long-term (∼days to weeks), live-cell microscopy of single cells in culture; 2) tracking of individual cell fates over time (proliferation, apoptosis); and 3) correlation of differentiated progeny with founder clones. To achieve these goals, we used microfabrication techniques to create an array of ∼10,000 microwells on a glass coverslip. The dimensions of the wells are tunable, ranging from 20 to >500 μm in diameter and 10–500 μm in height. The microarray can be coated with adhesive proteins and is integrated into a culture chamber that permits rapid (∼min), addressable monitoring of each well using a standard programmable microscope stage. 
All cells share the same media (including paracrine survival signals), as opposed to cells in multiwell formats. The incorporation of a coverslip as a substrate also renders the platform compatible with conventional, high-magnification light and fluorescent microscopy. We validated this approach by analyzing the proliferation dynamics of a heterogeneous adult rat neural stem cell population. Using this platform, one can further interrogate the response of distinct stem cell subpopulations to microenvironmental cues (mitogens, cell–cell interactions, and cell–extracellular matrix interactions) that govern their behavior. In the future, the platform may also be adapted for the study of other cell types by tailoring the surface coatings, microwell dimensions, and culture environment, thereby enabling parallel investigation of many distinct cellular responses. © 2004 Wiley Periodicals, Inc. --- paper_title: From Understanding Cellular Function to Novel Drug Discovery: The Role of Planar Patch-Clamp Array Chip Technology paper_content: All excitable cell functions rely upon ion channels that are embedded in their plasma membrane. Perturbations of ion channel structure or function result in pathologies ranging from cardiac dysfunction to neurodegenerative disorders. Consequently, to understand the functions of excitable cells and to remedy their pathophysiology, it is important to understand the ion channel functions under various experimental conditions - including exposure to novel drug targets. Glass pipette patch-clamp is the state of the art technique to monitor the intrinsic and synaptic properties of neurons. However, this technique is labor intensive and has low data throughput. Planar patch-clamp chips, integrated into automated systems, offer high throughputs but are limited to isolated cells from suspensions, thus limiting their use in modeling physiological function. These chips are therefore not most suitable for studies involving neuronal communication. Multielectrode arrays (MEAs), in contrast, have the ability to monitor network activity by measuring local field potentials from multiple extracellular sites, but specific ion channel activity is challenging to extract from these multiplexed signals. Here we describe a novel planar patch-clamp chip technology that enables the simultaneous high-resolution electrophysiological interrogation of individual neurons at multiple sites in synaptically connected neuronal networks, thereby combining the advantages of MEA and patch-clamp techniques. Each neuron can be probed through an aperture that connects to a dedicated subterranean microfluidic channel. Neurons growing in networks are aligned to the apertures by physisorbed or chemisorbed chemical cues. In this review, we describe the design and fabrication process of these chips, approaches to chemical patterning for cell placement, and present physiological data from cultured neuronal cells. --- paper_title: An automatic and quantitative on-chip cell migration assay using self-assembled monolayers combined with real-time cellular impedance sensing. paper_content: Cell migration is crucial in many physiological and pathological processes including embryonic development, immune response and cancer metastasis. Traditional methods for cell migration detection such as wound healing assay usually involve physical scraping of a cell monolayer followed by an optical observation of cell movement. However, these methods require hand-operation with low repeatability. 
Moreover, it's a qualitative observation not a quantitative measurement, which is hard to scale up to a high-throughput manner. In this article, a novel and reliable on-chip cell migration detection method integrating surface chemical modification of gold electrodes using self-assembled monolayers (SAMs) and real-time cellular impedance sensing is presented. The SAMs are used to inhibit cell adherence forming an area devoid of cells, which could effectively mimic wounds in a cell monolayer. After a DC electrical signal was applied, the SAMs were desorbed from the electrodes and cells started to migrate. The process of cell migration was monitored by real-time impedance sensing. This demonstrates the first occurrence of integrating cellular impedance sensing and wound-forming with SAMs, which makes cell migration assay being real-time, quantitative and fully automatic. We believe this method could be used for high-throughput anti-migratory drug screening and drug discovery. --- paper_title: Monitoring impedance changes associated with motility and mitosis of a single cell paper_content: We present a device enabling impedance measurements that probe the motility and mitosis of a single adherent cell in a controlled way. The micrometre-sized electrodes are designed for adhesion of an isolated cell and enhanced sensitivity to cell motion. The electrode surface is switched electro-chemically to favour cell adhesion, and single cells are attracted to the electrode using positive dielectrophoresis. Periods of linear variation in impedance with time correspond to the motility of a single cell adherent to the surface estimated at 0.6 μm h−1. In the course of our study we observed the impedance changes associated with mitosis of a single cell. Electrical measurements, carried out concomitantly with optical observations, revealed three phases, prophase, metaphase and anaphase in the time variation of the impedance during cell division. Maximal impedance was observed at metaphase with a 20% increase of the impedance. We argue that at mitosis, the changes detected were due to the charge density distribution at the cell surface. Our data demonstrate subtle electrical changes associated with cell motility and for the first time with division at the single-cell level. We speculate that this could open up new avenues for characterizing healthy and pathological cells. --- paper_title: Neurochip based on light-addressable potentiometric sensor with wavelet transform de-noising paper_content: Neurochip based on light-addressable potentiometric sensor (LAPS), whose sensing elements are excitable cells, can monitor electrophysiological properties of cultured neuron networks with cellular signals well analyzed. Here we report a kind of neurochip with rat pheochromocytoma (PC12) cells hybrid with LAPS and a method of de-noising signals based on wavelet transform. Cells were cultured on LAPS for several days to form networks, and we then used LAPS system to detect the extracellular potentials with signals de-noised according to decomposition in the time-frequency space. The signal was decomposed into various scales, and coefficients were processed based on the properties of each layer. At last, signal was reconstructed based on the new coefficients. The results show that after de-noising, baseline drift is removed and signal-to-noise ratio is increased. 
It suggests that the neurochip of PC12 cells coupled to LAPS is stable and suitable for long-term and non-invasive measurement of cell electrophysiological properties with wavelet transform, taking advantage of its time-frequency localization analysis to reduce noise. --- paper_title: Cell-based biosensors based on light-addressable potentiometric sensors for single cell monitoring paper_content: Abstract Cell-based biosensors incorporate cells as sensing elements that convert changes in immediate environment to signals for processing. This paper reports an investigation on light-addressable potentiometric sensor (LAPS) to be used as a possible cell-base biosensor that will enable us to monitor extracellular action potential of single living cell under stimulant. In order to modify chip surface and immobilize cells, we coat a layer of poly- l -ornithine and laminin on surface of LAPS chip on which rat cortical cells are grown well. When 10 μg/ml acetylcholine solution is administrated, the light pointer is focused on a single neuronal cell and the extracellular action potential of the targeted cell is recorded with cell-based biosensor based on LAPS. The results demonstrate that this kind of biosensor has potential to monitor electrophysiology of living cell non-invasive for a long term, and to evaluate drugs primarily. --- paper_title: A Real-time Electrical Impedance Based Technique to Measure Invasion of Endothelial Cell Monolayer by Cancer Cells paper_content: Metastatic dissemination of malignant cells requires degradation of basement membrane, attachment of tumor cells to vascular endothelium, retraction of endothelial junctions and finally invasion and migration of tumor cells through the endothelial layer to enter the bloodstream as a means of transport to distant sites in the host1-3. Once in the circulatory system, cancer cells adhere to capillary walls and extravasate to the surrounding tissue to form metastatic tumors4,5. The various components of tumor cell-endothelial cell interaction can be replicated in vitro by challenging a monolayer of human umbilical vein endothelial cells (HUVEC) with cancer cells. Studies performed with electron and phase-contrast microscopy suggest that the in vitro sequence of events fairly represent the in vivo metastatic process6. Here, we describe an electrical-impedance based technique that monitors and quantifies in real-time the invasion of endothelial cells by malignant tumor cells. ::: ::: Giaever and Keese first described a technique for measuring fluctuations in impedance when a population of cells grow on the surface of electrodes7,8. The xCELLigence instrument, manufactured by Roche, utilizes a similar technique to measure changes in electrical impedance as cells attach and spread in a culture dish covered with a gold microelectrode array that covers approximately 80% of the area on the bottom of a well. As cells attach and spread on the electrode surface, it leads to an increase in electrical impedance9-12. The impedance is displayed as a dimensionless parameter termed cell-index, which is directly proportional to the total area of tissue-culture well that is covered by cells. Hence, the cell-index can be used to monitor cell adhesion, spreading, morphology and cell density. ::: ::: The invasion assay described in this article is based on changes in electrical impedance at the electrode/cell interphase, as a population of malignant cells invade through a HUVEC monolayer (Figure 1). 
The disruption of endothelial junctions, retraction of endothelial monolayer and replacement by tumor cells lead to large changes in impedance. These changes directly correlate with the invasive capacity of tumor cells, i.e., invasion by highly aggressive cells lead to large changes in cell impedance and vice versa. This technique provides a two-fold advantage over existing methods of measuring invasion, such as boyden chamber and matrigel assays: 1) the endothelial cell-tumor cell interaction more closely mimics the in vivo process, and 2) the data is obtained in real-time and is more easily quantifiable, as opposed to end-point analysis for other methods. --- paper_title: On-chip epithelial barrier function assays using electrical impedance spectroscopy. paper_content: A bio-impedance chip has been developed for real-time monitoring of the kinetics of epithelial cell monolayers in vitro. The human bronchial epithelial cell line (16-HBE 14o-) was cultured in Transwells creating a sustainable and interactive model of the airway epithelium. Conducting polymer polypyrrole (PPy) doped with polystyrene sulfonate (PSS) was electrochemically deposited onto the surface of gold-plated electrodes to reduce the influence of the electrical double layer on the impedance measurements. Finite element and equivalent circuit models were used to model and determine the electrical properties of the epithelial cell monolayer from the impedance spectra. Electrically tight, confluent monolayers of 16 HBE 14o- cells were treated with increasing concentrations of either Triton X-100 to solubilize cell membranes or ethylene glycol-bis(2-aminoethyl-ether)-N,N,N'N'-tetraacetic acid (EGTA) to disrupt cell-cell adhesion. Experimental impedance data showed that disruption of epithelial barrier function in response to Triton X-100 and EGTA can be successfully measured by the bio-impedance chip. The results were consistent with the conventional hand-held trans-epithelial electrical resistance measurements. Immunofluorescent staining of the ZO-1 tight junction protein in the untreated and treated 16HBEs was performed to verify the disruption of the tight junctions by EGTA. --- paper_title: Label-free electrical discrimination of cells at normal, apoptotic and necrotic status with a microfluidic device. paper_content: As a label-free alternative of conventional flow cytometry, chip-based impedance measurement for single cell analysis has attracted increasing attentions in recent years. In this paper, we designed a T-shape microchannel and fabricated a pair of gold electrodes located horizontally on each side of the microchannel using a transfer printing method. Instant electric signals of flowing-through single cells were then detected by connecting the electrodes to a Keithley resistance and capacitance measurement system. Experimental results based on the simultaneous measurement of resistance and capacitance demonstrated that HL-60 and SMMC-7721 cells could be differentiated effectively. Moreover, SMMC-7721 cells at normal, apoptotic and necrotic status can also be discriminated in the flow. We discussed the possible mechanism for the discrimination of cell size and cell status by electrical analysis, and it is believed that the improvement of detection with our design results from more uniform distribution of the electric field. This microfluidic design may potentially become a promising approach for the label-free cell sorting and screening. 
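The epithelial-barrier reference above extracts cell-layer properties by fitting measured impedance spectra to an equivalent circuit. The sketch below computes the spectrum of a deliberately simplified circuit — solution resistance in series with a parallel barrier-resistance/cell-capacitance branch — purely to illustrate the idea; the element values are hypothetical and the cited work uses a more detailed finite-element/equivalent-circuit model.

```python
import numpy as np

def barrier_impedance(freq_hz, r_solution=100.0, r_barrier=5e3, c_cell=1e-6):
    """Complex impedance (ohms) of R_sol in series with (R_barrier || C_cell)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z_parallel = 1.0 / (1.0 / r_barrier + 1j * omega * c_cell)
    return r_solution + z_parallel

freqs = np.logspace(1, 6, 200)            # 10 Hz .. 1 MHz sweep
z = barrier_impedance(freqs)
magnitude, phase = np.abs(z), np.angle(z, deg=True)
# A drop in low-frequency |Z| over time would indicate loss of barrier integrity,
# e.g. after EGTA disrupts the tight junctions.
```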
--- paper_title: Capacitive microsystems for biological sensing paper_content: The growing interest in personalized medicine leads to the need for fast, cheap and portable devices that reveal the genetic profile easily and accurately. To this direction, several ideas to avoid the classical methods of diagnosis and treatment through miniaturized and label-free systems have emerged. Capacitive biosensors address these requirements and thus have the perspective to be used in advanced diagnostic devices that promise early detection of potential fatal conditions. The operation principles, as well as the design and fabrication of several capacitive microsystems for the detection of biomolecular interactions are presented in this review. These systems are micro-membranes based on surface stress changes, interdigitated micro-electrodes and electrode–solution interfaces. Their applications extend to DNA hybridization, protein–ligand binding, antigen–antibody binding, etc. Finally, the limitations and prospects of capacitive microsystems in biological applications are discussed. --- paper_title: Microfluidic impedance‐based flow cytometry paper_content: Microfabricated flow cytometers can detect, count, and analyze cells or particles using microfluidics and electronics to give impedance-based characterization. Such systems are being developed to provide simple, low-cost, label-free, and portable solutions for cell analysis.
Recent work using microfabricated systems has demonstrated the capability to analyze micro-organisms, erythrocytes, leukocytes, and animal and human cell lines. Multifrequency impedance measurements can give multiparametric, high-content data that can be used to distinguish cell types. New combinations of microfluidic sample handling design and microscale flow phenomena have been used to focus and position cells within the channel for improved sensitivity. Robust designs will enable focusing at high flowrates while reducing requirements for control over multiple sample and sheath flows. Although microfluidic impedance-based flow cytometers have not yet or may never reach the extremely high throughput of conventional flow cytometers, the advantages of portability, simplicity, and ability to analyze single cells in small populations are, nevertheless, where chip-based cytometry can make a large impact. © 2010 International Society for Advancement of Cytometry ---
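In the impedance-based flow cytometry described above, each cell transit appears as a short pulse in the measured impedance signal. A minimal event-counting sketch is shown below using SciPy peak detection; the sampling rate, threshold rule, and minimum pulse spacing are placeholder values, not parameters of the cited systems.

```python
import numpy as np
from scipy.signal import find_peaks

def count_transit_events(impedance_trace, fs_hz=100_000, min_height=None,
                         min_separation_s=1e-3):
    """Count cell-transit pulses in a 1-D impedance-magnitude trace."""
    trace = np.asarray(impedance_trace, dtype=float)
    pulses = trace - np.median(trace)          # remove the baseline level
    if min_height is None:
        # Simple adaptive threshold: 5x a robust (MAD-based) noise estimate.
        min_height = 5.0 * np.median(np.abs(pulses - np.median(pulses))) / 0.6745
    peaks, props = find_peaks(pulses, height=min_height,
                              distance=int(min_separation_s * fs_hz))
    # Pulse heights relate to particle size; multi-frequency versions add
    # further discriminating parameters per event.
    return len(peaks), props["peak_heights"]
```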
Title: Overview of Micro- and Nano-Technology Tools for Stem Cell Applications: Micropatterned and Microelectronic Devices Section 1: Introduction Description 1: Introduce the exponential growth and advances in biosensor science, specifically focusing on silicon micromachining, genomics, and cell culture technology in the context of biochips, DNA microarrays, and cell-based chips. Section 2: Sensing and Transducer Element: The Cell; Variables and Constants Description 2: Discuss the dynamic nature of cell interactions, the importance of the cell microenvironment, and the technological advancements required to mimic in vivo conditions. Section 3: Genechip Description 3: Provide an overview of the development and applications of cell microarrays, emphasizing their role in high throughput screening, gene function discovery, and stem cell differentiation. Section 4: Micropattering: Microengineering Meets Cell Biology Description 4: Explain the use of microfabrication techniques to create well-defined surfaces and patterns for cell culture, with a focus on how these techniques can mimic natural cell niches and influence stem cell behavior. Section 5: 3D Cell Culture and Tissue Organization Description 5: Explore recent advances in 3D cell culture models that better simulate in vivo environments and their application in tissue differentiation and stem cell research. Section 6: Integration of Microelectronics and Cells Description 6: Detail the integration of cells with microelectronic devices, highlighting various types of sensors such as MEA, EICS, FET, LAPS, SPR, and QCM, and their applications in studying stem cell functions and drug discovery. Section 7: Microelectrodes Array: MEA Description 7: Discuss the structure, benefits, and applications of MEAs in cell pattering, drug screening, and neuronal and cardiac cell studies. Section 8: Electric Cell-Substrate Impedance Sensor: EICS Description 8: Examine the use of EICS sensors to study the bioelectrical properties of cells and monitor stem cell differentiation, adhesion, and response to stimuli. Section 9: Field-Effect Transistor: FET Description 9: Describe the development, structure, and applications of FETs in measuring extracellular potentials, pH levels, and cell signaling in stem cell research. Section 10: Light Addressable Potentiometric Sensor: LAPS Description 10: Present the working principle and applications of LAPS in monitoring cell metabolism, extracellular potentials, and stem cell differentiation. Section 11: Surface Plasmon Resonance Chip (SPR) and Quartz Crystal Microbalance Chip (QCM) Description 11: Discuss the use of SPR and QCM techniques in analyzing cell attachment, proliferation, and cell-substrate interaction, focusing on stem cell studies. Section 12: Integration of Different Sensors and Microfluidic Approaches: Future Perspectives on Single Cell Analysis Description 12: Explore the integration of multiple sensors and microfluidic platforms for comprehensive cell analysis, emphasizing potential advancements in single-cell analysis and drug discovery. Section 13: Conclusions/Outlook Description 13: Summarize the potential of micro- and nano-technology tools in revolutionizing stem cell research and personalized medicine, highlighting future challenges and opportunities in the field.
Survey on Impact of Software Metrics on Software Quality
9
--- paper_title: Software metrics for the Boeing 777: a case study paper_content: This article describes rapid, midstream introduction of elementary software metrics into a large engineering-development programme. It is presented as a case study for use by organizations considering deployment of software metrics. The 777 airplane, under development by The Boeing Company, will contain over two million lines of newly-developed source code. The 777 marks the first time The Boeing Company has applied software metrics uniformly across a new commercial-airplane programme. This was done to ensure simple, consistent communication of information pertinent to software schedules among Boeing, its software suppliers, and its customers-at all engineering and management levels. In the short term, uniform application of software metrics has resulted in improved visibility and reduced risk for 777 on-board software. Looking to the longer term, the metric information collected provides a basis for analysis of internal and supplier processes, and it will serve as a baseline for future commercial airplane programmes of The Boeing Company. ---
Title: Survey on Impact of Software Metrics on Software Quality Section 1: INTRODUCTION Description 1: Introduce the importance of software metrics in the software life cycle, types of metrics, their definition, and optimal characteristics. Section 2: Classification of Software Metrics Description 2: Explain the three types of software metrics: process metrics, project metrics, and product metrics. Section 3: Mathematical Analysis Description 3: Discuss the mathematical aspect of metrics, including the definition and properties of metrics in mathematical terms and predictive models. Section 4: IMPORTANCE OF SOFTWARE QUALITY Description 4: Explain the significance of software quality, different perspectives on assessing quality, and the challenges in defining and ensuring software quality. Section 5: Views on Software Quality Description 5: Detail the various perspectives on software quality, including user view, manufacturing view, product view, and value-based view. Section 6: CASE STUDY ON SOFTWARE QUALITY Description 6: Analyze the Boeing 777 project as a case study to illustrate the implementation and importance of software quality management and metrics. Section 7: COMPARISON OF SOFTWARE METRICS-STRENGTHS AND WEAKNESSES Description 7: Compare different software metrics, including source code metrics, function point metrics, and object-oriented metrics, and discuss their strengths and weaknesses. Section 8: FUTURE SCOPE Description 8: Discuss future research directions, potential improvements in metrics, and the anticipated increase in the importance of software metrics. Section 9: SUMMARY AND CONCLUSION Description 9: Summarize the key points discussed in the paper and conclude with the current state and future potential of software metrics.
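Section 3 of this outline refers to the definition and properties of metrics in the mathematical sense. For reference, the standard axioms for a distance metric are reproduced below; this is general mathematics rather than material taken from the cited case study.

```latex
% A function d : X \times X \to \mathbb{R} is a metric on a set X if, for all x, y, z \in X:
\begin{align*}
  d(x, y) &\ge 0                 && \text{(non-negativity)}\\
  d(x, y) &= 0 \iff x = y        && \text{(identity of indiscernibles)}\\
  d(x, y) &= d(y, x)             && \text{(symmetry)}\\
  d(x, z) &\le d(x, y) + d(y, z) && \text{(triangle inequality)}
\end{align*}
```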
A Review of PVT Compensation Circuits for Advanced CMOS Technologies
7
--- paper_title: A PVT-insensitive CMOS output driver with constant slew rate paper_content: In this paper, we propose an output driver which has a constant slew rate over PVT variation. To keep the output driver's slew rate constant, the rising and falling times of the output driver's pre-driving node are kept constant by a pre-driver that has constant pull-up and pull-down path resistance. To keep the output voltage swing level constant, an additional replica bias circuit is used with the CMOS push-pull output driver. HSPICE simulations show that the slew rate is 3.21 V/ns in the fast condition and 3.12 V/ns in the slow condition with a 1.4 kΩ pre-driver path resistance. The circuit was implemented in a 0.18 μm CMOS process. --- paper_title: Fast frequency acquisition all-digital PLL using PVT calibration paper_content: Fast frequency acquisition is crucial for phase-locked loops (PLLs) used in portable devices, as on-chip clocks are frequently scaled down or up in order to manage power consumption. This paper describes a new frequency acquisition method that is effective in all-digital PLLs (ADPLLs). To achieve fast frequency acquisition, the codeword of the digitally controlled oscillator (DCO) is predicted by measuring the variations of process, supply voltage and temperature (PVT). A PVT sensor implemented with a ring oscillator is employed to monitor the variations. As the sensor frequency at the current operating condition is directly related to the PVT variations, the sensor frequency is taken into account to compensate such variations in predicting the DCO codeword. The proposed method enables one-cycle frequency acquisition, and the frequency error is less than 1.5%. The proposed ADPLL implemented in a 0.18 μm CMOS process operates from 150 MHz to 500 MHz and occupies 0.075 mm². --- paper_title: On-chip PVT compensation techniques for low-voltage CMOS digital LSIs paper_content: An on-chip process, supply voltage, and temperature (PVT) compensation technique for a low-voltage CMOS digital circuit is proposed. Because the degradation of circuit performance originates from the variation of the saturation current, a compensation technique that uses a reference current that is independent of PVT variations was developed. The operations of the circuit were confirmed by SPICE simulation with a set of 0.35-µm standard CMOS parameters. Moreover, Monte Carlo simulations assuming process spread and device mismatch in all MOSFETs showed the effectiveness of the proposed technique and achieved a performance improvement of 74%. The circuit is useful for on-chip compensation to mitigate the degradation of circuit performance with PVT variation in low-voltage digital circuits. ---
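The ADPLL reference above predicts the DCO control codeword from the frequency of an on-chip ring-oscillator PVT sensor so that frequency acquisition completes in one cycle. The snippet below is only a behavioral model of that idea — mapping a measured sensor frequency onto a starting codeword through a calibration table by interpolation; the table values and the interpolation scheme are hypothetical and are not taken from the paper's circuit.

```python
import numpy as np

# Hypothetical calibration table: sensor ring-oscillator frequency (MHz) measured at
# known PVT corners versus the DCO codeword that produced the target output frequency.
SENSOR_FREQ_MHZ = np.array([180.0, 200.0, 220.0, 240.0, 260.0])   # slow ... fast corner
DCO_CODEWORD    = np.array([620,   560,   505,   455,   410])     # codewords at those corners

def predict_dco_codeword(measured_sensor_freq_mhz):
    """Estimate an initial DCO codeword for the current PVT condition.

    The sensor frequency tracks PVT variation, so interpolating in the
    calibration table gives a starting codeword close to the final lock
    value, leaving only a small residual for the digital loop to correct.
    """
    code = np.interp(measured_sensor_freq_mhz, SENSOR_FREQ_MHZ, DCO_CODEWORD)
    return int(round(code))

# Example: a die whose PVT sensor runs at 231 MHz starts near codeword ~478.
initial_code = predict_dco_codeword(231.0)
```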
Title: A Review of PVT Compensation Circuits for Advanced CMOS Technologies Section 1: Introduction Description 1: This section introduces the importance of PVT compensation circuits in maintaining high performance and signal integrity in ICs, especially for high-speed interfaces. Section 2: Scope of the Problem Description 2: This section describes the challenges posed by PVT variations and the necessity of compensation circuits for impedance matching and reducing signal reflection. Section 3: Analog Compensation: General Background Description 3: This section provides an overview of different implementations of analog PVT compensation circuits and their operational principles. Section 4: Digital Compensation: General Background Description 4: This section outlines the digital compensation techniques, highlighting their advantages over analog methods in terms of noise sensitivity and implementation ease. Section 5: Digital Compensation: Implementation in CMOS065 nm Bulk Technology Description 5: This section details the practical implementations of digital PVT compensation techniques for DDR2 and DDR3 I/O circuits in CMOS065 nm technology. Section 6: Analog Compensation: Implementation in CMOS045nm SOI Technology Description 6: This section explains the analog compensation method for maintaining constant output driver transconductance in CMOS045nm SOI technology. Section 7: Conclusions Description 7: This section summarizes the advantages and disadvantages of the discussed PVT compensation circuits, comparing analog and digital approaches and their suitability for different applications.
Automatic speech recognition for under-resourced languages: A survey
6
--- paper_title: Automatic Speech Recognition for Under-Resourced Languages: Application to Vietnamese Language paper_content: This paper presents our work in automatic speech recognition (ASR) in the context of under-resourced languages with application to Vietnamese. Different techniques for bootstrapping acoustic models are presented. First, we present the use of acoustic-phonetic unit distances and the potential of crosslingual acoustic modeling for under-resourced languages. Experimental results on Vietnamese showed that with only a few hours of target language speech data, crosslingual context independent modeling worked better than crosslingual context dependent modeling. However, it was outperformed by the latter one, when more speech data were available. We concluded, therefore, that in both cases, crosslingual systems are better than monolingual baseline systems. The proposal of grapheme-based acoustic modeling, which avoids building a phonetic dictionary, is also investigated in our work. Finally, since the use of sub-word units (morphemes, syllables, characters, etc.) can reduce the high out-of-vocabulary rate and improve the lack of text resources in statistical language modeling for under-resourced languages, we propose several methods to decompose, normalize and combine word and sub-word lattices generated from different ASR systems. The proposed lattice combination scheme results in a relative syllable error rate reduction of 6.6% over the sentence MAP baseline method for a Vietnamese ASR task. --- paper_title: Language-independent and language-adaptive acoustic modeling for speech recognition paper_content: Abstract With the distribution of speech technology products all over the world, the portability to new target languages becomes a practical concern. As a consequence our research focuses on the question of how to port large vocabulary continuous speech recognition (LVCSR) systems in a fast and efficient way. More specifically we want to estimate acoustic models for a new target language using speech data from varied source languages, but only limited data from the target language. For this purpose, we introduce different methods for multilingual acoustic model combination and a polyphone decision tree specialization procedure. Recognition results using language-dependent, independent and language-adaptive acoustic models are presented and discussed in the framework of our GlobalPhone project which investigates LVCSR systems in 15 languages. --- paper_title: Automatic Error Recovery for Pronunciation Dictionaries. paper_content: In this paper, we present our latest investigations on pronunciation modeling and its impact on ASR. We propose completely automatic methods to detect, remove, and substitute inconsistent or flawed entries in pronunciation dictionaries. The experiments were conducted on different tasks, namely (1) word-pronunciation pairs from the Czech, English, French, German, Polish, and Spanish Wiktionary [1], a multilingual wiki-based open content dictionary, (2) our GlobalPhone Hausa pronunciation dictionary [2], and (3) pronunciations to complement our Mandarin-English SEAME code-switch dictionary [3]. In the final results, we fairly observed on average an improvement of 2.0% relative in terms of word error rate and even 27.3% for the case of English Wiktionary word-pronunciation pairs. 
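One idea in the Vietnamese reference above is grapheme-based acoustic modeling, which sidesteps building a phonetic dictionary by letting each word's "pronunciation" be its letter sequence. A minimal sketch of generating such a lexicon is shown below; the normalization choices (lower-casing, treating every Unicode character as one grapheme unit) are simplifying assumptions rather than the paper's exact recipe.

```python
import unicodedata

def grapheme_lexicon(word_list):
    """Map each word to a space-separated grapheme sequence usable as a 'pronunciation'."""
    lexicon = {}
    for word in word_list:
        # Normalize so that pre-composed and decomposed accented characters match.
        norm = unicodedata.normalize("NFC", word.strip().lower())
        graphemes = [ch for ch in norm if not ch.isspace()]
        lexicon[norm] = " ".join(graphemes)
    return lexicon

# Example with Vietnamese syllables:
print(grapheme_lexicon(["tiếng", "Việt"]))
# {'tiếng': 't i ế n g', 'việt': 'v i ệ t'}
```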
--- paper_title: Unsupervised segmentation of words into morphemes – Morpho Challenge 2005: Application to automatic speech recognition paper_content: Within the EU Network of Excellence PASCAL, a challenge was organized to design a statistical machine learning algorithm that segments words into the smallest meaning-bearing units of language, morphemes. Ideally, these are basic vocabulary units suitable for different tasks, such as speech and text understanding, machine translation, information retrieval, and statistical language modeling. Twelve research groups participated in the challenge and had submitted segmentation results obtained by their algorithms. In this paper, we evaluate the application of these segmentation algorithms to large vocabulary speech recognition using statistical n-gram language models based on the proposed word segments instead of entire words. Experiments were done for two agglutinative and morphologically rich languages: Finnish and Turkish. We also investigate combining various segmentations to improve the performance of the recognizer. Index Terms: speech recognition, language modelling, morphemes, unsupervised learning. --- paper_title: Multilingual large vocabulary speech recognition: the European SQALE project paper_content: Abstract This paper describes the S qale project in which the ARPA large vocabulary evaluation paradigm was adapted to meet the needs of European multilingual speech recognition development. It involved establishing a framework for sharing training and test materials, defining common protocols for training and testing systems, developing systems, running an evaluation and analysing the results. The specifically multilingual issues addressed included the impact of the language on corpora and test set design, transcription issues, evaluation metrics, recognition system design, cross-system and cross-language performance, and results analysis. The project started in December 1993 and finished in September 1995. The paper describes the evaluation framework and the results obtained. The overall conclusions of the project were that the same general approach to recognition system design is applicable to all the languages studied although there were some language specific problems to solve. It was found that the evaluation paradigm used within ARPA could be used within the European context with little difficulty and the consequent sharing amongst the sites of training and test materials and language-specific expertise was highly beneficial. --- paper_title: A multilingual phoneme and model set: toward a universal base for automatic speech recognition paper_content: The amount of time, effort and expense that is required to incorporate a new language into an ASR system is extensive. It also is usually not possible to provide more than one language for speech recognition per system. The Core Technology Group of Conversant Voice Information Systems, Bell Laboratories has developed a multilingual phoneme and model set (MPMS) that is being used as a base for all telephone-based ASR continuous speech systems being developed in new languages. With a unified set such as the MPMS, it is possible that multiple languages could be available on one system. While the idea of a multilingual phoneme model set is not new, there has been no work that has used a large, telephone-based database consisting of continuous speech samples in more than two languages, obtains commercially acceptable word recognition rates, and that is ready to be marketed. 
Our system's phoneme set represents six different languages; we have built models based on three languages and tested them using two other languages (for which there were no models); and we have achieved very acceptable word recognition rates of better than 92% (field accuracy). These languages can be incorporated into an existing speech recognition system, available for customers. --- paper_title: Language-independent and language-adaptive acoustic modeling for speech recognition paper_content: Abstract With the distribution of speech technology products all over the world, the portability to new target languages becomes a practical concern. As a consequence our research focuses on the question of how to port large vocabulary continuous speech recognition (LVCSR) systems in a fast and efficient way. More specifically we want to estimate acoustic models for a new target language using speech data from varied source languages, but only limited data from the target language. For this purpose, we introduce different methods for multilingual acoustic model combination and a polyphone decision tree specialization procedure. Recognition results using language-dependent, independent and language-adaptive acoustic models are presented and discussed in the framework of our GlobalPhone project which investigates LVCSR systems in 15 languages. --- paper_title: Dynamic Bayesian network based speech recognition with pitch and energy as auxiliary variables paper_content: Pitch and energy are two fundamental features describing speech, having importance in human speech recognition. However, when incorporated as features in automatic speech recognition (ASR), they usually result in a significant degradation on recognition performance due to the noise inherent in estimating or modeling them. We show experimentally how this can be corrected by either conditioning the emission distributions upon these features or by marginalizing out these features in recognition. Since to do this is not obvious with standard hidden Markov models (HMMs), this work has been performed in the framework of dynamic Bayesian networks (DBNs), resulting in more flexibility in defining the topology of the emission distributions and in specifying whether variables should be marginalized out. --- paper_title: Cross-lingual Portability of MLP-Based Tandem Features – A Case Study for English and Hungarian paper_content: One promising approach for building ASR systems for lessresourced languages is cross-lingual adaptation. Tandem ASR is particularly well suited to such adaptation, as it includes two cascaded modelling steps: feature extraction using multi-layer perceptrons (MLPs), followed by modelling using a standard HMM. The language-specific tuning can be performed by adjusting the HMM only, leaving the MLP untouched. Here we examine the portability of feature extractor MLPs between an Indo-European (English) and a Finno-Ugric (Hungarian) language. We present experiments which use both conventional phone-posterior and articulatory feature (AF) detector MLPs, both trained on a much larger quantity of (English) data than the monolingual (Hungarian) system. We find that the cross-lingual configurations achieve similar performance to the monolingual system, and that, interestingly, the AF detectors lead to slightly worse performance, despite the expectation that they should be more language-independent than phone-based MLPs. 
However, the cross-lingual system outperforms all other configurations when the English phone MLP is adapted on the Hungarian data. --- paper_title: Speech recognition system based improved DTW algorithm paper_content: Introduced the design and implementation of isolated word speech recognition system with application of the improved dynamic time warping (DTW) algorithm. Experimental results show that the use of the improved DTW algorithm effectively reduces the amount of data to be processed and the recognition time, improved the system speed. The algorithm showed more obvious advantage as the number of speech signal to be recognized increased. --- paper_title: Feature engineering in Context-Dependent Deep Neural Networks for conversational speech transcription paper_content: We investigate the potential of Context-Dependent Deep-Neural-Network HMMs, or CD-DNN-HMMs, from a feature-engineering perspective. Recently, we had shown that for speaker-independent transcription of phone calls (NIST RT03S Fisher data), CD-DNN-HMMs reduced the word error rate by as much as one third—from 27.4%, obtained by discriminatively trained Gaussian-mixture HMMs with HLDA features, to 18.5%—using 300+ hours of training data (Switchboard), 9000+ tied triphone states, and up to 9 hidden network layers. --- paper_title: Multilingual large vocabulary speech recognition: the European SQALE project paper_content: Abstract This paper describes the S qale project in which the ARPA large vocabulary evaluation paradigm was adapted to meet the needs of European multilingual speech recognition development. It involved establishing a framework for sharing training and test materials, defining common protocols for training and testing systems, developing systems, running an evaluation and analysing the results. The specifically multilingual issues addressed included the impact of the language on corpora and test set design, transcription issues, evaluation metrics, recognition system design, cross-system and cross-language performance, and results analysis. The project started in December 1993 and finished in September 1995. The paper describes the evaluation framework and the results obtained. The overall conclusions of the project were that the same general approach to recognition system design is applicable to all the languages studied although there were some language specific problems to solve. It was found that the evaluation paradigm used within ARPA could be used within the European context with little difficulty and the consequent sharing amongst the sites of training and test materials and language-specific expertise was highly beneficial. --- paper_title: Recurrent neural network based language model paper_content: A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. 
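For readers unfamiliar with the dynamic time warping (DTW) matching used in the isolated-word recognizer cited above, a minimal sketch follows. It is the classic cubic-time recursion, not the paper's improved algorithm; the absolute-difference local cost and the one-dimensional toy "features" are assumptions made only for illustration.

    def dtw_distance(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
        """Cost of the best monotonic alignment between two feature
        sequences (scalars here; real systems use frame vectors)."""
        n, m = len(seq_a), len(seq_b)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                local = dist(seq_a[i - 1], seq_b[j - 1])
                cost[i][j] = local + min(cost[i - 1][j],       # stretch seq_b
                                         cost[i][j - 1],       # stretch seq_a
                                         cost[i - 1][j - 1])   # advance both
        return cost[n][m]

    # Isolated-word decision: pick the stored template with the lowest DTW cost.
    templates = {"yes": [1.0, 3.0, 2.0], "no": [2.0, 2.0, 0.5]}
    utterance = [1.1, 2.9, 2.8, 2.1]
    print(min(templates, key=lambda w: dtw_distance(utterance, templates[w])))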
Index Terms: language modeling, recurrent neural networks, speech recognition --- paper_title: Woefzela - an open-source platform for ASR data collection in the developing world paper_content: This project was made possible through the support of the South ::: African National Centre for Human Language Technology, an ::: initiative of the South African Department of Arts and Culture. ::: The authors would also like to thank Pedro Moreno, Thad ::: Hughes and Ravindran Rajakumar of Google Research for valuable ::: inputs at various stages of this work. --- paper_title: SWITCHBOARD: telephone speech corpus for research and development paper_content: SWITCHBOARD is a large multispeaker corpus of conversational speech and text which should be of interest to researchers in speaker authentication and large vocabulary speech recognition. About 2500 conversations by 500 speakers from around the US were collected automatically over T1 lines at Texas Instruments. Designed for training and testing of a variety of speech processing algorithms, especially in speaker verification, it has over an 1 h of speech from each of 50 speakers, and several minutes each from hundreds of others. A time-aligned word for word transcription accompanies each recording. > --- paper_title: Toward better crowdsourced transcription: Transcription of a year of the Let's Go Bus Information System data paper_content: Transcription is typically a long and expensive process. In the last year, crowdsourcing through Amazon Mechanical Turk (MTurk) has emerged as a way to transcribe large amounts of speech. This paper presents a two-stage approach for the use of MTurk to transcribe one year of Let's Go Bus Information System data, corresponding to 156.74 hours (257,658 short utterances). This data was made available for the Spoken Dialog Challenge 2010 [1]1. While others have used a one stage approach, asking workers to label, for example, words and noises in the same pass, the present approach is closer to what expert transcribers do, dividing one complicated task into several less complicated ones with the goal of obtaining a higher quality transcript. The two stage approach shows better results in terms of agreement with experts and the quality of acoustic modeling. When “gold-standard” quality control is used, the quality of the transcripts comes close to NIST published expert agreement, although the cost doubles. --- paper_title: Cross-lingual portability of Chinese and english neural network features for French and German LVCSR paper_content: This paper investigates neural network (NN) based cross-lingual probabilistic features. Earlier work reports that intra-lingual features consistently outperform the corresponding cross-lingual features. We show that this may not generalize. Depending on the complexity of the NN features, cross-lingual features reduce the resources used for training —the NN has to be trained on one language only— without any loss in performance w.r.t. word error rate (WER). To further investigate this inconsistency concerning intra- vs. cross-lingual neural network features, we analyze the performance of these features w.r.t. the degree of kinship between training and testing language, and the amount of training data used. Whenever the same amount of data is used for NN training, a close relationship between training and testing language is required to achieve similar results. 
By increasing the training data the relationship becomes less, as well as changing the topology of the NN to the bottle neck structure. Moreover, cross-lingual features trained on English or Chinese improve the best intra-lingual system for German up to 2% relative in WER and up to 3% relative for French and achieve the same improvement as for discriminative training. Moreover, we gain again up to 8% relative in WER by combining intra- and cross-lingual systems. --- paper_title: Cross-lingual Portability of MLP-Based Tandem Features – A Case Study for English and Hungarian paper_content: One promising approach for building ASR systems for lessresourced languages is cross-lingual adaptation. Tandem ASR is particularly well suited to such adaptation, as it includes two cascaded modelling steps: feature extraction using multi-layer perceptrons (MLPs), followed by modelling using a standard HMM. The language-specific tuning can be performed by adjusting the HMM only, leaving the MLP untouched. Here we examine the portability of feature extractor MLPs between an Indo-European (English) and a Finno-Ugric (Hungarian) language. We present experiments which use both conventional phone-posterior and articulatory feature (AF) detector MLPs, both trained on a much larger quantity of (English) data than the monolingual (Hungarian) system. We find that the cross-lingual configurations achieve similar performance to the monolingual system, and that, interestingly, the AF detectors lead to slightly worse performance, despite the expectation that they should be more language-independent than phone-based MLPs. However, the cross-lingual system outperforms all other configurations when the English phone MLP is adapted on the Hungarian data. --- paper_title: The language-independent bottleneck features paper_content: In this paper we present novel language-independent bottleneck (BN) feature extraction framework. In our experiments we have used Multilingual Artificial Neural Network (ANN), where each language is modelled by separate output layer, while all the hidden layers jointly model the variability of all the source languages. The key idea is that the entire ANN is trained on all the languages simultaneously, thus the BN-features are not biased towards any of the languages. Exactly for this reason, the final BN-features are considered as language independent. In the experiments with GlobalPhone database, we show that Multilingual BN-features consistently outperform Monolingual BN-features. Also, cross-lingual generalization is evaluated, where we train on 5 source languages and test on 3 other languages. The results show that the ANN can produce very good BN-features even for unseen languages, in some cases even better than if we trained the ANN on the target language only. --- paper_title: Tandem connectionist feature extraction for conventional HMM systems paper_content: Hidden Markov model speech recognition systems typically use Gaussian mixture models to estimate the distributions of decorrelated acoustic feature vectors that correspond to individual subword units. By contrast, hybrid connectionist-HMM systems use discriminatively-trained neural networks to estimate the probability distribution among subword units given the acoustic observations. In this work we show a large improvement in word recognition performance by combining neural-net discriminative feature processing with Gaussian-mixture distribution modeling. 
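The multilingual bottleneck idea in the entries above (hidden layers shared across all source languages, one output layer per language) can be sketched as a plain NumPy forward pass. The layer sizes, random weights and function names below are illustrative assumptions, and training (backpropagation on the pooled multilingual data) is omitted; the point is only that the bottleneck activations are language-independent while the softmax heads are language-specific.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(n_in, n_out):
        return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

    n_feat, n_hidden, n_bottleneck = 39, 256, 40
    W1, b1 = layer(n_feat, n_hidden)              # shared hidden layer
    Wb, bb = layer(n_hidden, n_bottleneck)        # shared bottleneck layer
    heads = {lang: layer(n_bottleneck, n_phones)  # language-specific output layers
             for lang, n_phones in {"en": 45, "hu": 52, "vi": 48}.items()}

    def bottleneck_features(x):
        """Language-independent features taken from the bottleneck layer."""
        h = np.tanh(x @ W1 + b1)
        return np.tanh(h @ Wb + bb)

    def phone_posteriors(x, lang):
        """Language-specific posteriors from the output head of `lang`."""
        W, b = heads[lang]
        z = bottleneck_features(x) @ W + b
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    frames = rng.standard_normal((10, n_feat))    # 10 toy acoustic frames
    print(bottleneck_features(frames).shape)      # (10, 40) -> tandem features
    print(phone_posteriors(frames, "hu").shape)   # (10, 52)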
By training the network to generate the subword probability posteriors, then using transformations of these estimates as the base features for a conventionally-trained Gaussian-mixture based system, we achieve relative error rate reductions of 35% or more on the multicondition Aurora noisy continuous digits task. --- paper_title: Integrating Thai grapheme based acoustic models into the ML-MIX framework - For language independent and cross-language ASR paper_content: Grapheme based speech recognition is a powerful tool for rapidly creating automatic speech recognition (ASR) systems in new languages. For purposes of language independent or cross language speech recognition it is necessary to identify similar models in the different languages involved. For phoneme based multilingual ASR systems this is usually achieved with the help of a language independent phoneme set and the corresponding phoneme identities in the different languages. For grapheme based multilingual ASR systems this is only possible when there is an overlap in graphemes of the different scripts involved. Often this is not the case, as for example for Thai which graphemes does not have any overlap with the graphemes of the languages that we used for multilingual grapheme based ASR in the past. In order to be able to apply our multilingual grapheme model to Thai, and in order to incorporate Thai into our multilingual recognizer, we examined and evaluated a number of data driven distance measures between the multilingual grapheme models. For our purposes distance measures that rely directly on the parameters of the models, such as the Kullback-Leibler and the Bhatthacharya distance yield the best performance. --- paper_title: Data-driven posterior features for low resource speech recognition applications paper_content: In low resource settings, with very few hours of training data, state-of-the-art speech recognition systems that require large amounts of task specific training data perform very poorly. We address this issue by building data-driven speech recognition front-ends on significant amounts of task independent data from different languages and genres collected in similar acoustic conditions as the data in the low resource scenario. We show that features derived from these trained front-ends perform significantly better and can alleviate the effect of reduced task specific training data in low resource settings. The proposed features provide a absolute improvement of about 12% (18% relative) in an low-resource LVCSR setting with only one hour of training data. We also demonstrate the usefulness of these features for zero-resource speech applications like spoken term discovery, which operate without any transcribed speech to train systems. The proposed features provide significant gains over conventional acoustic features on various information retrieval metrics for this task. --- paper_title: Automatic Speech Recognition for Under-Resourced Languages: Application to Vietnamese Language paper_content: This paper presents our work in automatic speech recognition (ASR) in the context of under-resourced languages with application to Vietnamese. Different techniques for bootstrapping acoustic models are presented. First, we present the use of acoustic-phonetic unit distances and the potential of crosslingual acoustic modeling for under-resourced languages. 
Experimental results on Vietnamese showed that with only a few hours of target language speech data, crosslingual context independent modeling worked better than crosslingual context dependent modeling. However, it was outperformed by the latter one, when more speech data were available. We concluded, therefore, that in both cases, crosslingual systems are better than monolingual baseline systems. The proposal of grapheme-based acoustic modeling, which avoids building a phonetic dictionary, is also investigated in our work. Finally, since the use of sub-word units (morphemes, syllables, characters, etc.) can reduce the high out-of-vocabulary rate and improve the lack of text resources in statistical language modeling for under-resourced languages, we propose several methods to decompose, normalize and combine word and sub-word lattices generated from different ASR systems. The proposed lattice combination scheme results in a relative syllable error rate reduction of 6.6% over the sentence MAP baseline method for a Vietnamese ASR task. --- paper_title: Pooling ASR data for closely related languages paper_content: Proceedings of the Workshop on Spoken Languages Technologies for Under-Resourced Languages (SLTU 2010), Penang, Malaysia, May 2010 --- paper_title: Boosting attribute and phone estimation accuracies with deep neural networks for detection-based speech recognition paper_content: Generation of high-precision sub-phonetic attribute (also known as phonological features) and phone lattices is a key frontend component for detection-based bottom-up speech recognition. In this paper we employ deep neural networks (DNNs) to improve detection accuracy over conventional shallow MLPs (multi-layer perceptrons) with one hidden layer. A range of DNN architectures with five to seven hidden layers and up to 2048 hidden units per layer have been explored. Training on the SI84 and testing on the Nov92 WSJ data, the proposed DNNs achieve significant improvements over the shallow MLPs, producing greater than 90% frame-level attribute estimation accuracies for all 21 attributes tested for the full system. On the phone detection task, we also obtain excellent frame-level accuracy of 86.6%. With this level of high-precision detection of basic speech units we have opened the door to a new family of flexible speech recognition system design for both top-down and bottom-up, lattice-based search strategies and knowledge integration. --- paper_title: Rapid building of an ASR system for Under-Resourced Languages based on Multilingual Unsupervised Training paper_content: This paper presents our work on rapid language adaptation of acoustic models based on multilingual cross-language bootstrapping and unsupervised training. We used Automatic Speech Recognition (ASR) systems in the six source languages English, French, German, Spanish, Bulgarian and Polish to build from scratch an ASR system for Vietnamese, an underresourced language. System building was performed without using any transcribed audio data by applying three consecutive steps, i.e. cross-language transfer, unsupervised training based on the “multilingual A-stabil” confidence score [1], and bootstrapping. We investigated the correlation between performance of “multilingual A-stabil” and the number of source languages and improved the performance of “multilingual A-stabil” by applying it at the syllable level. 
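A toy version of the agreement-based selection behind a multilingual confidence score such as "multilingual A-stabil" is sketched below. This is an illustrative simplification, not the published scoring formula (which operates on recognition lattices): a word hypothesised for an untranscribed utterance is accepted for unsupervised training only if enough of the cross-lingual source recognizers agree on it.

    from collections import Counter

    def select_for_training(hypotheses, min_agreement=3):
        """hypotheses: dict recognizer_name -> list of hypothesised words for
        one untranscribed utterance. Keep words on which at least
        `min_agreement` source-language recognizers agree."""
        votes = Counter(word for hyp in hypotheses.values() for word in set(hyp))
        return [word for word, count in votes.items() if count >= min_agreement]

    hyps = {
        "en_system": ["xin", "chao", "ban"],
        "fr_system": ["xin", "chao", "ban"],
        "de_system": ["xin", "cao", "ban"],
        "es_system": ["xin", "chao", "ban"],
    }
    print(select_for_training(hyps, min_agreement=3))  # ['xin', 'chao', 'ban'] in some order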
Furthermore, we showed that increasing the amount of source language ASR systems for the multilingual framework results in better performance of the final ASR system in the target language Vietnamese. The final Vietnamese recognition system has a Syllable Error Rate (SyllER) of 16.8% on the development set and 16.1% on the evaluation set. Index Terms: rapid language adaptation of ASR, unsupervised training, multilingual A-Stabil --- paper_title: Using different acoustic, lexical and language modeling units for ASR of an under-resourced language - Amharic paper_content: State-of-the-art large vocabulary continuous speech recognition systems use mostly phone based acoustic models (AMs) and word based lexical and language models. However, phone based AMs are not efficient in modeling long-term temporal dependencies and the use of words in lexical and language models leads to out-of-vocabulary (OOV) problem, which is a serious issue for morphologically rich languages. This paper presents the results of our contributions on the use of different units for acoustic, lexical and language modeling for an under-resourced language (Amharic spoken in Ethiopia). Triphone, Syllable and hybrid (syllable-phone) units have been investigated for acoustic modeling. Word and morphemes have been investigated for lexical and language modeling. We have also investigated the use of longer (syllable) acoustic units and shorter (morpheme) lexical as well as language modeling units in a speech recognition system. Although hybrid AMs did not bring much improvement over context dependent syllable based recognizers in speech recognition performance with word based lexical and language model (i.e. word based speech recognition), we observed a significant word error rate (WER) reduction compared to triphone-based systems in morpheme-based speech recognition. Syllable AMs also led to a WER reduction over the triphone-based systems both in word based and morpheme based speech recognition. It was possible to obtain a 3% absolute WER reduction as a result of using syllable acoustic units in morpheme-based speech recognition. Overall, our result shows that syllable and hybrid AMs are best fitted in morpheme-based speech recognition. --- paper_title: Universal attribute characterization of spoken languages for automatic spoken language recognition paper_content: We propose a novel universal acoustic characterization approach to spoken language recognition (LRE). The key idea is to describe any spoken language with a common set of fundamental units that can be defined ''universally'' across all spoken languages. In this study, speech attributes, such as manner and place of articulation, are chosen to form this unit inventory and used to build a set of language-universal attribute models with data-driven modeling techniques. The vector space modeling approach to LRE is adopted, where a spoken utterance is first decoded into a sequence of attributes independently of its language. Then, a feature vector is generated by using co-occurrence statistics of manner or place units, and the final LRE decision is implemented with a vector space language classifier. Several architectural configurations will be studied, and it will be shown that best performance is attained using a maximal figure-of-merit language classifier. 
Experimental evidence not only demonstrates the feasibility of the proposed techniques, but it also shows that the proposed technique attains comparable performance to standard approaches on the LRE tasks investigated in this work when the same experimental conditions are adopted. --- paper_title: Language-independent and language-adaptive acoustic modeling for speech recognition paper_content: Abstract With the distribution of speech technology products all over the world, the portability to new target languages becomes a practical concern. As a consequence our research focuses on the question of how to port large vocabulary continuous speech recognition (LVCSR) systems in a fast and efficient way. More specifically we want to estimate acoustic models for a new target language using speech data from varied source languages, but only limited data from the target language. For this purpose, we introduce different methods for multilingual acoustic model combination and a polyphone decision tree specialization procedure. Recognition results using language-dependent, independent and language-adaptive acoustic models are presented and discussed in the framework of our GlobalPhone project which investigates LVCSR systems in 15 languages. --- paper_title: Syllable-Based and Hybrid Acoustic Models for Amharic Speech Recognition paper_content: This paper presents the results of our experiments on the use of hybrid acoustic units in speech recognition and the use of syllable and hybrid acoustic models (AM) in morphemebased speech recognition. Although hybrid AMs did not bring improvement in speech recognition performance when words are used as dictionary entries and units in a language model (LM), we observed a significant word error rate (WER) reduction (compared to triphone-based systems) in morpheme-based speech recognition. Syllable AMs also led to a significant WER reduction over the triphone-based systems. It was possible to obtain a 3% absolute WER reduction as a result of using syllable acoustic units. Generally, our result shows that syllable and hybrid AMs are best fitted in morpheme-based speech recognition. --- paper_title: Discriminative pronunciation learning for speech recognition for resource scarce languages paper_content: In this paper, we describe a method to create speech recognition capability for small vocabularies in resource-scarce languages. By resource-scarce languages, we mean languages that have a small or economically disadvantaged user base which are typically ignored by the commercial world. We use a high-quality well-trained speech recognizer as our baseline to remove the dependence on large audio data for an accurate acoustic model. Using cross-language phoneme mapping, the baseline recognizer effectively recognizes words in our target language. We automate the generation of pronunciations and generate a set of initial pronunciations for each word in the vocabulary. Next, we remove potential conflicts in word recognition by discriminative training. --- paper_title: Automatic Speech Recognition for Under-Resourced Languages: Application to Vietnamese Language paper_content: This paper presents our work in automatic speech recognition (ASR) in the context of under-resourced languages with application to Vietnamese. Different techniques for bootstrapping acoustic models are presented. First, we present the use of acoustic-phonetic unit distances and the potential of crosslingual acoustic modeling for under-resourced languages. 
Experimental results on Vietnamese showed that with only a few hours of target language speech data, crosslingual context independent modeling worked better than crosslingual context dependent modeling. However, it was outperformed by the latter one, when more speech data were available. We concluded, therefore, that in both cases, crosslingual systems are better than monolingual baseline systems. The proposal of grapheme-based acoustic modeling, which avoids building a phonetic dictionary, is also investigated in our work. Finally, since the use of sub-word units (morphemes, syllables, characters, etc.) can reduce the high out-of-vocabulary rate and improve the lack of text resources in statistical language modeling for under-resourced languages, we propose several methods to decompose, normalize and combine word and sub-word lattices generated from different ASR systems. The proposed lattice combination scheme results in a relative syllable error rate reduction of 6.6% over the sentence MAP baseline method for a Vietnamese ASR task. --- paper_title: Integrating Thai grapheme based acoustic models into the ML-MIX framework - For language independent and cross-language ASR paper_content: Grapheme based speech recognition is a powerful tool for rapidly creating automatic speech recognition (ASR) systems in new languages. For purposes of language independent or cross language speech recognition it is necessary to identify similar models in the different languages involved. For phoneme based multilingual ASR systems this is usually achieved with the help of a language independent phoneme set and the corresponding phoneme identities in the different languages. For grapheme based multilingual ASR systems this is only possible when there is an overlap in graphemes of the different scripts involved. Often this is not the case, as for example for Thai which graphemes does not have any overlap with the graphemes of the languages that we used for multilingual grapheme based ASR in the past. In order to be able to apply our multilingual grapheme model to Thai, and in order to incorporate Thai into our multilingual recognizer, we examined and evaluated a number of data driven distance measures between the multilingual grapheme models. For our purposes distance measures that rely directly on the parameters of the models, such as the Kullback-Leibler and the Bhatthacharya distance yield the best performance. --- paper_title: Grapheme based speech recognition paper_content: Large vocabulary speech recognition systems traditionally represent words in terms of subword units, usually phonemes. This paper investigates the potential of graphemes acting as subunits. In order to develop context dependent grapheme based speech recognizers several decision tree based clustering procedures are performed and compared to each other. Grapheme based speech recognizers in three languages - English, German, and Spanish - are trained and compared to their phoneme based counterparts. The results show that for languages with a close grapheme-to-phoneme relation, grapheme based modeling is as good as the phoneme based one. Furthermore, multilingual grapheme based recognizers are designed to investigate whether grapheme based information can be successfully shared among languages. Finally, some bootstrapping experiments for Swedish were performed to test the potential for rapid language deployment. --- paper_title: Automatic Error Recovery for Pronunciation Dictionaries. 
paper_content: In this paper, we present our latest investigations on pronunciation modeling and its impact on ASR. We propose completely automatic methods to detect, remove, and substitute inconsistent or flawed entries in pronunciation dictionaries. The experiments were conducted on different tasks, namely (1) word-pronunciation pairs from the Czech, English, French, German, Polish, and Spanish Wiktionary [1], a multilingual wiki-based open content dictionary, (2) our GlobalPhone Hausa pronunciation dictionary [2], and (3) pronunciations to complement our Mandarin-English SEAME code-switch dictionary [3]. In the final results, we fairly observed on average an improvement of 2.0% relative in terms of word error rate and even 27.3% for the case of English Wiktionary word-pronunciation pairs. --- paper_title: Wiktionary as a Source for Automatic Pronunciation Extraction paper_content: In this paper, we analyze whether dictionaries from the World Wide Web which contain phonetic notations, may support the rapid creation of pronunciation dictionaries within the speech recognition and speech synthesis system building process. As a representative dictionary, we selected Wiktionary [1] since it is at hand in multiple languages and, in addition to the definitions of the words, many phonetic notations in terms of the International Phonetic Alphabet (IPA) are available. Given word lists in four languages English, French, German, and Spanish, we calculated the percentage of words with phonetic notations in Wiktionary. Furthermore, two quality checks were performed: First, we compared pronunciations from Wiktionary to pronunciations from dictionaries based on the GlobalPhone project, which had been created in a rule-based fashion and were manually cross-checked [2]. Second, we analyzed the impact of Wiktionary pronunciations on automatic speech recognition (ASR) systems. French Wiktionary achieved the best pronunciation coverage, containing 92.58% phonetic notations for the French GlobalPhone word list as well as 76.12% and 30.16% for country and international city names. In our ASR systems evaluation, the Spanish system gained the most improvement from Wiktionary pronunciations with 7.22% relative word error rate reduction. --- paper_title: A Morpho-graphemic Approach for the Recognition of Spontaneous Speech in Agglutinative Languages - like Hungarian paper_content: A coupled acoustic- and language-modeling approach is presented for the recognition of spontaneous speech primarily in agglutinative languages. The effectiveness of the approach in large vocabulary spontaneous speech recognition is demonstrated on the Hungarian MALACH corpus. The derivation of morphs from word forms is based on a statistical morphological segmentation tool while the mapping of morphs into graphemes is obtained trivially by splitting each morph into individual letters. Using morphs instead of words in language modeling gives significant WER reductions in case of both phoneme- and grapheme-based acoustic modeling. The improvements are larger after speaker adaptation of the acoustic models. In conclusion, morphophonemic and the proposed morpho-graphemic ASR approaches yield the same best WERs, which are significantly lower than the word-based baselines but essentially without language dependent rules or pronunciation dictionaries in the latter case. Index Terms: spontaneous speech recognition, morphology. 
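The morpho-graphemic recipe in the preceding entry (derive morphs with a statistical segmenter, then use the letters of each morph as its "pronunciation") is easy to sketch. The toy segmentation table below stands in for a statistical tool such as Morfessor and is purely illustrative.

    def morpho_graphemic_lexicon(words, segment):
        """Map each morph to its grapheme sequence, so the acoustic model can
        be grapheme-based and no hand-built phonetic dictionary is required."""
        lexicon = {}
        for word in words:
            for morph in segment(word):
                lexicon.setdefault(morph, list(morph))
        return lexicon

    # stand-in for an unsupervised morphological segmenter
    toy_segmentation = {"unhelpful": ["un", "help", "ful"],
                        "helpers":   ["help", "er", "s"]}
    segment = lambda w: toy_segmentation.get(w, [w])

    print(morpho_graphemic_lexicon(["unhelpful", "helpers", "ok"], segment))
    # {'un': ['u', 'n'], 'help': ['h', 'e', 'l', 'p'], 'ful': ..., 'er': ..., 's': ..., 'ok': ...}

An n-gram language model is then estimated over morph sequences rather than word sequences, which reduces the out-of-vocabulary rate for morphologically rich languages.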
--- paper_title: Morphology-based language modeling for Arabic speech recognition paper_content: Abstract : Language modeling is a difficult problem for languages with rich morphology. In this paper we investigate the use of morphology-based language models at different stages in a speech recognition system for conversational Arabic. Class-based and single-stream factored language models using morphological word representations are applied within an N-best list rescoring framework. In addition, we explore the use of factored language models in first-pass recognition, which is facilitated by two novel procedures: the data-driven optimization of a multi-stream language model structure, and the conversion of a factored language model to a standard word-based model. We evaluate these techniques on a large-vocabulary recognition task and demonstrate that they lead to perplexity and word error rate reductions. --- paper_title: Turkish LVCSR: towards better speech recognition for agglutinative languages paper_content: The Turkish language belongs to the Turkic family. All members of this family are close to one another in terms of linguistic structure. Typological similarities are vowel harmony, verb-final word order and agglutinative morphology. This latter property causes a very fast vocabulary growth resulting in a large number of out-of-vocabulary words. In this paper we describe our first experiments in a speaker independent LVCSR engine for Modern Standard Turkish. First results on our Turkish speech recognition system are presented. The currently best system shows very promising results achieving 16.9% word error rate. To overcome the OOV-problem we propose a morphem-based and the Hypothesis Driven Lexical Adaptation approach. The final Turkish system is integrated into the multilingual recognition engine of the GlobalPhone project. --- paper_title: Joint Morphological-Lexical Language Modeling (JMLLM) for Arabic paper_content: Language modeling for inflected languages such as Arabic poses new challenges for speech recognition due to rich morphology. The rich morphology results in large increases in perplexity and out-of-vocabulary (OOV) rate. In this study, we present a new language modeling method that takes advantage of Arabic morphology by combining morphological segments with the underlying lexical items and additional available information sources with regards to morphological segments and lexical items within a single joint model. Joint representation and modeling of morphological and lexical items reduces the OOV rate and provides smooth probability estimates. Preliminary experiments detailed in this paper show satisfactory improvements over word and morpheme based trigram language models and their interpolations. --- paper_title: Unsupervised segmentation of words into morphemes – Morpho Challenge 2005: Application to automatic speech recognition paper_content: Within the EU Network of Excellence PASCAL, a challenge was organized to design a statistical machine learning algorithm that segments words into the smallest meaning-bearing units of language, morphemes. Ideally, these are basic vocabulary units suitable for different tasks, such as speech and text understanding, machine translation, information retrieval, and statistical language modeling. Twelve research groups participated in the challenge and had submitted segmentation results obtained by their algorithms. 
In this paper, we evaluate the application of these segmentation algorithms to large vocabulary speech recognition using statistical n-gram language models based on the proposed word segments instead of entire words. Experiments were done for two agglutinative and morphologically rich languages: Finnish and Turkish. We also investigate combining various segmentations to improve the performance of the recognizer. Index Terms: speech recognition, language modelling, morphemes, unsupervised learning. --- paper_title: Large vocabulary continuous speech recognition of an inflected language using stems and endings paper_content: In this article, we focus on creating a large vocabulary speech recognition system for the Slovenian language. Currently, state-of-the-art recognition systems are able to use vocabularies with sizes of 20,000 to 100,000 words. These systems have mostly been developed for English, which belongs to a group of uninflectional languages. Slovenian, as a Slavic language, belongs to a group of inflectional languages. Its rich morphology presents a major problem in large vocabulary speech recognition. Compared to English, the Slovenian language requires a vocabulary approximately 10 times greater for the same degree of text coverage. Consequently, the difference in vocabulary size causes a high degree of OOV (out-of-vocabulary words). Therefore OOV words have a direct impact on recognizer efficiency. The characteristics of inflectional languages have been considered when developing a new search algorithm with a method for restricting the correct order of sub-word units, and to use separate language models based on sub-words. This search algorithm combines the properties of sub-word-based models (reduced OOV) and word-based models (the length of context). The algorithm also enables better search-space limitation for sub-word models. Using sub-word models, we increase recognizer accuracy and achieve a comparable search space to that of a standard word-based recognizer. Our methods were evaluated in experiments on a SNABI speech database. --- paper_title: Cross-language bootstrapping for unsupervised acoustic model training: Rapid development of a polish speech recognition system paper_content: This paper describes the rapid development of a Polish language speech recognition system. The system development was performed without access to any transcribed acoustic training data. This was achieved through the combined use of cross-language bootstrapping and confidence based unsupervised acoustic model training. A Spanish acoustic model was ported to Polish, through the use of a manually constructed phoneme mapping. This initial model was refined through iterative recognition and retraining of the untranscribed audio data. The system was trained and evaluated on recordings from the European Parliament, and included several state-of-the-art speech recognition techniques in addition to the use of unsupervised model training. Confidence based speaker adaptive training using features space transform adaptation, as well as vocal tract length normalization and maximum likelihood linear regression, was used to refine the acoustic model. Through the combination of the different techniques, good performance was achieved on the domain of parliamentary speeches. 
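Cross-language transfer of the kind used in the Polish bootstrap above starts from a manually constructed phoneme mapping. A minimal sketch is given below; the mapping table is a fabricated toy (not the mapping used in the paper) and simply relabels target-language phones with their closest source-language models so that source acoustic models can be reused as a seed.

    spanish_for_polish = {            # illustrative pairs only
        "sz": "s", "cz": "ch", "w": "b", "y": "i",
        "a": "a", "t": "t", "k": "k", "o": "o",
    }

    def seed_pronunciation(target_phones, mapping, fallback="sil"):
        """Return the source-language phone sequence used to initialise
        acoustic models for one target-language pronunciation."""
        return [mapping.get(p, fallback) for p in target_phones]

    print(seed_pronunciation(["k", "o", "t", "sz"], spanish_for_polish))  # ['k', 'o', 't', 's']

The seeded models are then refined with iterative recognition and retraining of the untranscribed target-language audio, as described in the entry.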
Index Terms: speech recognition, unsupervised training, crosslanguage bootstrapping --- paper_title: Uyghur morpheme-based language models and ASR paper_content: Uyghur language is an agglutinative language in which words are formed by suffixes attaching to a stem (or root). Because of the explosive nature in vocabulary of the agglutinative languages, several morpheme-based language models are built and experiments are implemented. Morpheme is the smallest meaning bearing unit. In this research, morpheme is referred to any of prefix, stem, or suffix. As a result, a large vocabulary ASR system is built on the basis of Julius system. Several ASR results on language models based on different units (word, morpheme, and syllable) are compared. --- paper_title: Exploiting Syntactic Structure for Language Modeling paper_content: The paper presents a language model that develops syntactic structure and uses it to extract meaningful information from the word history, thus enabling the use of long distance dependencies. The model assigns probability to every joint sequence of words-binary-parse-structure with headword annotation and operates in a left-to-right manner --- therefore usable for automatic speech recognition. The model, its probabilistic parameterization, and a set of experiments meant to evaluate its predictive power are presented; an improvement over standard trigram modeling is achieved. --- paper_title: Using different acoustic, lexical and language modeling units for ASR of an under-resourced language - Amharic paper_content: State-of-the-art large vocabulary continuous speech recognition systems use mostly phone based acoustic models (AMs) and word based lexical and language models. However, phone based AMs are not efficient in modeling long-term temporal dependencies and the use of words in lexical and language models leads to out-of-vocabulary (OOV) problem, which is a serious issue for morphologically rich languages. This paper presents the results of our contributions on the use of different units for acoustic, lexical and language modeling for an under-resourced language (Amharic spoken in Ethiopia). Triphone, Syllable and hybrid (syllable-phone) units have been investigated for acoustic modeling. Word and morphemes have been investigated for lexical and language modeling. We have also investigated the use of longer (syllable) acoustic units and shorter (morpheme) lexical as well as language modeling units in a speech recognition system. Although hybrid AMs did not bring much improvement over context dependent syllable based recognizers in speech recognition performance with word based lexical and language model (i.e. word based speech recognition), we observed a significant word error rate (WER) reduction compared to triphone-based systems in morpheme-based speech recognition. Syllable AMs also led to a WER reduction over the triphone-based systems both in word based and morpheme based speech recognition. It was possible to obtain a 3% absolute WER reduction as a result of using syllable acoustic units in morpheme-based speech recognition. Overall, our result shows that syllable and hybrid AMs are best fitted in morpheme-based speech recognition. --- paper_title: . Localization of Speech Recognition in Spoken Dialog Systems: How Machine Translation Can Make Our Lives Easier . paper_content: The localization of speech recognition for large-scale spoken dialog systems can be a tremendous exercise. 
Usually, all in- volved grammars have to be translated by a language expert, and new data has to be collected, transcribed, and annotated for statistical utterance classifiers resulting in a time-consuming and expensive undertaking. Often though, a vast number of transcribed and annotated utterances exists for the source lan- guage. In this paper, we propose to use such data and translate it into the target language using machine translation. The trans- lated utterances and their associated (original) annotations are then used to train statistical grammars for all contexts of the target system. As an example, we localize an English spoken dialog system for Internet troubleshooting to Spanish by trans- lating more than 4 million source utterances without any human intervention. In an application of the localized system to more than 10,000 utterances collected on a similar Spanish Internet troubleshooting system, we show that the overall accuracy was only 5.7% worse than that of the English source system. Index Terms: spoken dialog systems, machine translation, lo- calization --- paper_title: Morph-based speech recognition and modeling of out-of-vocabulary words across languages paper_content: We explore the use of morph-based language models in large-vocabulary continuous-speech recognition systems across four so-called morphologically rich languages: Finnish, Estonian, Turkish, and Egyptian Colloquial Arabic. The morphs are subword units discovered in an unsupervised, data-driven way using the Morfessor algorithm. By estimating n-gram language models over sequences of morphs instead of words, the quality of the language model is improved through better vocabulary coverage and reduced data sparsity. Standard word models suffer from high out-of-vocabulary (OOV) rates, whereas the morph models can recognize previously unseen word forms by concatenating morphs. It is shown that the morph models do perform fairly well on OOVs without compromising the recognition accuracy on in-vocabulary words. The Arabic experiment constitutes the only exception since here the standard word model outperforms the morph model. Differences in the datasets and the amount of data are discussed as a plausible explanation. --- paper_title: Morphological random forests for language modeling of inflectional languages paper_content: In this paper, we are concerned with using decision trees (DT) and random forests (RF) in language modeling for Czech LVCSR. We show that the RF approach can be successfully implemented for language modeling of an inflectional language. Performance of word-based and morphological DTs and RFs was evaluated on lecture recognition task. We show that while DTs perform worse than conventional trigram language models (LM), RFs of both kind outperform the latter. WER (up to 3.4% relative) and perplexity (10%) reduction over the trigram model can be gained with morphological RFs. Further improvement is obtained after interpolation of DT and RF LMs with the trigram one (up to 15.6% perplexity and 4.8% WER relative reduction). In this paper we also investigate distribution of morphological feature types chosen for splitting data at different levels of DTs. --- paper_title: A new ASR evaluation measure and minimum Bayes-risk decoding for open-domain speech understanding paper_content: A new evaluation measure of speech recognition and a decoding strategy for keyword-based open-domain speech understanding are presented. 
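The out-of-vocabulary (OOV) problem discussed in the morph-based modelling entry above can be made concrete with a short sketch. The vocabularies and the toy segmenter below are illustrative assumptions; the sketch only shows why a morph vocabulary covers unseen word forms that a word vocabulary misses.

    def oov_rate(test_tokens, vocabulary):
        """Fraction of running test tokens that are not in the vocabulary."""
        misses = sum(1 for tok in test_tokens if tok not in vocabulary)
        return misses / max(len(test_tokens), 1)

    # Toy comparison: the word vocabulary misses an unseen inflection, while the
    # morph vocabulary covers it by concatenating known morphs.
    word_vocab  = {"help", "helpful", "helper"}
    morph_vocab = {"help", "ful", "er", "un"}
    toy_segments = {"helpful": ["help", "ful"],
                    "unhelpful": ["un", "help", "ful"],
                    "helper": ["help", "er"]}
    segment = lambda w: toy_segments.get(w, [w])

    test = ["helpful", "unhelpful", "helper"]
    print(oov_rate(test, word_vocab))                                    # 0.33...
    print(oov_rate([m for w in test for m in segment(w)], morph_vocab))  # 0.0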
Conventionally, WER (word error rate) has been widely used as an evaluation measure of speech recognition, which treats all words in a uniform manner. We define a weighted keyword error rate (WKER) which gives a weight on errors from a viewpoint of information retrieval. We first demonstrate that this measure is more appropriate for predicting the performance of key sentence indexing of oral presentations. Then, we formulate a decoding method to minimize WKER based on a minimum Bayes-risk (MBR) framework, and show that the decoding method works reasonably for improving WKER and key sentence indexing. --- paper_title: SMT-based ASR domain adaptation methods for under-resourced languages: Application to Romanian paper_content: This study investigates the possibility of using statistical machine translation to create domain-specific language resources. We propose a methodology that aims to create a domain-specific automatic speech recognition (ASR) system for a low-resourced language when in-domain text corpora are available only in a high-resourced language. Several translation scenarios (both unsupervised and semi-supervised) are used to obtain domain-specific textual data. Moreover this paper shows that a small amount of manually post-edited text is enough to develop other natural language processing systems that, in turn, can be used to automatically improve the machine translated text, leading to a significant boost in ASR performance. An in-depth analysis, to explain why and how the machine translated text improves the performance of the domain-specific ASR, is also made at the end of this paper. As bi-products of this core domain-adaptation methodology, this paper also presents the first large vocabulary continuous speech recognition system for Romanian, and introduces a diacritics restoration module to process the Romanian text corpora, as well as an automatic phonetization module needed to extend the Romanian pronunciation dictionary. --- paper_title: . Localization of Speech Recognition in Spoken Dialog Systems: How Machine Translation Can Make Our Lives Easier . paper_content: The localization of speech recognition for large-scale spoken dialog systems can be a tremendous exercise. Usually, all in- volved grammars have to be translated by a language expert, and new data has to be collected, transcribed, and annotated for statistical utterance classifiers resulting in a time-consuming and expensive undertaking. Often though, a vast number of transcribed and annotated utterances exists for the source lan- guage. In this paper, we propose to use such data and translate it into the target language using machine translation. The trans- lated utterances and their associated (original) annotations are then used to train statistical grammars for all contexts of the target system. As an example, we localize an English spoken dialog system for Internet troubleshooting to Spanish by trans- lating more than 4 million source utterances without any human intervention. In an application of the localized system to more than 10,000 utterances collected on a similar Spanish Internet troubleshooting system, we show that the overall accuracy was only 5.7% worse than that of the English source system. 
Index Terms: spoken dialog systems, machine translation, lo- calization --- paper_title: ASR domain adaptation methods for low-resourced languages: Application to Romanian language paper_content: This study investigates the possibility of using statistical machine translation to create domain-specific language resources. We propose a methodology that aims to create a domain-specific automatic speech recognition system for a low-resourced language when in-domain text corpora are available only in a high-resourced language. We evaluate a new semi-supervised method and compare it with previously developed semi-supervised and unsupervised approaches. Moreover, in the effort of creating an out-of-domain language model for Romanian, we introduce and experiment an effective diacritics restoration algorithm. --- paper_title: Using the Web for fast language model construction in minority languages paper_content: The design and construction of a language model for minority languages is a hard task. By minority language, we mean a language with small available resources, especially for the statistical learning problem. In this paper, a new methodology for fast language model construction in minority languages is proposed. It is based on the use of Web resources to collect and make efficient textual corpora. By using some filtering techniques, this methodology allows a quick and efficient construction of a language model with a small cost in term of computational and human resources. Our primary experiments have shown excellent performance of the Web language models vs newspaper language models using the proposed filtering methods on a majority language (French). Following the same way for a minority language (Vietnamese), a valuable language model was constructed in 3 month with only 15% new development to modify some filtering tools. --- paper_title: Transcribing Southern Min speech corpora with a Web-Based language learning system paper_content: The paper proposes a human-computation-based scheme for transcribing Southern Min speech corpora. The core idea is to implement a Web-based language learning system to collect orthographic and phonetic labels from a large amount of language learners and choose the commonly input labels as the transcriptions of the corpora. It is essentially a technology of distributed knowledge acquisition. Some computer-aided mechanisms are also used to verify the collected transcriptions. The benefit of the scheme is that it makes the transcribing task neither tedious nor costly. No significant budget should be made for transcribing large corpora. The design of a system for transcribing Min Nan speech corpora is described in detail. The application of a prototype version of the system shows that this transcribing scheme is an effective and economical way to generate orthographic and phonetic transcriptions. --- paper_title: A new ASR evaluation measure and minimum Bayes-risk decoding for open-domain speech understanding paper_content: A new evaluation measure of speech recognition and a decoding strategy for keyword-based open-domain speech understanding are presented. Conventionally, WER (word error rate) has been widely used as an evaluation measure of speech recognition, which treats all words in a uniform manner. We define a weighted keyword error rate (WKER) which gives a weight on errors from a viewpoint of information retrieval. We first demonstrate that this measure is more appropriate for predicting the performance of key sentence indexing of oral presentations. 
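Since word error rate and its keyword-weighted variant recur throughout these entries, a compact sketch of both is given below. The edit-distance WER is standard; the weighting shown (a per-word weight applied to each error) is only a simplified illustration of the WKER idea, not the exact definition from the paper.

    def weighted_error_rate(ref, hyp, weight=lambda w: 1.0):
        """Edit-distance error rate between reference and hypothesis word lists.
        With the default weight of 1.0 per word this is ordinary WER; keyword-
        aware weights give a simplified WKER-style measure."""
        n, m = len(ref), len(hyp)
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            d[i][0] = d[i - 1][0] + weight(ref[i - 1])          # deletions
        for j in range(1, m + 1):
            d[0][j] = d[0][j - 1] + weight(hyp[j - 1])          # insertions
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0.0 if ref[i - 1] == hyp[j - 1] else weight(ref[i - 1])
                d[i][j] = min(d[i - 1][j - 1] + sub,
                              d[i - 1][j] + weight(ref[i - 1]),
                              d[i][j - 1] + weight(hyp[j - 1]))
        total = sum(weight(w) for w in ref) or 1.0
        return d[n][m] / total

    ref = "turn on the kitchen light".split()
    hyp = "turn of the kitchen light".split()
    keywords = {"on", "kitchen", "light"}
    print(weighted_error_rate(ref, hyp))                                    # plain WER = 0.2
    print(weighted_error_rate(ref, hyp, lambda w: 5.0 if w in keywords else 1.0))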
Then, we formulate a decoding method to minimize WKER based on a minimum Bayes-risk (MBR) framework, and show that the decoding method works reasonably for improving WKER and key sentence indexing. --- paper_title: Uyghur morpheme-based language models and ASR paper_content: Uyghur language is an agglutinative language in which words are formed by suffixes attaching to a stem (or root). Because of the explosive nature in vocabulary of the agglutinative languages, several morpheme-based language models are built and experiments are implemented. Morpheme is the smallest meaning bearing unit. In this research, morpheme is referred to any of prefix, stem, or suffix. As a result, a large vocabulary ASR system is built on the basis of Julius system. Several ASR results on language models based on different units (word, morpheme, and syllable) are compared. --- paper_title: Accent Modeling Based On Pronunciation Dictionary Adaptation For Large Vocabulary Mandarin Speech Recognition paper_content: A method of accent modeling through Pronunciation Dictionary Adaptation (PDA) is presented. We derive the pronunciation variation between canonical speaker groups and accent groups and add an encoding of the differences to a canonical dictionary to create a new, adapted dictionary that reflects the accent characteristics. The pronunciation variation information is then integrated with acoustic and language models into a one-pass search framework. It is assumed that acoustic deviation and pronunciation variation are independent but complementary phenomena that cause poor performance among accented speakers. Therefore, MLLR, an efficient model adaptation technique, is also presented both alone and in combination with PDA. It is shown that when PDA, MLLR and PDA+MLLR are used, error rate reductions of 13.9%, 24.1% and 28.4% respectively are achieved. --- paper_title: Avaaj Otalo: a field study of an interactive voice forum for small farmers in rural India paper_content: In this paper we present the results of a field study of Avaaj Otalo (literally, "voice stoop"), an interactive voice application for small-scale farmers in Gujarat, India. Through usage data and interviews, we describe how 51 farmers used the system over a seven month pilot deployment. The most popular feature of Avaaj Otalo was a forum for asking questions and browsing others' questions and responses on a range of agricultural topics. The forum developed into a lively social space with the emergence of norms, persistent moderation, and a desire for both structured interaction with institutionally sanctioned authorities and open discussion with peers. For all 51 users this was the first experience participating in an online community of any sort. In terms of usability, simple menu-based navigation was readily learned, with users preferring numeric input over speech. We conclude by discussing implications of our findings for designing voice-based social media serving rural communities in India and elsewhere. --- paper_title: Integrating Thai grapheme based acoustic models into the ML-MIX framework - For language independent and cross-language ASR paper_content: Grapheme based speech recognition is a powerful tool for rapidly creating automatic speech recognition (ASR) systems in new languages. For purposes of language independent or cross language speech recognition it is necessary to identify similar models in the different languages involved. 
For phoneme based multilingual ASR systems this is usually achieved with the help of a language independent phoneme set and the corresponding phoneme identities in the different languages. For grapheme based multilingual ASR systems this is only possible when there is an overlap in graphemes of the different scripts involved. Often this is not the case, as for example for Thai, whose graphemes do not have any overlap with the graphemes of the languages that we used for multilingual grapheme based ASR in the past. In order to be able to apply our multilingual grapheme model to Thai, and in order to incorporate Thai into our multilingual recognizer, we examined and evaluated a number of data driven distance measures between the multilingual grapheme models. For our purposes distance measures that rely directly on the parameters of the models, such as the Kullback-Leibler and the Bhattacharyya distance, yield the best performance. --- paper_title: Word segmentation through cross-lingual word-to-phoneme alignment paper_content: We present our new alignment model Model 3P for cross-lingual word-to-phoneme alignment, and show that unsupervised learning of word segmentation is more accurate when information of another language is used. Word segmentation with cross-lingual information is highly relevant to bootstrap pronunciation dictionaries from audio data for Automatic Speech Recognition, bypass the written form in Speech-to-Speech Translation or build the vocabulary of an unseen language, particularly in the context of under-resourced languages. Using Model 3P for the alignment between English words and Spanish phonemes outperforms a state-of-the-art monolingual word segmentation approach [1] on the BTEC corpus [2] by up to 42% absolute in F-Score on the phoneme level and a GIZA++ alignment based on IBM Model 3 by up to 17%. --- paper_title: Rapid building of an ASR system for Under-Resourced Languages based on Multilingual Unsupervised Training paper_content: This paper presents our work on rapid language adaptation of acoustic models based on multilingual cross-language bootstrapping and unsupervised training. We used Automatic Speech Recognition (ASR) systems in the six source languages English, French, German, Spanish, Bulgarian and Polish to build from scratch an ASR system for Vietnamese, an underresourced language. System building was performed without using any transcribed audio data by applying three consecutive steps, i.e. cross-language transfer, unsupervised training based on the “multilingual A-stabil” confidence score [1], and bootstrapping. We investigated the correlation between performance of “multilingual A-stabil” and the number of source languages and improved the performance of “multilingual A-stabil” by applying it at the syllable level. Furthermore, we showed that increasing the amount of source language ASR systems for the multilingual framework results in better performance of the final ASR system in the target language Vietnamese. The final Vietnamese recognition system has a Syllable Error Rate (SyllER) of 16.8% on the development set and 16.1% on the evaluation set. Index Terms: rapid language adaptation of ASR, unsupervised training, multilingual A-Stabil --- paper_title: Multilingual a-stabil: A new confidence score for multilingual unsupervised training paper_content: This paper presents our work in Automatic Speech Recognition (ASR) in the context of multilingual unsupervised training with application to Czech.
Starting without any transcribed acoustic training data we built a Czech ASR by combining cross-language bootstrapping and confidence based unsupervised training. We present our new method called “multilingual A-stabil” to compute confidence scores and explore the relative effectiveness of acoustic models from more than one language such as Russian, Bulgarian, Polish and Croatian for unsupervised training. While conventional confidence measures such as gamma and A-stabil [1] [2] work well with well-trained acoustic models but have problems with poorly estimated acoustic models, our new method works well in both cases. We describe our multilingual unsupervised training framework which gives very promising results in our experiments. We were able to select 80.5% of the audio training data (18.5 hours) with a transcription WER of 14.5% when using a small amount of untranscribed data (only about 23 hours). The final best WER on Czech is 23.6% on the development set and 22.9% on the evaluation set by using cross-lingual bootstrapping, which is very close to the performance of the Czech ASR trained with 23 hours audio data with manual transcriptions (23.1% on the development set and 22.3% on the evaluation set). --- paper_title: Woefzela - an open-source platform for ASR data collection in the developing world paper_content: This project was made possible through the support of the South African National Centre for Human Language Technology, an initiative of the South African Department of Arts and Culture. The authors would also like to thank Pedro Moreno, Thad Hughes and Ravindran Rajakumar of Google Research for valuable inputs at various stages of this work. ---
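As a concrete illustration of the keyword-weighted evaluation idea behind the WKER entry above ("A new ASR evaluation measure and minimum Bayes-risk decoding for open-domain speech understanding"), the following Python sketch contrasts a plain word error rate with a variant in which errors on keywords carry a larger cost. The dynamic-programming alignment is standard; the weighting scheme, the keyword_weights table, and the example sentences are illustrative assumptions rather than the exact formulation of the cited paper.

```python
# Minimal sketch: plain WER vs. a keyword-weighted error rate.
# The weighting scheme below is an illustrative assumption, not the
# exact WKER definition used in the cited paper.

def edit_cost(ref, hyp, weight):
    """Weighted Levenshtein alignment cost between reference and hypothesis."""
    n, m = len(ref), len(hyp)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + weight(ref[i - 1])      # deletions
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + weight(hyp[j - 1])      # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (0.0 if ref[i - 1] == hyp[j - 1]
                                      else weight(ref[i - 1]))
            dele = dp[i - 1][j] + weight(ref[i - 1])
            ins = dp[i][j - 1] + weight(hyp[j - 1])
            dp[i][j] = min(sub, dele, ins)
    return dp[n][m]

def wer(ref, hyp):
    """Standard word error rate: every word carries unit weight."""
    return edit_cost(ref, hyp, lambda w: 1.0) / max(len(ref), 1)

def weighted_keyword_error_rate(ref, hyp, keyword_weights, default=0.2):
    """WKER-like measure: errors on keywords are weighted more heavily."""
    weight = lambda w: keyword_weights.get(w, default)
    total = sum(weight(w) for w in ref)
    return edit_cost(ref, hyp, weight) / max(total, 1e-9)

if __name__ == "__main__":
    ref = "please play the weather forecast for hanoi tomorrow".split()
    hyp = "please play whether forecast for hanoi".split()
    keywords = {"weather": 1.0, "forecast": 1.0, "hanoi": 1.0}
    print("WER      :", round(wer(ref, hyp), 3))
    print("WKER-like:", round(weighted_keyword_error_rate(ref, hyp, keywords), 3))
```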
Title: Automatic Speech Recognition for Under-Resourced Languages: A Survey Section 1: Introduction Description 1: This section focuses on the language diversity and the motivation to address the topic of automatic speech recognition for under-resourced languages. Section 2: Under-Resourced (UR) Languages: Definition and Challenges Description 2: This section provides a definition of what constitutes under-resourced languages and discusses the various challenges associated with them. Section 3: Literature Review Description 3: This section reviews the recent contributions and developments in the field of automatic speech recognition for under-resourced languages. Section 4: Past Projects on U-ASR Description 4: This section gives examples of past projects that have worked on automatic speech recognition for under-resourced languages, detailing their approaches and outcomes. Section 5: Future Trends Description 5: This section presents the future trends and directions in the field when dealing with under-resourced languages. Section 6: Conclusion Description 6: This section summarizes the findings of the survey, highlighting the key points and the progress made so far. It also discusses the importance of continued efforts in speech recognition for under-resourced languages.
Advanced Medical Displays: A Literature Review of Augmented Reality
12
--- paper_title: A Survey of Augmented Reality paper_content: This paper surveys the field of augmented reality AR, in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality. --- paper_title: A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. paper_content: A computer-based system has been developed for the integration and display of computerized tomography (CT) image data in the operating microscope in the correct perspective without requiring a stereotaxic frame. Spatial registration of the CT image data is accomplished by determination of the position of the operating microscope as its focal point is brought to each of three CT-imaged fiducial markers on the scalp. Monitoring of subsequent microscope positions allows appropriate reformatting of CT data into a common coordinate system. The position of the freely moveable microscope is determined by a non-imaging ultrasonic range-finder consisting of three spark gaps attached to the microscope and three microphones on a rigid support in the operating room. Measurement of the acoustic impulse transit times from the spark gaps to the microphones enables calculation of those distances and unique determination of the microscope position. The CT data are reformatted into a plane and orientation corresponding to the microscope's focal plane or to a deeper parallel plane if required. This reformatted information is then projected into the optics of the operating microscope using a miniature cathode ray tube and a beam splitter. The operating surgeon sees the CT information (such as a tumor boundary) superimposed upon the operating field in proper position, orientation, and scale. --- paper_title: Recent Advances in Augmented Reality paper_content: In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer one to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies. 
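The frameless stereotaxic entry above localizes the operating microscope with a non-imaging ultrasonic rangefinder: acoustic transit times from spark gaps to fixed microphones are converted to distances, from which a unique position is computed. A minimal sketch of this time-of-flight localization idea is shown below; the microphone layout, the assumed speed of sound, and the linearized least-squares solution are illustrative choices, not the published system's implementation.

```python
# Illustrative time-of-flight localization: transit times to known microphones
# are converted to distances and the emitter position is recovered by a
# linearized least-squares fit. All numbers here are assumed for illustration.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed value for air at room temperature

def localize(mic_positions, transit_times):
    """Estimate the emitter position from transit times to >= 4 microphones."""
    p = np.asarray(mic_positions, dtype=float)        # (n, 3) microphone coords
    d = SPEED_OF_SOUND * np.asarray(transit_times)    # (n,) measured distances
    # Subtract the first sphere equation from the others to linearize:
    #   2 (p_i - p_0) . x = (|p_i|^2 - |p_0|^2) - (d_i^2 - d_0^2)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

if __name__ == "__main__":
    mics = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]     # rigid support (m)
    true_pos = np.array([0.4, 0.3, 0.5])
    times = [np.linalg.norm(true_pos - np.array(m)) / SPEED_OF_SOUND for m in mics]
    print(localize(mics, times))    # approximately [0.4 0.3 0.5]
```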
--- paper_title: The computer scientist as toolsmith II paper_content: A process for producing a photographic film having thereon a magnetic recording layer by applying a dispersion of a magnetic substance to an antihalation layer of a light-sensitive film for cinema, where the dispersion contains a compound or compounds, having at least two or more isocyanato groups or thioisocyanato groups or the anti-halation layer is previously processed with a solution containing the compound. A photographic film where either the magnetic layer contains, or the antihalation layer has been processed with, such a compound or compounds. --- paper_title: A TAXONOMY OF MIXED REALITY VISUAL DISPLAYS paper_content: Paul Milgram received the B.A.Sc. degree from the University of Toronto in 1970, the M.S.E.E. degree from the Technion (Israel) in 1973 and the Ph.D. degree from the University of Toronto in 1980. From 1980 to 1982 he was a ZWO Visiting Scientist and a NATO Postdoctoral in the Netherlands, researching automobile driving behaviour. From 1982 to 1984 he was a Senior Research Engineer in Human Engineering at the National Aerospace Laboratory (NLR) in Amsterdam, where his work involved the modelling of aircraft flight crew activity, advanced display concepts and control loops with human operators in space teleoperation. Since 1986 he has worked at the Industrial Engineering Department of the University of Toronto, where he is currently an Associate Professor and Coordinator of the Human Factors Engineering group. He is also cross appointed to the Department of Psychology. In 1993-94 he was an invited researcher at the ATR Communication Systems Research Laboratories, in Kyoto, Japan. His research interests include display and control issues in telerobotics and virtual environments, stereoscopic video and computer graphics, cognitive engineering, and human factors issues in medicine. He is also President of Translucent Technologies, a company which produces "Plato" liquid crystal visual occlusion spectacles (of which he is the inventor), for visual and psychomotor research. --- paper_title: Analysis of head pose accuracy in augmented reality paper_content: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system, incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone. --- paper_title: Optical Versus Video See-Through Head-Mounted Displays in Medical Visualization paper_content: We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. 
We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology. --- paper_title: Head-worn displays: a review paper_content: Head-worn display design is inherently an interdisciplinary subject fusing optical engineering, optical materials, optical coatings, electronics, manufacturing techniques, user interface design, computer science, human perception, and physiology for assessing these displays. This paper summarizes the state-of-the-art in head-worn display design (HWD) and development. This review is focused on the optical engineering aspects, divided into different sections to explore principles and applications. Building on the guiding fundamentals of optical design and engineering, the principles section includes a summary of microdisplay or laser sources, the Lagrange invariant for understanding the trade-offs in optical design of HWDs, modes of image presentation (i.e., monocular, biocular, and stereo) and operational modes such as optical and video see-through. A brief summary of the human visual system pertinent to the design of HWDs is provided. Two optical design forms, namely, pupil forming and non-pupil forming are discussed. We summarize the results from previous design work using aspheric, diffractive, or holographic elements to achieve compact and lightweight systems. The applications section is organized in terms of field of view requirements and presents a reasonable collection of past designs. --- paper_title: Technologies for augmented reality systems: realizing ultrasound-guided needle biopsies paper_content: We present a real-time stereoscopic video-see-through augmented reality (AR) system applied to the medical procedure known as ultrasound-guided needle biopsy of the breast. The AR system was used by a physician during procedures on breast models and during non-invasive examinations of human subjects. The system merges rendered live ultrasound data and geometric elements with stereo images of the patient acquired through head-mounted video cameras and presents these merged images to the physician in a head-mounted display. The physician sees a volume visualization of the ultrasound data directly under the ultrasound probe, properly registered within the patient and with the biopsy needle. Using this system, a physician successfully guided a needle into an artificial tumor within a training phantom of a human breast. We discuss the construction of the AR system and the issues and decisions which led to the system architecture and the design of the video see-through head-mounted display. We designed methods to properly resolve occlusion of the real and synthetic image elements. We developed techniques for realtime volume visualization of time- and position-varying ultrasound data. We devised a hybrid tracking system which achieves improved registration of synthetic and real imagery and we improved on previous techniques for calibration of a magnetic tracker.
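The head-pose accuracy analysis cited above represents positional uncertainty as a covariance matrix and fuses estimates from a fixed sensor and a head-mounted sensor to obtain a more accurate pose. The sketch below shows the basic inverse-covariance (minimum-variance) fusion of two independent position estimates; the numerical covariances and the restriction to position only are assumptions made for illustration, not the error model of the cited work.

```python
# Minimal sketch of covariance-weighted fusion of two independent position
# estimates, as used conceptually when combining fixed and head-mounted
# sensors. Covariance values are assumed for illustration.
import numpy as np

def fuse(x1, P1, x2, P2):
    """Minimum-variance combination of two independent Gaussian estimates."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(P1_inv + P2_inv)           # fused covariance
    x = P @ (P1_inv @ x1 + P2_inv @ x2)          # fused estimate
    return x, P

if __name__ == "__main__":
    # Fixed (room-mounted) tracker: accurate laterally, poor along depth.
    x_fixed = np.array([10.0, 5.0, 50.0])
    P_fixed = np.diag([1.0, 1.0, 25.0])          # mm^2, assumed
    # Head-mounted camera: accurate in depth, noisier laterally.
    x_head = np.array([10.5, 4.5, 49.0])
    P_head = np.diag([9.0, 9.0, 1.0])            # mm^2, assumed
    x, P = fuse(x_fixed, P_fixed, x_head, P_head)
    print("fused position:", np.round(x, 2))
    print("fused std dev :", np.round(np.sqrt(np.diag(P)), 2))
```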
--- paper_title: Use of an augmented-vision device for visual search by patients with tunnel vision paper_content: PURPOSE: To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. METHODS: Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF, 8 degrees-11 degrees wide) carried out the search over a 90 degrees x 74 degrees area, and nine subjects (VF, 7 degrees-16 degrees wide) carried out the search over a 66 degrees x 52 degrees area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. RESULTS: Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in the larger and the smaller area searches. When using the device, a significant reduction in search time (approximately 28% to 74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10 degrees in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. CONCLUSIONS: Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. --- paper_title: Synchronizing 3D movements for quantitative comparison and simultaneous visualization of actions paper_content: In our poster presentation at ISMAR '04, we proposed the idea of an AR training solution including capture and 3D replays of subtle movements. The crucial part missing for realizing such a training system was an appropriate way of synchronizing trajectories of similar movements with varying speed in order to simultaneously visualize the motion of experts and trainees, and to study trainees' performances quantitatively. In this paper we review the research from different communities on synchronization problems of similar complexity. We give a detailed description of the two most applicable algorithms. We then present results using our AR based forceps delivery training system and therefore evaluate both methods for synchronization of experts' and trainees' 3D movements. We also introduce the first concepts of an online synchronization system allowing the trainee to follow movements of an expert and the experts to annotate 3D trajectories for initiation of actions such as display of timely information. A video demonstration provides an overview of the work and a visual idea of what users of the proposed system could observe through their video see-through HMD. --- paper_title: Dynamic superimposition of synthetic objects on rigid and simple-deformable real objects paper_content: A current challenge in augmented reality applications is the accurate superimposition of synthetic objects on real objects within the environment. This challenge is heightened when the real objects are in motion and/or are nonrigid.
In this article, we present a robust method for realtime, optical superimposition of synthetic objects on dynamic rigid and simple-deformable real objects. Moreover, we illustrate this general method with the VRDA Tool, a medical education application related to the visualization of internal human knee joint anatomy on a real human knee. --- paper_title: Augmented workspace: designing an AR testbed paper_content: We have implemented a tabletop setup to explore augmented reality (AR) visualization. We call this setup an "augmented workspace". The user sits at the table and performs a manual task, guided by computer graphics overlaid on to his view. The setup serves as a testbed for developing the technology and for studying visual perception issues. The user wears a custom video see-through head mounted display (HMD). Two color video cameras attached to the HMD provide a stereo view of the scene, and a third video camera is added for tracking. The system runs at the full 30-Hz video frame rate with a latency of about 0.1 s, generating a stable augmentation with no apparent jitter visible in the composite images. Two SGI Visual Workstations provide the computing power for the system. In this paper, we describe the augmented workspace system in more detail and discuss several design issues. --- paper_title: Dynamic Registration Correction in Video-Based Augmented Reality Systems paper_content: Augmented reality systems allow users to interact with real and computer-generated objects by displaying 3D virtual objects registered in a user's natural environment. Applications of this powerful visualization tool include previewing proposed buildings in their natural settings, interacting with complex machinery for purposes of construction or maintenance training, and visualizing in-patient medical data such as ultrasound. In all these applications, computer-generated objects must be visually registered with respect to real-world objects in every image the user sees. If the application does not maintain accurate registration, the computer-generated objects appear to float around in the user's natural environment without having a specific 3D spatial position. Registration error is the observed displacement in the image between the actual and intended positions of virtual objects. --- paper_title: Using virtual reality to teach radiographic positioning. paper_content: Using virtual reality to teach radiographic positioning overcomes many of the limitations of traditional teaching methods and offers several unique advantages. This article describes a virtual reality prototype that could be used to teach radiographic positioning of the elbow joint. By using virtual reality, students are able to see the movement of bones as the arm is manipulated. The article also describes the development and challenges of using virtual reality in medical education. --- paper_title: Neurosurgical Guidance Using the Stereo Microscope paper_content: Many neuro- and ENT surgical procedures are performed using the operating microscope. Conventionally, the surgeon cannot accurately relate information from preoperative radiological images to the appearance of the surgical field. We propose that the best way to do this is to superimpose image derived data upon the operative scene. We create a model of relevant structures (e.g. tumor volume, blood vessels and nerves) from multimodality preoperative images.
By calibrating microscope optics, registering the patient in-theatre to image coordinates, and tracking the microscope intra-operatively, we can generate stereo projections of the 3D model and project them into the microscope eyepieces, allowing critical structures to be overlayed on the operative scene in the correct position. We have completed initial evaluation with a head phantom, and are about to start clinical evaluation on patients. With the head phantom a theoretical accuracy of 4.6mm was calculated and the observed accuracy ranged from 2mm to 5mm. --- paper_title: Computer-vision-enabled augmented reality fundus biomicroscopy. paper_content: PURPOSE: To guide treatment for macular diseases and to facilitate real-time image measurement and comparison, investigations were initiated to permit overlay of previously stored photographic and angiographic images directly onto the real-time slit-lamp biomicroscopic fundus image. DESIGN: Experimental study in model eyes, and preliminary observations in human subjects. METHODS: A modified, binocular video slit lamp interfaced to a personal computer and framegrabber allows for image acquisition and rendering of stored images overlaid onto the real-time slit-lamp biomicroscopic fundus image. Development proceeds with rendering on a computer monitor, while construction is completed on a miniature display interfaced directly with one of the slit-lamp oculars. Registration and tracking are performed with in-house-developed software. MAIN OUTCOME MEASURES: Tracking speed and accuracy, ergonomic acceptability. RESULTS: Computer-vision algorithms permit robust montaging, tracking, registration, and rendering of previously stored photographic and angiographic images onto the real-time slit-lamp fundus biomicroscopic image. In model eyes and in preliminary studies in a human eye, optimized registration permits near-video-rate image overlay with updates at 3 to 10 Hz and misregistration errors on the order of 1 to 5 pixels. CONCLUSIONS: A prototype for ophthalmic augmented reality (image overlay) is presented. The current hardware/software implementation allows for robust performance.
--- paper_title: A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus paper_content: Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) was considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization were implemented. These approaches include video-see through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars-as in the case of the system presented in this paper-this procedure can become increasingly difficult since precise camera calibration for every focus and zoom position is required. We present a method for fully automatic calibration of the operating binocular Varioscope™ M5 AR for the full range of zoom and focus settings available. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and special calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig as presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; spatial displacement due to latency was found to be in the range of 1.1-2.8 mm maximum, the disparity between the true object and its computed overlay represented latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment. --- paper_title: Development of the Varioscope AR. A see-through HMD for computer-aided surgery paper_content: In computer-aided surgery (CAS), an undesired side-effect of the necessity of handling sophisticated equipment in the operating room is the fact that the surgeon's attention is drawn from the operating field, since surgical progress is partially monitored on the computer's screen. Augmented reality (AR), the overlay of computer-generated graphics over a real-world scene, provides a possibility to solve this problem. The technical problems associated with this approach, such as viewing of the scenery within a common focal range on the head-mounted display (HMD) or latency in the display on the HMD, have, however, kept AR from widespread usage in CAS. The concept of the Varioscope AR, a lightweight head-mounted operating microscope used as a HMD, is introduced. The registration of the patient to the pre-operative image data, as well as pre-operative planning, take place on VISIT, a surgical navigation system developed at our hospital. Tracking of the HMD and stereoscopic visualisation take place on a separate POSIX.4-compliant real-time operating system running on PC hardware.
We were able to overcome the technical problems described above; our work resulted in an AR visualisation system with an update rate of 6 Hz and a latency below 130 ms. It integrates seamlessly into a surgical navigation system and provides a common focus for both virtual and real-world objects. First evaluations of the photogrammetric 2D/3D registration have resulted in a match of 1.7 pixels on the HMD display. The Varioscope AR with its real-time visualisation unit is a major step towards the introduction of AR into clinical routine. --- paper_title: A head-mounted operating binocular for augmented reality visualization in medicine - design and initial evaluation paper_content: Computer-aided surgery (CAS), the intraoperative application of biomedical visualization techniques, appears to be one of the most promising fields of application for augmented reality (AR), the display of additional computer-generated graphics over a real-world scene. Typically a device such as a head-mounted display (HMD) is used for AR. However, considerable technical problems connected with AR have limited the intraoperative application of HMDs up to now. One of the difficulties in using HMDs is the requirement for a common optical focal plane for both the real-world scene and the computer-generated image, and acceptance of the HMD by the user in a surgical environment. In order to increase the clinical acceptance of AR, we have adapted the Varioscope (Life Optics, Vienna), a miniature, cost-effective head-mounted operating binocular, for AR. In this paper, we present the basic design of the modified HMD, and the method and results of an extensive laboratory study for photogrammetric calibration of the Varioscope's computer displays to a real-world scene. In a series of 16 calibrations with varying zoom factors and object distances, mean calibration error was found to be 1.24 ± 0.38 pixels or 0.12 ± 0.05 mm for a 640 × 480 display. Maximum error accounted for 3.33 ± 1.04 pixels or 0.33 ± 0.12 mm. The location of a position measurement probe of an optical tracking system was transformed to the display with an error of less than 1 mm in the real world in 56% of all cases. For the remaining cases, error was below 2 mm. We conclude that the accuracy achieved in our experiments is sufficient for a wide range of CAS applications. --- paper_title: A frameless stereotaxic operating microscope for neurosurgery paper_content: The purpose of the frameless stereotaxic operating microscope is to display computed tomography (CT) or other image data in the operating microscope in the correct scale, orientation, and position without the use of a stereotaxic frame. A nonimaging ultrasonic rangefinder allows the position of the operating microscope and the position of the patient to be determined. Discrete fiducial points on the patient's external anatomy are located in both image space and operating room space, linking the image data and the operating room. Physician-selected image information, e.g. tumor contours or guidance to predetermined targets, is projected through the optics of the operating microscope using a miniature cathode ray tube and a beam splitter. Projected images superpose the surgical field, reconstructed from image data to match the focal plane of the operating microscope. The algorithms on which the system is based are described, and the sources and effects of errors are discussed.
The system's performance is simulated, providing an estimate of accuracy. Two phantoms are used to measure accuracy experimentally. Clinical results and observations are given. --- paper_title: Registration Error Analysis for Augmented Reality paper_content: Augmented reality (AR) systems typically use see-through head-mounted displays (STHMDs) to superimpose images of computer-generated objects onto the user's view of the real environment in order to augment it with additional information. The main failing of current AR systems is that the virtual objects displayed in the STHMD appear in the wrong position relative to the real environment. This registration error has many causes: system delay, tracker error, calibration error, optical distortion, and misalignment of the model, to name only a few. Although some work has been done in the area of system calibration and error correction, very little work has been done on characterizing the nature and sensitivity of the errors that cause misregistration in AR systems. This paper presents the main results of an end-to-end error analysis of an optical STHMD-based tool for surgery planning. The analysis was done with a mathematical model of the system and the main results were checked by taking measurements on a real system under controlled circumstances. The model makes it possible to analyze the sensitivity of the system-registration error to errors in each part of the system. The major results of the analysis are: (1) Even for moderate head velocities, system delay causes more registration error than all other sources combined; (2) eye tracking is probably not necessary; (3) tracker error is a significant problem both in head tracking and in system calibration; (4) the World or reference coordinate system adds error and should be omitted when possible; (5) computational correction of optical distortion may introduce more delay-induced registration error than the distortion error it corrects, and (6) there are many small error sources that will make submillimeter registration almost impossible in an optical STHMD system without feedback. Although this model was developed for optical STHMDs for surgical planning, many of the results apply to other HMDs as well. --- paper_title: Computer-assisted stereotactic microsurgery for the treatment of intracranial neoplasms. paper_content: This paper describes a stereotactic CO2 laser system for the removal of intra-axial, intracranial neoplasms. The volume of the neoplasm is transferred into stereotactic space by computer reconstruction of data derived by computed tomography (CT) performed under stereotactic conditions. The tumor volume is sliced in a plane orthogonal to the surgical approach, and slices at specific distances from the focal point of the stereotactic frame are displayed on a graphics monitor in the operating suite along with a cursor representing the position of the surgical laser. Laser vaporization of sequential slices of the tumor results in a cavity, the formation of which is monitored by anteroposterior and lateral roentgenograms. Fifteen stereotactic laser procedures have been performed on 13 patients, and the results are discussed.
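Several of the systems above, notably the frameless stereotaxic operating microscope, link image space and operating-room space through discrete fiducial points measured in both coordinate systems. The sketch below computes such a rigid point-based registration with the standard SVD-based least-squares solution and reports the residual fiducial registration error; this generic closed-form method and the simulated fiducial coordinates are illustrative assumptions, not the specific algorithms of the cited systems. Note that the residual at the fiducials is only a proxy: the error at the surgical target itself generally differs and depends on the fiducial configuration.

```python
# Illustrative rigid point-based registration (image space -> room space)
# using the standard SVD-based least-squares solution. Fiducial coordinates
# and noise levels are simulated values, not data from the cited systems.
import numpy as np

def register_points(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed fiducials and their measured targets."""
    residual = np.asarray(src, float) @ R.T + t - np.asarray(dst, float)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

if __name__ == "__main__":
    image_fiducials = np.array([[0, 0, 0], [60, 0, 0], [0, 80, 0], [0, 0, 90.0]])
    angle = np.deg2rad(20)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1.0]])
    room_fiducials = image_fiducials @ R_true.T + np.array([10.0, -5.0, 30.0])
    room_fiducials += 0.3 * np.random.default_rng(0).standard_normal((4, 3))
    R, t = register_points(image_fiducials, room_fiducials)
    print("FRE (mm):", round(fiducial_registration_error(
        image_fiducials, room_fiducials, R, t), 3))
```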
--- paper_title: Intra-operative Real-Time 3-D Information Display System based on Integral Videography paper_content: A real-time 3-D surgical navigation system that superimposes the real, intuitive 3-D image for medical diagnosis and operation was developed in this paper. This system creates 3-D image based on the principle of integral photography (IP), named "Integral Videography (IV)", which can display geometrically accurate 3-D image and reproduce motion parallax without any need of special devices. 3-D image was superimposed on the surgical fields in the patient via a half-silvered mirror as if they could be seen through the body. In addition, a real-time IV algorithm for calculating the 3-D image of surgical instruments was used for registration between the location of surgical instruments and the organ during the operation. The experimental results of puncturing a point location and avoiding critical area showed the errors of this navigation system were in the range of 2-3mm. By introducing a display device with higher pixel density, accuracy of the system can be improved. --- paper_title: Accuracy of Needle Implantation in Brachytherapy Using a Medical AR System - A phantom study paper_content: Brachytherapy is the treatment method of choice for patients with a tumor relapse after a radiation therapy with external beams or tumors in regions with sensitive surrounding organs-at-risk, e.g. prostate tumors. The standard needle implantation procedure in brachytherapy uses pre-operatively acquired image data displayed as slices on a monitor beneath the operation table. Since this information allows only a rough orientation for the surgeon, the position of the needles has to be verified repeatedly during the intervention. Within the project Medarpa a transparent display being the core component of a medical Augmented Reality (AR) system has been developed. There, pre-operatively acquired image data is displayed together with the position of the tracked instrument allowing a navigated implantation of the brachytherapy needles. The surgeon is enabled to see the anatomical information as well as the virtual instrument in front of the operation area. Thus, the Medarpa system serves as "window into the patient". This paper deals with the results of first clinical trials of the system. Phantoms have been used for evaluating the achieved accuracy of the needle implantation. This has been done by comparing the output of the system (instrument positions relative to the phantom) with the real positions of the needles measured by means of a verification CT scan. --- paper_title: Surgical navigation by autostereoscopic image overlay of integral videography paper_content: This paper describes an autostereoscopic image overlay technique that is integrated into a surgical navigation system to superimpose a real three-dimensional (3-D) image onto the patient via a half-silvered mirror. The images are created by employing a modified version of integral videography (IV), which is an animated extension of integral photography. IV records and reproduces 3-D images using a microconvex lens array and flat display; it can display geometrically accurate 3-D autostereoscopic images and reproduce motion parallax without the need for special devices. The use of semitransparent display devices makes it appear that the 3-D image is inside the patient's body.
This is the first report of applying an autostereoscopic display with an image overlay system in surgical navigation. Experiments demonstrated that the fast IV rendering technique and patient-image registration method produce an average registration accuracy of 1.13 mm. Experiments using a target in phantom agar showed that the system can guide a needle toward a target with an average error of 2.6 mm. Improvement in the quality of the IV display will make this system practical and its use will increase surgical accuracy and reduce invasiveness. --- paper_title: An image overlay system for medical data visualization. paper_content: Image Overlay is a computer display technique which superimposes computer images over the user's direct view of the real world. The images are transformed in real-time so they appear to the user to be an integral part of the surrounding environment. By using Image Overlay with three-dimensional medical images such as CT reconstructions, a surgeon can visualize the data 'in-vivo', exactly positioned within the patient's anatomy, and potentially enhance the surgeon's ability to perform a complex procedure. This paper describes prototype Image Overlay systems and initial experimental results from those systems. --- paper_title: An Accuracy Certified Augmented Reality System for Therapy Guidance paper_content: Our purpose is to provide an augmented reality system for Radio-Frequency guidance that could superimpose a 3D model of the liver, its vessels and tumors (reconstructed from CT images) on external video images of the patient. In this paper, we point out that clinical usability not only need the best affordable registration accuracy, but also a certification that the required accuracy is met, since clinical conditions change from one intervention to the other. Beginning by addressing accuracy performances, we show that a 3D/2D registration based on radio-opaque fiducials is more adapted to our application constraints than other methods. Then, we outline a lack in their statistical assumptions which leads us to the derivation of a new extended 3D/2D criterion. Careful validation experiments on real data show that an accuracy of 2 mm can be achieved in clinically relevant conditions, and that our new criterion is up to 9% more accurate, while keeping a computation time compatible with real-time at 20 to 40 Hz. --- paper_title: An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization paper_content: There is a need for frameless guidance systems to help neurosurgeons to plan the exact location of a craniotomy, to define the margins of tumors and to precisely identify locations of neighboring critical structures. We have developed an automatic technique for registering clinical data, such as segmented MRI or CT reconstructions, with the patient's head on the operating table. A second method calibrates the position of a video camera relative to the patient. The combination allows a visual mix of live video of the patient with the segmented 3D MRI or CT model, enabling enhanced reality techniques for planning and guiding neurosurgical procedures, and to interactively view extracranial or intracranial structures non-intrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures and clinical studies involving change detection over time sequences of images. 
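The enhanced-reality systems above overlay a segmented 3-D model on live video after calibrating the camera and registering it to the patient, and the accuracy-certified system minimizes a 3D/2D criterion over radio-opaque fiducials. The sketch below shows the underlying pinhole projection of model points into a calibrated camera and the kind of 2-D reprojection residual such a criterion evaluates; the intrinsic matrix, pose, and model points are made-up values for illustration, not parameters from the cited work.

```python
# Illustrative pinhole projection of 3-D model points into a calibrated
# camera, plus the 2-D reprojection residual a 3D/2D registration criterion
# would minimize. Intrinsics, pose and model points are assumed values.
import numpy as np

def project_points(points_world, R, t, K):
    """Project 3-D points (patient/world frame) into pixel coordinates."""
    pts_cam = np.asarray(points_world, float) @ R.T + t    # camera frame
    uvw = pts_cam @ K.T                                    # homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_rms(points_world, observed_px, R, t, K):
    """RMS pixel distance between projected points and their detections."""
    diff = project_points(points_world, R, t, K) - np.asarray(observed_px, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],     # assumed focal lengths / principal point
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                          # world-to-camera rotation
    t = np.array([0.0, 0.0, 500.0])        # camera 500 mm in front of the model
    model_points = np.array([[0, 0, 0], [20, 0, 0], [0, 20, 0], [10, 10, 15.0]])
    projected = project_points(model_points, R, t, K)
    print(np.round(projected, 1))
    print("residual vs. itself:", reprojection_rms(model_points, projected, R, t, K))
```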
--- paper_title: Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization paper_content: This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. By combining an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data. --- paper_title: Enhancing reality in the operating room paper_content: Three dimensional computer models of the anatomy generated from volume acquisitions of computed tomography and magnetic resonance imaging are useful adjuncts to 2D images. This paper describes a system that merges the computer generated 3D models with live video to enhance the surgeon's understanding of the anatomy beneath the surface. The system can be used as a planning aid before the operation and provide additional information during an operation. The application of the system to a brain operation is described. --- paper_title: Robust Hand-Eye Calibration of an Endoscopic Surgery Robot Using Dual Quaternions paper_content: This paper presents an approach for applying a dual quaternion hand-eye calibration algorithm on an endoscopic surgery robot. Special focus is on robustness, since the error of position and orientation data provided by the robot can be large depending on the movement actually executed. Another inherent problem to all hand-eye calibration methods is that non-parallel rotation axes must be used; otherwise, the calibration will fail. Thus we propose a method for increasing the numerical stability by selecting an optimal set of relative movements from the recorded sequence. Experimental evaluation shows the error in the estimated transformation when using well-suited and ill-suited data. Additionally, we show how a RANSAC approach can be used for eliminating the erroneous robot data from the selected movements. --- paper_title: Hybrid method for both calibration and registration of an endoscope with an active optical tracker paper_content: In this paper, we present a hybrid method for calibration of an endoscope and its registration with an active optical tracker. Practically, both operations are done simultaneously by moving an active optical marker in the field of view of the two devices. By segmenting image data, the LEDs composing the marker are extracted and the transformation matrix between the two referentials (homography) is calculated. By reformulating the calibration problem, registration and calibration parameters are extracted from the homography.
As camera calibration and registration is an indispensable step for augmented reality or image-guided applications, this technique can easily be used in the operating field because it is fast, accurate and reliable. We currently are using this technique with an augmented reality system for laparoscopic procedures. --- paper_title: Development of an endoscopic navigation system based on digital image processing paper_content: We developed a new system to couple the endoscope to an optical position measurement system (OPMS) so that the image frames from the endoscope camera can be labeled with the accurate endoscopic position. This OPMS is part of the EasyGuide Neuro navigation system, which is used for microsurgery and neuroendoscopy. Using standard camera calibration techniques and a newly developed system calibration, any 3-dimensional (3-D) world point can be mapped on to the view from the endoscope. in particular, we can display the coordinates of any anatomical landmark of the patient as it is viewed from the current position of the camera. This and other image-processing techniques are applied to the labeled frame sequence in order to offer the neurosurgeon a variety of control modules that increase the safety and flexibility of neuroendoscopic operations. Several modules, including a new motion alarm system and the “tracking” and “virtual map” modules, were tested in a human cadaveric model using the frontal and occipit... --- paper_title: Volumetric Image Guidance via a Stereotactic Endoscope paper_content: We have developed a surgical setup based on modern frameless stereotactic techniques that enables surgeons to visualize the field of view of the surgical endoscope, overlaid with the real-time and volumetrically reconstructed medical images, of a localized area of the patient’s anatomy. Using this navigation system, the surgeon visualizes the surgical site via the surgical endoscope, while exploring the inner layers of the patient’s anatomy by utilizing the three-dimensionally reconstructed image updates obtained by pre-operative images, such as Magnetic Resonance and/or Computed Tomography Imaging. This system also allows the surgeon to virtually “fly through and around” the site of the surgery to visualize several alternatives and qualitatively determine the best surgical approach. Moving endoscopes are tracked with infra-red stereovision cameras and diodes, allowing the determination of their spatial relation to the target lesion and the fiducial based patient/image registration. --- paper_title: Camera-marker alignment framework and comparison with hand-eye calibration for augmented reality applications paper_content: An integral part of every augmented reality system is the calibration between camera and camera-mounted tracking markers. Accuracy and robustness of the AR overlay process is greatly influenced by the quality of this step. In order to meet the very high precision requirements of medical skill training applications, we have set up a calibration environment based on direct sensing of LED markers. A simulation framework has been developed to predict and study the achievable accuracy of the backprojection needed for the scene augmentation process. We demonstrate that the simulation is in good agreement with experimental results. 
Even if a slight improvement of the precision has been observed compared to well-known hand-eye calibration methods, the subpixel accuracy required by our application cannot be achieved even when using commercial tracking systems providing marker positions within very low error limits. --- paper_title: Development of a camera model and calibration procedure for oblique-viewing endoscopes paper_content: Oblique-viewing endoscopes (oblique scopes) are widely used in medical practice. They are essential for certain procedures such as laparoscopy, arthroscopy and sinus endoscopy. In an oblique scope the viewing directions are changeable by rotating the scope cylinder. Although a camera calibration method is necessary to apply augmented reality technologies to oblique endoscopic procedures, no method for oblique scope calibration has yet been developed. In the present paper, we formulate a camera model and a calibration procedure for oblique scopes. In the calibration procedure, Tsai's calibration is performed at zero rotation of the scope cylinder, then the variation of the external camera parameters corresponding to the rotation of the scope cylinder is modeled and estimated as a function of the rotation angle. Accurate estimation of the rotational axis is included in the procedure. The accuracy of this estimation was demonstrated to have a significant effect on overall calibration accuracy in the experimental evaluation, especially with large rotation angles. The projection error in the image plane was approximately two pixels. The proposed method was shown to be clinically applicable. --- paper_title: Endoscope Calibration and Accuracy Testing for 3D/2D Image Registration paper_content: New surgical navigation techniques incorporate the use of live surgical endoscope video with 3D reconstructed MRI or CT images of a patient's anatomy. This image-enhanced endoscopy requires calibration of the endoscope to accurately register the real endoscope video to the virtual image. The calibration and accuracy testing of such a system and a simple yet effective linear method for lens-distortion compensation are described. --- paper_title: Fiducial-free registration procedure for navigated bronchoscopy paper_content: Navigated bronchoscopy has been developed by various groups within the last decades. Systems based on CT data and electromagnetic tracking enable the visualization of the position and orientation of the bronchoscope, forceps, and biopsy tools within CT data. Therefore registration between the tracking space and the CT volume is required. Standard procedures are based on point-based registration methods that require selecting corresponding natural landmarks in both coordinate systems by the examiner. We developed a novel algorithm for a fully automatic registration procedure in navigated bronchoscopy based on the trajectory recorded during routine examination of the airways at the beginning of an intervention. The proposed system provides advantages in terms of an unchanged medical workflow and high accuracy. We compared the novel method with point-based and ICP-based registration. Experiments demonstrate that the novel method transforms up to 97% of tracking points inside the segmented airways, which was the best performance compared to the other methods. --- paper_title: Implementation, calibration and accuracy testing of an image-enhanced endoscopy system paper_content: This paper presents a new method for image-guided surgery called image-enhanced endoscopy.
Registered real and virtual endoscopic images (perspective volume renderings generated from the same view as the endoscope camera using a preoperative image) are displayed simultaneously; when combined with the ability to vary tissue transparency in the virtual images, this provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery. A mount with four photoreflective spheres is rigidly attached to the endoscope and its position and orientation is tracked using an optical position sensor. Generation of virtual images that are accurately registered to the real endoscopic images requires calibration of the tracked endoscope. The calibration process determines intrinsic parameters (that represent the projection of three-dimensional points onto the two-dimensional endoscope camera imaging plane) and extrinsic parameters (that represent the transformation from the coordinate system of the tracker mount attached to the endoscope to the coordinate system of the endoscope camera), and determines radial lens distortion. The calibration routine is fast, automatic, accurate and reliable, and is insensitive to rotational orientation of the endoscope. The routine automatically detects, localizes, and identifies dots in a video image snapshot of the calibration target grid and determines the calibration parameters from the sets of known physical coordinates and localized image coordinates of the target grid dots. Using nonlinear lens-distortion correction, which can be performed at real-time rates (30 frames per second), the mean projection error is less than 0.5 mm at distances up to 25 mm from the endoscope tip, and less than 1.0 mm up to 45 mm. Experimental measurements and point-based registration error theory show that the tracking error is about 0.5-0.7 mm at the tip of the endoscope and less than 0.9 mm for all points in the field of view of the endoscope camera at a distance of up to 65 mm from the tip. It is probable that much of the projection error is due to endoscope tracking error rather than calibration error. Two examples of clinical applications are presented to illustrate the usefulness of image-enhanced endoscopy. This method is a useful addition to conventional image-guidance systems, which generally show only the position of the tip (and sometimes the orientation) of a surgical instrument or probe on reformatted image slices. --- paper_title: Correction of distortion in endoscope images paper_content: Images formed with endoscopes suffer from a spatial distortion due to the wide-angle nature of the endoscope's objective lens. This change in the size of objects with position precludes quantitative measurement of the area of the objects, which is important in endoscopy for accurately measuring ulcer and lesion sizes over time. A method for correcting the distortion characteristic of endoscope images is presented. A polynomial correction formula was developed for the endoscope lens and validated by comparing quantitative test areas before and after the distortion correction. The distortion correction has been incorporated into a computer program that could readily be applied to electronic images obtained at endoscopy using a desk-top computer. The research presented here is a key step towards the quantitative determination of the area of regions of interest in endoscopy. 
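As an aside on the distortion-correction entry above: the polynomial correction it describes can be illustrated with a minimal sketch. The snippet below is not the authors' formula; it is a generic radial polynomial model with made-up image center and coefficients, intended only to show how distorted pixel coordinates are remapped before sizes or areas are measured.

```python
import numpy as np

def correct_radial_distortion(points_px, center_px, k1, k2):
    """Remap distorted pixel coordinates with r_corr = r * (1 + k1*r^2 + k2*r^4)."""
    pts = np.asarray(points_px, dtype=float) - center_px
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)          # squared radius from the center
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2                   # grows with radius to undo barrel distortion
    return pts * scale + center_px

# Illustrative values only (hypothetical image center and distortion coefficients).
center = np.array([320.0, 240.0])
sample = np.array([[320.0, 240.0], [100.0, 80.0], [620.0, 460.0]])
print(correct_radial_distortion(sample, center, k1=2.0e-7, k2=4.0e-14))
```

With coefficients estimated once from a calibration grid, the same mapping can be applied to every pixel of an endoscopic frame before quantitative measurements are taken.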
--- paper_title: Endoscopic Surgery: The History, the Pioneers paper_content: The introduction of endoscopy into surgical practice is one of the biggest success stories in the history of medicine. Endoscopy has its roots in the nineteenth century and was initially developed by urologists and internists. During the 1960s and 1970s gynecologists took the lead in the development of endoscopic surgery while most of the surgical community continued to ignore the possibilities of the new technique. This was due in part to the introduction of ever more sophisticated drugs, the impressive results of intensive care medicine, and advances in anesthesia, which led to the development of more radical and extensive operations, or "major surgery." The idea that large problems require large incisions so deeply dominated surgical thinking that there was little room to appreciate the advances of "key-hole" surgery. Working against this current, some general surgeons took up the challenge. In 1976 the Surgical Study Group on Endoscopy and Ultrasound (CAES) was formed in Hamburg. Five years later, on the other side of the Atlantic, the Society of American Gastrointestinal Endoscopic Surgeons (SAGES) was called into being. In 1987 the first issue of the journal Surgical Endoscopy was published, and the following year the First World Congress on Surgical Endoscopy took place in Berlin. The sweeping success of the "laparoscopic revolution" (1989-1990) marked the end of traditional open surgery and encouraged surgeons to consider new perspectives. By the 1990s the breakthrough had been accomplished: endoscopy was incorporated into surgical thinking. --- paper_title: Evaluation of a novel calibration technique for optically tracked oblique laparoscopes paper_content: This paper proposes an evaluation of a novel calibration method for an optically tracked oblique laparoscope. We present the necessary tools to track an oblique scope and a camera model which includes changes to the intrinsic camera parameters thereby extending previously proposed methods. Because oblique scopes offer a wide 'virtual' view on the surgical field, the method is of great interest for augmented reality guidance of laparoscopic interventions using an oblique scope. The model and an approximated version are evaluated in an extensive validation study. Using 5 sets of 40 calibration images, we compare both camera models (i.e. model and approximation) and 2 interpolation schemes. The selected model and interpolation scheme reaches an average accuracy of 2.60 pixel and an equivalent 3D error of 0.60 mm. Finally, we present initial experience of the presented approach with an oblique scope and optical tracking in a clinical setup. During a laparoscopic rectum resection surgery the setup was used to augment the scene with a model of the pelvis. The method worked properly and the attached probes did not interfere with normal procedure. --- paper_title: A Method for Tracking the Camera Motion of Real Endoscope by Epipolar Geometry Analysis and Virtual Endoscopy System paper_content: This paper describes a method for tracking the camera motion of a real endoscope by epipolar geometry analysis and image-based registration. In an endoscope navigation system, which provides navigation information to a medical doctor during an endoscopic examination, tracking the camera motion of the endoscopic camera is one of the fundamental functions.
With a flexible endoscope, it is hard to directly sense the position of the camera, since we cannot attach a positional sensor at the tip of the endoscope. The proposed method consists of three parts: (1) calculation of corresponding point-pairs of two time-adjacent frames, (2) coarse estimation of the camera motion by solving the epipolar equation, and (3) fine estimation by executing image-based registration between real and virtual endoscopic views. In the method, virtual endoscopic views are obtained from X-ray CT images of the same patient as the real endoscopic images. To evaluate the method, we applied it to a real endoscopic video camera and X-ray CT images. The experimental results showed that the method could track the motion of the camera satisfactorily. --- paper_title: A four-step camera calibration procedure with implicit image correction paper_content: In geometrical camera calibration the objective is to determine a set of camera parameters that describe the mapping between 3-D reference coordinates and 2-D image coordinates. Various methods for camera calibration can be found from the literature. However, surprisingly little attention has been paid to the whole calibration procedure, i.e., control point extraction from images, model fitting, image correction, and errors originating in these stages. The main interest has been in model fitting, although the other stages are also important. In this paper we present a four-step calibration procedure that is an extension to the two-step method. There is an additional step to compensate for distortion caused by circular features, and a step for correcting the distorted image coordinates. The image correction is performed with an empirical inverse model that accurately compensates for radial and tangential distortions. Finally, a linear method for solving the parameters of the inverse model is presented. --- paper_title: Hybrid bronchoscope tracking using a magnetic tracking sensor and image registration paper_content: In this paper, we propose a hybrid method for tracking a bronchoscope that uses a combination of magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation is used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion. The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz. --- paper_title: Modeling and calibration of automated zoom lenses paper_content: Camera systems with automated zoom lenses are inherently more useful than those with fixed-parameter lenses. Variable-parameter lenses enable us to produce better images by matching the camera's sensing characteristics to the conditions in a scene. They also allow us to make measurements by noting how the scene's image changes as the parameters are varied.
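Relating to the epipolar-geometry tracking entry above (coarse camera-motion estimation from point correspondences between time-adjacent endoscopic frames): the sketch below shows that single step in isolation, using OpenCV's essential-matrix routines on synthetic correspondences. The intrinsics, the simulated motion, and the point cloud are all made up; this is not the authors' implementation, and the subsequent fine image-based registration stage is omitted.

```python
import numpy as np
import cv2

# Hypothetical endoscope intrinsics (pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthesize two views of random 3D points under a known small motion.
rng = np.random.default_rng(0)
pts3d = rng.uniform([-50, -50, 80], [50, 50, 160], size=(60, 3))
R_true, _ = cv2.Rodrigues(np.array([0.02, -0.05, 0.01]))   # small rotation
t_true = np.array([[2.0], [0.5], [1.0]])                   # translation (scale is unrecoverable)

def project(P, R, t):
    Xc = (R @ P.T + t).T
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1))).astype(np.float32)
pts2 = project(pts3d, R_true, t_true).astype(np.float32)

# Coarse motion estimate from the epipolar constraint (essential matrix).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)

cos_err = np.clip((np.trace(R_est @ R_true.T) - 1.0) / 2.0, -1.0, 1.0)
print("rotation error (deg):", np.degrees(np.arccos(cos_err)))
print("unit-norm translation estimate:", t_est.ravel())
```

Because the essential matrix only constrains direction, the recovered translation is a unit vector; the cited method resolves the remaining scale and drift with image-based registration against virtual endoscopic views.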
The reason variable-parameter lenses are not more commonly used in machine vision is that they are difficult to model for continuous ranges of lens settings. We show in this thesis that traditional modeling approaches cannot capture the complex relationships between control parameters and imaging processes. Furthermore, we demonstrate that the assumption of idealized behavior in traditional models can lead to significant performance problems in color imaging and focus ranging. By using more complex models and control strategies we were able to reduce or eliminate these performance problems. The principal contribution of our research is a methodology for empirically producing accurate camera models for systems with variable-parameter lenses. We also developed a comprehensive taxonomy for the property of "image center." To demonstrate the effectiveness of our methodology we applied it to produce an "adjustable," perspective-projection camera model based on Tsai's fixed camera model. We calibrated and tested our model on two different automated camera systems. In both cases the calibrated model operated across continuous ranges of focus and zoom with an average error of less than 0.14 pixels between the predicted and the measured positions of features in the image plane. We also calibrated and tested our model on one automated camera system across a continuous range of aperture and achieved similar results. --- paper_title: Flexible Calibration of Actuated Stereoscopic Endoscope for Overlay in Robot Assisted Surgery paper_content: Robotic assistance has greatly benefited the operative gesture in mini-invasive surgery. Nevertheless, the surgeon is still suffering from the restricted vision of the operating field through the endoscope. We thus propose to augment the endoscopic images with preoperative data. This paper focuses on the use of a priori information to initialise the overlay by a precise calibration of the actuated stereoscopic endoscope. The flexibility of the proposed method avoids any additional tracking system in the operating room and can be applied to other augmented reality systems. We present quantitative experimental calibration results with the da Vinci surgical system, as well as the use of these results to initialise the overlay of endoscopic images of a plastic heart with a coronary artery model. --- paper_title: A system to support laparoscopic surgery by augmented reality visualization paper_content: This paper describes the development of an augmented reality system for intra-operative laparoscopic surgery support. The goal of this system is to reveal structures otherwise hidden within the laparoscope view. To allow flexible movement of the laparoscope we use optical tracking to track both patient and laparoscope. The necessary calibration and registration procedures were developed and bundled where possible in order to facilitate integration in a current laparoscopic procedure. Care was taken to achieve high accuracy by including radial distortion components without compromising real-time speed. Finally, a visual error assessment is performed, the usefulness is demonstrated within a test setup, and some preliminary quantitative evaluation is done. --- paper_title: A flexible new technique for camera calibration paper_content: We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved.
The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use. --- paper_title: Endoscopic Augmented Reality Navigation System for Endonasal Transsphenoidal Surgery to Treat Pituitary Tumors: Technical Note paper_content: OBJECTIVE ::: Endoscopes have been commonly used in transsphenoidal surgery to treat pituitary tumors, to compensate for the narrow surgical field. Although many navigation systems have been introduced for neurosurgical procedures, there have been few reports of navigation systems for endoscopic operations. This report presents our recently developed, endoscopic, augmented reality (AR) navigation system. ::: ::: ::: METHODS ::: The technology is based on the principles of AR environment technology. The system consisted of a rigid endoscope with light-emitting diodes, an optical tracking system, and a controller. The operation of the optical tracking system was based on two sets of infrared light-emitting diodes, which measured the position and orientation of the endoscope relative to the patient's head. We used the system during endonasal transsphenoidal operations to treat pituitary tumors in 12 recent cases. ::: ::: ::: RESULTS ::: Anatomic, "real," three-dimensional, virtual images of the tumor and nearby anatomic structures (including the internal carotid arteries, sphenoid sinuses, and optic nerves) were superimposed on real- time endoscopic live images. The system also indicated the positions and directions of the endoscope and the endoscopic beam in three-dimensional magnetic resonance imaging or computed tomographic planes. Furthermore, the colors of the wire-frame images of the tumor changed according to the distance between the tip of the endoscope and the tumor. These features were superior to those of conventional navigation systems, which are available only for operating microscopes. ::: ::: ::: CONCLUSION ::: The endoscopic AR navigation system allows surgeons to perform accurate, safe, endoscope-assisted operations to treat pituitary tumors; it is particularly useful for reoperations, in which midline landmarks may be absent. We consider the AR navigation system to be a promising tool for safe, minimally invasive, endonasal, transsphenoidal surgery to treat pituitary tumors. --- paper_title: Development of a navigation system for endoluminal brachytherapy in human lungs paper_content: The endoluminal brachytherapy of peripherally located bronchial carcinoma is difficult because of the complexity to position an irradiation catheter led by a bronchoscope to a desired spot inside a human lung. Furthermore the size of the bronchoscope permits only rarely the insertion of a catheter into the fine segment bronchi. We are developing an image-guided navigation system which indicates a path for guidance to the desired bronchus. Thereby a thin catheter with an enclosed navigation probe can be led up directly to the target bronchus, either by the use of the video of the bronchoscope or by the use of virtual bronchoscopy. 
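As a brief illustration of the planar-target calibration idea from the flexible-calibration entry above (and the four-step procedure earlier): the sketch uses OpenCV's implementation of this family of methods, feeding it synthetically generated grid observations instead of corners detected in real images. The grid geometry, ground-truth intrinsics, distortion, and target poses are all invented for the example.

```python
import numpy as np
import cv2

# Planar calibration target: a 7x5 grid with 10 mm spacing on the z = 0 plane.
objp = np.zeros((7 * 5, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:5].T.reshape(-1, 2) * 10.0

# Hypothetical ground-truth intrinsics and distortion, used only to synthesize
# observations; a real system would detect corners in calibration images instead.
K_true = np.array([[900.0, 0.0, 320.0], [0.0, 900.0, 240.0], [0.0, 0.0, 1.0]])
dist_true = np.array([-0.25, 0.07, 0.0, 0.0, 0.0])

obj_pts, img_pts = [], []
for rx, ry, tz in [(0.1, 0.2, 300.0), (-0.2, 0.1, 280.0), (0.15, -0.25, 320.0)]:
    rvec = np.array([rx, ry, 0.05])               # three different target orientations
    tvec = np.array([-30.0, -20.0, tz])
    uv, _ = cv2.projectPoints(objp, rvec, tvec, K_true, dist_true)
    obj_pts.append(objp)
    img_pts.append(uv.reshape(-1, 2).astype(np.float32))

# Closed-form initialization plus nonlinear refinement, as in planar calibration.
rms, K_est, dist_est, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)
print("reprojection RMS (px):", rms)
print("estimated intrinsics:\n", K_est)
```

In an endoscopic setting, the same procedure is typically run on snapshots of a dot or checkerboard target held at several orientations in front of the scope.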
Because of the thin bronchi and their moving soft tissue, the navigation system has to be very precise. This accuracy is reached by a gradually registering navigation component which improves the accuracy in the course of the intervention through mapping the already covered path to the preoperatively generated graph based bronchial tree description. The system includes components for navigation, segmentation, preoperative planning, and intraoperative guidance. Furthermore the visualization of the path can be adapted to the lung specialist's habits (video of bronchoscope, 2D, 3D, virtual bronchoscopy etc.). --- paper_title: A Navigation System for Augmenting Laparoscopic Ultrasound paper_content: Establishing image context is the major difficulty of performing laparoscopic ultrasound. The standard techniques used by transabdominal ultrasonographers to understand image orientation are difficult to apply with laparoscopic instruments. In this paper, we describe a navigation system that displays the position and orientation of laparoscopic ultrasound images to the operating surgeon in real time. The display technique we developed for showing the orientation information uses a 3D model of the aorta as the main visual reference. This technique is helpful because it provides surgeons with important spatial cues, which we show improves their ability to interpret the laparoscopic ultrasound. --- paper_title: Laparoscope Self-calibration for Robotic Assisted Minimally Invasive Surgery paper_content: For robotic assisted minimal access surgery, recovering 3D soft tissue deformation is important for intra-operative surgical guidance, motion compensation, and prescribing active constraints. We propose in this paper a method for determining varying focal lengths of stereo laparoscope cameras during robotic surgery. Laparoscopic images typically feature dynamic scenes of soft-tissue deformation and self-calibration is difficult with existing approaches due to the lack of rigid temporal constraints. The proposed method is based on the direct derivation of the focal lengths from the fundamental matrix of the stereo cameras with known extrinsic parameters. This solves a restricted self-calibration problem, and the introduction of the additional constraints improves the inherent accuracy of the algorithm. The practical value of the method is demonstrated with analysis of results from both synthetic and in vivo data sets. --- paper_title: Augmented Reality Visualization for Laparoscopic Surgery paper_content: We present the design and a prototype implementation of a three-dimensional visualization system to assist with laparoscopic surgical procedures. The system uses 3D visualization, depth extraction from laparoscopic images, and six degree-of-freedom head and laparoscope tracking to display a merged real and synthetic image in the surgeon’s video-see-through head-mounted display. We also introduce a custom design for this display. A digital light projector, a camera, and a conventional laparoscope create a prototype 3D laparoscope that can extract depth and video imagery. --- paper_title: Registration of real and CT-derived virtual bronchoscopic images to assist transbronchial biopsy paper_content: This paper describes research work motivated by an innovative medical application: computer-assisted transbronchial biopsy. 
This project involves the registration, with no external localization device, of a preoperative three-dimensional (3-D) computed tomography (CT) scan of the thoracic cavity (showing a tumor that requires a needle biopsy), and an intraoperative endoscopic two-dimensional (2-D) image sequence, in order to provide assistance in transbronchial puncture of the tumor. Because of the specific difficulties resulting from the data being processed, a multilevel strategy was introduced. For each analysis level, the relevant information to process and the corresponding algorithms were defined. This multilevel strategy, thus, provides the best possible accuracy. Original image processing methods were elaborated, dealing with segmentation, registration and 3-D reconstruction of the bronchoscopic images. In particular, these methods involve adapted mathematical morphology tools, a "daemon-based" registration algorithm, and a model-based shape-from-shading algorithm. This pilot study presents the application of these algorithms to recorded bronchoscopic video sequences for five patients. The preliminary results presented here demonstrate that it is possible to precisely localize the endoscopic camera within the CT data coordinate system. The computer can thus synthesize in near real-time the CT-derived virtual view that corresponds to the actual endoscopic view. --- paper_title: Image overlay guidance for needle insertion in CT scanner paper_content: We present an image overlay system to aid needle insertion procedures in computed tomography (CT) scanners. The device consists of a display and a semitransparent mirror that is mounted on the gantry. Looking at the patient through the mirror, the CT image appears to be floating inside the patient with correct size and position, thereby providing the physician with two-dimensional (2-D) "X-ray vision" to guide needle insertions. The physician inserts the needle following the optimal path identified in the CT image rendered on the display and, thus, reflected in the mirror. The system promises to reduce X-ray dose, patient discomfort, and procedure time by significantly reducing faulty insertion attempts. It may also increase needle placement accuracy. We report the design and implementation of the image overlay system followed by the results of phantom and cadaver experiments in several clinical applications. --- paper_title: Merging visible and invisible: two Camera-Augmented Mobile C-arm (CAMC) applications paper_content: This paper presents the basic concept of CAMC and some of its applications. A CCD camera is attached to a mobile C-arm fluoroscopy X-ray system. Both optical and X-ray imaging systems are calibrated in the same coordinate system in an off-line process. The new system is able to provide X-ray and optical images simultaneously. The CAMC framework has great potential for medical augmented reality. We briefly introduce two new CAMC applications to the augmented reality research community. The first application aims at merging video images with a pre-computed tomographic reconstruction of the 3D volume of interest. This is a logical continuation of our work on 3D reconstruction using a CAMC (1999). The second approach is a totally new CAMC design where using a double mirror system and an appropriate calibration procedure the X-ray and optical images are merged in real-time. This new system enables the user to see an optical image, an X-ray image, or an augmented image where both visible and invisible are combined in real-time. 
The paper is organized in two independent sections describing each of the above. Experimental results are provided at the same time as the methods and apparatus are described for each section. --- paper_title: Three-Dimensional Slice Image Overlay System with Accurate Depth Perception for Surgery paper_content: In this paper, we describe a three-dimensional (3-D) display, containing a flat two-dimensional (2-D) display, an actuator and a half-silvered mirror. This system creates a superimposed slice view on the patient and gives accurate depth perception. The clinical significance of this system is that it displays raw image data at an accurate location on the patient’s body. Moreover, it shows previously acquired image information, giving the capacity for accurate direction to the surgeon who is thus able to perform less-invasive therapy. Compared with conventional 3-D displays, such as stereoscopy, this system only requires raw 3-D data that are acquired in advance. Simpler data processing is required, and the system has the potential for rapid development. We describe a novel algorithm, registering positional data between the image and the patient. The accuracy of the system is evaluated and confirmed by an experiment in which an image is superimposed on a test object. The results indicate that the system could be readily applied in clinical situations, considering the resolution of the pre-acquired images. --- paper_title: Tomographic reflection to merge ultrasound images with direct vision paper_content: Tomographic reflection is a method that may be used to merge the visual outer surface of a patient with a simultaneous ultrasound scan of the patient's interior. The technique combines a flat-panel monitor with a half-silvered mirror such that the image on the monitor is reflected precisely at the proper location within the patient. In this way, the ultrasound image is superimposed in real time on the view of the patient along with the operator's hands and any invasive tools in the field of view. Instead of looking away at an ultrasound monitor, the operator can manipulate needles and scalpels with direct hand-eye coordination. Invasive tools are visible up to where they enter the skin, permitting natural visual extrapolation to targets in the ultrasound slice. Tomographic reflection is independent of viewer location, requires no special apparatus to be worn by the operator, nor any registration of the patient. --- paper_title: Laser projection augmented reality system for computer-assisted surgery paper_content: A new augmented reality apparatus was evaluated. The device uses scanned infrared and visible lasers to project computer generated information such as surgical plans, entry pints for probes etc, directly onto the patient. In addition to projecting the plan, the device can be integrated with a 3D camera and is capable of measuring the location of projected infrared laser spots. This can be used to ensure that the display is accurate, apply corrections to the projection path and to assist in registration. The projection system has its own Application Programmer’s Interface (API) and is a stand-alone add-on unit to any host computer system. Tests were conducted to evaluate the accuracy and repeatability of the system. We compared the locations of points projected on a flat surface with the measurements obtained from a tracked probe. The surface was rotated through 60 degrees in 5 degree increments and locations measured from the two devices agreed to within 2mm. 
An initial host application was also developed to demonstrate the new unit. Fiducials representing vertices along a proposed craniotomy were embedded into a plastic skull and a projection path defining the craniotomy was calculated. A feedback-based optimization of the plan was performed by comparing the measurement taken by the camera of these coordinates. The optimized plan was projected onto the skull. On average, the projection deviated by approximately 1mm from the plan. Applications include identification of critical anatomical structures, visualization of preplanned paths and targets, and telesurgery or teleconsultation. --- paper_title: A Novel Laser Guidance System for Alignment of Linear Surgical Tools: Its Principles and Performance Evaluation as a Man-Machine System paper_content: A novel laser guidance system that uses dual laser beam shooters for the alignment of linear surgical tools is presented. In the proposed system, the intersection of two laser planes generated by dual laser shooters placed at two fixed locations defines the straight insertion path of a surgical tool. The guidance information is directly projected onto the patient and the surgical tool. Our assumption is that a linear surgical tool has cylindrical shape or that a cylindrical sleeve is attached to the tool so that the sleeve and tool axes are aligned. The guidance procedure is formulated mainly using the property that the two laser planes are projected as two parallel straight lines onto the cylindrical tool surface if and only if the cylinder axis direction is the same as the direction of the intersection of the two laser planes. Unlike conventional augmented reality systems, the proposed system does not require the wearing of glasses or mirrors to be placed between the surgeon and patient. In our experiments, a surgeon used the system to align wires according to the alignment procedure, and the overall accuracy and alignment time were evaluated. The evaluations were considered not to be simply of a mechanical system but of a man-machine system, since the performance depends on both the system accuracy and the surgeon's perceptual ability. The evaluations showed the system to be highly effective in providing linear alignment assistance. --- paper_title: A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. paper_content: A computer-based system has been developed for the integration and display of computerized tomography (CT) image data in the operating microscope in the correct perspective without requiring a stereotaxic frame. Spatial registration of the CT image data is accomplished by determination of the position of the operating microscope as its focal point is brought to each of three CT-imaged fiducial markers on the scalp. Monitoring of subsequent microscope positions allows appropriate reformatting of CT data into a common coordinate system. The position of the freely moveable microscope is determined by a non-imaging ultrasonic range-finder consisting of three spark gaps attached to the microscope and three microphones on a rigid support in the operating room. Measurement of the acoustic impulse transit times from the spark gaps to the microphones enables calculation of those distances and unique determination of the microscope position. The CT data are reformatted into a plane and orientation corresponding to the microscope's focal plane or to a deeper parallel plane if required. 
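The acoustic localization step in the frameless-stereotaxy entry above (spark-gap-to-microphone transit times yield distances, from which the microscope position is determined) reduces to a small multilateration problem. The sketch below is only an illustration of that step under simplified assumptions that are not from the paper: four non-coplanar receivers, noise-free ranges, and made-up coordinates.

```python
import numpy as np

# Hypothetical microphone positions (mm) and a spark-gap location to recover.
mics = np.array([[0.0, 0.0, 0.0], [500.0, 0.0, 0.0],
                 [0.0, 500.0, 0.0], [0.0, 0.0, 500.0]])
p_true = np.array([180.0, 240.0, 130.0])
ranges = np.linalg.norm(mics - p_true, axis=1)   # distances derived from transit times

def multilaterate(mics, ranges):
    """Linearized least-squares position estimate from range measurements."""
    A = 2.0 * (mics[1:] - mics[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(mics[1:] ** 2, axis=1) - np.sum(mics[0] ** 2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

print(multilaterate(mics, ranges))   # approximately [180, 240, 130]
```

Repeating this for each emitter attached to the microscope gives the rigid pose needed to reformat and project the CT data.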
This reformatted information is then projected into the optics of the operating microscope using a miniature cathode ray tube and a beam splitter. The operating surgeon sees the CT information (such as a tumor boundary) superimposed upon the operating field in proper position, orientation, and scale. --- paper_title: Navigated three dimensional beta probe for optimal cancer resection paper_content: In minimally invasive tumor resection, the desirable goal is to perform a minimal but complete removal of cancerous cells. In the last decades interventional nuclear medicine probes supported the detection of remaining tumor cells. However, scanning the patient with an intraoperative probe and applying the treatment are not done simultaneously. The main contribution of this work is to extend the one dimensional signal of a beta-probe to a four dimensional signal including the spatial information of the distal end of the probe. We generate a color encoded surface map of the scanned activity and guide any tracked surgical instrument back to the regions with measured activity. For navigation, we implement an augmented reality visualization that superimposes the acquired surface on a visual image of the real anatomy. Alternatively, a simulated beta-probe count rate in the tip of a tracked therapeutic instrument is simulated showing the count number and coding it as an acoustic signal. Preliminary tests were performed showing the feasibility of the new designed system and the viability of such a three dimensional intraoperative molecular imaging modality. --- paper_title: Hybrid navigation interface for orthopedic and trauma surgery paper_content: Several visualization methods for intraoperative navigation systems were proposed in the past. In standard slice based navigation, three dimensional imaging data is visualized on a two dimensional user interface in the surgery room. Another technology is the in-situ visualization i.e. the superimposition of imaging data directly into the view of the surgeon, spatially registered with the patient. Thus, the three dimensional information is represented on a three dimensional interface. We created a hybrid navigation interface combining an augmented reality visualization system, which is based on a stereoscopic head mounted display, with a standard two dimensional navigation interface. Using an experimental setup, trauma surgeons performed a drilling task using the standard slice based navigation system, different visualization modes of an augmented reality system, and the combination of both. The integration of a standard slice based navigation interface into an augmented reality visualization overcomes the shortcomings of both systems. --- paper_title: Integration of stereoscopic DSA and 3D MRI for image-guided neurosurgery paper_content: Abstract We demonstrate the feasibility and utility of using anatomical/vascular correlation in image-guided surgery, by interfacing a PC-based stereoscopic Digital Subtraction Angiography (DSA) analysis system to a three-dimensional (3D) image based surgical workstation that has been modified to allow presentation of stereoscopic images. Numerical values representing the position and angulation of a hand-held probe are transmitted to both systems simultaneously, enabling the probe to be visualized stereoscopically in both anatomical and vascular images during the surgical procedure. 
The integration of the patient's vascular and anatomical data in this way provides the surgeon with a complete overview of brain structures through which he is passing the electrode-guiding cannulas, enabling him to avoid critical vessels en route to the targets. --- paper_title: New Visualization Techniques for in Utero Surgery: Amnioscopy with a Three-Dimensional Head-Mounted Display and a Computer-Controlled Endoscope paper_content: Endoscopic fetal surgery may reduce preterm labor associated with open hysterotomy but is partially limited by current visualization technology. We investigated a three-dimensional (3D) imaging system coupled to a head-mounted display (3D-HMD) and also employed a computer-controlled zoom endoscope for noninsufflated amnioscopy. Pregnant sheep were prepared in aseptic fashion for general anesthesia. Uterine access was obtained following maternal laparoscopy. A 10-mm zoom endoscope (Vista Medical Technologies, Carlsbad, CA) was used to examine the fetus and uterine contents. Fetal limbs were exteriorized for microsurgery. A new system (Vista Medical Technologies) was attached to an operative microscope, permitting projection of a 3D image via an HMD. The fetus and umbilical cord were inspected using the zoom endoscope, which changes the depth of focus under computer control. Basic manipulations of the fetus and cord were easily completed. Real-time 3D fetal imaging was accomplished. The added depth... --- paper_title: Task performance in endoscopic surgery is influenced by location of the image display. paper_content: Objective: To investigate the influence of image display location on endoscopic task performance in endoscopic surgery. Summary Background Data: The image display system is the only visual interface between the surgeon or interventionist and the operative field. Several factors influence the correct --- paper_title: Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization paper_content: We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios.
For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality. --- paper_title: Recent Advances in Augmented Reality paper_content: In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer one to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies. --- paper_title: Determination of ventilatory liver movement via radiographic evaluation of diaphragm position. paper_content: PURPOSE: To determine the accuracy of estimation of liver movement inferred by observing diaphragm excursion on radiographic images. METHODS AND MATERIALS: Eight patients with focal liver cancer had platinum embolization microcoils implanted in their livers during catheterization of the hepatic artery for delivery of regional chemotherapy. These patients underwent fluoroscopy, during which normal breathing movement was recorded on videotape. Movies of breathing movement were digitized, and the relative projected positions of the diaphragm and coils were recorded. For 6 patients, daily radiographs were also acquired during treatment. Retrospective measurements of coil position were taken after the diaphragm was aligned with the superior portion of the liver on digitally reconstructed radiographs. RESULTS: Coil movement of 4.9 to 30.4 mm was observed during normal breathing. Diaphragm position tracked inferior-superior coil displacement accurately (population sigma 1.04 mm) throughout the breathing cycle. The range of coil movement was predicted by the range of diaphragm movement with an accuracy of 2.09 mm (sigma). The maximum error observed measuring coil movement using diaphragm position was 3.8 mm for a coil 9.8 cm inferior to the diaphragm. However, the distance of the coil from the top of the diaphragm did not correlate significantly with the error in predicting liver excursion. Analysis of daily radiographs showed that the error in predicting coil position using the diaphragm as an alignment landmark was 1.8 mm (sigma) in the inferior-superior direction and 2.2 mm in the left-right direction, similar in magnitude to the inherent uncertainty in alignment. CONCLUSIONS: This study demonstrated that the range of ventilatory movement of different locations within the liver is predicted by diaphragm position to an accuracy that matches or exceeds existing systems for ventilatory tracking. This suggests that the diaphragm is an acceptable anatomic landmark for radiographic estimation of liver movement in anterior-posterior projections for most patients.
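Tying back to the soft-tissue navigation entry above, in which the endoscope pose is computed directly from navigation aids seen in the camera image: the core 2D-3D step is a standard perspective-n-point problem. The following sketch recovers a pose with OpenCV's solvePnP from synthetic detections; the aid coordinates, intrinsics, and pose are invented, and a real system must additionally cope with detection noise and outliers.

```python
import numpy as np
import cv2

# Hypothetical 3D positions (mm) of needle-shaped navigation aids in the planning frame.
aids_3d = np.array([[0, 0, 0], [40, 0, 5], [0, 35, 10], [45, 40, 0],
                    [20, 15, 25], [60, 10, 15], [10, 55, 30]], dtype=np.float32)
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

# Synthesize their projections in the endoscope image for a known (made-up) pose.
rvec_true = np.array([0.10, -0.30, 0.05])
tvec_true = np.array([-10.0, 5.0, 120.0])
detections, _ = cv2.projectPoints(aids_3d, rvec_true, tvec_true, K, None)

# Recover the camera pose from the 2D-3D correspondences (perspective-n-point).
ok, rvec_est, tvec_est = cv2.solvePnP(aids_3d, detections, K, None)
print(ok, rvec_est.ravel(), tvec_est.ravel())   # should reproduce the simulated pose
```

Once the pose is known, planned targets can be projected into the live video exactly as in the augmentation systems described throughout this section.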
--- paper_title: Scale-Invariant Registration of Monocular Endoscopic Images to CT-Scans for Sinus Surgery paper_content: We present a scale-invariant registration method for 3D structures reconstructed from a monocular endoscopic camera to pre-operative CT-scans. The presented approach is based on a previously presented method [2] for reconstruction of a scaled 3D model of the environment from unknown camera motion. We use this scaleless reconstruction as input to a PCA-based algorithm that recovers the scale and pose parameters of the camera in the coordinate frame of the CT scan. The result is used in an ICP registration method to refine the registration estimates. --- paper_title: Automatic Patient Registration for Port Placement in Minimally Invasive Endoscopic Surgery paper_content: Optimal port placement is a delicate issue in minimally invasive endoscopic surgery, particularly in robotically assisted surgery. A good choice of the instruments' and endoscope's ports can avoid timeconsuming consecutive new port placement. We present a novel method to intuitively and precisely plan the port placement. The patient is registered to its pre-operative CT by just moving the endoscope around fiducials, which are attached to the patient's thorax and are visible in its CT. Their 3D positions are automatically reconstructed. Without prior time-consuming segmentation, the pre-operative CT volume is directly rendered with respect to the endoscope or instruments. This enables the simulation of a camera flight through the patient's interior along the instruments' axes to easily validate possible ports. --- paper_title: Calibration Requirements and Procedures for a Monitor-Based Augmented Reality System paper_content: Augmented reality entails the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they blend into the real world in terms of alignments, perspectives, illuminations, etc. For practical reasons the information necessary to obtain this realistic blending cannot be known a priori, and cannot be hard wired into a system. Instead a number of calibration procedures are necessary so that the location and parameters of each of the system components are known. We identify the calibration steps necessary to build a computer model of the real world and then, using the monitor based augmented reality system developed at ECRC (GRASP) as an example, we describe each of the calibration processes. These processes determine the internal parameters of our imaging devices (scan converter, frame grabber, and video camera), as well as the geometric transformations that relate all of the physical objects of the system to a known world coordinate system. > --- paper_title: Markerless endoscopic registration and referencing paper_content: Accurate patient registration and referencing is a key element in navigated surgery. Unfortunately all existing methods are either invasive or very time consuming. We propose a fully non-invasive optical approach using a tracked monocular endoscope to reconstruct the surgical scene in 3D using photogrammetric methods. The 3D reconstruction can then be used for matching the pre-operative data to the intra-operative scene. In order to cope with the near real-time requirements for referencing, we use a novel, efficient 3D point management method during 3D model reconstruction. 
The presented prototype system provides a reconstruction accuracy of 0.1 mm and a tracking accuracy of 0.5 mm on phantom data. The ability to cope with real data is demonstrated by cadaver experiments. --- paper_title: Magneto-Optic Tracking of a Flexible Laparoscopic Ultrasound Transducer for Laparoscope Augmentation paper_content: In abdominal surgery, a laparoscopic ultrasound transducer is commonly used to detect lesions such as metastases. The determination and visualization of position and orientation of its flexible tip in relation to the patient or other surgical instruments can be of much help to (novice) surgeons utilizing the transducer intraoperatively. This difficult subject has recently been paid attention to by the scientific community [1, 2, 3, 4, 5, 6]. Electromagnetic tracking systems can be applied to track the flexible tip. However, the magnetic field can be distorted by ferromagnetic material. This paper presents a new method based on optical tracking of the laparoscope and magneto-optic tracking of the transducer, which is able to automatically detect field distortions. This is used for a smooth augmentation of the B-scan images of the transducer directly on the camera images in real time. --- paper_title: Methods for Modeling and Predicting Mechanical Deformations of the Breast Under External Perturbations paper_content: Currently, High Field (1.5T) Superconducting MR imaging does not allow live guidance during needle breast procedures, which allow only to calculate approximately the location of a cancerous tumor in the patient breast before inserting the needle. It can then become relatively uncertain that the tissue specimen removed during the biopsy actually belongs to the lesion of interest. A new method for guiding clinical breast biopsy is presented, based on a deformable finite element model of the breast. The geometry of the model is constructed from MR data, and its mechanical properties are modeled using a non-linear material model. This method allows imaging the breast without or with mild compression before the procedure, then compressing the breast and using the finite element model to predict the tumor's position during the procedure in a total time of less than a half-hour. --- paper_title: The use of active breathing control (ABC) to reduce margin for breathing motion paper_content: Purpose: For tumors in the thorax and abdomen, reducing the treatment margin for organ motion due to breathing reduces the volume of normal tissues that will be irradiated. A higher dose can be delivered to the target, provided that the risk of marginal misses is not increased. To ensure safe margin reduction, we investigated the feasibility of using active breathing control (ABC) to temporarily immobilize the patient's breathing. Treatment planning and delivery can then be performed at identical ABC conditions with minimal margin for breathing motion. Methods and Materials: An ABC apparatus is constructed consisting of 2 pairs of flow monitor and scissor valve, 1 each to control the inspiration and expiration paths to the patient. The patient breathes through a mouth-piece connected to the ABC apparatus. The respiratory signal is processed continuously, using a personal computer that displays the changing lung volume in real-time. After the patient's breathing pattern becomes stable, the operator activates ABC at a preselected phase in the breathing cycle. Both valves are then closed to immobilize breathing motion.
Breathing motion of 12 patients were held with ABC to examine their acceptance of the procedure. The feasibility of applying ABC for treatment was tested in 5 patients by acquiring volumetric scans with a spiral computed tomography (CT) scanner during active breath-hold. Two patients had Hodgkin's disease, 2 had metastatic liver cancer, and 1 had lung cancer. Two intrafraction ABC scans were acquired at the same respiratory phase near the end of normal or deep inspiration. An additional ABC scan near the end of normal expiration was acquired for 2 patients. The ABC scans were also repeated 1 week later for a Hodgkin's patient. In 1 liver patient, ABC scans were acquired at 7 different phases of the breathing cycle to facilitate examination of the liver motion associated with ventilation. Contours of the lungs and livers were outlined when applicable. The variation of the organ positions and volumes for the different scans were quantified and compared. Results: The ABC procedure was well tolerated in the 12 patients. When ABC was applied near the end of normal expiration, the minimal duration of active breath-hold was 15 s for 1 patient with lung cancer, and 20 s or more for all other patients. The duration was greater than 40 s for 2 patients with Hodgkin's disease when ABC was applied during deep inspiration. Scan artifacts associated with normal breathing motion were not observed in the ABC scans. The analysis of the small set of intrafraction scan data indicated that with ABC, the liver volumes were reproducible at about 1%, and lung volumes to within 6%. The excursions of a "center of target" parameter for the livers were less than 1 mm at the same respiratory phase, but were larger than 4 mm at the extremes of the breathing cycle. The inter-fraction scan study indicated that daily setup variation contributed to the uncertainty in assessing the reproducibility of organ immobilization with ABC between treatment fractions. Conclusion: The results were encouraging; ABC provides a simple means to minimize breathing motion. When applied for CT scanning and treatment, the ABC procedure requires no more than standard operation of the CT scanner or the medical accelerator. The ABC scans are void of motion artifacts commonly seen on fast spiral CT scans. When acquired at different points in the breathing cycle, these ABC scans show organ motion in three-dimension (3D) that can be used to enhance treatment planning. Reproducibility of organ immobilization with ABC throughout the course of treatment must be quantified before the procedure can be applied to reduce margin for conformal treatment. --- paper_title: A Survey of Medical Image Registration paper_content: The purpose of this paper is to present a survey of recent (published in 1993 or later) publications concerning medical image registration techniques. These publications will be classified according to a model based on nine salient criteria, the main dichotomy of which is extrinsic versus intrinsic methods. The statistics of the classification show definite trends in the evolving registration techniques, which will be discussed. At this moment, the bulk of interesting intrinsic methods is based on either segmented points or surfaces, or on techniques endeavouring to use the full information content of the images involved. 
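The "full information content" (intensity-based) family of methods in the registration survey above, which also underlies the bronchoscope tracking entries, optimizes a similarity measure over transformation parameters. What follows is a deliberately tiny, self-contained illustration: normalized cross-correlation evaluated over integer 2D translations on synthetic images. Real implementations use richer measures (e.g., mutual information), sub-pixel optimizers, and full 2D/3D geometry.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

# Synthetic 'fixed' image with a bright blob, and a 'moving' copy shifted by (3, -2).
rng = np.random.default_rng(1)
fixed = rng.normal(0, 0.05, (64, 64))
fixed[20:30, 25:40] += 1.0
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))

# Exhaustive search over small integer translations (a toy similarity optimization).
best = max(((ncc(np.roll(moving, (-dy, -dx), axis=(0, 1)), fixed), dy, dx)
            for dy in range(-5, 6) for dx in range(-5, 6)))
print("recovered shift (dy, dx):", best[1], best[2], "ncc:", round(best[0], 3))
```

In the cited 2D/3D systems, the "moving" data are virtual renderings generated from CT, and the search runs over the six rigid pose parameters rather than image translations.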
--- paper_title: Registration of head volume images using implantable fiducial markers paper_content: In this paper, we describe an extrinsic point-based, interactive image-guided neurosurgical system designed at Vanderbilt University as part of a collaborative effort among the departments of neurological surgery, computer science, and biomedical engineering. Multimodal image-to-image and image-to-physical registration is accomplished using implantable markers. Physical space tracking is accomplished with optical triangulation. We investigate the theoretical accuracy of point-based registration using numerical simulations, the experimental accuracy of our system using data obtained with a phantom, and the clinical accuracy of our system using data acquired in a prospective clinical trial by six neurosurgeons at four medical centers from 158 patients undergoing craniotomies to resect cerebral lesions. We can determine the position of our markers with an error of approximately 0.4 mm in x-ray computed tomography (CT) and magnetic resonance (MR) images and 0.3 mm in physical space. The theoretical registration error using four such markers distributed around the head in a configuration that is clinically practical is approximately 0.5 - 0.6 mm. The mean CT-physical registration error for the phantom experiments is 0.5 mm and for the clinical data obtained with rigid head fixation during scanning is 0.7 mm. The mean CT-MR registration error for the clinical data obtained without rigid head fixation during scanning is 1.4 mm, which is the highest mean error that we observed. These theoretical and experimental findings indicate that this system is an accurate navigational aid that can provide real-time feedback to the surgeon about anatomical structures encountered in the surgical field. --- paper_title: Registration-free laparoscope augmentation for intra-operative liver resection planning paper_content: In recent years, an increasing number of liver tumor indications were treated by minimally invasive laparoscopic resection. Besides the restricted view, a major issue in laparoscopic liver resection is the enhanced visualization of (hidden) vessels, which supply the tumorous liver segment and thus need to be divided prior to the resection. To navigate the surgeon to these vessels, pre-operative abdominal imaging data can hardly be used due to intraoperative organ deformations mainly caused by appliance of carbon dioxide pneumoperitoneum and respiratory motion. While regular respiratory motion can be gated and synchronized intra-operatively, motion caused by pneumoperitoneum is individual for every patient and difficult to estimate. Therefore, we propose to use an optically tracked mobile C-arm providing cone-beam CT imaging capability intraoperatively. The C-arm is able to visualize soft tissue by means of its new flat panel detector and is calibrated offline to relate its current position and orientation to the coordinate system of a reconstructed volume. Also the laparoscope is optically tracked and calibrated offline, so both laparoscope and C-arm are registered in the same tracking coordinate system. Intra-operatively, after patient positioning, port placement, and carbon dioxide insufflation, the liver vessels are contrasted and scanned during patient exhalation. Immediately, a three-dimensional volume is reconstructed.
Without any further need for patient registration, the volume can be directly augmented on the live laparoscope video, visualizing the contrasted vessels. This augmentation provides the surgeon with advanced visual aid for the localization of veins, arteries, and bile ducts to be divided or sealed. --- paper_title: An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization paper_content: There is a need for frameless guidance systems to help neurosurgeons to plan the exact location of a craniotomy, to define the margins of tumors and to precisely identify locations of neighboring critical structures. We have developed an automatic technique for registering clinical data, such as segmented MRI or CT reconstructions, with the patient's head on the operating table. A second method calibrates the position of a video camera relative to the patient. The combination allows a visual mix of live video of the patient with the segmented 3D MRI or CT model, enabling enhanced reality techniques for planning and guiding neurosurgical procedures, and to interactively view extracranial or intracranial structures non-intrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures and clinical studies involving change detection over time sequences of images. --- paper_title: Endoscope-based hybrid navigation system for minimally invasive ventral spine surgeries. paper_content: The availability of high-resolution, magnified, and relatively noise-free endoscopic images in a small workspace, 4-10 cm from the endoscope tip, opens up the possibility of using the endoscope as a tracking tool. We are developing a hybrid navigation system in which image-analysis-based 2D-3D tracking is combined with optoelectronic tracking (Optotrak) for computer-assisted navigation in laparoscopic ventral spine surgeries. Initial results are encouraging and confirm the ability of the endoscope to serve as a tracking tool in surgical navigation where sub-millimetric accuracy is mandatory. --- paper_title: Intraoperative Laparoscope Augmentation for Port Placement and Resection Planning in Minimally Invasive Liver Resection paper_content: In recent years, an increasing number of liver tumor indications were treated by minimally invasive laparoscopic resection. Besides the restricted view, two major intraoperative issues in laparoscopic liver resection are the optimal planning of ports as well as the enhanced visualization of (hidden) vessels, which supply the tumorous liver segment and thus need to be divided (e.g., clipped) prior to the resection. We propose an intuitive and precise method to plan the placement of ports. Preoperatively, self-adhesive fiducials are affixed to the patient's skin and a computed tomography (CT) data set is acquired while contrasting the liver vessels. Immediately prior to the intervention, the laparoscope is moved around these fiducials, which are automatically reconstructed to register the patient to its preoperative imaging data set. This enables the simulation of a camera flight through the patient's interior along the laparoscope's or instruments' axes to easily validate potential ports. Intraoperatively, surgeons need to update their surgical planning based on actual patient data after organ deformations mainly caused by application of carbon dioxide pneumoperitoneum. Therefore, preoperative imaging data can hardly be used.
Instead, we propose to use an optically tracked mobile C-arm providing cone-beam CT imaging capability intraoperatively. After patient positioning, port placement, and carbon dioxide insufflation, the liver vessels are contrasted and a 3-D volume is reconstructed during patient exhalation. Without any further need for patient registration, the reconstructed volume can be directly augmented on the live laparoscope video, since prior calibration enables both the volume and the laparoscope to be positioned and oriented in the tracking coordinate frame. The augmentation provides the surgeon with advanced visual aid for the localization of veins, arteries, and bile ducts to be divided or sealed. --- paper_title: 3D Ultrasound System Using a Magneto-optic Hybrid Tracker for Augmented Reality Visualization in Laparoscopic Liver Surgery paper_content: A three-dimensional ultrasound (3D-US) system suitable for laparoscopic surgery that uses a novel magneto-optic hybrid tracker configuration. Our aim is to integrate 3D-US into a laparoscopic AR system. A 5D miniature magnetic tracker is combined with a 6D optical tracker outside the body to perform 6D tracking of a flexible US probe tip in the abdominal cavity. 6D tracking parameters at the tip are obtained by combining the 5D parameters at the tip inside the body, the 6D parameters at the US probe handle outside the body, and the restriction of the tip motion relative to the handle. The system was evaluated in comparison with a conventional 3D ultrasound system. Although the accuracy of the proposed system was somewhat inferior to that of the conventional one, both the accuracy and sweet spot area were found to be acceptable for clinical use. --- paper_title: 1 Navigation in Endoscopic Soft Tissue Surgery- Perspectives and Limitations paper_content: Despite rapid developments in the research areas of medical imaging, medical image processing, and robotics, the use of computer assistance in surgical routine is still limited to diagnostics, surgical planning, and interventions on mostly rigid structures. In order to establish a computer-aided workflow from diagnosis to surgical treatment and follow-up, several proposals for computer-assisted soft tissue interventions have been made in recent years. By means of different pre- and intraoperative information sources, such as surgical planning, intraoperative imaging, and tracking devices, surgical navigation systems aim to support surgeons in localizing anatomical targets, observing critical structures, and sparing healthy tissue. Current research in particular addresses the problem of organ shift and tissue deformation, and obstacles in communication between navigation system and surgeon. In this paper, we review computer-assisted navigation systems for soft tissue surgery. We concentrate on approaches that can be applied in endoscopic thoracic and abdominal surgery, because endoscopic surgery has special needs for image guidance due to limitations in perception. Furthermore, this paper informs the reader about new trends and technologies in the area of computer-assisted surgery. Finally, a balancing of the key challenges and possible benefits of endoscopic navigation refines the perspectives of this increasingly important discipline of computer-aided medical procedures. 
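The registration-free augmentation described in the laparoscope/C-arm abstracts above rests on chaining offline calibrations with live tracking poses. The following is an illustrative sketch, not code from the cited systems; the frame names, the intrinsic matrix K, and the hand-eye calibration are hypothetical placeholders standing in for whatever a concrete setup provides.

```python
# Illustrative sketch: map a point from a tracked C-arm reconstruction into the
# pixel coordinates of a tracked laparoscope, using only calibration and tracking
# data (no intra-operative patient registration). All frame names are assumptions.
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid-body transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def project_volume_point(p_vol, T_world_cam, T_world_carm, T_carm_vol, K):
    """Project a 3-D point given in volume coordinates onto the laparoscope image.

    T_world_cam  : tracked pose of the laparoscope camera (marker body + hand-eye calibration)
    T_world_carm : tracked pose of the C-arm
    T_carm_vol   : offline calibration relating the C-arm pose to its reconstructed volume
    K            : 3x3 intrinsics of the (distortion-corrected) laparoscope
    """
    T_cam_vol = invert_rigid(T_world_cam) @ T_world_carm @ T_carm_vol
    p_cam = T_cam_vol @ np.append(p_vol, 1.0)   # point in camera coordinates
    u, v, w = K @ p_cam[:3]                     # pinhole projection
    return np.array([u / w, v / w])
```

Because every factor in the chain comes from tracking or from one-time calibration, no separate patient registration step is required, which is the point these abstracts emphasize.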
--- paper_title: Registration of physical space to laparoscopic image space for use in minimally invasive hepatic surgery paper_content: While laparoscopes are used for numerous minimally invasive (MI) procedures, MI liver resection and ablative surgery is infrequently performed. The paucity of cases is due to the restriction of the field of view by the laparoscope and the difficulty in determining tumor location and margins under video guidance. By merging MI surgery with interactive, image-guided surgery (IIGS), the authors hope to overcome localization difficulties present in laparoscopic liver procedures. One key component of any IIGS system is the development of accurate registration techniques to map image space to physical or patient space. This manuscript focuses on the accuracy and analysis of the direct linear transformation (DLT) method to register physical space with laparoscopic image space on both distorted and distortion-corrected video images. Experiments were conducted on a liver-sized plastic phantom affixed with 20 markers at various depths. After localizing the points in both physical and laparoscopic image space, registration accuracy was assessed for different combinations and numbers of control points (n) to determine the quantity necessary to develop a robust registration matrix. For n=11, average target registration error (TRE) was 0.70 ± 0.20 mm. The authors also studied the effects of distortion correction on registration accuracy. For the particular distortion correction method and laparoscope used in the authors' experiments, there was no statistical significance between physical to image registration error for distorted and corrected images. In cases where a minimum number of control points (n=6) are acquired, the DLT is often not stable and the mathematical process can lead to high TRE values. Mathematical filters developed through the analysis of the DLT were used to prospectively eliminate outlier cases where the TRE was high. For n=6, prefilter average TRE was 17.4 ± 153 mm for all trials; when the filters were applied, average TRE decreased to 1.64 ± 1.10 mm for the remaining trials. --- paper_title: Predicting error in rigid-body point-based registration paper_content: Guidance systems designed for neurosurgery, hip surgery, and spine surgery, and for approaches to other anatomy that is relatively rigid can use rigid-body transformations to accomplish image registration. These systems often rely on point-based registration to determine the transformation, and many such systems use attached fiducial markers to establish accurate fiducial points for the registration, the points being established by some fiducial localization process. Accuracy is important to these systems, as is knowledge of the level of that accuracy. An advantage of marker-based systems, particularly those in which the markers are bone-implanted, is that registration error depends only on the fiducial localization error (FLE) and is thus to a large extent independent of the particular object being registered. Thus, it should be possible to predict the clinical accuracy of marker-based systems on the basis of experimental measurements made with phantoms or previous patients. This paper presents two new expressions for estimating registration accuracy of such systems and points out a danger in using a traditional measure of registration accuracy.
The new expressions represent fundamental theoretical results with regard to the relationship between localization error and registration error in rigid-body, point-based registration. Rigid-body, point-based registration is achieved by finding the rigid transformation that minimizes "fiducial registration error" (FRE), which is the root mean square distance between homologous fiducials after registration. Closed form solutions have been known since 1966. The expected value (FRE^2) depends on the number N of fiducials and expected squared value of FLE, (FLE^2), but in 1979 it was shown that (FRE^2) is approximately independent of the fiducial configuration C. The importance of this surprising result seems not yet to have been appreciated by the registration community: Poor registrations caused by poor fiducial configurations may appear to be good due to a small FRE value. A more critical and direct measure of registration error is the "target registration error" (TRE), which is the distance between homologous points other than the centroids of fiducials. Efforts to characterize its behavior have been made since 1989. Published numerical simulations have shown that (TRE^2) is roughly proportional to (FLE^2)/N and, unlike (FRE^2), does depend in some way on C. Thus, FRE, which is often used as feedback to the surgeon using a point-based guidance system, is in fact an unreliable indicator of registration-accuracy. In this work the authors derive approximate expressions for (TRE^2), and for the expected squared alignment error of an individual fiducial. They validate both approximations through numerical simulations. The former expression can be used to provide reliable feedback to the surgeon during surgery and to guide the placement of markers before surgery, or at least to warn the surgeon of potentially dangerous fiducial placements; the latter expression leads to a surprising conclusion: Expected registration accuracy (TRE) is worst near the fiducials that are most closely aligned! This revelation should be of particular concern to surgeons who may at present be relying on fiducial alignment as an indicator of the accuracy of their point-based guidance systems. --- paper_title: An automatic six-degree-of-freedom image registration algorithm for image-guided frameless stereotaxic radiosurgery. paper_content: A frameless radiosurgical treatment system has been developed by coupling an orthogonal pair of real-time x-ray cameras to a robotically manipulated linear accelerator to guide the therapy beam to treatment sites within a patient's cranium. The two cameras observe the position and orientation of the patient's head in the treatment system coordinate frame. An image registration algorithm compares the two real-time radiographs to a corresponding pair of digitally synthesized radiographs derived from a CT study of the patient. The algorithm determines all six degrees of translational and rotational difference between the position of the head in the CT coordinate frame and its position in the treatment room coordinate frame. This allows translation of treatment planning coordinates into treatment room coordinates without rigidly fixing the patient's head position during either the CT scan or treatment. In this paper the image registration algorithm is described and measurements of the precision and speed with which the process can determine the patient's position are reported.
The tests have demonstrated translational uncertainty of 0.5-1.0 mm per axis and rotational uncertainty of 0.6-1.3 degrees per axis, accomplished in approximately 2 s elapsed time. --- paper_title: Depth perception: a major issue in medical AR: evaluation study by twenty surgeons paper_content: The idea of in-situ visualization for surgical procedures has been widely discussed in the community [1,2,3,4]. While the tracking technology offers nowadays a sufficient accuracy and visualization devices have been developed that fit seamlessly into the operational workflow [1,3], one crucial problem remains, which has been discussed already in the first paper on medical augmented reality [4]. Even though the data is presented at the correct place, the physician often perceives the spatial position of the visualization to be closer or further because of virtual/real overlay. This paper describes and evaluates novel visualization techniques that are designed to overcome misleading depth perception of trivially superimposed virtual images on the real view. We have invited 20 surgeons to evaluate seven different visualization techniques using a head mounted display (HMD). The evaluation has been divided into two parts. In the first part, the depth perception of each kind of visualization is evaluated quantitatively. In the second part, the visualizations are evaluated qualitatively in regard to user friendliness and intuitiveness. This evaluation with a relevant number of surgeons using a state-of-the-art system is meant to guide future research and development on medical augmented reality. --- paper_title: Measurement of absolute latency for video see through augmented reality paper_content: Latency is a key property of video see through AR systems since users' performance is strongly related to it. However, there is no standard way of latency measurement of an AR system in the literature. We have created a stable and comparable way of estimating the latency in a video see through AR system. The latency is estimated by encoding the time in the image and decoding the time after camera feedback. We have encoded the time as a translation of a circle in the image. The cross ratio has been used as an image feature that is preserved in a projective transformation. The encoding allows for a simple but accurate way of decoding. We show that this way of encoding has an adequate accuracy for latency measurements. As the method allows for a series of automatic measurements we propose to visualize the measurements in a histogram. This histogram reveals meaningful information about the system other than the mean value and standard deviation of the latency. The method has been tested on four different AR systems that use different camera technology, resolution and frame rates. --- paper_title: Registration Error Analysis for Augmented Reality paper_content: Augmented reality (AR) systems typically use see-through head-mounted displays (STHMDs) to superimpose images of computer-generated objects onto the user's view of the real environment in order to augment it with additional information. The main failing of current AR systems is that the virtual objects displayed in the STHMD appear in the wrong position relative to the real environment. This registration error has many causes: system delay, tracker error, calibration error, optical distortion, and misalignment of the model, to name only a few.
Although some work has been done in the area of system calibration and error correction, very little work has been done on characterizing the nature and sensitivity of the errors that cause misregistration in AR systems. This paper presents the main results of an end-to-end error analysis of an optical STHMD-based tool for surgery planning. The analysis was done with a mathematical model of the system and the main results were checked by taking measurements on a real system under controlled circumstances. The model makes it possible to analyze the sensitivity of the system-registration error to errors in each part of the system. The major results of the analysis are: 1 Even for moderate head velocities, system delay causes more registration error than all other sources combined; 2 eye tracking is probably not necessary; 3 tracker error is a significant problem both in head tracking and in system calibration; 4 the World or reference coordinate system adds error and should be omitted when possible; 5 computational correction of optical distortion may introduce more delay-induced registration error than the distortion error it corrects, and 6 there are many small error sources that will make submillimeter registration almost impossible in an optical STHMD system without feedback. Although this model was developed for optical STHMDs for surgical planning, many of the results apply to other HMDs as well. --- paper_title: Augmented workspace: designing an AR testbed paper_content: We have implemented a tabletop setup to explore augmented reality (AR) visualization. We call this setup an "augmented workspace". The user sits at the table and performs a manual task, guided by computer graphics overlaid on to his view. The setup serves as a testbed for developing the technology and for studying visual perception issues. The user wears a custom video see-through head mounted display (HMD). Two color video cameras attached to the HMD provide a stereo view of the scene, and a third video camera is added for tracking. The system runs at the full 30-Hz video frame rate with a latency of about 0.1 s, generating a stable augmentation with no apparent jitter visible in the composite images. Two SGI Visual Workstations provide the computing power for the system. In this paper, we describe the augmented workspace system in more detail and discuss several design issues. --- paper_title: Reaching for objects in VR displays: lag and frame rate paper_content: This article reports the results from three experimental studies of reaching behavior in a head-coupled stereo display system with a hand-tracking subsystem for object selection. It is found that lag in the head-tracking system is relatively unimportant in predicting performance, whereas lag in the hand-tracking system is critical. The effect of hand lag can be modeled by means of a variation on Fitts' Law with the measured system lag introduced as a multiplicative variable to the Fitts' Law index of difficulty. This means that relatively small lags can cause considerable degradation in performance if the targets are small. Another finding is that errors are higher for movement in and out of the screen, as compared to movements in the plane of the screen, and there is a small (10%) time penalty for movement in the Z direction in all three experiments. 
Low frame rates cause a degradation in performance; however, this can be attributed to the lag which is caused by low frame rates, particularly if double buffering is used combined with early sampling of the hand-tracking device. --- paper_title: Analysis of head pose accuracy in augmented reality paper_content: A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system, incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone. --- paper_title: An Accuracy Certified Augmented Reality System for Therapy Guidance paper_content: Our purpose is to provide an augmented reality system for Radio-Frequency guidance that could superimpose a 3D model of the liver, its vessels and tumors (reconstructed from CT images) on external video images of the patient. In this paper, we point out that clinical usability not only need the best affordable registration accuracy, but also a certification that the required accuracy is met, since clinical conditions change from one intervention to the other. Beginning by addressing accuracy performances, we show that a 3D/2D registration based on radio-opaque fiducials is more adapted to our application constraints than other methods. Then, we outline a lack in their statistical assumptions which leads us to the derivation of a new extended 3D/2D criterion. Careful validation experiments on real data show that an accuracy of 2 mm can be achieved in clinically relevant conditions, and that our new criterion is up to 9% more accurate, while keeping a computation time compatible with real-time at 20 to 40 Hz. --- paper_title: Method for estimating dynamic EM tracking accuracy of surgical navigation tools paper_content: Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. 
Many high-accuracy measurement devices can affect the EM measurements being validated. EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included. --- paper_title: Estimating and adapting to registration errors in augmented reality systems paper_content: All augmented reality (AR) systems must deal with registration errors. While most AR systems attempt to minimize registration errors through careful calibration, registration errors can never be completely eliminated in any realistic system. In this paper, we describe a robust and efficient statistical method for estimating registration errors. Our method generates probabilistic error estimates for points in the world, in either 3D world coordinates or 2D screen coordinates. We present a number of examples illustrating how registration error estimates can be used in AR interfaces, and describe a method for estimating registration errors of objects based on the expansion and contraction of their 2D convex hulls. --- paper_title: Online estimation of the target registration error for n-ocular optical tracking systems paper_content: For current surgical navigation systems optical tracking is state of the art. The accuracy of these tracking systems is currently determined statically for the case of full visibility of all tracking targets. We propose a dynamic determination of the accuracy based on the visibility and geometry of the tracking setup. This real time estimation of accuracy has a multitude of applications. For multiple camera systems it allows reducing line of sight problems and guaranteeing a certain accuracy. The visualization of these accuracies allows surgeons to perform the procedures taking the tracking accuracy into account. It also allows engineers to design tracking setups interactively guaranteeing a certain accuracy. Our model is an extension to the state of the art models of Fitzpatrick et al. [1] and Hoff et al. [2]. We model the error in the camera sensor plane. The error is propagated using the internal camera parameter, camera poses, tracking target poses, target geometry and marker visibility, in order to estimate the final accuracy of the tracked instrument. --- paper_title: Predicting error in rigid-body point-based registration paper_content: Guidance systems designed for neurosurgery, hip surgery, and spine surgery, and for approaches to other anatomy that is relatively rigid can use rigid-body transformations to accomplish image registration. These systems often rely on point-based registration to determine the transformation, and many such systems use attached fiducial markers to establish accurate fiducial points for the registration, the points being established by some fiducial localization process. Accuracy is important to these systems, as is knowledge of the level of that accuracy. An advantage of marker-based systems, particularly those in which the markers are bone-implanted, is that registration error depends only on the fiducial localization error (FLE) and is thus to a large extent independent of the particular object being registered. Thus, it should be possible to predict the clinical accuracy of marker-based systems on the basis of experimental measurements made with phantoms or previous patients.
This paper presents two new expressions for estimating registration accuracy of such systems and points out a danger in using a traditional measure of registration accuracy. The new expressions represent fundamental theoretical results with regard to the relationship between localization error and registration error in rigid-body, point-based registration. Rigid-body, point-based registration is achieved by finding the rigid transformation that minimizes "fiducial registration error" (FRE), which is the root mean square distance between homologous fiducials after registration. Closed form solutions have been known since 1966. The expected value (FRE^2) depends on the number N of fiducials and expected squared value of FLE, (FLE^2), but in 1979 it was shown that (FRE^2) is approximately independent of the fiducial configuration C. The importance of this surprising result seems not yet to have been appreciated by the registration community: Poor registrations caused by poor fiducial configurations may appear to be good due to a small FRE value. A more critical and direct measure of registration error is the "target registration error" (TRE), which is the distance between homologous points other than the centroids of fiducials. Efforts to characterize its behavior have been made since 1989. Published numerical simulations have shown that (TRE^2) is roughly proportional to (FLE^2)/N and, unlike (FRE^2), does depend in some way on C. Thus, FRE, which is often used as feedback to the surgeon using a point-based guidance system, is in fact an unreliable indicator of registration-accuracy. In this work the authors derive approximate expressions for (TRE^2), and for the expected squared alignment error of an individual fiducial. They validate both approximations through numerical simulations. The former expression can be used to provide reliable feedback to the surgeon during surgery and to guide the placement of markers before surgery, or at least to warn the surgeon of potentially dangerous fiducial placements; the latter expression leads to a surprising conclusion: Expected registration accuracy (TRE) is worst near the fiducials that are most closely aligned! This revelation should be of particular concern to surgeons who may at present be relying on fiducial alignment as an indicator of the accuracy of their point-based guidance systems. --- paper_title: Neurosurgical Guidance Using the Stereo Microscope paper_content: Many neuro- and ENT surgical procedures are performed using the operating microscope. Conventionally, the surgeon cannot accurately relate information from preoperative radiological images to the appearance of the surgical field. We propose that the best way to do this is to superimpose image derived data upon the operative scene. We create a model of relevant structures (e.g. tumor volume, blood vessels and nerves) from multimodality preoperative images. By calibrating microscope optics, registering the patient in-theatre to image coordinates, and tracking the microscope intra-operatively, we can generate stereo projections of the 3D model and project them into the microscope eyepieces, allowing critical structures to be overlayed on the operative scene in the correct position. We have completed initial evaluation with a head phantom, and are about to start clinical evaluation on patients. With the head phantom a theoretical accuracy of 4.6mm was calculated and the observed accuracy ranged from 2mm to 5mm.
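To make the FRE/TRE discussion in the abstracts above concrete, here is a minimal numerical sketch of rigid point-based registration together with the approximate expected-TRE expression reported by Fitzpatrick et al. It is not code from the cited papers; the SVD-based alignment is a standard technique, and FLE_rms is an assumed user-supplied estimate of the fiducial localization error.

```python
# Minimal sketch: least-squares rigid point registration plus an approximate
# expected TRE at a chosen target, following the published expression
# TRE^2 ~ (FLE^2 / N) * (1 + (1/3) * sum_k d_k^2 / f_k^2).
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst (SVD method)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

def fre_rms(src, dst, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    residuals = (R @ src.T).T + t - dst
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

def tre_estimate(fiducials, target, FLE_rms):
    """Approximate expected TRE at 'target' for a given fiducial configuration."""
    N = len(fiducials)
    c = fiducials.mean(axis=0)
    X = fiducials - c
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal axes of the configuration
    coords = X @ Vt.T                                  # fiducials in principal-axis coordinates
    tgt = (target - c) @ Vt.T
    ratio = 0.0
    for k in range(3):
        others = [j for j in range(3) if j != k]
        f_k2 = (coords[:, others] ** 2).sum(axis=1).mean()  # mean squared distance of fiducials from axis k
        d_k2 = (tgt[others] ** 2).sum()                     # squared distance of target from axis k
        ratio += d_k2 / f_k2
    return np.sqrt((FLE_rms ** 2 / N) * (1.0 + ratio / 3.0))
```

As the abstract stresses, the FRE returned by such a fit is nearly independent of the marker configuration, whereas the TRE estimate grows with the target's distance from the principal axes of the fiducial set; this is why FRE alone can be a misleading accuracy indicator.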
--- paper_title: Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth paper_content: This chapter discusses three questions: Why are there so many sources of information about layout? How is it that one perceives layout with near-metric accuracy when none of these sources yields metric information about it? Can one not do better, theoretically, in understanding the perception of layout than simply make a list? The answer to the first question begins with Todd's answer. Perceiving layout is extremely important to human beings, so important that it must be redundantly specified so that the redundancy can guard against the failure of any given source or the failure of any of the assumptions on which a given source is based. However, information redundancy is only part of the answer. Different sources of information about layout metrically reinforce and contrast with each other, providing a powerful network of constraints. The answer to the second proceeds from this idea. Through the analysis of depth-threshold functions for nine different sources of information about layout, one can begin to understand how those sources of information sharing the same-shaped functions across distances can help ramify judgments of layout by serving to correct measurement errors in each. Third, on the basis of the analyses and the pattern of functions, it suggests that list making has misled about space and layout. Psychologists and other vision scientists have generally considered layout, space, and distance as a uniform commodity in which observers carry out their day-to-day activities. --- paper_title: Perceptual issues in augmented reality paper_content: Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in which views of the real world are combined in some proportion with views of a virtual environment. Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes that class of displays that consists primarily of a real environment, with graphic enhancements or augmentations. Augmented Virtuality describes that class of displays that enhance the virtual experience by adding elements of the real environment. All Mixed Reality systems are limited in their capability of accurately displaying and controlling all relevant depth cues, and as a result, perceptual biases can interfere with task performance. In this paper we identify and discuss eighteen issues that pertain to Mixed Reality in general, and Augmented Reality in particular. --- paper_title: Surface transparency makes stereo overlays unpredictable: the implications for augmented reality. paper_content: The principle of using stereoscopic displays is to present the viewer with an accurate perception of 3D space. Stereopsis is a powerful binocular cue that will supplement any monocular information in the scene. Our work with an optical augmented reality system has highlighted one scenario where an accurate sense of depth cannot be easily achieved from stereoscopic images. In our augmented reality system we use stereo images of anatomical structures overlaid on the patient for surgical guidance. It is essential that the surgeon can accurately localize the images during surgery.
When the stereo images are presented behind the transparent physical surface the perception of the depth of the images can become unstable and ambiguous, despite good system calibration, registration and tracking. This paper reviews possible reasons for the failure in accurate depth perception and presents some ideas on how this might be corrected for in an optical augmented reality --- paper_title: Depth perception: a major issue in medical AR: evaluation study by twenty surgeons paper_content: The idea of in-situ visualization for surgical procedures has been widely discussed in the community [1,2,3,4]. While the tracking technology offers nowadays a sufficient accuracy and visualization devices have been developed that fit seamlessly into the operational workflow [1,3], one crucial problem remains, which has been discussed already in the first paper on medical augmented reality [4]. Even though the data is presented at the correct place, the physician often perceives the spatial position of the visualization to be closer or further because of virtual/real overlay. This paper describes and evaluates novel visualization techniques that are designed to overcome misleading depth perception of trivially superimposed virtual images on the real view. We have invited 20 surgeons to evaluate seven different visualization techniques using a head mounted display (HMD). The evaluation has been divided into two parts. In the first part, the depth perception of each kind of visualization is evaluated quantitatively. In the second part, the visualizations are evaluated qualitatively in regard to user friendliness and intuitiveness. This evaluation with a relevant number of surgeons using a state-of-the-art system is meant to guide future research and development on medical augmented reality. --- paper_title: pq-space Based Non-Photorealistic Rendering for Augmented Reality paper_content: The increasing use of robotic assisted minimally invasive surgery (MIS) provides an ideal environment for using Augmented Reality (AR) for performing image guided surgery. Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real environment. Traditional overlaid AR approaches generally suffer from a loss of depth perception. This paper presents a new AR method for robotic assisted MIS, which uses a novel pq-space based non-photorealistic rendering technique for providing see-through vision of the embedded virtual object whilst maintaining salient anatomical details of the exposed anatomical surface. Experimental results with both phantom and in vivo lung lobectomy data demonstrate the visual realism achieved for the proposed method and its accuracy in providing high fidelity AR depth perception. --- paper_title: Virtual Eyes Can Rearrange Your Body: Adaptation to Visual Displacement in See-Through, Head-Mounted Displays paper_content: Among the most critical issues in the design of immersive virtual environments are those that deal with the problem of technologically induced intersensory conflict and one of the results, sensorimotor adaptation. An experiment was conducted to support the design of a prototype see-through, head-mounted display (HMD). When wearing video see-through HMDs in augmented reality systems, subjects see the world around them through a pair of head-mounted video cameras.
The study looked at the effects of sensory rearrangement caused by a HMD design that displaced the user's “virtual” eye position forward (165 mm) and above (62 mm) toward the spatial position of the cameras. The position of the cameras creates images of the world that are slightly downward and inward from normal. Measures of hand-eye coordination and speed on a manual pegboard task revealed substantial perceptual costs of the eye displacement initially, but also evidence of adaptation. Upon first wearing the video see-through HMD, subjects' pointing errors increased significantly along the spatial dimensions displaced (the y dimension, above-below the target, and z dimension, in front-behind the target). Speed of performance on the pegboard task decreased by 43% compared to baseline performance. Pointing accuracy improved by approximately 33% as subjects adapted to the sensory rearrangement, but it did not reach baseline performance. When subjects removed the see-through HMD, there was evidence that their hand-eye coordination had been altered. Negative aftereffects were observed in the form of greater errors in pointing accuracy compared to baseline. Although these aftereffects are temporary, the results may have serious practical implications for the use of video see-through HMDs by users (e.g., surgeons) who depend on very accurate hand-eye coordination. --- paper_title: Augmented-reality visualizations guided by cognition: perceptual heuristics for combining visible and obscured information paper_content: One unique feature of mixed and augmented reality (MR/AR) systems is that hidden and occluded objects an be readily visualized. We call this specialized use of MR/AR, obscured information visualization (OIV). In this paper, we describe the beginning of a research program designed to develop such visualizations through the use of principles derived from perceptual psychology and cognitive science. In this paper we surveyed the cognitive science literature as it applies to such visualization tasks, described experimental questions derived from these cognitive principles, and generated general guidelines that can be used in designing future OIV systems (as well improving AR displays more generally). We also report the results from an experiment that utilized a functioning AR-OIV system: we found that in relative depth judgment, subjects reported rendered objects as being in front of real-world objects, except when additional occlusion and motion cues were presented together. --- paper_title: Resolving multiple occluded layers in augmented reality paper_content: A useful function of augmented reality (AR) systems is their ability to visualize occluded infrastructure directly in a user's view of the environment. This is especially important for our application context, which utilizes mobile AR for navigation and other operations in an urban environment. A key problem in the AR field is how to best depict occluded objects in such a way that the viewer can correctly infer the depth relationships between different physical and virtual objects. Showing a single occluded object with no depth context presents an ambiguous picture to the user. But showing all occluded objects in the environments leads to the "Superman's X-ray vision" problem, in which the user sees too much information to make sense of the depth relationships of objects. 
Our efforts differ qualitatively from previous work in AR occlusion, because our application domain involves far-field occluded objects, which are tens of meters distant from the user. Previous work has focused on near-field occluded objects, which are within or just beyond arm's reach, and which use different perceptual cues. We designed and evaluated a number of sets of display attributes. We then conducted a user study to determine which representations best express occlusion relationships among far-field objects. We identify a drawing style and opacity settings that enable the user to accurately interpret three layers of occluded objects, even in the absence of perspective constraints. --- paper_title: Direct volume rendering with shading via three-dimensional textures paper_content: A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tesselation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image. --- paper_title: Hybrid navigation interface for orthopedic and trauma surgery paper_content: Several visualization methods for intraoperative navigation systems were proposed in the past. In standard slice based navigation, three dimensional imaging data is visualized on a two dimensional user interface in the surgery room. Another technology is the in-situ visualization i.e. the superimposition of imaging data directly into the view of the surgeon, spatially registered with the patient. Thus, the three dimensional information is represented on a three dimensional interface. We created a hybrid navigation interface combining an augmented reality visualization system, which is based on a stereoscopic head mounted display, with a standard two dimensional navigation interface. Using an experimental setup, trauma surgeons performed a drilling task using the standard slice based navigation system, different visualization modes of an augmented reality system, and the combination of both. The integration of a standard slice based navigation interface into an augmented reality visualization overcomes the shortcomings of both systems. 
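The slice-stack rendering described in the 3-D texture abstract above reduces, per pixel, to the emission-absorption "over" compositing of classified samples. The following is an illustrative CPU sketch of that operator only; the transfer function is a made-up example and nothing here reproduces the cited Voltx implementation.

```python
# Illustrative sketch: back-to-front "over" compositing of scalar samples along one
# viewing ray, the operation that slice-based (3-D texture) volume renderers
# approximate in hardware when blending a stack of texture slices.
import numpy as np

def composite_ray(samples, transfer_function):
    """Composite scalar samples (ordered front to back) into one RGB pixel."""
    color = np.zeros(3)
    for s in samples[::-1]:                    # iterate back to front
        r, g, b, a = transfer_function(s)      # classification: scalar -> color and opacity
        color = a * np.array([r, g, b]) + (1.0 - a) * color
    return color

# Hypothetical transfer function: map density linearly to gray and opacity.
tf = lambda s: (s, s, s, min(1.0, 0.05 + 0.5 * s))
pixel = composite_ray(np.linspace(0.0, 1.0, 64), tf)
```

Front-to-back variants of the same operator accumulate opacity instead, which is what enables the early ray termination discussed in the GPU acceleration abstract that follows.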
--- paper_title: Marching cubes: A high resolution 3D surface construction algorithm paper_content: We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities. --- paper_title: Acceleration techniques for GPU-based volume rendering paper_content: Nowadays, direct volume rendering via 3D textures has positioned itself as an efficient tool for the display and visual analysis of volumetric scalar fields. It is commonly accepted, that for reasonably sized data sets appropriate quality at interactive rates can be achieved by means of this technique. However, despite these benefits one important issue has received little attention throughout the ongoing discussion of texture based volume rendering: the integration of acceleration techniques to reduce per-fragment operations. In this paper, we address the integration of early ray termination and empty-space skipping into texture based volume rendering on graphical processing units (GPU). Therefore, we describe volume ray-casting on programmable graphics hardware as an alternative to object-order approaches. We exploit the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight. We demonstrate performance gains up to a factor of 3 for typical renditions of volumetric data sets on the ATI 9700 graphics card. --- paper_title: Recovery of surgical workflow without explicit models paper_content: Workflow recovery is crucial for designing context-sensitive service systems in future operating rooms. Abstract knowledge about actions which are being performed is particularly valuable in the OR. This knowledge can be used for many applications such as optimizing the workflow, recovering average workflows for guiding and evaluating training surgeons, automatic report generation and ultimately for monitoring in a context aware operating room. This paper describes a novel way for automatic recovery of the surgical workflow. Our algorithms perform this task without an implicit or explicit model of the surgery. This is achieved by the synchronization of multidimensional state vectors of signals recorded in different operations of the same type. We use an enhanced version of the dynamic time warp algorithm to calculate the temporal registration. The algorithms have been tested on 17 signals of six different surgeries of the same type. The results on this dataset are very promising because the algorithms register the steps in the surgery correctly up to seconds, which is our sampling rate.
Our software visualizes the temporal registration by displaying the videos of different surgeries of the same type with varying duration precisely synchronized to each other. The synchronized videos of one surgery are either slowed down or speeded up in order to show the same steps as the ones presented in the videos of the other surgery. --- paper_title: Liver Surgery Planning Using Virtual Reality paper_content: We have developed LiverPlanner, a virtual liver surgery planning system that uses high-level image analysis algorithms and virtual reality technology to help physicians find the best resection plan for each individual patient. Preliminary user studies of LiverPlanner show that the proposed tools are well accepted by doctors and lead to much shorter planning times --- paper_title: Laparoscopic Virtual Mirror New Interaction Paradigm for Monitor Based Augmented Reality paper_content: A major roadblock for using augmented reality in many medical and industrial applications is the fact that the user cannot take full advantage of the 3D virtual data. This usually requires the user to move the virtual object, which disturbs the real/virtual alignment, or to move his head around the real objects, which is not always possible and/or practical. This problem becomes more dramatic when a single camera is used for monitor based augmentation, such as in augmented laparoscopic surgery. In this paper we introduce an interaction and 3D visualization paradigm, which presents a new solution to this old problem. The interaction paradigm uses an interactive virtual mirror positioned into the augmented scene, which allows easy and complete interactive visualization of 3D virtual data. This paper focuses on the exemplary application of such visualization techniques to laparoscopic interventions. A large number of such interventions aims at regions inside a specific organ, e.g. blood vessels to be clipped for tumor resection. We use high-resolution intra-operative imaging data generated by a mobile C-arm with cone-beam CT imaging capability. Both the C-arm and the laparoscope are optically tracked and registered in a common world coordinate frame. After patient positioning, port placement, and carbon dioxide insufflation, a C-arm volume is reconstructed during patient exhalation and superimposed in real time on the laparoscopic live video without any need for an additional patient registration procedure. To overcome the missing perception of 3D depth and shape when rendering virtual volume data directly on top of the organ's surface view, we introduce the concept of a laparoscopic virtual mirror: A virtual reflection plane within the live laparoscopic video, which is able to visualize a reflected side view of the organ and its interior. This enables the surgeon to observe the 3D structure of, for example, blood vessels by moving the virtual mirror within the augmented monocular view of the laparoscope. --- paper_title: pq-space Based Non-Photorealistic Rendering for Augmented Reality paper_content: The increasing use of robotic assisted minimally invasive surgery (MIS) provides an ideal environment for using Augmented Reality (AR) for performing image guided surgery. Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real environment. Traditional overlaid AR approaches generally suffer from a loss of depth perception. 
This paper presents a new AR method for robotic assisted MIS, which uses a novel pq-space based non-photorealistic rendering technique for providing see-through vision of the embedded virtual object whilst maintaining salient anatomical details of the exposed anatomical surface. Experimental results with both phantom and in vivo lung lobectomy data demonstrate the visual realism achieved for the proposed method and its accuracy in providing high fidelity AR depth perception. --- paper_title: Interactive Focus and Context Visualization for Augmented Reality paper_content: In this article we present interactive focus and context (F+C) visualizations for augmented reality (AR) applications. We demonstrate how F+C visualizations are used to affect the user's perception of hidden objects by presenting contextual information in the area of augmentation. We carefully overlay synthetic data on top of the real world imagery by taking into account the information that is about to be occluded. Furthermore, we present operations to control the amount of augmented information. Additionally, we developed an interaction tool, based on the magic lens technique, which allows for interactive separation of focus from context. We integrated our work into a rendering framework developed on top of the Studierstube augmented reality system. We finally show examples to demonstrate how our work benefits AR. --- paper_title: Laparoscopic Virtual Mirror New Interaction Paradigm for Monitor Based Augmented Reality paper_content: A major roadblock for using augmented reality in many medical and industrial applications is the fact that the user cannot take full advantage of the 3D virtual data. This usually requires the user to move the virtual object, which disturbs the real/virtual alignment, or to move his head around the real objects, which is not always possible and/or practical. This problem becomes more dramatic when a single camera is used for monitor based augmentation, such as in augmented laparoscopic surgery. In this paper we introduce an interaction and 3D visualization paradigm, which presents a new solution to this old problem. The interaction paradigm uses an interactive virtual mirror positioned into the augmented scene, which allows easy and complete interactive visualization of 3D virtual data. This paper focuses on the exemplary application of such visualization techniques to laparoscopic interventions. A large number of such interventions aims at regions inside a specific organ, e.g. blood vessels to be clipped for tumor resection. We use high-resolution intra-operative imaging data generated by a mobile C-arm with cone-beam CT imaging capability. Both the C-arm and the laparoscope are optically tracked and registered in a common world coordinate frame. After patient positioning, port placement, and carbon dioxide insufflation, a C-arm volume is reconstructed during patient exhalation and superimposed in real time on the laparoscopic live video without any need for an additional patient registration procedure. To overcome the missing perception of 3D depth and shape when rendering virtual volume data directly on top of the organ's surface view, we introduce the concept of a laparoscopic virtual mirror: A virtual reflection plane within the live laparoscopic video, which is able to visualize a reflected side view of the organ and its interior. 
This enables the surgeon to observe the 3D structure of, for example, blood vessels by moving the virtual mirror within the augmented monocular view of the laparoscope. ---
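The "virtual mirror" interaction described in the two abstracts above amounts to rendering the scene a second time through a reflection transform attached to the mirror plane. The following is a small sketch of that transform using standard geometry; it is not the cited system's code, and the plane parameters are arbitrary examples.

```python
# Illustrative sketch: 4x4 reflection matrix for a "virtual mirror" plane, which can
# be applied to the scene (or to the virtual camera) to render the mirrored side view.
import numpy as np

def mirror_matrix(plane_point, plane_normal):
    """Homogeneous reflection about the plane through 'plane_point' with 'plane_normal'."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    p = np.asarray(plane_point, dtype=float)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)   # reflect directions
    M[:3, 3] = 2.0 * np.dot(p, n) * n              # account for the plane offset
    return M

# A point 1 unit in front of a mirror at x = 2 is reflected to x = 3.
print(mirror_matrix([2, 0, 0], [1, 0, 0]) @ np.array([1.0, 0.0, 0.0, 1.0]))  # -> [3. 0. 0. 1.]
```

Because a reflection flips handedness, a renderer applying such a matrix typically also inverts its front/back face-winding convention for the mirrored pass.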
Title: Advanced Medical Displays: A Literature Review of Augmented Reality
Section 1: Introduction
Description 1: This section provides an overview of the evolution and development of augmented reality in the medical field, including historical milestones and key technological advancements.
Section 2: Overview of Medical AR Systems and Technologies
Description 2: This section introduces different categories of medical augmented reality systems and technologies, highlighting their limitations and advantages.
Section 3: HMD Based AR System
Description 3: This section discusses head-mounted display (HMD)-based AR systems, including both optical and video see-through designs, their tracking technologies, and applications in medicine.
Section 4: Augmented Optics
Description 4: This section covers augmented optics including operating microscopes and binoculars, focusing on their use in brain surgery, neurosurgical interventions, and other medical procedures.
Section 5: AR Windows
Description 5: This section describes the concept of AR windows, involving semi-transparent mirrors and autostereoscopic screens for augmenting medical images without the need for head-worn displays.
Section 6: Augmented Monitors
Description 6: This section explains how video images can be augmented on ordinary monitors, defining the point of view with tracked video cameras and applying this system in medical settings.
Section 7: Augmented Endoscopes
Description 7: This section explores the integration of augmented reality techniques in endoscopic procedures, addressing calibration, tracking, and visualization challenges.
Section 8: Augmented Medical Imaging Devices
Description 8: This section outlines devices that augment images without a tracking system, including fluoroscopic overlays and tomographic reflection systems, for enhancing visualization in medical imaging.
Section 9: Projections on the Patient
Description 9: This section discusses systems that project augmented data directly onto the patient, detailing the benefits and limitations of such approaches for medical applications.
Section 10: Potential Benefits of AR Visualization
Description 10: This section evaluates the advantages of augmented reality in medical visualization, including image fusion, 3D interaction, stereoscopic visualization, and improved hand-eye coordination.
Section 11: Current Issues
Description 11: This section highlights the current limitations and challenges faced by medical AR systems, such as registration, tracking, calibration, time synchronization, error estimation, and depth perception.
Section 12: Perspective
Description 12: This section provides insights into the future prospects of medical augmented reality, emphasizing the need for seamless integration into clinical workflows and advancements in user interaction and depth perception.
A survey on knotoids, braidoids and their applications
7
--- paper_title: Braids, Links, and Mapping Class Groups. paper_content: The central theme of this study is Artin's braid group and the many ways that the notion of a braid has proved to be important in low-dimensional topology. In Chapter 1 the author is concerned with the concept of a braid as a group of motions of points in a manifold. She studies structural and algebraic properties of the braid groups of two manifolds, and derives systems of defining relations for the braid groups of the plane and sphere. In Chapter 2 she focuses on the connections between the classical braid group and the classical knot problem. After reviewing basic results she proceeds to an exploration of some possible implications of the Garside and Markov theorems. Chapter 3 offers discussion of matrix representations of the free group and of subgroups of the automorphism group of the free group. These ideas come to a focus in the difficult open question of whether Burau's matrix representation of the braid group is faithful. Chapter 4 is a broad view of recent results on the connections between braid groups and mapping class groups of surfaces. Chapter 5 contains a brief discussion of the theory of "plats." Research problems are included in an appendix. --- paper_title: Threading knot diagrams paper_content: Alexander [1] showed that an oriented link K in $S^3$ can always be represented as a closed braid. Later Markov [5] described (without full details) how any two such representations of K are related. In her book [3], Birman gives an extensive description, with a detailed combinatorial proof of both these results. --- paper_title: Representation of links by braids: A new algorithm paper_content: If a is a braid with n components, the closure of a, denoted $\hat{a}$, is constructed by connecting the endpoints at the top level to the bottom endpoints with n standard curves. This procedure yields an oriented link $\hat{a}$ having the same number of crossings as a. A classical result of Alexander [1], [2], [3] states that every oriented link is isotopic to a closed braid $\hat{a}$. In his proof Alexander modifies the diagram of an oriented link by a sequence of elementary operations to obtain a closed braid. During this transformation the "geometry of the picture" is completely changed. In many applications of Alexander's algorithms links with few crossings yield closed braids with a large number of crossings. On the other hand many algebraic invariants of links are first defined on braids. If we wish to compute these invariants for a "small" link L, it will be very useful to have the following principle: --- paper_title: Markov's theorem in 3-manifolds paper_content: In this paper we first give a one-move version of Markov's braid theorem for knot isotopy in $S^3$ that sharpens the classical theorem. Then we give a relative version of Markov's theorem concerning a fixed braided portion in the knot. We also prove an analogue of Markov's theorem for knot isotopy in knot complements. Finally we extend this last result to prove a Markov theorem for links in an arbitrary orientable 3-manifold. --- paper_title: A new proof of Markov's braid theorem paper_content: The purpose of this paper is to introduce a new proof of Markov's braid theorem, in terms of Seifert circles and Reidemeister moves. This means that the proof will be of combinatorial and essentially 2-dimensional nature. One characteristic feature of our approach is that nowhere in the proof will we use or refer to the braid axis.
This allows for greater flexibility in various transformations of the diagrams considered. Other proofs of Markov’s theorem can be found in [2], [3], [4] and [5]. As in Vogel’s paper [6], ”closed braid” will be understood to mean a special kind of diagram of an oriented link: the diagram lives on a 2-dimensional sphere and is supposed to have Seifert circles that are nested in one another forming a single chain of concentric circles with compatible orientations. Let us recall that every diagram may be transformed into a closed braid (this is called Alexander’s theorem, see [1]). One particularly nice way to do it is by using reducing moves of Vogel, see [6]. Such a move is simply a type II Reidemeister move that is performed on two arcs belonging to different Seifert circles with orientations as in figure 1. Reduction moves will be the main tool in our proof of Markov’s theorem. --- paper_title: Knots Related by Knotoids paper_content: AbstractTuraev recently introduced the concept of a knotoid as a particular sort of knotted arc. There are several maps from knotoids to ordinary knots, or knotted circles. The two most natural way... --- paper_title: On Markov's Theorem paper_content: We give a new proof of Markov's classical theorem relating any two closed braid representations of the same knot or link. The proof is based upon ideas in a forthcoming paper by the authors,"Stabilization in the braid groups". The new proof of the classical Markov theorem is used by Nancy Wrinkle in her forthcoming manuscript"The Markov Theorem for transverse knots". --- paper_title: A study of braids in 3-manifolds paper_content: This work provides the topological background and a preliminary study for the analogue of the 2-variable Jones polynomial as an invariant of oriented links in arbitrary 3- manifolds via normalized traces of appropriate algebras, and it is organized as follows: ::: ::: Chapter 1: Motivated by the study of the Jones polynomial, we produce and present a new algorithm for turning oriented link diagrams in S3 into braids. Using this algorithm we then provide a new, short proof of Markov's theorem and its relative version. ::: ::: Chapter 2: The objective of the first part of Chapter 2 is to state and prove an analogue of Markov's theorem for oriented links in arbitrary 3-manifolds. We do this by modifying first our algorithm, so as to produce an analogue of Alexander's theorem for oriented links in arbitrary 3-manifolds. In the second part we show that the study of links (up to isotopy) in a 3-manifold can be restricted to the study of cosets of the braid groups Bn,m, which are subgroups of the usual braid groups Bn+m . ::: ::: Chapter 3: In this chapter we try to use the above topological set-up in a procedure analogous to the way V.F.R. Jones derived his famous link invariant. The analogy amounts to the following: We observe that Bn,1 - the braid group related to the solid torus and to the lens spaces L(p, 1) - is the Artin group of the Coxeter group of Bn-type. This implies the existence of an epimorphism of eEn,1 onto the Hecke algebra of Bn-type. Then we give an analogue of Ocneanu's trace function for the above algebras. This trace, after being properly normalized, yields a HOMFLY-PTtype isotopy invariant for oriented links inside a solid torus. Finally, by forcing a strong condition, we normalize this trace, so as to obtain a link invariant in SI x S2. 
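The Markov-theorem references above all treat braid words up to the two Markov moves, conjugation and (de)stabilization. As a purely illustrative aid (not code from any of the cited papers; the list-of-signed-integers encoding and the function names are assumptions of this sketch), the following Python snippet encodes a braid word as a list of signed integers, with +i and -i standing for the Artin generator sigma_i and its inverse, and applies the two moves:

```python
# Illustrative sketch: braid words as lists of signed integers,
# +i / -i standing for the Artin generators sigma_i / sigma_i^{-1}.
# The two Markov moves generate (together with the braid relations)
# the equivalence under which closed braids represent the same link.

def conjugate(word, i):
    """First Markov move: replace beta by sigma_i * beta * sigma_i^{-1}.
    The closure of the braid is unchanged."""
    return [i] + list(word) + [-i]

def stabilize(word, n, positive=True):
    """Second Markov move: pass from B_n to B_{n+1} by appending
    sigma_n (positive stabilization) or sigma_n^{-1}."""
    return list(word) + [n if positive else -n]

def destabilize(word, n):
    """Inverse move: strip a final sigma_n^{+-1} provided the generator
    sigma_n occurs nowhere else in the word."""
    if word and abs(word[-1]) == n and all(abs(g) != n for g in word[:-1]):
        return word[:-1]
    return list(word)

if __name__ == "__main__":
    trefoil = [1, 1, 1]          # sigma_1^3 in B_2; its closure is the trefoil
    w = stabilize(trefoil, 2)    # same link, now a word in B_3
    w = conjugate(w, 1)          # still the same link
    print(w, destabilize(stabilize(trefoil, 2), 2) == trefoil)
```

By Markov's theorem, two braid words have isotopic closures exactly when they are related by a finite sequence of such moves together with the braid-group relations.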
--- paper_title: The minimal number of Seifert circles equals the braid index of a link paper_content: It is shown that every diagram of an oriented link can be transformed into a closed braid diagram without increasing the number of Seifert circles; consequently, the minimal number of Seifert circles taken over all diagrams of a link equals the braid index of the link. --- paper_title: Parity in Knotoids paper_content: This paper investigates the parity concept in knotoids in $S^2$ and in $\mathbb{R}^2$ in relation with virtual knots. We show that the virtual closure map is not surjective and give specific examples of virtual knots that are not in the image. We introduce a planar version of the parity bracket polynomial for knotoids in $\mathbb{R}^2$. By using the Nikonov/Manturov theorem on minimal diagrams of virtual knots we prove a conjecture of Turaev showing that minimal diagrams of knot-type knotoids have zero height. --- paper_title: New Invariants of Knotoids paper_content: Abstract In this paper we construct new invariants of knotoids including the odd writhe, the parity bracket polynomial, the affine index polynomial and the arrow polynomial, and give an introduction to the theory of virtual knotoids. The invariants in this paper are defined for both classical and virtual knotoids in analogy to corresponding invariants of virtual knots. We show that knotoids in S 2 have symmetric affine index polynomials. The affine index polynomial and the arrow polynomial provide bounds on the height (minimum crossing distance between endpoints) of a knotoid in S 2 . --- paper_title: Parity in Knotoids paper_content: This paper investigates the parity concept in knotoids in $S^2$ and in $\mathbb{R}^2$ in relation with virtual knots. We show that the virtual closure map is not surjective and give specific examples of virtual knots that are not in the image. We introduce a planar version of the parity bracket polynomial for knotoids in $\mathbb{R}^2$. By using the Nikonov/Manturov theorem on minimal diagrams of virtual knots we prove a conjecture of Turaev showing that minimal diagrams of knot-type knotoids have zero height. --- paper_title: A spanning set and potential basis of the mixed Hecke algebra on two fixed strands paper_content: The mixed braid groups \(B_{2,n}, \ n \in \mathbb {N}\), with two fixed strands and n moving ones, are known to be related to the knot theory of certain families of 3-manifolds. In this paper, we define the mixed Hecke algebra \(\mathrm {H}_{2,n}(q)\) as the quotient of the group algebra \({\mathbb Z}\, [q^{\pm 1}] \, B_{2,n}\) over the quadratic relations of the classical Iwahori–Hecke algebra for the braiding generators. We further provide a potential basis \(\Lambda _n\) for \(\mathrm {H}_{2,n}(q)\), which we prove is a spanning set for the \(\mathbb {Z}[q^{\pm 1}]\)-additive structure of this algebra.
The sets \(\Lambda _n,\ n \in \mathbb {Z}\) appear to be good candidates for an inductive basis suitable for the construction of Homflypt-type invariants for knots and links in the above 3-manifolds. --- paper_title: New Invariants of Knotoids paper_content: Abstract In this paper we construct new invariants of knotoids including the odd writhe, the parity bracket polynomial, the affine index polynomial and the arrow polynomial, and give an introduction to the theory of virtual knotoids. The invariants in this paper are defined for both classical and virtual knotoids in analogy to corresponding invariants of virtual knots. We show that knotoids in S 2 have symmetric affine index polynomials. The affine index polynomial and the arrow polynomial provide bounds on the height (minimum crossing distance between endpoints) of a knotoid in S 2 . --- paper_title: Topological Models for Open-Knotted Protein Chains Using the Concepts of Knotoids and Bonded Knotoids paper_content: In this paper we introduce a method that offers a detailed overview of the entanglement of an open protein chain. Further, we present a purely topological model for classifying open protein chains by also taking into account any bridge involving the backbone. To this end, we implemented the concepts of planar knotoids and bonded knotoids. We show that the planar knotoids technique provides more refined information regarding the knottedness of a protein when compared to established methods in the literature. Moreover, we demonstrate that our topological model for bonded proteins is robust enough to distinguish all types of lassos in proteins. --- paper_title: Parity in knot theory paper_content: In this work we study knot theories with a parity property for crossings: every crossing is declared to be even or odd according to a certain preassigned rule. If this rule satisfies a set of simple axioms related to the Reidemeister moves, then certain simple invariants solving the minimality problem can be defined, and invariant maps on the set of knots can be constructed. The most important example of a knot theory with parity is the theory of virtual knots. Using the parity property arising from Gauss diagrams we show that even a gross simplification of the theory of virtual knots, namely, the theory of free knots, admits simple and highly nontrivial invariants. This gives a solution to a problem of Turaev, who conjectured that all free knots are trivial. In this work we show that free knots are generally not invertible, and provide invariants which detect the invertibility of free knots. The passage to ordinary virtual knots allows us to strengthen known invariants (such as the Kauffman bracket) using parity considerations. We also discuss other examples of knot theories with parity. Bibliography: 27 items. --- paper_title: New Invariants of Knotoids paper_content: Abstract In this paper we construct new invariants of knotoids including the odd writhe, the parity bracket polynomial, the affine index polynomial and the arrow polynomial, and give an introduction to the theory of virtual knotoids. The invariants in this paper are defined for both classical and virtual knotoids in analogy to corresponding invariants of virtual knots. We show that knotoids in S 2 have symmetric affine index polynomials. The affine index polynomial and the arrow polynomial provide bounds on the height (minimum crossing distance between endpoints) of a knotoid in S 2 . 
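The parity-based invariants described in the preceding abstracts (Gaussian parity, the odd writhe, the parity bracket) admit a very small computational illustration. The sketch below is an illustrative toy, not an implementation from the cited papers; it takes a signed Gauss code of a virtual knot diagram, declares a crossing odd when an odd number of crossing labels lies between its two occurrences, and returns the odd writhe as the signed count of odd crossings:

```python
# Toy computation of crossing parity and the odd writhe from a signed
# Gauss code.  A diagram with n crossings is given as:
#   sequence: the order of crossing labels along the curve
#             (each label appears exactly twice),
#   signs:    a dict sending each crossing label to its sign (+1 / -1).
# A crossing is "odd" if an odd number of labels lies strictly between
# its two occurrences; the odd writhe is the signed count of odd crossings.

def parity(sequence, label):
    i, j = [k for k, x in enumerate(sequence) if x == label]
    return (j - i - 1) % 2          # 1 = odd crossing, 0 = even crossing

def odd_writhe(sequence, signs):
    return sum(sign for label, sign in signs.items()
               if parity(sequence, label) == 1)

if __name__ == "__main__":
    # Gauss sequence of the virtual trefoil (2 crossings), both positive.
    seq = [1, 2, 1, 2]
    sgn = {1: +1, 2: +1}
    print(odd_writhe(seq, sgn))      # 2
```

For a classical knot diagram every crossing is even, so a nonzero odd writhe (as for the virtual trefoil above) certifies that the diagram is not classical.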
--- paper_title: Parity in Knotoids paper_content: This paper investigates the parity concept in knotoids in $S^2$ and in $\mathbb{R}^2$ in relation with virtual knots. We show that the virtual closure map is not surjective and give specific examples of virtual knots that are not in the image. We introduce a planar version of the parity bracket polynomial for knotoids in $\mathbb{R}^2$. By using the Nikonov/Manturov theorem on minimal diagrams of virtual knots we prove a conjecture of Turaev showing that minimal diagrams of knot-type knotoids have zero height. --- paper_title: Parity in Knotoids paper_content: This paper investigates the parity concept in knotoids in $S^2$ and in $\mathbb{R}^2$ in relation with virtual knots. We show that the virtual closure map is not surjective and give specific examples of virtual knots that are not in the image. We introduce a planar version of the parity bracket polynomial for knotoids in $\mathbb{R}^2$. By using the Nikonov/Manturov theorem on minimal diagrams of virtual knots we prove a conjecture of Turaev showing that minimal diagrams of knot-type knotoids have zero height. --- paper_title: Markov’s theorem in 3-manifolds paper_content: Abstract In this paper we first give a one-move version of Markov's braid theorem for knot isotopy in S 3 that sharpens the classical theorem. Then we give a relative version of Markov's theorem concerning a fixed braided portion in the knot. We also prove an analogue of Markov's theorem for knot isotopy in knot complements. Finally we extend this last result to prove a Markov theorem for links in an arbitrary orientable 3-manifold. --- paper_title: A study of braids in 3-manifolds paper_content: This work provides the topological background and a preliminary study for the analogue of the 2-variable Jones polynomial as an invariant of oriented links in arbitrary 3- manifolds via normalized traces of appropriate algebras, and it is organized as follows: ::: ::: Chapter 1: Motivated by the study of the Jones polynomial, we produce and present a new algorithm for turning oriented link diagrams in S3 into braids. Using this algorithm we then provide a new, short proof of Markov's theorem and its relative version. ::: ::: Chapter 2: The objective of the first part of Chapter 2 is to state and prove an analogue of Markov's theorem for oriented links in arbitrary 3-manifolds. We do this by modifying first our algorithm, so as to produce an analogue of Alexander's theorem for oriented links in arbitrary 3-manifolds. In the second part we show that the study of links (up to isotopy) in a 3-manifold can be restricted to the study of cosets of the braid groups Bn,m, which are subgroups of the usual braid groups Bn+m . ::: ::: Chapter 3: In this chapter we try to use the above topological set-up in a procedure analogous to the way V.F.R. Jones derived his famous link invariant. The analogy amounts to the following: We observe that Bn,1 - the braid group related to the solid torus and to the lens spaces L(p, 1) - is the Artin group of the Coxeter group of Bn-type. This implies the existence of an epimorphism of eEn,1 onto the Hecke algebra of Bn-type. Then we give an analogue of Ocneanu's trace function for the above algebras. This trace, after being properly normalized, yields a HOMFLY-PTtype isotopy invariant for oriented links inside a solid torus. Finally, by forcing a strong condition, we normalize this trace, so as to obtain a link invariant in SI x S2. 
--- paper_title: Topological Models for Open-Knotted Protein Chains Using the Concepts of Knotoids and Bonded Knotoids paper_content: In this paper we introduce a method that offers a detailed overview of the entanglement of an open protein chain. Further, we present a purely topological model for classifying open protein chains by also taking into account any bridge involving the backbone. To this end, we implemented the concepts of planar knotoids and bonded knotoids. We show that the planar knotoids technique provides more refined information regarding the knottedness of a protein when compared to established methods in the literature. Moreover, we demonstrate that our topological model for bonded proteins is robust enough to distinguish all types of lassos in proteins. --- paper_title: Markov’s theorem in 3-manifolds paper_content: Abstract In this paper we first give a one-move version of Markov's braid theorem for knot isotopy in S 3 that sharpens the classical theorem. Then we give a relative version of Markov's theorem concerning a fixed braided portion in the knot. We also prove an analogue of Markov's theorem for knot isotopy in knot complements. Finally we extend this last result to prove a Markov theorem for links in an arbitrary orientable 3-manifold. ---
Title: A Survey on Knotoids, Braidoids and Their Applications Section 1: Knotoids and Knotoid Isotopy Description 1: Discuss basic concepts and definitions related to knotoids, including knotoid diagrams, isotopy moves, and the distinction between planar and spherical knotoids. Section 2: Knotoids, Classical Knots, and Virtual Knots Description 2: Explore the relationship between knotoids and classical knots, the concept of knotoid closures, and how knotoids relate to virtual knots. Section 3: Geometric Interpretations of Knotoids Description 3: Present geometric interpretations of both spherical and planar knotoids, including their relation to Θ-graphs and open space curves. Section 4: Invariants of Knotoids Description 4: Review existing works on knotoid invariants, such as the bracket polynomial, the Jones polynomial, and novel invariants specific to knotoids. Section 5: The Theory of Braidoids Description 5: Introduce fundamental concepts of braidoids, including their diagrams, isotopy moves, and the closure operation. Section 6: Turning Knotoids into Braidoids Description 6: Explain the process of converting knotoid diagrams into braidoid diagrams, including the use of L-braidoiding moves and algorithms. Section 7: Applications of Knotoids and Braidoids Description 7: Highlight applications of knotoids and braidoids in studying proteins and propose an algebraic encoding method for open protein chains using braidoids.
An informal overview of triples and systems
16
--- paper_title: Algebraic Geometry Over Hyperrings paper_content: We develop basic notions and methods of algebraic geometry over the algebraic objects called hyperrings. Roughly speaking, hyperrings generalize rings in such a way that an addition is ‘multi-valued’. This paper largely consists of two parts; algebraic aspects and geometric aspects of hyperrings. We first investigate several technical algebraic properties of a hyperring. In the second part, we begin by giving another interpretation of a tropical variety as an algebraic set over the hyperfield which canonically arises from a totally ordered semifield. Then we define a notion of an integral hyperring scheme (X,O X ) ( X , O X ) and prove that Γ(X,O X )≃R Γ ( X , O X ) ≃ R for any integral affine hyperring scheme X = Spec R . --- paper_title: Supertropical matrix algebra paper_content: The objective of this paper is to develop a general algebraic theory of supertropical matrix algebra, extending [11]. Our main results are as follows: * The tropical determinant (i.e., permanent) is multiplicative when all the determinants involved are tangible. * There exists an adjoint matrix $\adj{A}$ such that the matrix $A \adj{A}$ behaves much like the identity matrix (times $|A|$). * Every matrix $A$ is a supertropical root of its Hamilton-Cayley polynomial $f_A$. If these roots are distinct, then $A$ is conjugate (in a certain supertropical sense) to a diagonal matrix. * The tropical determinant of a matrix $A$ is a ghost iff the rows of $A$ are tropically dependent, iff the columns of $A$ are tropically dependent. * Every root of $f_A$ is a"supertropical"eigenvalue of $A$ (appropriately defined), and has a tangible supertropical eigenvector. --- paper_title: Algebras with a negation map paper_content: Our objective in this project is three-fold, the first two covered in this paper. In tropical mathematics, as well as other mathematical theories involving semirings, when trying to formulate the tropical versions of classical algebraic concepts for which the negative is a crucial ingredient, such as determinants, Grassmann algebras, Lie algebras, Lie superalgebras, and Poisson algebras, one often is challenged by the lack of negation. Following an idea originating in work of Gaubert and the Max-Plus group and brought to fruition by Akian, Gaubert, and Guterman, we study algebraic structures with negation maps, called \textbf{systems}, in the context of universal algebra, showing how these unify the more viable (super)tropical versions, as well as hypergroup theory and fuzzy rings, thereby "explaining" similarities in their theories. Special attention is paid to \textbf{meta-tangible} $\mathcal T$-systems, whose algebraic theory includes all the main tropical examples and many others, but is rich enough to facilitate computations and provide a host of structural results. ::: Basic results also are obtained in linear algebra, linking determinants to linear independence. ::: Formulating the structure categorically enables us to view the tropicalization functor as a morphism, thereby further explaining the mysterious link between classical algebraic results and their tropical analogs, as well as with hyperfields. We utilize the tropicalization functor to propose tropical analogs of classical algebraic notions. 
::: The systems studied here might be called "fundamental," since they are the underlying structure which can be studied via other "module" systems, which is to be the third stage of this project, involving a theory of sheaves and schemes and derived categories with a negation map. --- paper_title: The geometry of blueprints. Part I: Algebraic background and scheme theory paper_content: In this paper, we introduce the category of blueprints, which is a category of algebraic objects that include both commutative (semi)rings and commutative monoids. This generalization allows a simultaneous treatment of ideals resp.\ congruences for rings and monoids and leads to a common scheme theory. In particular, it bridges the gap between usual schemes and $\mathbb{F}_1$-schemes (after Kato, Deitmar and Connes-Consani). Beside this unification, the category of blueprints contains new interesting objects as"improved"cyclotomic field extensions $\mathbb{F}_{1^n}$ of $\mathbb{F}_1$ and"archimedean valuation rings". It also yields a notion of semiring schemes. This first paper lays the foundation for subsequent projects, which are devoted to the following problems: Tits' idea of Chevalley groups over $\mathbb{F}_1$, congruence schemes, sheaf cohomology, $K$-theory and a unified view on analytic geometry over $\mathbb{F}_1$, adic spaces (after Huber), analytic spaces (after Berkovich) and tropical geometry. --- paper_title: Tropical Arithmetic and Matrix Algebra paper_content: This article introduces a new structure of commutative semiring, generalizing the tropical semiring, and having an arithmetic that modifies the standard tropical operations, i.e., summation and maximum. Although our framework is combinatorial, notions of regularity and invertibility arise naturally for matrices over this semiring; we show that a tropical matrix is invertible if and only if it is regular. --- paper_title: Matroids over hyperfields paper_content: We present an algebraic framework which simultaneously generalizes the notion of linear subspaces, matroids, valuated matroids, and oriented matroids. We call the resulting objects matroids over hyperfields. In fact, there are (at least) two natural notions of matroid in this context, which we call weak and strong matroids. We give"cryptomorphic"axiom systems for such matroids in terms of circuits, Grassmann-Plucker functions, and dual pairs, and establish some basic duality theorems. We also show that if F is a doubly distributive hyperfield then the notions of weak and strong matroid over F coincide. --- paper_title: Algebraic structures of tropical mathematics paper_content: Tropical mathematics often is defined over an ordered cancellative monoid $\tM$, usually taken to be $(\RR, +)$ or $(\QQ, +)$. Although a rich theory has arisen from this viewpoint, cf. [L1], idempotent semirings possess a restricted algebraic structure theory, and also do not reflect certain valuation-theoretic properties, thereby forcing researchers to rely often on combinatoric techniques. In this paper we describe an alternative structure, more compatible with valuation theory, studied by the authors over the past few years, that permits fuller use of algebraic theory especially in understanding the underlying tropical geometry. The idempotent max-plus algebra $A$ of an ordered monoid $\tM$ is replaced by $R: = L\times \tM$, where $L$ is a given indexing semiring (not necessarily with 0). In this case we say $R$ layered by $L$. 
When $L$ is trivial, i.e, $L=\{1\}$, $R$ is the usual bipotent max-plus algebra. When $L=\{1,\infty\}$ we recover the"standard"supertropical structure with its"ghost"layer. When $L = \NN $ we can describe multiple roots of polynomials via a"layering function"$s: R \to L$. Likewise, one can define the layering $s: R^{(n)} \to L^{(n)}$ componentwise; vectors $v_1, \dots, v_m$ are called tropically dependent if each component of some nontrivial linear combination $\sum \a_i v_i$ is a ghost, for"tangible"$\a_i \in R$. Then an $n\times n$ matrix has tropically dependent rows iff its permanent is a ghost. We explain how supertropical algebras, and more generally layered algebras, provide a robust algebraic foundation for tropical linear algebra, in which many classical tools are available. In the process, we provide some new results concerning the rank of d-independent sets (such as the fact that they are semi-additive),put them in the context of supertropical bilinear forms, and lay the matrix theory in the framework of identities of semirings. --- paper_title: Supertropical algebra paper_content: We develop the algebraic polynomial theory for"supertropical algebra,"as initiated earlier over the real numbers by the first author. The main innovation there was the introduction of"ghost elements,"which also play the key role in our structure theory. Here, we work somewhat more generally over an ordered monoid, and develop a theory which contains the analogs of several basic theorems of classical commutative algebra. This structure enables one to develop a Zariski-type algebraic geometric approach to tropical geometry, viewing tropical varieties as sets of roots of (supertropical) polynomials, leading to an analog of the Hilbert Nullstellensatz. Particular attention is paid to factorization of polynomials. In one indeterminate, any polynomial can be factored into linear and quadratic factors, and unique factorization holds in a certain sense. On the other hand, the failure of unique factorization in several indeterminates is explained by geometric phenomena described in the paper. --- paper_title: Categories with negation paper_content: We continue the theory of $\tT$-systems from the work of the second author, describing both ground systems and module systems over a ground system (paralleling the theory of modules over an algebra). The theory, summarized categorically at the end, encapsulates general algebraic structures lacking negation but possessing a map resembling negation, such as tropical algebras, hyperfields and fuzzy rings. We see explicitly how it encompasses tropical algebraic theory and hyperfields. Prime ground systems are introduced as a way of developing geometry. The polynomial system over a prime system is prime, and there is a weak Nullstellensatz. Also, the polynomial $\mathcal A[\la_1, \dots, \la_n]$ and Laurent polynomial systems $\mathcal A[[\la_1, \dots, \la_n]]$ in $n$ commuting indeterminates over a $\tT$-semiring-group system have dimension $n$. For module systems, special attention also is paid to tensor products and $\Hom$. Abelian categories are replaced by"semi-abelian"categories (where $\Hom(A,B)$ is not a group) with a negation morphism. --- paper_title: On the relation between hyperrings and fuzzy rings paper_content: We construct a full embedding of the category of hyperfields into Dress’s category of fuzzy rings and explicitly characterize the essential image—it fails to be essentially surjective in a very minor way. 
This embedding provides an identification of Baker and Bowler’s theory of strong matroids over hyperfields with Dress’s theory of matroids over fuzzy rings (provided one restricts to those fuzzy rings in the essential image). The embedding functor extends from hyperfields to hyperrings, and we study this extension in detail. We also analyze the relation between hyperfields and Baker and Bowler’s partial demifields. --- paper_title: Projective module systems paper_content: We develop the basic theory of projective modules and splitting in the more general setting of systems. This enables us to prove analogues of classical theorems for tropical and hyperfield theory. In this context we prove a Dual Basis Lemma and develop Morita theory. We also prove a Schanuel's Lemma as a first step towards defining homological dimension. --- paper_title: Duality theory for finite and infinite matroids with coefficients paper_content: We present a duality theory for finite and infinite matroids with coefficients --- paper_title: Homological algebra in characteristic one paper_content: This article develops several main results for a general theory of homological algebra in categories such as the category of idempotent modules. In the analogy with the development of homological algebra for abelian categories the present paper should be viewed as the analogue of the development of homological algebra for abelian groups. Our selected prototype, the category $\bmod$ of modules over the Boolean semifield $\B:=\{0,1\}$ is the replacement for the category of abelian groups. We show that the semi-additive category $\bmod$ fulfills analogues of the axioms AB1 and AB2 for abelian categories. By introducing a precise comonad on $\bmod$ we obtain the conceptually related Kleisli and Eilenberg-Moore categories. The latter category $\b2$ is simply $\bmod$ in the topos of sets endowed with an involution and as such it shares with $\bmod$ most of its abstract categorical properties. The three main results of the paper are the following. First, when endowed with the natural ideal of null morphisms, the category $\b2$ is a semi-exact, homological category in the sense of M. Grandis. Second, there is a far reaching analogy between $\b2$ and the category of operators in Hilbert space, and in particular results relating null kernel and injectivity for morphisms. The third fundamental result is that, even for finite objects of $\b2$, the resulting homological algebra is non-trivial and gives rise to a computable Ext functor. We determine explicitly this functor in the case provided by the diagonal morphism of the Boolean semiring into its square. --- paper_title: Algebras with a negation map paper_content: Our objective in this project is three-fold, the first two covered in this paper. In tropical mathematics, as well as other mathematical theories involving semirings, when trying to formulate the tropical versions of classical algebraic concepts for which the negative is a crucial ingredient, such as determinants, Grassmann algebras, Lie algebras, Lie superalgebras, and Poisson algebras, one often is challenged by the lack of negation.
Following an idea originating in work of Gaubert and the Max-Plus group and brought to fruition by Akian, Gaubert, and Guterman, we study algebraic structures with negation maps, called \textbf{systems}, in the context of universal algebra, showing how these unify the more viable (super)tropical versions, as well as hypergroup theory and fuzzy rings, thereby "explaining" similarities in their theories. Special attention is paid to \textbf{meta-tangible} $\mathcal T$-systems, whose algebraic theory includes all the main tropical examples and many others, but is rich enough to facilitate computations and provide a host of structural results. ::: Basic results also are obtained in linear algebra, linking determinants to linear independence. ::: Formulating the structure categorically enables us to view the tropicalization functor as a morphism, thereby further explaining the mysterious link between classical algebraic results and their tropical analogs, as well as with hyperfields. We utilize the tropicalization functor to propose tropical analogs of classical algebraic notions. ::: The systems studied here might be called "fundamental," since they are the underlying structure which can be studied via other "module" systems, which is to be the third stage of this project, involving a theory of sheaves and schemes and derived categories with a negation map. --- paper_title: The geometry of blueprints. Part I: Algebraic background and scheme theory paper_content: In this paper, we introduce the category of blueprints, which is a category of algebraic objects that include both commutative (semi)rings and commutative monoids. This generalization allows a simultaneous treatment of ideals resp.\ congruences for rings and monoids and leads to a common scheme theory. In particular, it bridges the gap between usual schemes and $\mathbb{F}_1$-schemes (after Kato, Deitmar and Connes-Consani). Beside this unification, the category of blueprints contains new interesting objects as"improved"cyclotomic field extensions $\mathbb{F}_{1^n}$ of $\mathbb{F}_1$ and"archimedean valuation rings". It also yields a notion of semiring schemes. This first paper lays the foundation for subsequent projects, which are devoted to the following problems: Tits' idea of Chevalley groups over $\mathbb{F}_1$, congruence schemes, sheaf cohomology, $K$-theory and a unified view on analytic geometry over $\mathbb{F}_1$, adic spaces (after Huber), analytic spaces (after Berkovich) and tropical geometry. --- paper_title: Welschinger invariants of real Del Pezzo surfaces of degree ≥ 3 paper_content: We give a recursive formula for purely real Welschinger invariants of real Del Pezzo surfaces of degree K 2 ≥ 3, where in the case of surfaces of degree 3 with two real components we introduce a certain modification of Welschinger invariants and enumerate exclusively the curves traced on the non-orientable component. As an application, we prove the positivity of the invariants under consideration and their logarithmic asymptotic equivalence, as well as congruence modulo 4, to genus zero Gromov–Witten invariants. --- paper_title: Tropicalization of Del Pezzo Surfaces paper_content: We determine the tropicalizations of very affine surfaces over a valued field that are obtained from del Pezzo surfaces of degree 5, 4 and 3 by removing their (-1)-curves. On these tropical surfaces, the boundary divisors are represented by trees at infinity. These trees are glued together according to the Petersen, Clebsch and Schl\"afli graphs, respectively. 
There are 27 trees on each tropical cubic surface, attached to a bounded complex with up to 73 polygons. The maximal cones in the 4-dimensional moduli fan reveal two generic types of such surfaces. --- paper_title: Methods and Applications of (max,+) Linear Algebra paper_content: Exotic semirings such as the “(max, +) semiring” (ℝ ∪ {−∞},max,+), or the “tropical semiring” (ℕ ∪ {+∞}, min, +), have been invented and reinvented many times since the late fifties, in relation with various fields: performance evaluation of manufacturing systems and discrete event system theory; graph theory (path algebra) and Markov decision processes, Hamilton-Jacobi theory; asymptotic analysis (low temperature asymptotics in statistical physics, large deviations, WKB method); language theory (automata with multiplicities). --- paper_title: Combinatorial and inductive methods for the tropical maximal rank conjecture paper_content: Abstract We produce new combinatorial methods for approaching the tropical maximal rank conjecture, including inductive procedures for deducing new cases of the conjecture on graphs of increasing genus from any given case. Using explicit calculations in a range of base cases, we prove this conjecture for the canonical divisor, and in a wide range of cases for m = 3 , extending previous results for m = 2 . --- paper_title: Max-algebra: the linear algebra of combinatorics? paper_content: Abstract Let a ⊕ b =max( a , b ), a ⊗ b = a + b for a,b∈ R := R ∪{−∞} . By max-algebra we understand the analogue of linear algebra developed for the pair of operations (⊕,⊗) extended to matrices and vectors. Max-algebra, which has been studied for more than 40 years, is an attractive way of describing a class of nonlinear problems appearing for instance in machine-scheduling, information technology and discrete-event dynamic systems. This paper focuses on presenting a number of links between basic max-algebraic problems like systems of linear equations, eigenvalue–eigenvector problem, linear independence, regularity and characteristic polynomial on one hand and combinatorial or combinatorial optimisation problems on the other hand. This indicates that max-algebra may be regarded as a linear-algebraic encoding of a class of combinatorial problems. The paper is intended for wider readership including researchers not familiar with max-algebra. --- paper_title: Introduction to Tropical Geometry paper_content: Tropical islands Building blocks Tropical varieties Tropical rain forest Tropical garden Toric connections Bibliography Index --- paper_title: Homological algebra in characteristic one paper_content: This article develops several main results for a general theory of homological algebra in categories such as the category of idempotent modules. In the analogy with the development of homological algebra for abelian categories the present paper should be viewed as the analogue of the development of homological algebra for abelian groups. Our selected prototype, the category $\bmod$ of modules over the Boolean semifield $\B:=\{0,1\}$ is the replacement for the category of abelian groups. We show that the semi-additive category $\bmod$ fulfills analogues of the axioms AB1 and AB2 for abelian categories. By introducing a precise comonad on $\bmod$ we obtain the conceptually related Kleisli and Eilenberg-Moore categories. The latter category $\b2$ is simply $\bmod$ in the topos of sets endowed with an involution and as such it shares with $\bmod$ most of its abstract categorical properties. 
The three main results of the paper are the following. First, when endowed with the natural ideal of null morphisms, the category $\b2$ is a semi-exact, homological category in the sense of M. Grandis. Second, there is a far reaching analogy between $\b2$ and the category of operators in Hilbert space, and in particular results relating null kernel and injectivity for morphisms. The third fundamental result is that, even for finite objects of $\b2$, the resulting homological algebra is non-trivial and gives rise to a computable Ext functor. We determine explicitly this functor in the case provided by the diagonal morphism of the Boolean semiring into its square. --- paper_title: Linear independence over tropical semirings and beyond paper_content: We investigate different notions of linear independence and of matrix rank that are relevant for max-plus or tropical semirings. The factor rank and tropical rank have already received attention, we compare them with the ranks defined in terms of signed tropical determinants or arising from a notion of linear independence introduced by Gondran and Minoux. To do this, we revisit the symmetrization of the max-plus algebra, establishing properties of linear spaces, linear systems, and matrices over the symmetrized max-plus algebra. In parallel we develop some general technique to prove combinatorial and polynomial identities for matrices over semirings that we illustrate by a number of examples. --- paper_title: Tropical Cramer Determinants Revisited paper_content: We prove general Cramer type theorems for linear systems over various extensions of the tropical semiring, in which tropical numbers are en- riched with an information of multiplicity, sign, or argument. We obtain exis- tence or uniqueness results, which extend or rene earlier results of Gondran and Minoux (1978), Plus (1990), Gaubert (1992), Richter-Gebert, Sturmfels and Theobald (2005) and Izhakian and Rowen (2009). Computational issues are also discussed; in particular, some of our proofs lead to Jacobi and Gauss- Seidel type algorithms to solve linear systems in suitably extended tropical semirings. --- paper_title: Supertropical matrix algebra paper_content: The objective of this paper is to develop a general algebraic theory of supertropical matrix algebra, extending [11]. Our main results are as follows: * The tropical determinant (i.e., permanent) is multiplicative when all the determinants involved are tangible. * There exists an adjoint matrix $\adj{A}$ such that the matrix $A \adj{A}$ behaves much like the identity matrix (times $|A|$). * Every matrix $A$ is a supertropical root of its Hamilton-Cayley polynomial $f_A$. If these roots are distinct, then $A$ is conjugate (in a certain supertropical sense) to a diagonal matrix. * The tropical determinant of a matrix $A$ is a ghost iff the rows of $A$ are tropically dependent, iff the columns of $A$ are tropically dependent. * Every root of $f_A$ is a"supertropical"eigenvalue of $A$ (appropriately defined), and has a tangible supertropical eigenvector. --- paper_title: Supertropical semirings and supervaluations paper_content: We interpret a valuation $v$ on a ring $R$ as a map $v: R \to M$ into a so called bipotent semiring $M$ (the usual max-plus setting), and then define a \textbf{supervaluation} $\phi$ as a suitable map into a supertropical semiring $U$ with ghost ideal $M$ (cf. [IR1], [IR2]) covering $v$ via the ghost map $U \to M$. 
The set $\Cov(v)$ of all supervaluations covering $v$ has a natural ordering which makes it a complete lattice. In the case that $R$ is a field, hence for $v$ a Krull valuation, we give a complete explicit description of $\Cov(v)$. ::: The theory of supertropical semirings and supervaluations aims for an algebra fitting the needs of tropical geometry better than the usual max-plus setting. We illustrate this by giving a supertropical version of Kapranov's lemma. --- paper_title: Tropical Arithmetic and Matrix Algebra paper_content: This article introduces a new structure of commutative semiring, generalizing the tropical semiring, and having an arithmetic that modifies the standard tropical operations, i.e., summation and maximum. Although our framework is combinatorial, notions of regularity and invertibility arise naturally for matrices over this semiring; we show that a tropical matrix is invertible if and only if it is regular. --- paper_title: Supertropical algebra paper_content: We develop the algebraic polynomial theory for"supertropical algebra,"as initiated earlier over the real numbers by the first author. The main innovation there was the introduction of"ghost elements,"which also play the key role in our structure theory. Here, we work somewhat more generally over an ordered monoid, and develop a theory which contains the analogs of several basic theorems of classical commutative algebra. This structure enables one to develop a Zariski-type algebraic geometric approach to tropical geometry, viewing tropical varieties as sets of roots of (supertropical) polynomials, leading to an analog of the Hilbert Nullstellensatz. Particular attention is paid to factorization of polynomials. In one indeterminate, any polynomial can be factored into linear and quadratic factors, and unique factorization holds in a certain sense. On the other hand, the failure of unique factorization in several indeterminates is explained by geometric phenomena described in the paper. --- paper_title: Layered Tropical Mathematics paper_content: Abstract Generalizing supertropical algebras, we present a “layered” structure, “sorted” by a semiring which permits varying ghost layers, and indicate how it is more amenable than the “standard” supertropical construction in factorizations of polynomials, description of varieties, and for mathematical analysis and calculus, in particular with respect to multiple roots of polynomials. This gives rise to a significantly better understanding of the tropical resultant and discriminant. Explicit examples and comparisons are given for various sorting semirings such as the natural numbers and the positive rational numbers, and we see how this theory relates to some recent developments in the tropical literature. --- paper_title: Supertropical Matrix Algebra II: Solving tropical equations paper_content: We continue the study of matrices over a supertropical algebra, proving the existence of a tangible adjoint of $A$, which provides the unique right (resp. left) quasi-inverse maximal with respect to the right (resp. left) quasi-identity matrix corresponding to $A$; this provides a unique maximal (tangible) solution to supertropical vector equations, via a version of Cramer's rule. We also describe various properties of this tangible adjoint, and use it to compute supertropical eigenvectors, thereby producing an example in which an $n\times n$ matrix has $n$ distinct supertropical eigenvalues but their supertropical eigenvectors are tropically dependent. 
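The supertropical abstracts above rely on two conventions: adding two equal tangible elements produces a "ghost", and a square matrix is tropically singular exactly when its tropical permanent (the supertropical determinant) is a ghost. The following Python sketch is an illustrative minimal model of this arithmetic, assuming an ad hoc (value, is_ghost) encoding rather than anything taken from the cited papers:

```python
from itertools import permutations

# Supertropical toy model: an element is (value, is_ghost), with value a real
# number (max-plus) and is_ghost marking the ghost copy.  Conventions:
#   a + b = max(a, b), except that ties produce a ghost,
#   a * b = a + b as reals, ghost if either factor is a ghost.

def add(x, y):
    (a, ga), (b, gb) = x, y
    if a > b:
        return (a, ga)
    if b > a:
        return (b, gb)
    return (a, True)                 # equal values: result is a ghost

def mul(x, y):
    (a, ga), (b, gb) = x, y
    return (a + b, ga or gb)

def permanent(matrix):
    """Tropical permanent (the supertropical 'determinant'): the tropical
    sum over all permutations of the tropical products along them."""
    n = len(matrix)
    total = (float("-inf"), False)   # tropical zero
    for perm in permutations(range(n)):
        term = (0.0, False)          # tropical one
        for i in range(n):
            term = mul(term, matrix[i][perm[i]])
        total = add(total, term)
    return total

if __name__ == "__main__":
    t = lambda v: (float(v), False)
    A = [[t(1), t(2)], [t(3), t(4)]]     # permanent = max(1+4, 2+3) = 5, attained twice
    print(permanent(A))                  # (5.0, True): a ghost value
```

In the 2x2 example the maximum 5 = 1 + 4 = 2 + 3 is attained by both permutations, so the permanent comes out ghost, matching the criterion quoted above that the rows (and columns) are tropically dependent.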
--- paper_title: Categories of layered semirings paper_content: We generalize the constructions of [17,19] to layered semirings, in order to enrich the structure and provide finite examples for applications in arithmetic (including finite examples). The layered category theory of [19] is extended accordingly, to cover noncancellative monoids. --- paper_title: Valuations of Semirings paper_content: We develop a notion of valuations on a semiring. In particular, we classify valuations on the semifield $\mathbb{Q}_{max}$ and also valuations on the (suitably defined) `function field' $\mathbb{Q}_{max}(T)$ which are trivial on $\mathbb{Q}_{max}$. As a byproduct, we reinterpret the projective line $\mathbb{P}^1_{\mathbb{F}_1}$ over $\mathbb{F}_{1}$ as an abstract curve associated to $\mathbb{Q}_{max}(T)$. --- paper_title: Algebraic Geometry Over Hyperrings paper_content: We develop basic notions and methods of algebraic geometry over the algebraic objects called hyperrings. Roughly speaking, hyperrings generalize rings in such a way that an addition is ‘multi-valued’. This paper largely consists of two parts; algebraic aspects and geometric aspects of hyperrings. We first investigate several technical algebraic properties of a hyperring. In the second part, we begin by giving another interpretation of a tropical variety as an algebraic set over the hyperfield which canonically arises from a totally ordered semifield. Then we define a notion of an integral hyperring scheme (X,O X ) ( X , O X ) and prove that Γ(X,O X )≃R Γ ( X , O X ) ≃ R for any integral affine hyperring scheme X = Spec R . --- paper_title: Matroids over hyperfields paper_content: We present an algebraic framework which simultaneously generalizes the notion of linear subspaces, matroids, valuated matroids, and oriented matroids. We call the resulting objects matroids over hyperfields. In fact, there are (at least) two natural notions of matroid in this context, which we call weak and strong matroids. We give"cryptomorphic"axiom systems for such matroids in terms of circuits, Grassmann-Plucker functions, and dual pairs, and establish some basic duality theorems. We also show that if F is a doubly distributive hyperfield then the notions of weak and strong matroid over F coincide. --- paper_title: Tropical schemes, tropical cycles, and valuated matroids paper_content: The tropicalization of a subvariety Y in the n-dimensional algebraic torus T is a polyhedral complex trop(Y ) that is a \combinatorial shadow" of the original variety. Some invariants of Y , such as the dimension, are encoded in trop(Y ). The complex trop(Y ) comes equipped with positive integer weights on its top-dimensional cells, called multiplicities, that make it into a tropical cycle. This extra information encodes ::: information about the intersection theory of compacti cations of the original variety Y ; see for example [KP11]. --- paper_title: Matroids over hyperfields paper_content: We present an algebraic framework which simultaneously generalizes the notion of linear subspaces, matroids, valuated matroids, and oriented matroids. We call the resulting objects matroids over hyperfields. In fact, there are (at least) two natural notions of matroid in this context, which we call weak and strong matroids. We give"cryptomorphic"axiom systems for such matroids in terms of circuits, Grassmann-Plucker functions, and dual pairs, and establish some basic duality theorems. 
We also show that if F is a doubly distributive hyperfield then the notions of weak and strong matroid over F coincide. --- paper_title: On the relation between hyperrings and fuzzy rings paper_content: We construct a full embedding of the category of hyperfields into Dress’s category of fuzzy rings and explicitly characterize the essential image—it fails to be essentially surjective in a very minor way. This embedding provides an identification of Baker and Bowler’s theory of strong matroids over hyperfields with Dress’s theory of matroids over fuzzy rings (provided one restricts to those fuzzy rings in the essential image). The embedding functor extends from hyperfields to hyperrings, and we study this extension in detail. We also analyze the relation between hyperfields and Baker and Bowler’s partial demifields. --- paper_title: Duality theory for finite and infinite matroids with coefficients paper_content: We present a duality theory for finite and infinite matroids with coefficients --- paper_title: Linear independence over tropical semirings and beyond paper_content: We investigate different notions of linear independence and of matrix rank that are relevant for max-plus or tropical semirings. The factor rank and tropical rank have already received attention, we compare them with the ranks defined in terms of signed tropical determinants or arising from a notion of linear independence introduced by Gondran and Minoux. To do this, we revisit the symmetrization of the max-plus algebra, establishing properties of linear spaces, linear systems, and matrices over the symmetrized max-plus algebra. In parallel we develop some general technique to prove combinatorial and polynomial identities for matrices over semirings that we illustrate by a number of examples. --- paper_title: Prime congruences of idempotent semirings and a Nullstellensatz for tropical polynomials paper_content: A new definition of prime congruences in additively idempotent semirings is given using twisted products. This class turns out to exhibit some analogous properties to the prime ideals of commutative rings. In order to establish a good notion of radical congruences it is shown that the intersection of all primes of a semiring can be characterized by certain twisted power formulas. A complete description of prime congruences is given in the polynomial and Laurent polynomial semirings over the tropical semifield ${\pmb T}$, the semifield $\mathbb{Z}_{max}$ and the two element semifield $\mathbb{B}$. The minimal primes of these semirings correspond to monomial orderings, and their intersection is the congruence that identifies polynomials that have the same Newton polytope. It is then shown that every finitely generated congruence in each of these cases is an intersection of prime congruences with quotients of Krull dimension $1$. An improvement of a result of A. Bertram and R. Easton from 2013 is proven which can be regarded as a Nullstellensatz for tropical polynomials.
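Hyperrings and hyperfields differ from rings in that addition returns a set of possible values. A minimal way to see this, assuming nothing beyond the standard definitions of the Krasner hyperfield K = {0, 1} and the hyperfield of signs S = {0, 1, -1}, is the following sketch (illustrative only; the function names are assumptions):

```python
# Set-valued addition in two standard hyperfields.
# Krasner hyperfield K = {0, 1}:        1 + 1 = {0, 1}
# Hyperfield of signs S = {0, 1, -1}:   1 + (-1) = {0, 1, -1}
# In both, 0 is the neutral element: 0 + x = {x}.

def krasner_add(x, y):
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    return {0, 1}                     # 1 + 1 is multivalued

def sign_add(x, y):
    if x == 0:
        return {y}
    if y == 0:
        return {x}
    if x == y:
        return {x}                    # 1 + 1 = {1}, (-1) + (-1) = {-1}
    return {0, 1, -1}                 # opposite signs: any sign can occur

if __name__ == "__main__":
    print(krasner_add(1, 1))          # {0, 1}
    print(sign_add(1, -1))            # {0, 1, -1}
    # x is a hypernegative of y when 0 lies in the hypersum x + y.
    print(0 in sign_add(1, -1))       # True
```

Roughly speaking, matroids over hyperfields as in the abstracts above replace the condition "a linear combination equals zero" by "0 lies in the corresponding hypersum".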
--- paper_title: Methods and Applications of (max,+) Linear Algebra paper_content: Exotic semirings such as the “(max, +) semiring” (ℝ ∪ {−∞},max,+), or the “tropical semiring” (ℕ ∪ {+∞}, min, +), have been invented and reinvented many times since the late fifties, in relation with various fields: performance evaluation of manufacturing systems and discrete event system theory; graph theory (path algebra) and Markov decision processes, Hamilton-Jacobi theory; asymptotic analysis (low temperature asymptotics in statistical physics, large deviations, WKB method); language theory (automata with multiplicities). --- paper_title: Tropical Cramer Determinants Revisited paper_content: We prove general Cramer type theorems for linear systems over various extensions of the tropical semiring, in which tropical numbers are en- riched with an information of multiplicity, sign, or argument. We obtain exis- tence or uniqueness results, which extend or rene earlier results of Gondran and Minoux (1978), Plus (1990), Gaubert (1992), Richter-Gebert, Sturmfels and Theobald (2005) and Izhakian and Rowen (2009). Computational issues are also discussed; in particular, some of our proofs lead to Jacobi and Gauss- Seidel type algorithms to solve linear systems in suitably extended tropical semirings. --- paper_title: Categories with negation paper_content: We continue the theory of $\tT$-systems from the work of the second author, describing both ground systems and module systems over a ground system (paralleling the theory of modules over an algebra). The theory, summarized categorically at the end, encapsulates general algebraic structures lacking negation but possessing a map resembling negation, such as tropical algebras, hyperfields and fuzzy rings. We see explicitly how it encompasses tropical algebraic theory and hyperfields. Prime ground systems are introduced as a way of developing geometry. The polynomial system over a prime system is prime, and there is a weak Nullstellensatz. Also, the polynomial $\mathcal A[\la_1, \dots, \la_n]$ and Laurent polynomial systems $\mathcal A[[\la_1, \dots, \la_n]]$ in $n$ commuting indeterminates over a $\tT$-semiring-group system have dimension $n$. For module systems, special attention also is paid to tensor products and $\Hom$. Abelian categories are replaced by"semi-abelian"categories (where $\Hom(A,B)$ is not a group) with a negation morphism. --- paper_title: Homological algebra in characteristic one paper_content: This article develops several main results for a general theory of homological algebra in categories such as the category of idempotent modules. In the analogy with the development of homological algebra for abelian categories the present paper should be viewed as the analogue of the development of homological algebra for abelian groups. Our selected prototype, the category $\bmod$ of modules over the Boolean semifield $\B:=\{0,1\}$ is the replacement for the category of abelian groups. We show that the semi-additive category $\bmod$ fulfills analogues of the axioms AB1 and AB2 for abelian categories. By introducing a precise comonad on $\bmod$ we obtain the conceptually related Kleisli and Eilenberg-Moore categories. The latter category $\b2$ is simply $\bmod$ in the topos of sets endowed with an involution and as such it shares with $\bmod$ most of its abstract categorical properties. The three main results of the paper are the following. 
First, when endowed with the natural ideal of null morphisms, the category $\b2$ is a semi-exact, homological category in the sense of M. Grandis. Second, there is a far reaching analogy between $\b2$ and the category of operators in Hilbert space, and in particular results relating null kernel and injectivity for morphisms. The third fundamental result is that, even for finite objects of $\b2$, the resulting homological algebra is non-trivial and gives rise to a computable Ext functor. We determine explicitly this functor in the case provided by the diagonal morphism of the Boolean semiring into its square. --- paper_title: Algebraic Geometry Over Hyperrings paper_content: We develop basic notions and methods of algebraic geometry over the algebraic objects called hyperrings. Roughly speaking, hyperrings generalize rings in such a way that an addition is ‘multi-valued’. This paper largely consists of two parts; algebraic aspects and geometric aspects of hyperrings. We first investigate several technical algebraic properties of a hyperring. In the second part, we begin by giving another interpretation of a tropical variety as an algebraic set over the hyperfield which canonically arises from a totally ordered semifield. Then we define a notion of an integral hyperring scheme (X,O X ) ( X , O X ) and prove that Γ(X,O X )≃R Γ ( X , O X ) ≃ R for any integral affine hyperring scheme X = Spec R . --- paper_title: Algebras with a negation map paper_content: Our objective in this project is three-fold, the first two covered in this paper. In tropical mathematics, as well as other mathematical theories involving semirings, when trying to formulate the tropical versions of classical algebraic concepts for which the negative is a crucial ingredient, such as determinants, Grassmann algebras, Lie algebras, Lie superalgebras, and Poisson algebras, one often is challenged by the lack of negation. Following an idea originating in work of Gaubert and the Max-Plus group and brought to fruition by Akian, Gaubert, and Guterman, we study algebraic structures with negation maps, called \textbf{systems}, in the context of universal algebra, showing how these unify the more viable (super)tropical versions, as well as hypergroup theory and fuzzy rings, thereby "explaining" similarities in their theories. Special attention is paid to \textbf{meta-tangible} $\mathcal T$-systems, whose algebraic theory includes all the main tropical examples and many others, but is rich enough to facilitate computations and provide a host of structural results. ::: Basic results also are obtained in linear algebra, linking determinants to linear independence. ::: Formulating the structure categorically enables us to view the tropicalization functor as a morphism, thereby further explaining the mysterious link between classical algebraic results and their tropical analogs, as well as with hyperfields. We utilize the tropicalization functor to propose tropical analogs of classical algebraic notions. ::: The systems studied here might be called "fundamental," since they are the underlying structure which can be studied via other "module" systems, which is to be the third stage of this project, involving a theory of sheaves and schemes and derived categories with a negation map. --- paper_title: Linear independence over tropical semirings and beyond paper_content: We investigate different notions of linear independence and of matrix rank that are relevant for max-plus or tropical semirings. 
The factor rank and tropical rank have already received attention; we compare them with the ranks defined in terms of signed tropical determinants or arising from a notion of linear independence introduced by Gondran and Minoux. To do this, we revisit the symmetrization of the max-plus algebra, establishing properties of linear spaces, linear systems, and matrices over the symmetrized max-plus algebra. In parallel we develop some general techniques to prove combinatorial and polynomial identities for matrices over semirings, which we illustrate by a number of examples. --- paper_title: Tropical Cramer Determinants Revisited paper_content: We prove general Cramer type theorems for linear systems over various extensions of the tropical semiring, in which tropical numbers are enriched with an information of multiplicity, sign, or argument. We obtain existence or uniqueness results, which extend or refine earlier results of Gondran and Minoux (1978), Plus (1990), Gaubert (1992), Richter-Gebert, Sturmfels and Theobald (2005) and Izhakian and Rowen (2009). Computational issues are also discussed; in particular, some of our proofs lead to Jacobi and Gauss-Seidel type algorithms to solve linear systems in suitably extended tropical semirings. --- paper_title: Remarks on the Cayley-Hamilton Theorem paper_content: We revisit the classical theorem by Cayley and Hamilton, "{\em each endomorphism is a root of its own characteristic polynomial}", from the point of view of {\em Hasse--Schmidt derivations on an exterior algebra}. --- paper_title: Grassman semialgebras and the Cayley-Hamilton theorem paper_content: We develop a theory of Grassmann triples via Hasse-Schmidt derivations, which formally generalizes results such as the Cayley-Hamilton theorem in linear algebra, thereby providing a unified approach to classical linear algebra and tropical algebra. --- paper_title: The geometry of blueprints. Part I: Algebraic background and scheme theory paper_content: In this paper, we introduce the category of blueprints, which is a category of algebraic objects that include both commutative (semi)rings and commutative monoids. This generalization allows a simultaneous treatment of ideals resp. congruences for rings and monoids and leads to a common scheme theory. In particular, it bridges the gap between usual schemes and $\mathbb{F}_1$-schemes (after Kato, Deitmar and Connes-Consani). Beside this unification, the category of blueprints contains new interesting objects such as "improved" cyclotomic field extensions $\mathbb{F}_{1^n}$ of $\mathbb{F}_1$ and "archimedean valuation rings". It also yields a notion of semiring schemes. This first paper lays the foundation for subsequent projects, which are devoted to the following problems: Tits' idea of Chevalley groups over $\mathbb{F}_1$, congruence schemes, sheaf cohomology, $K$-theory and a unified view on analytic geometry over $\mathbb{F}_1$, adic spaces (after Huber), analytic spaces (after Berkovich) and tropical geometry. --- paper_title: Categories with negation paper_content: We continue the theory of $\tT$-systems from the work of the second author, describing both ground systems and module systems over a ground system (paralleling the theory of modules over an algebra). The theory, summarized categorically at the end, encapsulates general algebraic structures lacking negation but possessing a map resembling negation, such as tropical algebras, hyperfields and fuzzy rings. We see explicitly how it encompasses tropical algebraic theory and hyperfields.
Prime ground systems are introduced as a way of developing geometry. The polynomial system over a prime system is prime, and there is a weak Nullstellensatz. Also, the polynomial $\mathcal A[\la_1, \dots, \la_n]$ and Laurent polynomial systems $\mathcal A[[\la_1, \dots, \la_n]]$ in $n$ commuting indeterminates over a $\tT$-semiring-group system have dimension $n$. For module systems, special attention also is paid to tensor products and $\Hom$. Abelian categories are replaced by"semi-abelian"categories (where $\Hom(A,B)$ is not a group) with a negation morphism. --- paper_title: Homological algebra in characteristic one paper_content: This article develops several main results for a general theory of homological algebra in categories such as the category of idempotent modules. In the analogy with the development of homological algebra for abelian categories the present paper should be viewed as the analogue of the development of homological algebra for abelian groups. Our selected prototype, the category $\bmod$ of modules over the Boolean semifield $\B:=\{0,1\}$ is the replacement for the category of abelian groups. We show that the semi-additive category $\bmod$ fulfills analogues of the axioms AB1 and AB2 for abelian categories. By introducing a precise comonad on $\bmod$ we obtain the conceptually related Kleisli and Eilenberg-Moore categories. The latter category $\b2$ is simply $\bmod$ in the topos of sets endowed with an involution and as such it shares with $\bmod$ most of its abstract categorical properties. The three main results of the paper are the following. First, when endowed with the natural ideal of null morphisms, the category $\b2$ is a semi-exact, homological category in the sense of M. Grandis. Second, there is a far reaching analogy between $\b2$ and the category of operators in Hilbert space, and in particular results relating null kernel and injectivity for morphisms. The third fundamental result is that, even for finite objects of $\b2$, the resulting homological algebra is non-trivial and gives rise to a computable Ext functor. We determine explicitly this functor in the case provided by the diagonal morphism of the Boolean semiring into its square. --- paper_title: Introduction to Tropical Geometry paper_content: Tropical islands Building blocks Tropical varieties Tropical rain forest Tropical garden Toric connections Bibliography Index --- paper_title: Tropical schemes, tropical cycles, and valuated matroids paper_content: The tropicalization of a subvariety Y in the n-dimensional algebraic torus T is a polyhedral complex trop(Y ) that is a \combinatorial shadow" of the original variety. Some invariants of Y , such as the dimension, are encoded in trop(Y ). The complex trop(Y ) comes equipped with positive integer weights on its top-dimensional cells, called multiplicities, that make it into a tropical cycle. This extra information encodes ::: information about the intersection theory of compacti cations of the original variety Y ; see for example [KP11]. --- paper_title: Supertropical linear algebra paper_content: The objective of this paper is to lay out the algebraic theory of supertropical vector spaces and linear algebra, utilizing the key antisymmetric relation of “ghost surpasses”. Special attention is paid to the various notions of “base”, which include d-base and s-base, and these are compared to other treatments in the tropical theory. 
Whereas the number of elements in various d-bases may differ, it is shown that when an s-base exists, it is unique up to permutation and multiplication by scalars, and can be identified with a set of “critical” elements. Then we turn to orthogonality of vectors, which leads to supertropical bilinear forms and a supertropical version of the Gram matrix, including its connection to linear dependence. We also obtain a supertropical version of a theorem of Artin, which says that if g-orthogonality is a symmetric relation, then the underlying bilinear form is (supertropically) symmetric. ---
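As a purely illustrative aside (not code from any of the papers above): the references in this blob work over the (max, +) semiring, where "addition" is max, "multiplication" is ordinary +, and −∞ plays the role of zero. The short Python sketch below shows max-plus matrix multiplication and the tropical permanent, the max-plus stand-in for the determinant that the Cramer-type results refine; the function names are ours.

```python
from itertools import permutations

NEG_INF = float("-inf")  # additive identity ("zero") of the (max, +) semiring

def maxplus_matmul(A, B):
    """Matrix product over (max, +): entry (i, j) is max_k (A[i][k] + B[k][j])."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def tropical_permanent(A):
    """Max-plus analogue of the determinant: max over permutations of the diagonal sums."""
    n = len(A)
    return max(sum(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

A = [[0, 3], [2, 1]]
B = [[1, 0], [NEG_INF, 4]]
print(maxplus_matmul(A, B))   # [[1, 7], [3, 5]]
print(tropical_permanent(A))  # max(0 + 1, 3 + 2) = 5
```

In this arithmetic the "determinant" has no signs to cancel, which is precisely the gap that the symmetrized and supertropical extensions discussed above are designed to fill.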
Title: An Informal Overview of Triples and Systems
Section 1: Introduction
Description 1: Provide an overview of the goals, motivations, and the scope of the paper, including the unified theory of tropical algebra, hyperfields, and fuzzy rings.
Section 2: Acquaintance with Basic Notions
Description 2: Introduce the fundamental concepts and structures, including T-modules, quasi-zeros, and the importance of universal algebra in discussing diverse structures.
Section 3: Motivating Examples
Description 3: Elaborate on non-classical examples that motivate the theory, such as supertropical semirings, hyperfields, and fuzzy rings.
Section 4: Idempotent Semirings
Description 4: Discuss the role of idempotent semirings in tropical geometry and their key properties and applications.
Section 5: Supertropical Semirings
Description 5: Describe the structure and significance of supertropical semirings and how they relate to classical algebraic geometry and linear algebra.
Section 6: Hyperfields and Other Related Constructions
Description 6: Explore the concept of hyperfields and other algebraic constructions that replace elements with sets in sums.
Section 7: Fuzzy Rings
Description 7: Provide an overview of fuzzy rings and their relationship to hypergroups and matroids.
Section 8: Symmetrization
Description 8: Explain the symmetrization construction and its applications to algebra, including defining T-supermodules and switch maps.
Section 9: Negation Maps, Triples, and Systems
Description 9: Discuss the implementation of negation maps in semirings and their significance in the study of triples and systems.
Section 10: Ground Triples versus Module Triples
Description 10: Compare ground triples and module triples, highlighting their respective roles in structure theory and representation theory.
Section 11: Contents of [64]: Meta-Tangible Systems
Description 11: Summarize the key concepts and results from the paper [64], focusing on meta-tangible systems and their axioms.
Section 12: Contents of [4]: Linear Algebra over Systems
Description 12: Outline the main contributions of the paper [4] regarding linear algebra over semiring systems, including definitions of T-dependent vectors and matrix ranks.
Section 13: Contents of [20]: Grassmann Semialgebras
Description 13: Unify classical and tropical theory through the study of Grassmann semialgebras, including Hasse-Schmidt derivations and the generalization of the Cayley-Hamilton theorem.
Section 14: Contents of [50]: Basic Categorical Considerations
Description 14: Detail the categorical aspects of systems as discussed in [50], including functors, prime systems, and congruences.
Section 15: Contents of [48]: Projective Module Systems
Description 15: Discuss the definition and fundamental properties of projective module systems as elaborated in [48].
Section 16: Interface between Systems and Tropical Mathematics
Description 16: Relate systems to other approaches in tropical mathematics and describe the systemic version of tropical ideals.
Section 17: Areas for Further Research
Description 17: Identify areas for further research, including the study of affine varieties, valuated matroids, and their formulation over systems.
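As a further toy illustration of the supertropical idea referred to in Sections 5 and 9 of the outline above (the encoding and names below are ours, and they deliberately ignore the −∞ element and most of the axioms): equal summands do not cancel but produce a "ghost", which acts as a weak substitute for negation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Supertropical:
    """Toy supertropical element: a real value plus a 'ghost' flag."""
    val: float
    ghost: bool = False

    def __add__(self, other):
        # Supertropical addition: the larger value dominates;
        # a tie produces the ghost of the common value (a recorded "lost" cancellation).
        if self.val > other.val:
            return self
        if other.val > self.val:
            return other
        return Supertropical(self.val, True)

    def __mul__(self, other):
        # Multiplication is ordinary addition of values; ghostness propagates.
        return Supertropical(self.val + other.val, self.ghost or other.ghost)

a, b = Supertropical(3), Supertropical(3)
print(a + b)                 # Supertropical(val=3, ghost=True): a tie occurred
print(a + Supertropical(5))  # Supertropical(val=5, ghost=False)
print(a * b)                 # Supertropical(val=6, ghost=False)
```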
A Survey of Word-sense Disambiguation Effective Techniques and Methods for Indian Languages
11
--- paper_title: A Context Expansion Method for Supervised Word Sense Disambiguation paper_content: Feature sparseness is one of the main causes for Word Sense Disambiguation (WSD) systems to fail, as it increases the probability of incorrect predictions. In this work, we present a WSD method to overcome this problem by using an automatically-created thesaurus to append related words to a specific context, in order to improve the effectiveness of candidate selection for an ambiguous word. We treat the context as a vector of words taken from sentences, and expand it with words from the thesaurus according to their mutual relatedness. Our results suggest that the method performs disambiguation with high precision. --- paper_title: An improved unsupervised learning probabilistic model of word sense disambiguation paper_content: Unsupervised learning can address the general limitation of supervised learning that sense-tagged text is not available for most domains and is expensive to create. However, the existing unsupervised learning probabilistic models are computationally expensive and convergence slowly because of large numbers and random initialization of model parameters. This paper reduces the noise jamming and the dimensionality of the models by using proposed feature selection and initial parameter estimation. Experimental result shows the accuracy and efficiency of the proposed probabilistic model are obviously improved. --- paper_title: HyperLex: Lexical Cartography for Information Retrieval paper_content: Abstract This article describes an algorithm called HyperLex that is capable of automatically determining word uses in a textbase without recourse to a dictionary. The algorithm makes use of the specific properties of word cooccurrence graphs, which are shown as having “small world” properties. Unlike earlier dictionary-free methods based on word vectors, it can isolate highly infrequent uses (as rare as 1% of all occurrences) by detecting “hubs” and high-density components in the cooccurrence graphs. The algorithm is applied here to information retrieval on the Web, using a set of highly ambiguous test words. An evaluation of the algorithm showed that it only omitted a very small number of relevant uses. In addition, HyperLex offers automatic tagging of word uses in context with excellent precision (97%, compared to 73% for baseline tagging, with an 82% recall rate). Remarkably good precision (96%) was also achieved on a selection of the 25 most relevant pages for each use (including highly infrequent ones). Finally, HyperLex is combined with a graphic display technique that allows the user to navigate visually through the lexicon and explore the various domains detected for each word use. --- paper_title: An improved unsupervised learning probabilistic model of word sense disambiguation paper_content: Unsupervised learning can address the general limitation of supervised learning that sense-tagged text is not available for most domains and is expensive to create. However, the existing unsupervised learning probabilistic models are computationally expensive and convergence slowly because of large numbers and random initialization of model parameters. This paper reduces the noise jamming and the dimensionality of the models by using proposed feature selection and initial parameter estimation. Experimental result shows the accuracy and efficiency of the proposed probabilistic model are obviously improved. 
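The HyperLex entry above builds a co-occurrence graph over the contexts of an ambiguous word and detects high-connectivity "hubs", one per word use. The sketch below is only a schematic, unweighted reading of that idea (greedy hub selection by degree); it omits the frequency thresholds and edge weights of the published algorithm, and the toy data and names are ours.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(contexts, target):
    """Undirected co-occurrence graph over the words surrounding `target`."""
    neighbours = defaultdict(set)
    for context in contexts:
        words = set(context) - {target}
        for u, v in combinations(sorted(words), 2):
            neighbours[u].add(v)
            neighbours[v].add(u)
    return neighbours

def find_hubs(neighbours, max_hubs=5):
    """Greedily take the best-connected remaining word as a hub, then drop its neighbourhood."""
    remaining = set(neighbours)
    hubs = []
    while remaining and len(hubs) < max_hubs:
        hub = max(remaining, key=lambda w: len(neighbours[w] & remaining))
        hubs.append(hub)
        remaining -= neighbours[hub] | {hub}
    return hubs

contexts = [
    ["bank", "river", "water"], ["bank", "river", "fishing"],
    ["bank", "money", "loan"], ["bank", "money", "interest"],
]
graph = cooccurrence_graph(contexts, target="bank")
print(find_hubs(graph))  # ['river', 'money'] in some order -- one hub per use of "bank"
```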
--- paper_title: A Context Expansion Method for Supervised Word Sense Disambiguation paper_content: Feature sparseness is one of the main causes for Word Sense Disambiguation (WSD) systems to fail, as it increases the probability of incorrect predictions. In this work, we present a WSD method to overcome this problem by using an automatically-created thesaurus to append related words to a specific context, in order to improve the effectiveness of candidate selection for an ambiguous word. We treat the context as a vector of words taken from sentences, and expand it with words from the thesaurus according to their mutual relatedness. Our results suggest that the method performs disambiguation with high precision. --- paper_title: The SemEval-2007 WePS Evaluation: Establishing a benchmark for the Web People Search Task paper_content: This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. This task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name. --- paper_title: Active Learning With Sampling by Uncertainty and Density for Data Annotations paper_content: To solve the knowledge bottleneck problem, active learning has been widely used for its ability to automatically select the most informative unlabeled examples for human annotation. One of the key enabling techniques of active learning is uncertainty sampling, which uses one classifier to identify unlabeled examples with the least confidence. Uncertainty sampling often presents problems when outliers are selected. To solve the outlier problem, this paper presents two techniques, sampling by uncertainty and density (SUD) and density-based re-ranking. Both techniques prefer not only the most informative example in terms of uncertainty criterion, but also the most representative example in terms of density criterion. Experimental results of active learning for word sense disambiguation and text classification tasks using six real-world evaluation data sets demonstrate the effectiveness of the proposed methods. --- paper_title: Combining Knowledge- and Corpus-based Word-Sense-Disambiguation Methods paper_content: In this paper we concentrate on the resolution of the lexical ambiguity that arises when a given word has several different meanings. This specific task is commonly referred to as word sense disambiguation (WSD). The task of WSD consists of assigning the correct sense to words using an electronic dictionary as the source of word definitions. We present two WSD methods based on two main methodological approaches in this research area: a knowledge-based method and a corpus-based method. Our hypothesis is that word-sense disambiguation requires several knowledge sources in order to solve the semantic ambiguity of the words. These sources can be of different kinds-- for example, syntagmatic, paradigmatic or statistical information. Our approach combines various sources of knowledge, through combinations of the two WSD methods mentioned above. Mainly, the paper concentrates on how to combine these methods and sources of information in order to achieve good results in the disambiguation. Finally, this paper presents a comprehensive study and experimental work on evaluation of the methods and their combinations. 
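Several entries above treat corpus-based WSD as supervised classification over features of the ambiguous word's context. The following is just a generic bag-of-words baseline in that spirit (it reproduces none of the cited systems); the sense-tagged examples are invented, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy sense-tagged contexts for the ambiguous word "bank".
contexts = [
    "he deposited cash at the bank before noon",
    "the bank approved the loan and the interest rate",
    "they fished from the bank of the river",
    "the river overflowed its bank after the rain",
]
senses = ["FINANCE", "FINANCE", "RIVER", "RIVER"]

# Bag-of-words features + Naive Bayes: a common corpus-based WSD baseline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(contexts, senses)

print(model.predict(["she opened an account at the bank to earn interest"]))
print(model.predict(["mud covered the bank of the stream"]))
```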
--- paper_title: Active Learning With Sampling by Uncertainty and Density for Data Annotations paper_content: To solve the knowledge bottleneck problem, active learning has been widely used for its ability to automatically select the most informative unlabeled examples for human annotation. One of the key enabling techniques of active learning is uncertainty sampling, which uses one classifier to identify unlabeled examples with the least confidence. Uncertainty sampling often presents problems when outliers are selected. To solve the outlier problem, this paper presents two techniques, sampling by uncertainty and density (SUD) and density-based re-ranking. Both techniques prefer not only the most informative example in terms of uncertainty criterion, but also the most representative example in terms of density criterion. Experimental results of active learning for word sense disambiguation and text classification tasks using six real-world evaluation data sets demonstrate the effectiveness of the proposed methods. ---
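The active-learning entry above scores unlabeled examples by combining uncertainty with density, so that informative but non-outlier examples are queried for annotation. The sketch below is one plausible, simplified reading of that combination (least-confidence uncertainty multiplied by average cosine similarity to the K nearest pool neighbours); the exact criteria of the cited paper may differ, and the helper names and random data are ours.

```python
import numpy as np

def uncertainty_density_scores(probs, X_unlabeled, k=5):
    """Score each unlabeled example by (least-confidence uncertainty) x (density).

    probs       : (n, n_classes) predicted class probabilities from any classifier
    X_unlabeled : (n, d) feature vectors of the unlabeled pool
    density     : average cosine similarity to the k nearest pool neighbours,
                  so isolated outliers score low even when they are uncertain.
    """
    uncertainty = 1.0 - probs.max(axis=1)

    norms = np.linalg.norm(X_unlabeled, axis=1, keepdims=True) + 1e-12
    sim = (X_unlabeled / norms) @ (X_unlabeled / norms).T
    np.fill_diagonal(sim, -np.inf)           # ignore self-similarity
    top_k = np.sort(sim, axis=1)[:, -k:]     # k most similar neighbours
    density = top_k.mean(axis=1)

    return uncertainty * density

# Example: pick the most useful pool example to send to a human annotator.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(20, 4))
probs = rng.dirichlet(np.ones(3), size=20)   # stand-in classifier outputs
best = int(np.argmax(uncertainty_density_scores(probs, X_pool, k=3)))
print("query example", best)
```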
Title: A Survey of Word-sense Disambiguation Effective Techniques and Methods for Indian Languages
Section 1: INTRODUCTION
Description 1: Write an introduction about the concept of word sense disambiguation (WSD), its importance in natural language processing, the main approaches, and its applications.
Section 2: WSD APPROACHES
Description 2: Discuss the various approaches for WSD, including knowledge-based and machine-learning-based approaches, and provide insights into each approach.
Section 3: Machine Learning Based Approach
Description 3: Detail the machine learning-based approach for WSD, including supervised, unsupervised, and semi-supervised techniques.
Section 4: Dictionary Based Approach
Description 4: Describe the dictionary-based approach for WSD, including specific methods such as HyperLex and extended Word Net.
Section 5: Improved Unsupervised Learning Probabilistic Model
Description 5: Provide an overview of the improved unsupervised learning probabilistic model for WSD, including its algorithm and parameter estimation.
Section 6: Genetic Algorithm for WSD
Description 6: Explain the genetic algorithm approach for WSD, specifically its application to the Arabian language.
Section 7: Applications of WSD
Description 7: Discuss the various applications of WSD, including information extraction, information retrieval, machine translation, text processing, and speech processing.
Section 8: WSD FOR INDIAN LANGUAGES
Description 8: Discuss the work done for WSD in various Indian languages including Manipuri, Malayalam, Tamil, Kannada, Hindi, and Punjabi.
Section 9: Manipuri
Description 9: Provide a detailed description of WSD work done in Manipuri, including the architecture and methodology used.
Section 10: Malayalam
Description 10: Describe the efforts and methodologies for WSD in Malayalam, including specific algorithms and approaches used.
Section 11: Punjabi
Description 11: Outline the WSD techniques used for Punjabi, focusing on specific algorithms like the modified Lesk’s algorithm.
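Section 11 of the outline above mentions a modified Lesk algorithm for Punjabi. For orientation only, here is the classic (unmodified) gloss-overlap idea behind Lesk, with an invented two-sense miniature inventory; real systems draw glosses from a full lexical resource such as WordNet or its Indian-language counterparts.

```python
def lesk(context_words, sense_inventory):
    """Pick the sense whose dictionary gloss shares the most words with the context."""
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_inventory.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Miniature, invented sense inventory for the ambiguous word "bank".
senses = {
    "bank/FINANCE": "an institution that accepts deposits and lends money",
    "bank/RIVER": "the sloping land beside a river or stream",
}
print(lesk("he sat on the sloping land beside the river".split(), senses))  # bank/RIVER
```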
Methods and techniques of complex systems science: An overview
56
--- paper_title: Robust perfect adaptation in bacterial chemotaxis through integral feedback control paper_content: ‡Integral feedback control is a basic engineering strategy for ensuring that the output of a system robustly tracks its desired value independent of noise or variations in system parameters. In biological systems, it is common for the response to an extracellular stimulus to return to its prestimulus value even in the continued presence of the signal—a process termed adaptation or desensitization. Barkai, Alon, Surette, and Leibler have provided both theoretical and experimental evidence that the precision of adaptation in bacterial chemotaxis is robust to dramatic changes in the levels and kinetic rate constants of the constituent proteins in this signaling network [Alon, U., Surette, M. G., Barkai, N. & Leibler, S. (1998) Nature (London) 397, 168 ‐171]. Here we propose that the robustness of perfect adaptation is the result of this system possessing the property of integral feedback control. Using techniques from control and dynamical systems theory, we demonstrate that integral control is structurally inherent in the Barkai‐ Leibler model and identify and characterize the key assumptions of the model. Most importantly, we argue that integral control in some form is necessary for a robust implementation of perfect adaptation. More generally, integral control may underlie the robustness of many homeostatic mechanisms. --- paper_title: The chemical basis of morphogenesis paper_content: It is suggested that a system of chemical substances, called morphogens, reacting together and diffusing through a tissue, is adequate to account for the main phenomena of morphogenesis. Such a system, although it may originally be quite homogeneous, may later develop a pattern or structure due to an instability of the homogeneous equilibrium, which is triggered off by random disturbances. Such reaction-diffusion systems are considered in some detail in the case of an isolated ring of cells, a mathematically convenient, though biologically unusual system. The investigation is chiefly concerned with the onset of instability. It is found that there are six essentially different forms which this may take. In the most interesting form stationary waves appear on the ring. It is suggested that this might account, for instance, for the tentacle patterns on Hydra and for whorled leaves. A system of reactions and diffusion on a sphere is also considered. Such a system appears to account for gastrulation. Another reaction system in two dimensions gives rise to patterns reminiscent of dappling. It is also suggested that stationary waves in two dimensions could account for the phenomena of phyllotaxis. The purpose of this paper is to discuss a possible mechanism by which the genes of a zygote may determine the anatomical structure of the resulting organism. The theory does not make any new hypotheses; it merely suggests that certain well-known physical laws are sufficient to account for many of the facts. The full understanding of the paper requires a good knowledge of mathematics, some biology, and some elementary chemistry. Since readers cannot be expected to be experts in all of these subjects, a number of elementary facts are explained, which can be found in text-books, but whose omission would make the paper difficult reading. --- paper_title: Adaptation and Optimal Chemotactic Strategy for E. 
Coli paper_content: Extending the classic works of Berg and Purcell on the biophysics of bacterial chemotaxis, we find the optimal chemotactic strategy for the peritrichous bacterium E. coli in the high and low signal to noise ratio limits. The optimal strategy depends on properties of the environment and properties of the individual bacterium, and is therefore highly adaptive. We review experiments relevant to testing both the form of the proposed strategy and its adaptability, and propose extensions of them which could test the limits of the adaptability in this simplest sensory processing system. © 1998 The American Physical Society. --- paper_title: Complexity: Hierarchical Structures and Scaling in Physics paper_content: Part I. Phenomenology and Models: 1. Introduction 2. Examples of complex behaviour 3. Mathematical models Part II: 4. Symbolic representations of physical systems 5. Probability, ergodic theory, and information 6. Thermodynamic formalism Part III. Formal Characterization of Complexity: 7. Physical and computational analysis of symbolic signals 8. Algorithmic and grammatical complexities 9. Hierarchical scaling complexities 10. Summary and perspectives. --- paper_title: The self-made tapestry: pattern formation in nature paper_content: From the Publisher: Why do similar patterns and forms appear in nature in settings that seem to bear no relation to one another? The windblown ripples of desert sand follow a sinuous course that resembles the stripes of a zebra or a marine fish. In the trellis-like shells of microscopic sea creatures we see the same geometry as in the bubble walls of foam. Forks of lightning mirror the branches of a river network or a tree. This book explains why these are not coincidences. Nature commonly weaves its tapestry without any master plan or blueprint. Instead, these designs build themselves by self-organization. The interactions between the component parts -- whether they be grains of sand, molecules or living cells -- give rise to spontaneous patterns that are at the same time complex and beautiful. Many of these patterns are universal, recurring again and again in the natural order: spirals, spots, stripes, branches, honeycombs. Philip Ball conducts a profusely illustrated tour of this gallery, and reveals the secrets of how nature's patterns are made. --- paper_title: The Theory of Evolution and Dynamical Systems: Mathematical Aspects of Selection paper_content: Preface Part I. Selection Dynamics and Population Genetics: A Discrete Introduction: 1. The biological background 2. The Hardy-Weinberg law 3. Selection and the fundamental theorem 4. Mutation and recombination Part II. Growth Rates and Ecological Models: An ABC on ODE: 5. The ecology of populations 6. The logistic equation 7. Lotka-Volterra equations for predator-prey systems 8. Lotka-Volterra equations for two competing species 9. Lotka-Volterra equations for more than two populations Part III. Test Tube Evolution and Hypercycles: A Prebiotic Primer: 10. Prebiotic evolution 11. Branching processes and the complexity threshold 12. Catalytic growth of self-reproducing molecules 13. Permanence and the evolution of hypercycles Part IV. Strategies and Stability: An Opening in Game Dynamics: 14. Some aspects of sociobiology 15. Evolutionarily stable strategies 16. Game dynamics 17. Asymmetric conflicts Interlude. --- paper_title: Robustness in bacterial chemotaxis.
paper_content: Networks of interacting proteins orchestrate the responses of living cells to a variety of external stimuli, but how sensitive is the functioning of these protein networks to variations in their biochemical parameters? One possibility is that to achieve appropriate function, the reaction rate constants and enzyme concentrations need to be adjusted in a precise manner, and any deviation from these 'fine-tuned' values ruins the network's performance. An alternative possibility is that key properties of biochemical networks are robust; that is, they are insensitive to the precise values of the biochemical parameters. Here we address this issue in experiments using chemotaxis of Escherichia coli, one of the best-characterized sensory systems. We focus on how response and adaptation to attractant signals vary with systematic changes in the intracellular concentration of the components of the chemotaxis network. We find that some properties, such as steady-state behaviour and adaptation time, show strong variations in response to varying protein concentrations. In contrast, the precision of adaptation is robust and does not vary with the protein concentrations. This is consistent with a recently proposed molecular mechanism for exact adaptation, where robustness is a direct consequence of the network's architecture. --- paper_title: Optimization Based on Bacterial Chemotaxis paper_content: We present an optimization algorithm based on a model of bacterial chemotaxis. The original biological model is used to formulate a simple optimization algorithm, which is evaluated on a set of standard test problems. Based on this evaluation, several features are added to the basic algorithm using evolutionary concepts in order to obtain an improved optimization strategy, called the bacteria chemotaxis (BC) algorithm. This strategy is evaluated on a number of test functions for local and global optimization, compared with other optimization techniques, and applied to the problem of inverse airfoil design. The comparisons show that on average, BC performs similar to standard evolution strategies and worse than evolution strategies with enhanced convergence properties. --- paper_title: Lattice-gas automata for the Navier-Stokes equation. paper_content: A very brief presentation of how lattice gas hydrodynamics is made. It includes key references. --- paper_title: Complexity, Entropy And The Physics Of Information paper_content: A water gel explosive composition and process for preparing the same is provided which comprises an oxidizer, water, a gelling agent, and a crosslinker, and wherein the improvement comprises including therein from about 1 to about 10 weight percent of at least one amine nitrate sensitizer selected from the group comprising lower alkyl and alkanol amine nitrates and from about 1 to about 10 weight percent of an aluminum sensitizer having a surface area per unit weight of from about 3 to about 9 square meters per gram, said weight percentages based upon the total weight of the explosive composition. --- paper_title: Emergence: From Chaos to Order paper_content: From the Publisher: ::: From one of today's most innovative thinkers comes the first book to carefully explore emergence - a surprisingly simple notion (the whole is more than the sum of its parts) with enormous implications for science, business, and the arts. 
In this work, John Holland, a leader in the study of complexity at the Santa Fe Institute, dramatically shows that a theory of emergence can predict many complex behaviors, and has much to teach us about life, the mind, and organizations. In Emergence, Holland demonstrates that a small number of rules of laws can generate systems of surprising complexity. Board games provide an ancient and direct example: Chess is defined by fewer than two dozen rules, but the myriad patterns that result lead to perpetual novelty and emergence. It took centuries of study to recognize certain patterns of play, such as the control of pawn formations. But once recognized, these patterns greatly enhance the possibility of winning the game. The discovery of similar patterns in other facets of our world opens the way to a deeper understanding of the complexity of life, answering such questions as: How does a fertilized egg program the development of a trillion-cell organism? How can we build human organizations that respond rapidly to change through innovation? Throughout the book, Holland compares different systems and models that exhibit emergence in the quest for common rules or laws. --- paper_title: Information Theory and an Extension of the Maximum Likelihood Principle paper_content: In this paper it is shown that the classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion. This observation shows an extension of the principle to provide answers to many practical problems of statistical model fitting. --- paper_title: A theory of the learnable paper_content: Humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. In this paper we regard learning as the phenomenon of knowledge acquisition in the absence of explicit programming. We give a precise methodology for studying this phenomenon from a computational viewpoint. It consists of choosing an appropriate information gathering mechanism, the learning protocol, and exploring the class of concepts that can be learnt using it in a reasonable (polynomial) number of steps. We find that inherent algorithmic complexity appears to set serious limits to the range of concepts that can be so learnt. The methodology and results suggest concrete principles for designing realistic learning systems. --- paper_title: Nonparametric Time Series Prediction Through Adaptive Model Selection paper_content: We consider the problem of one-step ahead prediction for time series generated by an underlying stationary stochastic process obeying the condition of absolute regularity, describing the mixing nature of process. We make use of recent results from the theory of empirical processes, and adapt the uniform convergence framework of Vapnik and Chervonenkis to the problem of time series prediction, obtaining finite sample bounds. Furthermore, by allowing both the model complexity and memory size to be adaptively determined by the data, we derive nonparametric rates of convergence through an extension of the method of structural risk minimization suggested by Vapnik. All our results are derived for general L error measures, and apply to both exponentially and algebraically mixing processes. --- paper_title: Measuring the VC-Dimension Using Optimized Experimental Design paper_content: VC-dimension is the measure of model complexity (capacity) used in VC-theory. 
The knowledge of the VC-dimension of an estimator is necessary for rigorous complexity control using analytic VC generalization bounds. Unfortunately, it is not possible to obtain the analytic estimates of the VC-dimension in most cases. Hence, a recent proposal is to measure the VC-dimension of an estimator experimentally by fitting the theoretical formula to a set of experimental measurements of the frequency of errors on artificially generated data sets of varying sizes (Vapnik, Levin, & Le Cun, 1994). However, it may be difficult to obtain an accurate estimate of the VC-dimension due to the variability of random samples in the experimental procedure proposed by Vapnik et al. (1994). We address this problem by proposing an improved design procedure for specifying the measurement points (i.e., the sample size and the number of repeated experiments at a given sample size). Our approach leads to a nonuniform design structure as opposed to the uniform design structure used in the original article (Vapnik et al., 1994). Our simulation results show that the proposed optimized design structure leads to a more accurate estimation of the VC-dimension using the experimental procedure. The results also show that a more accurate estimation of VC-dimension leads to improved complexity control using analytic VC-generalization bounds and, hence, better prediction accuracy. --- paper_title: Neural Network Learning: Theoretical Foundations paper_content: This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics. --- paper_title: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods paper_content: From the publisher: This is the first comprehensive introduction to Support Vector Machines (SVMs), a new generation learning system based on recent advances in statistical learning theory. SVMs deliver state-of-the-art performance in real-world applications such as text categorisation, hand-written character recognition, image classification, biosequences analysis, etc., and are now established as one of the standard tools for machine learning and data mining. Students will find the book both stimulating and accessible, while practitioners will be guided smoothly through the material required for a good grasp of the theory and its applications. The concepts are introduced gradually in accessible and self-contained stages, while the presentation is rigorous and thorough. 
Pointers to relevant literature and web sites containing software ensure that it forms an ideal starting point for further study. Equally, the book and its associated web site will guide practitioners to updated literature, new applications, and on-line software. --- paper_title: Learning Kernel Classifiers: Theory and Algorithms paper_content: From the Publisher: ::: Linear classifiers in kernel spaces have emerged as a major topic within the field of machine learning. The kernel technique takes the linear classifier--a limited, but well-established and comprehensively studied model--and extends its applicability to a wide range of nonlinear pattern-recognition tasks such as natural language processing, machine vision, and biological sequence analysis. This book provides the first comprehensive overview of both the theory and algorithms of kernel classifiers, including the most recent developments. It begins by describing the major algorithmic advances: kernel perceptron learning, kernel Fisher discriminants, support vector machines, relevance vector machines, Gaussian processes, and Bayes point machines. Then follows a detailed introduction to learning theory, including VC and PAC-Bayesian theory, data-dependent structural risk minimization, and compression bounds. Throughout, the book emphasizes the interaction between theory and algorithms: how learning algorithms work and why. The book includes many examples, complete pseudo code of the algorithms presented, and an extensive source code library. --- paper_title: The Helmholtz Machine paper_content: Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways. --- paper_title: Causality: Models, Reasoning, and Inference paper_content: 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect. --- paper_title: Causation, prediction, and search paper_content: What assumptions and methods allow us to turn observations into causal knowledge, and how can even incomplete causal knowledge be used in planning and prediction to influence and control our environment? In this book Peter Spirtes, Clark Glymour, and Richard Scheines address these questions using the formalism of Bayes networks, with results that have been applied in diverse areas of research in the social, behavioral, and physical sciences. 
The authors show that although experimental and observational study designs may not always permit the same inferences, they are subject to uniform principles. They axiomatize the connection between causal structure and probabilistic independence, explore several varieties of causal indistinguishability, formulate a theory of manipulation, and develop asymptotically reliable procedures for searching over equivalence classes of causal models, including models of categorical data and structural equation models with and without latent variables. The authors show that the relationship between causality and probability can also help to clarify such diverse topics in statistics as the comparative power of experimentation versus observation, Simpson's paradox, errors in regression models, retrospective versus prospective sampling, and variable selection. The second edition contains a new introduction and an extensive survey of advances and applications that have appeared since the first edition was published in 1993. --- paper_title: Empirical Processes in M-Estimation paper_content: A coating composition suitable for coating the interior of tins comprises a solvent having dissolved therein at least 18% of two copolymers A and B in a weight ratio between 98.2 and 75.25 together with 2 to 10% (by weight of A+B) of an esterified aminoplast resin C. The solvent must comprise at least 75% of a liquid mononuclear aromatic hydrocarbon; copolymer A is a copolymer of 55 to 75% vinyl chloride with a C6- 24 diester of maleic, fumaric and/or chloromaleic acid which is completely soluble at 25% solids in toluene and has a relative viscosity of 1.3 to 1.7 as a 1% solution in cyclohexanone at 20 DEG C.; B is a copolymer of 60 to 92% vinylchloride with a vinyl fatty acid ester saponified to 2 to 10% vinyl alcohol content; and resin C is soluble in the solvent and is obtained by etherifying a condensate of urea, melamine or benzoguanamine and excess formaldehyde with a C3- 8 primary saturated monohydric alcohol. The solvent may contain up to 25% of a polar solvent e.g. ketones, esters, cyclic oxygen compounds, ether alcohols and their esters, nitrocompounds, amides and nitriles. To obtain copolymers A having the specified properties (indicating low molecular weight) copolymerisation may be effected in the presence of 1 to 6.5% of a non-polymerising halohydrocarbon, many examples being given. Copolymers B should have a molecular weight of 5,000 to 20,000 and corrected iodine number 2 to 10. In examples the compositions are coated on primed electrolytic tinplate. Primers specified are oleoresinous phenol aldehyde varnishes, epoxy resins mixed with a urea-formaldehyde and a phenol-formaldehyde resin, and copolymers of butadiene 1, 2-with styrene. Specification 883,070 is referred to.ALSO:A coating composition suitable for coating metal to be subsequently formed into tins comprises a solvent having dissolved therein at least 18% of two copolymers A and B in a weight ratio between 98,2 and 75,25 together with 2-10% (by weight of A+B) of an esterified aminoplast resin C. 
The solvent must comprise at least 75% of a liquid mononuclear aromatic hydrocarbon; copolymer A is a copolymer of 55-75% vinyl chloride with a C6-24 diester of maleic, fumaric and/or chloromaleic acid which is completely soluble at 25% solids in toluene and has a relative viscosity of 1,3-1,7 as a 1% solution in cyclohexanone at 20 DEG C.; B is a copolymer of 60-92% vinyl-chloride with a vinyl fatty acid ester saponified to 2-10% vinyl alcohol content; and resin C is soluble in the solvent and is obtained by etherifying a condensate of urea, melamine or benzoguanamine and excess formaldehyde with a C3-8 primary saturated monohydric alcohol. The solvent may contain up to 25% of a polar solvent, e.g. ketones, esters, cyclic oxygen compounds, ether alcohols and their esters, nitrocompounds, amides and nitriles. Copolymers B should have a molecular weight of 5000-20,000 and corrected iodine number of 2-10. In examples the compositions are coated on primed electrolytic tinplate and baked. Primers specified are oleoresinous phenol aldehyde varnishes, epoxy resins mixed with a ureaformaldehyde and a phenol-formaldehyde resin, and copolymers of butadiene-1,2 with styrene. Specification 883,070 is referred to. --- paper_title: The Role of Occam's Razor in Knowledge Discovery paper_content: Many KDD systems incorporate an implicit or explicit preference for simpler models, but this use of “Occam‘s razor” has been strongly criticized by several authors (e.g., Schaffer, 1993s Webb, 1996). This controversy arises partly because Occam‘s razor has been interpreted in two quite different ways. The first interpretation (simplicity is a goal in itself) is essentially correct, but is at heart a preference for more comprehensible models. The second interpretation (simplicity leads to greater accuracy) is much more problematic. A critical review of the theoretical arguments for and against it shows that it is unfounded as a universal principle, and demonstrably false. A review of empirical evidence shows that it also fails as a practical heuristic. This article argues that its continued use in KDD risks causing significant opportunities to be missed, and should therefore be restricted to the comparatively few applications where it is appropriate. The article proposes and reviews the use of domain constraints as an alternative for avoiding overfitting, and examines possible methods for handling the accuracy–comprehensibility trade-off. --- paper_title: Error And The Growth Of Experimental Knowledge paper_content: Thank you very much for downloading error and the growth of experimental knowledge. Maybe you have knowledge that, people have search numerous times for their chosen readings like this error and the growth of experimental knowledge, but end up in malicious downloads. Rather than reading a good book with a cup of coffee in the afternoon, instead they are facing with some malicious bugs inside their laptop. --- paper_title: Asymptotic optimal inference for non-ergodic models paper_content: 0. An Over-view.- 1. Introduction.- 2. The Classical Fisher-Rao Model for Asymptotic Inference.- 3. Generalisation of the Fisher-Rao Model to Non-ergodic Type Processes.- 4. Mixture Experiments and Conditional Inference.- 5. Non-local Results.- 1. A General Model and Its Local Approximation.- 1. Introduction.- 2. LAMN Families.- 3. Consequences of the LAMN Condition.- 4. Sufficient Conditions for the LAMN Property.- 5. Asymptotic Sufficiency.- 6. An Example (Galton-Watson Branching Process).- 7. 
Bibliographical Notes.- 2. Efficiency of Estimation.- 1. Introduction.- 2. Asymptotic Structure of Limit Distributions of Sequences of Estimators.- 3. An Upper Bound for the Concentration.- 4. The Existence and Optimality of the Maximum Likelihood Estimators.- 5. Optimality of Bayes Estimators.- 6. Bibliographical Notes.- 3. Optimal Asymptotic Tests.- 1. Introduction.- 2. The Optimality Criteria: Definitions.- 3. An Efficient Test of Simple Hypotheses: Contiguous Alternatives.- 4. Local Efficiency and Asymptotic Power of the Score Statistic.- 5. Asymptotic Power of the Likelihood Ratio Test: Simple Hypothesis.- 6. Asymptotic Powers of the Score and LR Statistics for Composite Hypotheses with Nuisance Parameters.- 7. An Efficient Test of Composite Hypotheses with Contiguous Alternatives.- 8. Examples.- 9. Bibliographical Notes.- 4. Mixture Experiments and Conditional Inference.- 1. Introduction.- 2. Mixture of Exponential Families.- 3. Some Examples.- 4. Efficient Conditional Tests with Reference to L.- 5. Efficient Conditional Tests with Reference to L?.- 6. Efficient Conditional Tests with Reference to LC: Bahadur Efficiency.- 7. Efficiency of Conditional Maximum Likelihood Estimators.- 8. Conditional Tests for Markov Sequences and Their Mixtures.- 9. Some Heuristic Remarks about Conditional Inference for the General Model.- 10. Bibliographical Notes.- 5. Some Non-local Results.- 1. Introduction.- 2. Non-local Behaviour of the Likelihood Ratio.- 3. Examples.- 4. Non-local Efficiency Results for Simple Likelihood Ratio Tests.- 5. Bibiographical Notes.- Appendices.- A.1 Uniform and Continuous Convergence.- A.2 Contiguity of Probability Measures.- References. --- paper_title: Entropy and Information Theory paper_content: This book is an updated version of the information theory classic, first published in 1990. About one-third of the book is devoted to Shannon source and channel coding theorems; the remainder addresses sources, channels, and codes and on information and distortion measures and their properties. New in this edition:Expanded treatment of stationary or sliding-block codes and their relations to traditional block codesExpanded discussion of results from ergodic theory relevant to information theoryExpanded treatment of B-processes -- processes formed by stationary coding memoryless sourcesNew material on trading off information and distortion, including the Marton inequalityNew material on the properties of optimal and asymptotically optimal source codesNew material on the relationships of source coding and rate-constrained simulation or modeling of random processesSignificant material not covered in other information theory texts includes stationary/sliding-block codes, a geometric view of information theory provided by process distance measures, and general Shannon coding theorems for asymptotic mean stationary sources, which may be neither ergodic nor stationary, and d-bar continuous channels. --- paper_title: Numerical Recipes In C The Art Of Scientific Computing paper_content: Thank you very much for reading numerical recipes in c the art of scientific computing. Maybe you have knowledge that, people have search hundreds times for their favorite readings like this numerical recipes in c the art of scientific computing, but end up in malicious downloads. Rather than enjoying a good book with a cup of tea in the afternoon, instead they are facing with some infectious bugs inside their desktop computer. 
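Several of the surrounding references (information theory, entropy, likelihood-based model selection) rest on estimating entropies from finite data. Purely as an illustration, the snippet below gives a bare plug-in estimate of block entropies and an entropy-rate estimate for a symbolic time series; it ignores the finite-sample bias corrections that this literature treats at length, and the function names are ours.

```python
import random
from collections import Counter
from math import log2

def block_entropy(symbols, L):
    """Plug-in Shannon entropy (in bits) of length-L blocks of a symbol sequence."""
    blocks = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def entropy_rate_estimate(symbols, L):
    """Estimate the entropy rate as H(L) - H(L-1), the per-symbol information gain."""
    return block_entropy(symbols, L) - block_entropy(symbols, L - 1)

# Example: an i.i.d. biased coin with p = 0.3 has a theoretical entropy rate of about 0.881 bits.
random.seed(0)
seq = [1 if random.random() < 0.3 else 0 for _ in range(20000)]
print(round(entropy_rate_estimate(seq, L=4), 3))
```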
--- paper_title: Time Series Analysis and Its Applications paper_content: Characteristics of time series.- Time series regression and exploratory data analysis.- ARIMA models.- Spectral analysis and filtering.- Additional time domain topics.- State-space models.- Statistical methods in the frequency domain. --- paper_title: Stochastic Dynamical Systems: Concepts, Numerical Methods, Data Analysis paper_content: A textbook designed for those commencing study in stochastic dynamical systems. It is addressed to students of physics, mathematics, chemistry, also those engaged in medical research, and aims to bridge mathematics and the applied sciences. It provides numerical methods and the tools to analyze complex systems. The book contains various stochastic and mathematical methods which are necessary for the analysis of complex systems such as polymeric melts, the human body and the atmosphere. Data analysis is treated, as well as simulation methods for given circumstances. --- paper_title: Time Series Analysis and Its Applications paper_content: Characteristics of time series.- Time series regression and exploratory data analysis.- ARIMA models.- Spectral analysis and filtering.- Additional time domain topics.- State-space models.- Statistical methods in the frequency domain. --- paper_title: Predictive Turbulence Modeling by Variational Closure paper_content: We show that a variational implementation of probability density function (PDF) closures has the potential to make predictions of general turbulence mean statistics for which a priori knowledge of the incorrectness is possible. This possibility exists because of realizability conditions on “effective potential” functions for general turbulence statistics. These potentials measure the cost for fluctuations to occur away from the ensemble-mean value in empirical time-averages of the given variable, and their existence is a consequence of a refined ergodic hypothesis for the governing dynamical system (Navier–Stokes dynamics). Approximations of the effective potentials can be calculated within PDF closures by an efficient Rayleigh–Ritz algorithm. The failure of realizability within a closure for the approximate potential of any chosen statistic implies a priori that the closure prediction for that statistic is not converged. The systematic use of these novel realizability conditions within PDF closures is shown in a simple 3-mode system of Lorenz to result in a statistically improved predictive ability. In certain cases the variational method allows an a priori optimum choice of free parameters in the closure to be made. --- paper_title: Parametric modelling of turbulence paper_content: Some steps are taken towards a parametric statistical model for the velocity and velocity derivative fields in stationary turbulence, building on the background of existing theoretical and empirical knowledge of such fields. While the ultimate goal is a model for the three-dimensional velocity components, and hence for the corresponding velocity derivatives, we concentrate here on the stream wise velocity component. Discrete and continuous time stochastic processes of the first-order autoregressive type and with one-dimensional marginals having log-linear tails are constructed and compared with two large data-sets. It turns out that a first-order autoregression that fits the local correlation structure well is not capable of describing the correlations over longer ranges. 
A good fit locally as well as at longer ranges is achieved by using a process that is the sum of two independent autoregressions. We study this type of model in some detail. We also consider a model derived from the above-mentioned autoregressions and with dependence structure on the borderline to long-range dependence. This model is obtained by means of a general method for construction of processes with long-range dependence. Some suggestions for future empirical and theoretical work are given. --- paper_title: A Variational Formulation of Optimal Nonlinear Estimation paper_content: We propose a variational method to solve all three estimation problems for nonlinear stochastic dynamical systems: prediction, filtering, and smoothing. Our new approach is based upon a proper choice of cost function, termed the {\it effective action}. We show that this functional of time-histories is the unique statistically well-founded cost function to determine most probable histories within empirical ensembles. The ensemble dispersion about the sample mean history can also be obtained from the Hessian of the cost function. We show that the effective action can be calculated by a variational prescription, which generalizes the ``sweep method'' used in optimal linear estimation. An iterative numerical scheme results which converges globally to the variational estimator. This scheme involves integrating forward in time a ``perturbed'' Fokker-Planck equation, very closely related to the Kushner-Stratonovich equation for optimal filtering, and an adjoint equation backward in time, similarly related to the Pardoux-Kushner equation for optimal smoothing. The variational estimator enjoys a somewhat weaker property, which we call ``mean optimality''. However, the variational scheme has the principal advantage---crucial for practical applications---that it admits a wide variety of finite-dimensional moment-closure approximations. The moment approximations are derived reductively from the Euler-Lagrange variational formulation and preserve the good structural properties of the optimal estimator. --- paper_title: The Lure of Modern Science: Fractal Thinking paper_content: Lure of modern science linear spaces and geometry in natural philosophy noise in natural philosophy self-similarity, fractals and measurements maps and dynamics dynamics in fractal dimensions. --- paper_title: Nonparametric statistics for stochastic processes : estimation and prediction paper_content: Synopsis.- 1. Inequalities for mixing processes.- 2. Density estimation for discrete time processes.- 3. Regression estimation and prediction for discrete time processes.- 4. Kernel density estimation for continuous time processes.- 5. Regression estimation and prediction in continuous time.- 6. The local time density estimator.- 7. Implementation of nonparametric method and numerical applications.- References. --- paper_title: Intrinsic limits on dimension calculations paper_content: The combined influences of boundary effects at large scales and nonzero nearest neighbor separations at small scales are used to compute intrinsic limits on the minimum size of a data set required for calculation of scaling exponents. A lower bound on the number of points required for a reliable estimation of the correlation exponent is given in terms of the dimension of the object and the desired accuracy. A method of estimating the correlation integral computed from a finite sample of a white noise signal is given. 
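The dimension-calculation entry just above concerns how many points are needed to estimate a correlation exponent reliably. Purely for illustration, the sketch below computes the raw correlation sum on a time-delay embedding, in the spirit of the Grassberger-Procaccia construction discussed in the nonlinear time-series references here; it includes none of the finite-size, autocorrelation, or noise corrections those works warn about, and the data are synthetic.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    """Fraction of distinct point pairs closer than r (the correlation integral C(r))."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.mean(d[iu] < r)

# Example: the slope of log C(r) versus log r estimates the correlation dimension.
rng = np.random.default_rng(1)
x = rng.uniform(size=3000)               # i.i.d. noise: dimension ~ embedding dimension
pts = delay_embed(x, dim=2, tau=1)[:800]
radii = np.logspace(-2, -0.5, 6)
C = np.array([correlation_sum(pts, r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(round(slope, 2))                   # close to 2 for this 2-d embedding of noise
```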
--- paper_title: Investigating nonlinear dynamics from time series: The influence of symmetries and the choice of observables paper_content: When a dynamical system is investigated from a time series, one of the most challenging problems is to obtain a model that reproduces the underlying dynamics. Many papers have been devoted to this problem but very few have considered the influence of symmetries in the original system and the choice of the observable. Indeed, it is well known that there are usually some variables that provide a better representation of the underlying dynamics and, consequently, a global model can be obtained with less difficulties starting from such variables. This is connected to the problem of observing the dynamical system from a single time series. The roots of the nonequivalence between the dynamical variables will be investigated in a more systematic way using previously defined observability indices. It turns out that there are two important ingredients which are the complexity of the coupling between the dynamical variables and the symmetry properties of the original system. As will be mentioned, symmetries and the choice of observables also has important consequences in other problems such as synchronization of nonlinear oscillators. (c) 2002 American Institute of Physics. --- paper_title: Nonlinear Time Series Analysis paper_content: Part I. Basic Concepts: 1. Introduction: why nonlinear methods? 2. Linear tools and general considerations 3. Phase space methods 4. Determinism and predictability 5. Instability: Lyapunov exponents 6. Self-similarity: dimensions 7. Using nonlinear methods when determinism is weak 8. Selected nonlinear phenomena Part II. Advanced Topics: 9. Advanced embedding methods 10. Chaotic data and noise 11. More about invariant quantities 12. Modeling and forecasting 13. Chaos control 14. Other selected topics Appendix 1. Efficient neighbour searching Appendix 2. Program listings Appendix 3. Description of the experimental data sets. --- paper_title: Lectures on Discrete Time Filtering paper_content: This text is based on a course given at the University of Southern California, at the University of Nice, and at Cheng Kung University in Taiwan. It discusses linear and nonlinear sequential filtering theory: that is, the problem of estimating the process underlying a stochastic signal. For the linear coloured-noise problem, the theory is due to Kalman, and in the case of white noise it is the continuous Kalman-Bucy theory. The techniques considered have applications in fields as diverse as economics (prediction of the money supply), geophysics (processing of sonar signals), electrical engineering (detection of radar signals), and numerical analysis (in integration packages). The nonlinear theory is treated thoroughly, along with some novel synthesis methods for this computationally demanding problem. The author also discusses the Burg technique, and gives a detailed analysis of the matrix Riccati equation. --- paper_title: A Variational Formulation of Optimal Nonlinear Estimation paper_content: We propose a variational method to solve all three estimation problems for nonlinear stochastic dynamical systems: prediction, filtering, and smoothing. Our new approach is based upon a proper choice of cost function, termed the {\it effective action}. We show that this functional of time-histories is the unique statistically well-founded cost function to determine most probable histories within empirical ensembles. 
The ensemble dispersion about the sample mean history can also be obtained from the Hessian of the cost function. We show that the effective action can be calculated by a variational prescription, which generalizes the ``sweep method'' used in optimal linear estimation. An iterative numerical scheme results which converges globally to the variational estimator. This scheme involves integrating forward in time a ``perturbed'' Fokker-Planck equation, very closely related to the Kushner-Stratonovich equation for optimal filtering, and an adjoint equation backward in time, similarly related to the Pardoux-Kushner equation for optimal smoothing. The variational estimator enjoys a somewhat weaker property, which we call ``mean optimality''. However, the variational scheme has the principal advantage---crucial for practical applications---that it admits a wide variety of finite-dimensional moment-closure approximations. The moment approximations are derived reductively from the Euler-Lagrange variational formulation and preserve the good structural properties of the optimal estimator. --- paper_title: Complexity: Hierarchical Structures and Scaling in Physics paper_content: Part I. Phenomenology and Models: 1. Introduction 2. Examples of complex behaviour 3. Mathematical models Part II: 4. Symbolic representations of physical systems 5. Probability, ergodic theory, and information 6. Thermodynamic formalism Part III. Formal Characterization of Complexity: 7. Physical and computational analysis of symbolic signals 8. Algorithmic and grammatical complexities 9. Hierarchical scaling complexities 10. Summary and perspectives. --- paper_title: Hidden Markov Models: Estimation and Control paper_content: Hidden Markov Model Processing.- Discrete-Time HMM Estimation.- Discrete States and Discrete Observations.- Continuous-Range Observations.- Continuous-Range States and Observations.- A General Recursive Filter.- Practical Recursive Filters.- Continuous-Time HMM Estimation.- Discrete-Range States and Observations.- Markov Chains in Brownian Motion.- Two-Dimensional HMM Estimation.- Hidden Markov Random Fields.- HMM Optimal Control.- Discrete-Time HMM Control.- Risk-Sensitive Control of HMM.- Continuous-Time HMM Control. --- paper_title: A View Of The Em Algorithm That Justifies Incremental, Sparse, And Other Variants paper_content: The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. --- paper_title: Variable length Markov chains paper_content: We study estimation in the class of stationary variable length Markov chains (VLMC) on a finite space. The processes in this class are still Markovian of high order, but with memory of variable length yielding a much bigger and structurally richer class of models than ordinary high-order Markov chains. 
From an algorithmic view, the VLMC model class has attracted interest in information theory and machine learning, but statistical properties have not yet been explored. Provided that good estimation is available, the additional structural richness of the model class enhances predictive power by finding a better trade-off between model bias and variance and allowing better structural description which can be of specific interest. The latter is exemplified with some DNA data. A version of the tree-structured context algorithm, proposed by Rissanen in an information theoretical set-up, is shown to have new good asymptotic properties for estimation in the class of VLMCs. This remains true even when the underlying model increases in dimensionality. Furthermore, consistent estimation of minimal state spaces and mixing properties of fitted models are given. We also propose a new bootstrap scheme based on fitted VLMCs. We show its validity for quite general stationary categorical time series and for a broad range of statistical procedures. --- paper_title: The Context Tree Weighting Method: Basic Properties paper_content: Describes a sequential universal data compression procedure for binary tree sources that performs the "double mixture." Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding distribution for tree sources with an unknown model and unknown parameters. Computational and storage complexity of the proposed procedure are both linear in the source sequence length. The authors derive a natural upper bound on the cumulative redundancy of the method for individual sequences. The three terms in this bound can be identified as coding, parameter, and model redundancy. The bound holds for all source sequence lengths, not only for asymptotically large lengths. The analysis that leads to this bound is based on standard techniques and turns out to be extremely simple. The upper bound on the redundancy shows that the proposed context-tree weighting procedure is optimal in the sense that it achieves the Rissanen (1984) lower bound. --- paper_title: Context-tree modeling of observed symbolic dynamics. paper_content: Modern techniques invented for data compression provide efficient automated algorithms for the modeling of the observed symbolic dynamics. We demonstrate the relationship between coding and modeling, motivating the well-known minimum description length (MDL) principle, and give concrete demonstrations of the ``context-tree weighting'' and ``context-tree maximizing'' algorithms. The predictive modeling technique obviates many of the technical difficulties traditionally associated with the correct MDL analyses. These symbolic models, representing the symbol generating process as a finite-state automaton with probabilistic emission probabilities, provide excellent and reliable entropy estimations. The resimulations of estimated tree models satisfying the MDL model-selection criterion are faithful to the original in a number of measures. The modeling suggests that the automated context-tree model construction could replace fixed-order word lengths in many traditional forms of empirical symbolic analysis of the data. We provide an explicit pseudocode for implementation of the context-tree weighting and maximizing algorithms, as well as for the conversion to an equivalent Markov chain.
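The context-tree weighting entries above describe a concrete, implementable recursion, so a toy illustration may help. The sketch below is our own Python, not the cited authors' code or pseudocode: it computes the CTW block probability of a binary sequence with Krichevsky-Trofimov estimators at each context node, and omits the streaming form and the context-tree maximizing variant.

    import math
    from collections import defaultdict

    def ctw_log2_probability(bits, depth=3):
        # CTW "double mixture" over all tree sources of depth <= `depth`:
        # every context node keeps a Krichevsky-Trofimov (KT) estimate, and the
        # weighted probability mixes "stop here" with "split into child contexts".
        counts = defaultdict(lambda: [0, 0])   # context -> [#zeros, #ones]
        log_pe = defaultdict(float)            # context -> log2 KT block probability
        for t in range(depth, len(bits)):
            ctx = tuple(reversed(bits[t - depth:t]))   # past symbols, most recent first
            x = bits[t]
            for k in range(depth + 1):                 # update the node and all ancestors
                node = ctx[:k]
                a, b = counts[node]
                log_pe[node] += math.log2(((b if x else a) + 0.5) / (a + b + 1))
                counts[node][x] += 1

        def log_pw(node):                              # log2 of the weighted probability
            if len(node) == depth:
                return log_pe[node]
            split = sum(log_pw(c) for c in (node + (0,), node + (1,)) if c in counts)
            m = max(log_pe[node], split)
            return m + math.log2(2 ** (log_pe[node] - m) + 2 ** (split - m)) - 1.0

        return log_pw(())

    bits = [0, 1, 1] * 200                             # a strongly patterned sequence
    code_len = -ctw_log2_probability(bits, depth=3)
    print(f"CTW code length: {code_len / (len(bits) - 3):.3f} bits/symbol")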
--- paper_title: Complexity: Hierarchical Structures and Scaling in Physics paper_content: Part I. Phenomenology and Models: 1. Introduction 2. Examples of complex behaviour 3. Mathematical models Part II: 4. Symbolic representations of physical systems 5. Probability, ergodic theory, and information 6. Thermodynamic formalism Part III. Formal Characterization of Complexity: 7. Physical and computational analysis of symbolic signals 8. Algorithmic and grammatical complexities 9. Hierarchical scaling complexities 10. Summary and perspectives. --- paper_title: Estimating good discrete partitions from observed data: symbolic false nearest neighbors. paper_content: A symbolic analysis of observed time series requires a discrete partition of a continuous state space containing the dynamics. A particular kind of partition, called "generating," preserves all deterministic dynamical information in the symbolic representation, but such partitions are not obvious beyond one dimension. Existing methods to find them require significant knowledge of the dynamical evolution operator. We introduce a statistic and algorithm to refine empirical partitions for symbolic state reconstruction. This method optimizes an essential property of a generating partition, avoiding topological degeneracies, by minimizing the number of "symbolic false nearest neighbors." It requires only the observed time series and is sensible even in the presence of noise when no truly generating partition is possible. --- paper_title: Validity of threshold-crossing analysis of symbolic dynamics from chaotic time series paper_content: A practical and popular technique to extract the symbolic dynamics from experimentally measured chaotic time series is the threshold-crossing method, by which an arbitrary partition is utilized for determining the symbols. We address to what extent the symbolic dynamics so obtained can faithfully represent the phase-space dynamics. Our principal result is that such a practice can lead to a severe misrepresentation of the dynamical system. The measured topological entropy is a Devil's staircase-like, but surprisingly nonmonotone, function of a parameter characterizing the amount of misplacement of the partition. --- paper_title: Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts paper_content: 1. Background and Basics.- 1.1 Subshifts of Finite Type.- 1.2 Examples.- 1.3 Perron-Frobenius Theory.- 1.4 Basic Dynamics.- Notes.- References.- 2. Topological Conjugacy.- 2.1 Decomposition of Topological Conjugacies.- 2.2 Algebraic Consequences of Topological Conjugacy.- Notes.- References.- 3. Automorphisms.- 3.1 Automorphisms.- 3.2 Automorphisms as Conjugacies.- 3.3 Subgroups of the Automorphism Group.- 3.4 Actions of Automorphisms.- 3.5 Summary.- Notes.- References.- 4. Embeddings and Factor Maps.- 4.1 Factor Maps.- 4.2 Finite-to-one Factor Maps.- 4.3 Special Constructions Involving Factor Maps.- 4.4 Subsystems and Infinite-to-One Factor Maps.- Notes.- References.- 5. Almost-Topological Conjugacy.- 5.1 Reducible Subshifts of Finite Type.- 5.2 Almost-Topological Conjugacy.- Notes.- References.- 6. Further Topics.- 6.1 Sofic Systems.- 6.2 Markov Measures and the Maximal Measure.- 6.3 Markov Subgroups.- 6.4 Cellular Automata.- 6.5 Channel Codes.- Notes.- References.- 7. Countable State Markov Shifts.- 7.1 Perron-Frobenius Theory.- 7.2 Basic Symbolic Dynamics.- Notes.- References.- Name Index.
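To make the threshold-crossing discussion above concrete, here is a small illustrative sketch of our own (map, thresholds and block lengths chosen arbitrarily, not taken from the cited papers): it symbolizes a chaotic time series at a threshold and estimates the entropy rate from block entropies, so one can see how a displaced, non-generating partition changes the estimate.

    import math
    from collections import Counter

    def block_entropy(symbols, length):
        # Shannon entropy (bits) of the empirical distribution of words of `length`.
        words = Counter(tuple(symbols[i:i + length])
                        for i in range(len(symbols) - length + 1))
        total = sum(words.values())
        return -sum(c / total * math.log2(c / total) for c in words.values())

    def entropy_rate(series, threshold, length=8):
        # Threshold-crossing symbolization, then the finite-block estimate
        # h(L) = H(L) - H(L-1).
        symbols = [1 if x > threshold else 0 for x in series]
        return block_entropy(symbols, length) - block_entropy(symbols, length - 1)

    # Logistic map at r = 4: the partition at x = 0.5 is generating (entropy rate
    # ~ 1 bit/step); a displaced threshold generally yields a different, biased estimate.
    x, series = 0.4, []
    for _ in range(20000):
        x = 4.0 * x * (1.0 - x)
        series.append(x)
    for threshold in (0.5, 0.7):
        print(f"threshold {threshold}: {entropy_rate(series, threshold):.3f} bits/step")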
--- paper_title: Symbolic dynamics of noisy chaos paper_content: Abstract One model of randomness observed in physical systems is that low-dimensional deterministic chaotic attractors underlie the observations. A phenomenological theory of chaotic dynamics requires an accounting of the information flow from the observed system to the observer, the amount of information available in observations, and just how this information affects predictions of the system's future behavior. In an effort to develop such a description, we discuss the information theory of highly discretized observations of random behavior. Metric entropy and topological entropy are well-defined invariant measures of such an attractor's “level of chaos”, and are computable using symbolic dynamics. Real physical systems that display low dimensional dynamics are, however, inevitably coupled to high-dimensional randomness, e.g. thermal noise. We investigate the effects of such fluctuations coupled to deterministic chaotic systems, in particular, the metric entropy's response to the fluctuations. We find that the entropy increases with a power law in the noise level, and that the convergence of the entropy and the effect of fluctuations can be cast as a scaling theory. We also argue that in addition to the metric entropy, there is a second scaling invariant quantity that characterizes a deterministic system with added fluctuations: I0, the maximum average information obtainable about the initial condition that produces a particular sequence of measurements (or symbols). --- paper_title: Unreconstructible At Any Radius paper_content: Abstract Modeling pattern data series with cellular automata fails for a wide range of deterministic nonlinear spatial processes. If the latter have finite spatially-local memory, reconstructed cellular automata with infinite radius may be required. In some cases, even this is not adequate: an irreducible stochasticity remains on the shortest time scales. The underlying problem is illustrated and quantitatively analyzed using an alternative model class called cellular transducers. --- paper_title: Lattice Gas Prediction is P-Complete paper_content: We show that predicting the HPP or FHP III lattice gas for finite time is equivalent to calculating the output of an arbitrary Boolean circuit, and is therefore P-complete: that is, it is just as hard as any other problem solvable by a serial computer in polynomial time. It is widely believed in computer science that there are inherently sequential problems, for which parallel processing gives no significant speedup. Unless this is false, it is impossible even with highly parallel processing to predict lattice gases much faster than by explicit simulation. More precisely, we cannot predict t time-steps of a lattice gas in parallel computation time O(log^{k}t) for any k, or O(t^{\alpha}) for --- paper_title: Majority-vote cellular automata, Ising dynamics, and P-completeness paper_content: We study cellular automata where the state at each site is decided by a majority vote of the sites in its neighborhood. These are equivalent, for a restricted set of initial conditions, to nonzero probability transitions in single spin-flip dynamics of the Ising model at zero temperature. We show that in three or more dimensions these systems can simulate Boolean circuits of AND and OR gates, and are therefore P-complete. That is, predicting their state t time-steps in the future is at least as hard as any other problem that takes polynomial time on a serial computer.
Therefore, unless a widely believed conjecture in computer science is false, it is impossible even with parallel computation to predict majority-vote cellular automata, or zero-temperature single spin-flip Ising dynamics, qualitatively faster than by explicit simulation. --- paper_title: Threshold-range scaling of excitable cellular automata paper_content: Each cell of a two-dimensional lattice is painted one of κ colors, arranged in a ‘color wheel’. The colors advance (k to k+1 mod κ) either automatically or by contact with at least a threshold number of successor colors in a prescribed local neighborhood. Discrete-time parallel systems of this sort in which color 0 updates by contact and the rest update automatically are called Greenberg-Hastings (GH) rules. A system in which all colors update by contact is called a cyclic cellular automaton (CCA). Started from appropriate initial conditions, these models generate periodic traveling waves. Started from random configurations the same rules exhibit complex self-organization, typically characterized by nucleation of locally periodic ‘ram's horns’ or spirals. Corresponding random processes give rise to a variety of ‘forest fire’ equilibria that display large-scale stochastic wave fronts. This paper describes a framework, theoretically based, but relying on extensive interactive computer graphics experimentation, for investigation of the complex dynamics shared by excitable media in a broad spectrum of scientific contexts. By focusing on simple mathematical prototypes we hope to obtain a better understanding of the basic organizational principles underlying spatially distributed oscillating systems. --- paper_title: Lattice-gas automata for the Navier-Stokes equation. paper_content: A very brief presentation of how lattice gas hydrodynamics is made. It includes key references. --- paper_title: Lattice-Gas Cellular Automata: Simple Models of Complex Hydrodynamics paper_content: Preface Acknowledgements 1. A simple model of fluid mechanics 2. Two routes to hydrodynamics 3. Inviscid two-dimensional lattice-gas hydrodynamics 4. Viscous two-dimensional hydrodynamics 5. Some simple 3D models 6. The lattice-Boltzmann method 7. Using the Boltzmann method 8. Miscible fluids 9. Immiscible lattice gases 10. Lattice-Boltzmann method for immiscible fluids 11. Immiscible lattice gases in three dimensions 12. Liquid-gas models 13. Flow through porous media 14. Equilibrium statistical mechanics 15. Hydrodynamics in the Boltzmann approximation 16. Phase separation 17. Interfaces 18. Complex fluids and patterns Appendices Author Index Subject Index. --- paper_title: Molecular dynamics of a classical lattice gas: Transport properties and time correlation functions paper_content: A study of the dynamics of a discrete two-dimensional system of classical particles is presented. In this model, dynamics and computations may be done exactly, by definition. The equilibrium state is investigated and the Navier-Stokes hydrodynamical equations are derived. Two hydrodynamical modes exist in the model: the sound waves and a kind of vorticity diffusion. In the Navier-Stokes equations one obtains a transport coefficient which is given by a Green-Kubo formula. The related time correlation function has been calculated in a numerical simulation up to a time of the order of 50 mean free flights. After a short time of exponential decay this time correlation behaves like $t^{-S}$, the exponent being compared to theoretical predictions.
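Since the excitable-media entry above defines the Greenberg-Hastings update precisely, a minimal sketch is easy to give. The code below is our own toy version (numpy, periodic boundaries, von Neumann neighbourhood, parameters chosen arbitrarily), not code from the cited work.

    import numpy as np

    def greenberg_hastings_step(grid, n_states=3, threshold=1):
        # One synchronous GH update: colour 0 advances to 1 only if at least
        # `threshold` von Neumann neighbours are in colour 1 ("by contact");
        # every other colour advances automatically, cycling back to 0.
        excited = (grid == 1).astype(int)
        neighbours = sum(np.roll(excited, shift, axis=axis)
                         for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)))
        advanced = np.where(grid > 0, (grid + 1) % n_states, 0)
        fired = np.where((grid == 0) & (neighbours >= threshold), 1, 0)
        return advanced + fired

    rng = np.random.default_rng(1)
    grid = rng.integers(0, 3, size=(128, 128))   # random initial colouring
    for _ in range(200):
        grid = greenberg_hastings_step(grid)
    print("fraction of excited cells after 200 steps:", (grid == 1).mean())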
--- paper_title: Understanding Object-Oriented Programming with Java paper_content: Timothy Budd, leading author, educator and researcher in the object-oriented programming community, provides a deep understanding of object-oriented programming and Java. Understanding Object-Oriented Programming with Java teaches readers why the Java language works the way it does, as opposed to many other books that focus on how Java works. Readers learn about the development decisions that went into making the Java language, and leave with a more sophisticated knowledge of Java and how it fits in the context of object-oriented programming. Throughout the text, the focus remains on teaching readers to master the necessary object-oriented programming concepts. Dr. Budd explains to the reader in clear and simple terms the fundamental principles of object-oriented programming, illustrating these principles with extensive examples from the Java standard library. In short, he thoughtfully created this book, not as a reference manual for the Java language, but as a tool for understanding Java and the object-oriented programming philosophy. Highlights: Provides several graduated example programs in Part II (i.e., cannon and pinball games) for readers to work through and successively learn object-oriented programming features. Includes extensive examples from the Java standard library so that readers can better understand the many design patterns found in the AWT, the multiple purposes for which inheritance is used in the standard classes, and more. Discusses features of Java in Part V that are important for students to understand, but not necessarily notable for their object-oriented features. Instructors have the flexibility to omit altogether, or introduce in parallel with earlier material. --- paper_title: Turtles, termites, and traffic jams: explorations in massively parallel microworlds paper_content: Part 1 Foundations: introduction the era of decentralization. Part 2 Constructions: constructionism LEGO/logo StarLogo objects and parallelism. Part 3 Explorations: simulations and stimulations slime mould artificial ants traffic jams termites turtles and frogs turtle ecology new turtle geometry forest fire recursive trees. Part 4 Reflections: the centralized mindset beyond the centralized mindset. Part 5 Projections: growing up. Appendices: student participants StarLogo overview. --- paper_title: Co-ordination in Artificial Agent Societies: Social Structures and Its Implications for Autonomous Problem-Solving Agents paper_content: Advances in Computer Science often arise from new ideas and concepts, that prove to be advantageous for the design of complex software systems. The conception of multi-agent systems is particularly attractive, as it promises modularity based on the conceptual speciality of an agent, as well as flexibility in their integration through appropriate interaction models. While early systems drew upon co-operative agents, recent developments have realised the importance of the notion of autonomy in the design of agent-based applications. The emergence of systems of autonomous problem-solving agents paves the way for complex Artificial Intelligence applications that allow for scalability and at the same time foster the reusability of their components.
In consequence, an intelligent multi-agent application can be seen as a collection of autonomous agents, usually specialised in different tasks, together with a social model of their interactions. This approach implies a dynamic generation of complex relational structures, that agents need to be knowledgeable of in order to successfully achieve their goals. Therefore, a multi-agent system designer needs to think carefully about conceptualisation, representation and enactment of the different types of knowledge that its agents rely on, for individual problem solving as well as for mutual co-ordination. --- paper_title: Reasoning about Rational Agents paper_content: One goal of modern computer science is to engineer computer programs that can act as autonomous, rational agents; software that can independently make good decisions about what actions to perform on our behalf and execute those actions. Applications range from small programs that intelligently search the Web buying and selling goods via electronic commerce, to autonomous space probes. This book focuses on the belief-desire-intention (BDI) model of rational agents, which recognizes the primacy of beliefs, desires, and intentions in rational action. The BDI model has three distinct strengths: an underlying philosophy based on practical reasoning in humans, a software architecture that is implementable in real systems, and a family of logics that support a formal theory of rational agency. The book introduces a BDI logic called LORA (Logic of Rational Agents). In addition to the BDI component, LORA contains a temporal component, which allows one to represent the dynamics of how agents and their environments change over time, and an action component, which allows one to represent the actions that agents perform and the effects of the actions. The book shows how LORA can be used to capture many components of a theory of rational agency, including such notions as communication and cooperation. --- paper_title: Putting Intentions into Cell Biochemistry: An Artificial Intelligence Perspective paper_content: The living cell exists by virtue of thousands of nonlinearly interacting processes. This complexity greatly impedes its understanding. The standard approach to the calculation of the behaviour of the living cell, or part thereof, integrates all the rate equations of the individual processes. If successful, extremely intensive calculations often lead to the calculation of coherent, apparently simple, cellular "decisions" taken in response to a signal: the complexity of the behavior of the cell is often smaller than it might have been. The "decisions" correspond to the activation of entire functional units of molecular processes, rather than individual ones. The limited complexity of signal and response suggests that there might be a simpler way to model at least some important aspects of cell function. In the field of Artificial Intelligence, such simpler modelling methods for complex systems have been developed. In this paper, it is shown how the Artificial Intelligence description method for deliberative agents functioning on the basis of beliefs, desires and intentions as known in Artificial Intelligence, can be used successfully to describe essential aspects of cellular regulation. This is demonstrated for catabolite repression and substrate induction phenomena in the bacterium Escherichia coli. The method becomes highly efficient when the computation is automated in a Prolog implementation.
By defining in a qualitative way the food supply of the bacterium, the make-up of its catabolic pathways is readily calculated for cases that are sufficiently complex to make the traditional human reasoning tedious and error prone. --- paper_title: Principles of Condensed Matter Physics paper_content: Preface 1. Overview 2. Structure and scattering 3. Thermodynamics and statistical mechanics 4. Mean-field theory 5. Field theories, critical phenomena, and the renormalization group 6. Generalized elasticity 7. Dynamics: correlation and response 8. Hydrodynamics 9. Topological defects 10. Walls, kinks and solitons Glossary Index. --- paper_title: Equation of state calculations by fast computing machines paper_content: A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion. --- paper_title: A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems paper_content: Contents: Preface.- Introduction.- Preliminaries.- Problem Formulations.- Vapnik-Chervonenkis and Pollard (Pseudo-) Dimensions.- Uniform Convergence of Empirical Means.- Learning Under a Fixed Probability Measure.- Distribution-Free Learning.- Learning Under an Intermediate Family of Probabilities.- Alternate Models of Learning.- Applications to Neural Networks.- Applications to Control Systems.- Some Open Problems. --- paper_title: Technology and Market Structure: Theory and History paper_content: Economists have traditionally taken two very different approaches to studying market structure. One looks to "industry characteristics" to explain why different industries develop in different ways; the other looks to the pattern of firm growth within a "typical" industry to describe the evolution of the size distribution of firms. In his new book, John Sutton sets out a unified theory that encompasses both approaches, while generating a series of novel predictions as to how markets evolve. Using statistical analysis and a detailed examination of industry histories, he rigorously tests these new predictions. Data in the paperback edition have been revised and updated. --- paper_title: Stochastic Dynamical Systems: Concepts, Numerical Methods, Data Analysis paper_content: A textbook designed for those commencing study in stochastic dynamical systems. It is addressed to students of physics, mathematics, chemistry, also those engaged in medical research, and aims to bridge mathematics and the applied sciences. It provides numerical methods and the tools to analyze complex systems. The book contains various stochastic and mathematical methods which are necessary for the analysis of complex systems such as polymeric melts, the human body and the atmosphere. Data analysis is treated, as well as simulation methods for given circumstances.
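The Metropolis et al. entry above describes the modified Monte Carlo integration in words; as a reminder of how little machinery it needs, here is a generic single-variable Metropolis sampler, our own sketch with an arbitrary double-well energy rather than the original hard-sphere calculation.

    import math
    import random

    def metropolis(energy, x0, n_steps, temperature=1.0, step_size=0.5, seed=0):
        # Propose a symmetric random move and accept it with probability
        # min(1, exp(-dE / T)); otherwise keep the current configuration.
        rng = random.Random(seed)
        x, e = x0, energy(x0)
        samples = []
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step_size, step_size)
            e_new = energy(x_new)
            if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
                x, e = x_new, e_new
            samples.append(x)
        return samples

    # Sample from the Boltzmann distribution of a double-well energy (x^2 - 1)^2.
    xs = metropolis(lambda x: (x * x - 1.0) ** 2, x0=0.0, n_steps=20000)
    print("mean |x| of samples:", sum(abs(x) for x in xs) / len(xs))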
--- paper_title: Learning in graphical models paper_content: Part 1 Inference: introduction to inference for Bayesian networks, Robert Cowell advanced inference in Bayesian networks, Robert Cowell inference in Bayesian networks using nested junction trees, Uffe Kjoerulff bucket elimination - a unifying framework for probabilistic inference, R. Dechter an introduction to variational methods for graphical models, Michael I. Jordan et al improving the mean field approximation via the use of mixture distributions, Tommi S. Jaakkola and Michael I. Jordan introduction to Monte Carlo methods, D.J.C. MacKay suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation, Radford M. Neal. Part 2 Independence: chain graphs and symmetric associations, Thomas S. Richardson the multiinformation function as a tool for measuring stochastic dependence, M. Studeny and J. Vejnarova. Part 3 Foundations for learning: a tutorial on learning with Bayesian networks, David Heckerman a view of the EM algorithm that justifies incremental, sparse and other variants, Radford M. Neal and Geoffrey E. Hinton. Part 4 Learning from data: latent variable models, Christopher M. Bishop stochastic algorithms for exploratory data analysis - data clustering and data visualization, Joachim M. Buhmann learning Bayesian networks with local structure, Nir Friedman and Moises Goldszmidt asymptotic model selection for directed networks with hidden variables, Dan Geiger et al a hierarchical community of experts, Geoffrey E. Hinton et al an information-theoretic analysis of hard and soft assignment methods for clustering, Michael J. Kearns et al learning hybrid Bayesian networks from data, Stefano Monti and Gregory F. Cooper a mean field learning algorithm for unsupervised neural networks, Lawrence Saul and Michael Jordan edge exclusion tests for graphical Gaussian models, Peter W.F. Smith and Joe Whittaker hepatitis B - a case study in MCMC, D.J. Spiegelhalter et al prediction with Gaussian processes - from linear regression to linear prediction and beyond, C.K.I. Williams. --- paper_title: Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues paper_content: Preface * 1 Probability Review * 2 Discrete Time Markov Models * 3 Recurrence and Ergodicity * 4 Long Run Behavior * 5 Lyapunov Functions and Martingales * 6 Eigenvalues and Nonhomogeneous Markov Chains * 7 Gibbs Fields and Monte Carlo Simulation * 8 Continuous-Time Markov Models * 9 Poisson Calculus and Queues * Appendix * Bibliography * Author Index * Subject Index --- paper_title: Adaptive Cooperative Systems paper_content: This book presents a unified treatment of self-organizing processes, drawing upon examples from physics, spatial statistics, image processing, and brain science. It offers a rigorous theory of cooperative computation as applied to problems in perceptual inferencing. The problems addressed include the integration of multiple sensory information (multiuser function), figure ground segregation, the segmentation of visual images, attention, the self-organization of feature detecting neurons, and short-term synaptic plasticity. --- paper_title: Error And The Growth Of Experimental Knowledge paper_content:
--- paper_title: When Time Breaks Down: The Three-Dimensional Dynamics of Electrochemical Waves and Cardiac Arrhythmias paper_content: --- paper_title: Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations paper_content: We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near ``critical'' lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. --- paper_title: Ecological inference paper_content: Ecological inference is the process of drawing conclusions about individual-level behavior from aggregate-level data. Recent advances involve the combination of statistical and deterministic means to produce such inferences. --- paper_title: Optimum experimental designs paper_content: Part I. Fundamentals Introduction Some key ideas Experimental strategies The choice of a model Models and least squares Criteria for a good experiment Standard designs The analysis of experiments Part II. Theory and applications Optimum design theory Criteria of optimality Experiments with both qualitative and quantitative factors Blocking response surface designs Restricted region designs Failure of the experiment and design augmentation Non-linear models Optimum Bayesian design Discrimination between models Composite design criteria Further topics. --- paper_title: Recursive nonlinear estimation: A geometric approach paper_content: Abstract The paper deals with approximation of nonlinear estimation within the Bayesian framework. The underlying idea is to project the true posterior density orthogonally onto a prespecified approximation family. The problem to be coped with is that the true posterior density is not typically at disposal when estimation is implemented recursively . It is shown that there exists a Bayes-closed description of the posterior density which is recursively computable without complete knowledge of the true posterior.
We study a mutual relationship between the equivalence class, composed of densities matching the current description, and a prespecified parametric family. It is proved that if the approximation family is of the mixture type, the equivalence classes can be made orthogonal to this family. Then the approximating density done by the orthogonal projection of the true posterior density minimizes the Kullback-Leibler distance between both densities. On the contrary, if the approximation family is of the exponential type, the analogous result holds at most locally. To be able to give a sensible definition of the orthogonal projection, we have been forced to introduce a Riemannian geometry on the family of probability distributions. Being aware that the differential-geometric concepts and tools do not belong to common knowledge of control engineers, we include necessary preliminary information. --- paper_title: Spikes: Exploring the Neural Code paper_content: Our perception of the world is driven by input from the sensory nerves. This input arrives encoded as sequences of identical spikes. Much of neural computation involves processing these spike trains. What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons.The authors invite the reader to play the role of a hypothetical observer inside the brain who makes decisions based on the incoming spike trains. Rather than asking how a neuron responds to a given stimulus, the authors ask how the brain could make inferences about an unknown stimulus from a given neural response. The flavor of some problems faced by the organism is captured by analyzing the way in which the observer can make a running reconstruction of the sensory stimulus as it evolves in time. These ideas are illustrated by examples from experiments on several biological systems. Intended for neurobiologists with an interest in mathematical analysis of neural data as well as the growing number of physicists and mathematicians interested in information processing by "real" nervous systems, Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory. A quantitative framework is used to pose precise questions about the structure of the neural code. These questions in turn influence both the design and analysis of experiments on sensory neurons. --- paper_title: Learning in graphical models paper_content: Part 1 Inference: introduction to inference for Bayesian networks, Robert Cowell advanced inference in Bayesian networks, Robert Cowell inference in Bayesian networks using nested junction trees, Uffe Kjoerulff bucket elimination - a unifying framework for probabilistic inference, R. Dechter an introduction to variational methods for graphical models, Michael I. Jordan et al improving the mean field approximation via the use of mixture distributions, Tommi S. Jaakkola and Michael I. Jordan introduction to Monte Carlo methods, D.J.C. MacKay suppressing random walls in Markov chain Monte Carlo using ordered overrelaxation, Radford M. Neal. Part 2 Independence: chain graphs and symmetric associations, Thomas S. 
Richardson the multiinformation function as a tool for measuring stochastic dependence, M. Studeny and J. Vejnarova. Part 3 Foundations for learning: a tutorial on learning with Bayesian networks, David Heckerman a view of the EM algorithm that justifies incremental, sparse and other variants, Radford M. Neal and Geoffrey E. Hinton. Part 4 Learning from data: latent variable models, Christopher M. Bishop stochastic algorithms for exploratory data analysis - data clustering and data visualization, Joachim M. Buhmann learning Bayesian networks with local structure, Nir Friedman and Moises Goldszmidt asymptotic model selection for directed networks with hidden variables, Dan Geiger et al a hierarchical community of experts, Geoffrey E. Hinton et al an information-theoretic analysis of hard and soft assignment methods for clustering, Michael J. Kearns et al learning hybrid Bayesian networks from data, Stefano Monti and Gregory F. Cooper a mean field learning algorithm for unsupervised neural networks, Lawrence Saul and Michael Jordan edge exclusion tests for graphical Gaussian models, Peter W.F. Smith and Joe Whittaker hepatitis B - a case study in MCMC, D.J. Spiegelhalter et al prediction with Gaussian processes - from linear regression to linear prediction and beyond, C.K.I. Williams. --- paper_title: Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids paper_content: Probablistic models are becoming increasingly important in analyzing the huge amount of data being produced by large-scale DNA-sequencing efforts such as the Human Genome Project. For example, hidden Markov models are used for analyzing biological sequences, linguistic-grammar-based probabilistic models for identifying RNA secondary structure, and probabilistic evolutionary models for inferring phylogenies of sequences from different organisms. This book gives a unified, up-to-date and self-contained account, with a Bayesian slant, of such methods, and more generally to probabilistic methods of sequence analysis. Written by an interdisciplinary team of authors, it is accessible to molecular biologists, computer scientists, and mathematicians with no formal knowledge of the other fields, and at the same time presents the state of the art in this new and important field. --- paper_title: Recursive nonlinear estimation: A geometric approach paper_content: Abstract The paper deals with approximation of nonlinear estimation within the Bayesian framework. The underlying idea is to project the true posterior density orthogonally onto a prespecified approximation family. The problem to be coped with is that the true posterior density is not typically at disposal when estimation is implemented recursively . It is shown that there exists a Bayes-closed description of the posterior density which is recursively computable without complete knowledge of the true posterior. We study a mutual relationship between the equivalence class, composed of densities matching the current description, and a prespecified parametric family. It is proved that if the approximation family is of the mixture type, the equivalence classes can be made orthogonal to this family. Then the approximating density done by the orthogonal projection of the true posterior density minimizes the Kullback-Leibler distance between both densities. On the contrary, if the approximation family is of the exponential type, the analogous result holds at most locally. 
To be able to give a sensible definition of the orthogonal projection, we have been forced to introduce a Riemannian geometry on the family of probability distributions. Being aware that the differential-geometric concepts and tools do not belong to common knowledge of control engineers, we include necessary preliminary information. --- paper_title: Complexity: Hierarchical Structures and Scaling in Physics paper_content: Part I. Phenomenology and Models: 1. Introduction 2. Examples of complex behaviour 3. Mathematical models Part II: 4. Symbolic representations of physical systems 5. Probability, ergodic theory, and information 6. Thermodynamic formalism Part III. Formal Characterization of Complexity: 7. Physical and computational analysis of symbolic signals 8. Algorithmic and grammatical complexities 9. Hierarchical scaling complexities 10. Summary and perspectives. --- paper_title: COMPLEXITY AS THERMODYNAMIC DEPTH paper_content: Abstract A measure of complexity for the macroscopic states of physical systems is defined. Called depth, the measure is universal: it applies to all physical systems. The form of the measure is uniquely fixed by the requirement that it be a continuous, additive function of the processes that can result in a state. Applied to a Hamiltonian system, the measure is equal to the difference between the system's coarse- and fine-grained entropy, a quantity that we call thermodynamic depth. The measure satisfies the intuitive requirements that wholly ordered and wholly random systems are not thermodynamically deep and that a complex object together with a copy is not much deeper than the object alone. Applied to systems capable of computation, the measure yields a conventional computational measure of complexity as a special case. The relation of depth and thermodynamic depth to previously proposed definitions of complexity is discussed, and applications to physical, chemical, and mathematical problems are proposed. --- paper_title: HLA and HIV Infection Progression: Application of the Minimum Description Length Principle to Statistical Genetics paper_content: The minimum description length (MDL) principle was developed in the context of computational complexity and coding theory. It states that the best model to account for some data minimizes the sum of the lengths, in bits, of the descriptions of the model and the data as encoded via the model. The MDL principle gives a criterion for parameter selection, by using the description length as a test statistic. Class I HLA genes play a major role in the immune response to HIV, and are known to be associated with rates of progression to AIDS. However, these genes are extremely polymorphic, making it difficult to associate alleles with disease outcome, given statistical issues of multiple testing. Application of the MDL principle to immunogenetic data from a longitudinal cohort study (Chicago MACS) enables classification of alleles associated with plasma HIV RNA abundance, an indicator of infection progression. Variation in progression is strongly associated with HLA-B. Allele associations with viral levels support and extend previous studies. In particular, individuals without B58s supertype alleles average viral RNA levels 3.6 times greater than individuals with them. Mechanisms for these associations include variation in epitope specificity and selection that favors rare alleles. 
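The MDL entry above states the principle (model bits plus data-given-model bits). A tiny, self-contained illustration in that spirit is sketched below: our own crude two-part code for choosing the order of a binary Markov chain, using the familiar (d/2) log2 N parameter cost, and not the genetic-association analysis of the cited paper.

    import math
    from collections import Counter

    def mdl_markov_order(symbols, max_order=4):
        # Two-part description length for a binary order-k Markov model:
        # -log2(maximum likelihood) + (d/2) log2 N, with d = 2**k free parameters.
        n = len(symbols)
        best = None
        for k in range(max_order + 1):
            ctx_counts, pair_counts = Counter(), Counter()
            for t in range(k, n):
                ctx = tuple(symbols[t - k:t])
                ctx_counts[ctx] += 1
                pair_counts[ctx, symbols[t]] += 1
            loglik = sum(c * math.log2(c / ctx_counts[ctx])
                         for (ctx, _), c in pair_counts.items())
            total_bits = -loglik + 0.5 * (2 ** k) * math.log2(n)
            if best is None or total_bits < best[0]:
                best = (total_bits, k)
        return best

    bits, order = mdl_markov_order([0, 1, 1] * 200)
    print(f"selected order {order} ({bits:.1f} bits)")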
--- paper_title: Toward a Quantitative Theory of Self-Generated Complexity paper_content: Quantities are defined operationally which qualify as measures of complexity of patterns arising in physical situations. Their main features, distinguishing them from previously used quantities, are the following: (1) they are measure-theoretic concepts, more closely related to Shannon entropy than to computational complexity; and (2) they are observables related to ensembles of patterns, not to individual patterns. Indeed, they are essentially Shannon information needed to specify not individual patterns, but either measure-theoretic or algebraic properties of ensembles of patterns arising in a priori translationally invariant situations. Numerical estimates of these complexities are given for several examples of patterns created by maps and by cellular automata. --- paper_title: Predictability, Complexity, and Learning paper_content: We define predictive information Ipred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times T: Ipred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then Ipred(T) grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and the analysis of physical systems through statistical mechanics and dynamical systems theory. Furthermore, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of Ipred(T) provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in problems in physics, statistics, and biology. --- paper_title: Computational mechanics: Pattern and prediction, structure and simplicity paper_content: Computational mechanics, an approach to structural complexity, defines a process's causal states and gives a procedure for finding them. We show that the causal-state representation--an e-machine--is the minimal one consistent with accurate prediction. We establish several results on e-machine optimality and uniqueness and on how e-machines compare to alternative representations. Further results relate measures of randomness and structural complexity obtained from e-machines to those from ergodic and information theories. --- paper_title: Elements of the Theory of Computation paper_content: Lewis and Papadimitriou present this long awaited Second Edition of their best-selling theory of computation. The authors are well-known for their clear presentation that makes the material accessible to a broad audience and requires no special previous mathematical experience. In this new edition, the authors incorporate a somewhat more informal, friendly writing style to present both classical and contemporary theories of computation. Algorithms, complexity analysis, and algorithmic ideas are introduced informally in Chapter 1, and are pursued throughout the book. Each section is followed by problems.
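Because the predictive-information entry above defines Ipred as the past/future mutual information, a crude plug-in estimate is easy to sketch for symbol sequences. The code below is our own finite-horizon estimator (2H(L) - H(2L) for a stationary sequence), not the scaling analysis of the cited paper, and it inherits the usual undersampling bias at large L.

    import math
    import random
    from collections import Counter

    def block_entropy(symbols, length):
        # Empirical Shannon entropy (bits) of words of the given length.
        counts = Counter(tuple(symbols[i:i + length])
                         for i in range(len(symbols) - length + 1))
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def predictive_information(symbols, horizon):
        # I(past_L; future_L) = H(L) + H(L) - H(2L) for a stationary sequence.
        return 2.0 * block_entropy(symbols, horizon) - block_entropy(symbols, 2 * horizon)

    rng = random.Random(0)
    periodic = [0, 1, 1] * 2000                       # stores log2(3) bits about its phase
    coin = [rng.randint(0, 1) for _ in range(6000)]   # stores essentially nothing
    for name, seq in (("periodic", periodic), ("coin flips", coin)):
        print(name, round(predictive_information(seq, horizon=4), 3))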
--- paper_title: Unreconstructible At Any Radius paper_content: Abstract Modeling pattern data series with cellular automata fails for a wide range of deterministic nonlinear spatial processes. If the latter have finite spatially-local memory, reconstructed cellular automata with infinite radius may be required. In some cases, even this is not adequate: an irreducible stochasticity remains on the shortest time scales. The underlying problem is illustrated and quantitatively analyzed using an alternative model class called cellular transducers. --- paper_title: Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems paper_content: Preface 1. Introduction 2. Characterization of the SOC state 3. Systems exhibiting SOC 4. Computer models 5. The search for a formalism 6. Is it SOC or is it not? Appendices. --- paper_title: Optimal Design, Robustness, and Risk Aversion paper_content: Highly optimized tolerance is a model of optimization in engineered systems, which gives rise to power-law distributions of failure events in such systems. The archetypal example is the highly optimized forest fire model. Here we give an analytic solution for this model which explains the origin of the power laws. We also generalize the model to incorporate risk aversion, which results in truncation of the tails of the power law so that the probability of catastrophically large events is dramatically lowered, giving the system more robustness. --- paper_title: Highly optimized tolerance: A mechanism for power laws in designed systems paper_content: We introduce a mechanism for generating power law distributions, referred to as highly optimized tolerance (HOT), which is motivated by biological organisms and advanced engineering technologies. Our focus is on systems which are optimized, either through natural selection or engineering design, to provide robust performance despite uncertain environments. We suggest that power laws in these systems are due to tradeoffs between yield, cost of resources, and tolerance to risks. These tradeoffs lead to highly optimized designs that allow for occasional large events. We investigate the mechanism in the context of percolation and sand pile models in order to emphasize the sharp contrasts between HOT and self-organized criticality (SOC), which has been widely suggested as the origin for power laws in complex systems. Like SOC, HOT produces power laws. However, compared to SOC, HOT states exist for densities which are higher than the critical density, and the power laws are not restricted to special values of the density. The characteristic features of HOT systems include: (1) high efficiency, performance, and robustness to designed-for uncertainties; (2) hypersensitivity to design flaws and unanticipated perturbations; (3) nongeneric, specialized, structured configurations; and (4) power laws. The first three of these are in contrast to the traditional hallmarks of criticality, and are obtained by simply adding the element of design to percolation and sand pile models, which completely changes their characteristics. 
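As a contrast between the self-organized-criticality and highly-optimized-tolerance entries above, a toy sandpile is the standard SOC illustration. The sketch below is our own (arbitrary lattice size and grain count, not code from the cited works): it drops grains and records avalanche sizes, whose distribution develops a broad tail.

    import random

    def sandpile_avalanches(size=24, grains=10000, seed=0):
        # Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; a site holding
        # 4 or more grains topples, giving one grain to each neighbour (grains fall
        # off the open boundary). Returns the number of topplings per dropped grain.
        rng = random.Random(seed)
        height = [[0] * size for _ in range(size)]
        sizes = []
        for _ in range(grains):
            i, j = rng.randrange(size), rng.randrange(size)
            height[i][j] += 1
            unstable, topplings = [(i, j)], 0
            while unstable:
                a, b = unstable.pop()
                while height[a][b] >= 4:
                    height[a][b] -= 4
                    topplings += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < size and 0 <= nb < size:
                            height[na][nb] += 1
                            if height[na][nb] >= 4:
                                unstable.append((na, nb))
            sizes.append(topplings)
        return sizes

    sizes = sandpile_avalanches()
    big = sum(1 for s in sizes if s > 100)
    print("largest avalanche:", max(sizes), "  avalanches with >100 topplings:", big)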
--- paper_title: From gene families and genera to incomes and internet file sizes: Why power laws are so common in nature paper_content: We present a simple explanation for the occurrence of power-law tails in statistical distributions by showing that if stochastic processes with exponential growth in expectation are killed (or observed) randomly, the distribution of the killed or observed state exhibits power-law behavior in one or both tails. This simple mechanism can explain power-law tails in the distributions of the sizes of incomes, cities, internet files, biological taxa, and in gene family and protein family frequencies. --- paper_title: Measuring complexity using information fluctuation paper_content: Abstract A method for analyzing deterministic dynamical systems is presented. New measures of complexity are proposed, based on fluctuation in net information gain and its dependence on system size. These measures are applied to one-dimensional cellular automata and shown to be useful in selecting rules that support slow-moving gliders in quiescent backgrounds. --- paper_title: Complexity: Hierarchical Structures and Scaling in Physics paper_content: Part I. Phenomenology and Models: 1. Introduction 2. Examples of complex behaviour 3. Mathematical models Part II: 4. Symbolic representations of physical systems 5. Probability, ergodic theory, and information 6. Thermodynamic formalism Part III. Formal Characterization of Complexity: 7. Physical and computational analysis of symbolic signals 8. Algorithmic and grammatical complexities 9. Hierarchical scaling complexities 10. Summary and perspectives. --- paper_title: Classes of network connectivity and dynamics paper_content: Many kinds of complex systems exhibit characteristic patterns of temporal correlations that emerge as the result of functional interactions within a structured network. One such complex system is the brain, composed of numerous neuronal units linked by synaptic connections. The activity of these neuronal units gives rise to dynamic states that are characterized by specific patterns of neuronal activation and co-activation. These patterns, called functional connectivity, are possible neural correlates of perceptual and cognitive processes. Which functional connectivity patterns arise depends on the anatomical structure of the underlying network, which in turn is modified by a broad range of activity-dependent processes. Given this intricate relationship between structure and function, the question of how patterns of anatomical connectivity constrain or determine dynamical patterns is of considerable theoretical importance. The present study develops computational tools to analyze networks in terms of their structure and dynamics. We identify different classes of network, including networks that are characterized by high complexity. These highly complex networks have distinct structural characteristics such as clustered connectivity and short wiring length similar to those of large-scale networks of the cerebral cortex. 2002 Wiley Periodicals, Inc. --- paper_title: Measures of Statistical Complexity: Why? paper_content: We review several statistical complexity measures proposed over the last decade and a half as general indicators of structure or correlation. Recently, López-Ruiz, Mancini, and Calbet [Phys. Lett. A 209 (1995) 321] introduced another measure of statistical complexity $C_{\rm LMC}$ that, like others, satisfies the ``boundary conditions'' of vanishing in the extreme ordered and disordered limits.
We examine some properties of $C_{\rm LMC}$ and find that it is neither an intensive nor an extensive thermodynamic variable. It depends nonlinearly on system size and vanishes exponentially in the thermodynamic limit for all one-dimensional finite-range spin systems. We propose a simple alteration of $C_{\rm LMC}$ that renders it extensive. However, this remedy results in a quantity that is a trivial function of the entropy density and hence of no use as a measure of structure or memory. We conclude by suggesting that a useful ``statistical complexity'' must not only obey the ordered-random boundary conditions of vanishing, it must also be defined in a setting that gives a clear interpretation to what structures are quantified. --- paper_title: Complexity: Hierarchical Structures and Scaling in Physics paper_content: Part I. Phenomenology and Models: 1. Introduction 2. Examples of complex behaviour 3. Mathematical models Part II: 4. Symbolic representations of physical systems 5. Probability, ergodic theory, and information 6. Thermodynamic formalism Part III. Formal Characterization of Complexity: 7. Physical and computational analysis of symbolic signals 8. Algorithmic and grammatical complexities 9. Hierarchical scaling complexities 10. Summary and perspectives. --- paper_title: Computational mechanics: Pattern and prediction, structure and simplicity paper_content: Computational mechanics, an approach to structural complexity, defines a process's causal states and gives a procedure for finding them. We show that the causal-state representation--an e-machine--is the minimal one consistent with accurate prediction. We establish several results on e-machine optimality and uniqueness and on how e-machines compare to alternative representations. Further results relate measures of randomness and structural complexity obtained from e-machines to those from ergodic and information theories. --- paper_title: Principles of Data Mining paper_content: The growing interest in data mining is motivated by a common problem across disciplines: how does one store, access, model, and ultimately describe and understand very large data sets? Historically, different aspects of data mining have been addressed independently by different disciplines. This is the first truly interdisciplinary text on data mining, blending the contributions of information science, computer science, and statistics. The book consists of three sections. The first, foundations, provides a tutorial overview of the principles underlying data mining algorithms and their application. The presentation emphasizes intuition rather than rigor. The second section, data mining algorithms, shows how algorithms are constructed to solve specific problems in a principled manner. The algorithms covered include trees and rules for classification and regression, association rules, belief networks, classical statistical models, nonlinear models such as neural networks, and local "memory-based" models. The third section shows how all of the preceding analysis fits together when applied to real-world data mining problems. Topics include the role of metadata, how to handle missing data, and data preprocessing. --- paper_title: An information-geometric approach to a theory of pragmatic structuring paper_content: 1.1. The motivation of the approach.
In the field of neural networks, so-called infomax principles like the principle of “maximum information preservation” by Linsker [20] are formulated to derive learning rules that improve the information processing properties of neural systems (see [12]). These principles, which are based on information-theoretic measures, are intended to describe the mechanism of learning in the brain. There, the starting point is a low-dimensional and biophysiologically motivated parametrization of the neural system, which need not necessarily be compatible with the given optimization principle. In contrast to this, we establish theoretical results about the low complexity of optimal solutions for the optimization problem of frequently used measures like the mutual information in an unconstrained and more theoretical setting. In the present paper, we do not comment on applications to modeling neural networks. This is intended to be done in a further step, where the results can be used for the characterization of “good” parameter sets that, on the one hand, are compatible with the underlying optimization and, on the other hand, are biologically motivated. --- paper_title: A Variational Formulation of Optimal Nonlinear Estimation paper_content: We propose a variational method to solve all three estimation problems for nonlinear stochastic dynamical systems: prediction, filtering, and smoothing. Our new approach is based upon a proper choice of cost function, termed the {\it effective action}. We show that this functional of time-histories is the unique statistically well-founded cost function to determine most probable histories within empirical ensembles. The ensemble dispersion about the sample mean history can also be obtained from the Hessian of the cost function. We show that the effective action can be calculated by a variational prescription, which generalizes the ``sweep method'' used in optimal linear estimation. An iterative numerical scheme results which converges globally to the variational estimator. This scheme involves integrating forward in time a ``perturbed'' Fokker-Planck equation, very closely related to the Kushner-Stratonovich equation for optimal filtering, and an adjoint equation backward in time, similarly related to the Pardoux-Kushner equation for optimal smoothing. The variational estimator enjoys a somewhat weaker property, which we call ``mean optimality''. However, the variational scheme has the principal advantage---crucial for practical applications---that it admits a wide variety of finite-dimensional moment-closure approximations. The moment approximations are derived reductively from the Euler-Lagrange variational formulation and preserve the good structural properties of the optimal estimator. 
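The estimation abstract above frames prediction, filtering, and smoothing as a variational problem that generalizes the "sweep method" of optimal linear estimation. Purely for orientation, the sketch below shows the scalar linear-Gaussian (Kalman) filter that such schemes reduce to in the linear case; the state-space model x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t and all parameter values here are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def kalman_filter_1d(y, a, q, c, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = c*x_t + v_t,
    with w_t ~ N(0, q) and v_t ~ N(0, r).  Returns filtered means and variances."""
    x_hat, p = x0, p0
    means, variances = [], []
    for obs in y:
        # predict step
        x_pred = a * x_hat
        p_pred = a * a * p + q
        # update step
        k = p_pred * c / (c * c * p_pred + r)      # Kalman gain
        x_hat = x_pred + k * (obs - c * x_pred)
        p = (1.0 - k * c) * p_pred
        means.append(x_hat)
        variances.append(p)
    return np.array(means), np.array(variances)

# toy usage: noisy observations of a slowly decaying AR(1) state
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(200):
    x = 0.95 * x + rng.normal(scale=0.1)
    ys.append(x + rng.normal(scale=0.5))
m, v = kalman_filter_1d(ys, a=0.95, q=0.01, c=1.0, r=0.25)
```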
--- paper_title: Cellular Automata And Complexity: Collected Papers paper_content: Primary Papers * Statistical Mechanics of Cellular Automata * Algebraic Properties of Cellular Automata * Universality and Complexity in Cellular Automata * Computation Theory of Cellular Automata * Undecidability and Intractability in Theoretical Physics * Two-Dimensional Cellular Automata * Origins of Randomness in Physical Systems * Thermodynamics and Hydrodynamics of Cellular Automata * Random Sequence Generation by Cellular Automata * Approaches to Complexity Engineering * Minimal Cellular Automaton Approximations to Continuum Systems * Cellular Automaton Fluids: Basic Theory Additional And Survey Papers * Cellular Automata * Computers in Science and Mathematics * Geometry of Binomial Coefficients * Twenty Problems in the Theory of Cellular Automata * Cryptography with Cellular Automata * Complex Systems Theory * Cellular Automaton Supercomputing Appendices * Tables of Cellular Automaton Properties * Scientific Bibliography of Stephen Wolfram --- paper_title: Cellular Automata: Theory and Experiment paper_content: Part 1 Mathematical analysis of cellular automata: aperiodicity in one-dimensional cellular automata, E. Jen cyclic cellular automata and related processes, R. Fisch nearest neighbour cellular automata over Z2 with periodic boundary conditions, B. Voorhees cellular automaton ruled by an eccentric conservation law, A.M. Barbe Boolean derivatives on cellular automata, G. Vichniac. Part 2 Structure of the space of cellular automata: transition phenomena in cellular automata: transition phenomena in cellular automata rule space, W. Li, et al is there a sharp phase transition for deterministic cellular automata?, W.W. Wootters and C.G. Langton Wolfram's class IV automata and a good life, H.V. McIntosh criticality in cellular automata, H. Chate and P. Manneville a hierarchical classification of cellular automata, H.A. Gutowitz. Part 3 Learning rules with specified properties: adaptive stochastic cellular automata - theory, Y.C. Lee, et al adaptive stochastic cellular automata - experiment, S. Qian, et al extracting cellular automaton rules directly from experimental data, F.C. Richards, et al. Part 4 Cellular automata and the natural sciences - biology: what can automaton theory tell us about the brain?, J.D. Victor simulation of HIV-infection in artificial immune systems, H. Sieburg, et al physics and chemistry - invertible cellular automata - a review, T. Toffoli and N. Margolus digital mechanics - an informational process based on reversible universal cellular automata, E. Fredkin representations of geometrical and topological quantities in cellular automata, M.A. Smith relaxation properties of elementary reversible cellular automata, S. Takesue a comparison of spin exchange and cellular automaton models for diffusion-controlled reactions, A. Canning and M. Droz reversible cellular automata and chemical turbulence, H. Hartman and P. Tamayo soliton turbulence in one-dimensional cellular automata, Y. Aizawa, et al knot invariants and cellular automata, B. Hasslacher and D.A. Meyer critical dynamics of one-dimensional irreversible systems, O. Martin. Part 5 comutation theory of cellular automata, computation theoretic aspects of cellular automata, K. Culik II reversibility of 2D cellular automata is undecidable, J. Kari classifying circular cellular automata, K. Sutner formal languages and global cellular automaton behaviour, K. 
Culik II a characterization of constant-time cellular automata computation, S. Kim and R. McCloskey constructive chaos by cellular automata and possible sources of an arrow to time, K. Svozil. Part 6 Generalizations of cellular automata: cellular automata and discrete neural networks, M. Garzon attractor dominance patterns in sparsely connected Boolean nets, C.C. Walker periodic orbits and log transients in coupled map lattices, R. Livi. Appendices: a brief review of cellular automata packages, D. Hiebeler maps of recent cellular automata and lattice gas automata literature, H.A. Gutowitz. --- paper_title: From classical models of morphogenesis to agent-based models of pattern formation paper_content: An extremely large body of theoretical work exists on pattern formation, but very few experimental results have confirmed the relevance of theoretical models. It is argued in this article that the notion of agent-based pattern formation, which is introduced and exemplified, can serve as a basis to study pattern formation in nature, especially because pattern-forming systems based on agents are (relatively) more easily amenable to experimental observations. Moreover, understanding agent-based pattern formation is a necessary step if one wishes to design distributed artificial pattern-forming systems. But, to achieve this goal, a theory of agent-based pattern formation is needed. This article suggests that it can certainly be derived from existing theories of pattern formation. --- paper_title: Agent-based computational models and generative social science paper_content: Agent-based computational modeling is changing the face of social science. In Generative Social Science , Joshua Epstein argues that this powerful, novel technique permits the social sciences to meet a fundamentally new standard of explanation, in which one "grows" the phenomenon of interest in an artificial society of interacting agents: heterogeneous, boundedly rational actors, represented as mathematical or software objects. After elaborating this notion of generative explanation in a pair of overarching foundational chapters, Epstein illustrates it with examples chosen from such far-flung fields as archaeology, civil conflict, the evolution of norms, epidemiology, retirement economics, spatial games, and organizational adaptation. In elegant chapter preludes, he explains how these widely diverse modeling studies support his sweeping case for generative explanation. This book represents a powerful consolidation of Epstein's interdisciplinary research activities in the decade since the publication of his and Robert Axtell's landmark volume, Growing Artificial Societies . Beautifully illustrated, Generative Social Science includes a CD that contains animated movies of core model runs, and programs allowing users to easily change assumptions and explore models, making it an invaluable text for courses in modeling at all levels. --- paper_title: Transmission of information paper_content: A quantitative measure of “information” is developed which is based on physical as contrasted with psychological considerations. How the rate of transmission of this information over a system is limited by the distortion resulting from storage of energy is discussed from the transient viewpoint. The relation between the transient and steady state viewpoints is reviewed. 
It is shown that when the storage of energy is used to restrict the steady state transmission to a limited range of frequencies the amount of information that can be transmitted is proportional to the product of the width of the frequency-range by the time it is available. Several illustrations of the application of this principle to practical systems are included. In the case of picture transmission and television the spacial variation of intensity is analyzed by a steady state method analogous to that commonly used for variations with time. --- paper_title: Entropy and Information Theory paper_content: This book is an updated version of the information theory classic, first published in 1990. About one-third of the book is devoted to Shannon source and channel coding theorems; the remainder addresses sources, channels, and codes and on information and distortion measures and their properties. New in this edition:Expanded treatment of stationary or sliding-block codes and their relations to traditional block codesExpanded discussion of results from ergodic theory relevant to information theoryExpanded treatment of B-processes -- processes formed by stationary coding memoryless sourcesNew material on trading off information and distortion, including the Marton inequalityNew material on the properties of optimal and asymptotically optimal source codesNew material on the relationships of source coding and rate-constrained simulation or modeling of random processesSignificant material not covered in other information theory texts includes stationary/sliding-block codes, a geometric view of information theory provided by process distance measures, and general Shannon coding theorems for asymptotic mean stationary sources, which may be neither ergodic nor stationary, and d-bar continuous channels. --- paper_title: Computational mechanics: Pattern and prediction, structure and simplicity paper_content: Computational mechanics, an approach to structural complexity, defines a process's causal states and gives a procedure for finding them. We show that the causal-state representation--an e-machine--is the minimal one consistent with accurate prediction. We establish several results on e-machine optimality and uniqueness and on how e-machines compare to alternative representations. Further results relate measures of randomness and structural complexity obtained from e-machines to those from ergodic and information theories. ---
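Several of the references above (Hartley's bandwidth-time argument, Gray's treatment of the Shannon coding theorems, and the computational-mechanics papers) lean on Shannon entropy and mutual information. As a minimal, self-contained sketch of just those two definitions (the joint distribution below is a made-up binary-symmetric-channel example, not data from any cited work):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum p log2 p of a probability vector (0 log 0 := 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) computed from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# a fair bit observed through a binary symmetric channel with 10% error probability
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
print(entropy(joint.sum(axis=1)))   # 1.0 bit of source entropy
print(mutual_information(joint))    # about 0.531 bits survive the channel
```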
Title: Methods and techniques of complex systems science: An overview
Section 1: INTRODUCTION Description 1: This section discusses the general nature of complex systems and introduces the goals of complex systems science.
Section 2: Outline of This Chapter Description 2: This section provides an overview of the chapter structure and the main topics that will be covered.
Section 3: TIME SERIES ANALYSIS Description 3: This section delves into methods for analyzing time series data, including both traditional statistical approaches and nonlinear dynamics approaches.
Section 4: The State-Space Picture Description 4: This section describes the state-space model as a common framework for analyzing time series.
Section 5: General Properties of Time Series Description 5: This section covers properties such as stationarity, ergodicity, and autocorrelation functions, as well as frequency-domain properties.
Section 6: The Traditional Statistical Approach Description 6: This section explains classical time series models including moving average, autoregressive, and autoregressive moving average models.
Section 7: Applicability of Linear Statistical Models Description 7: This section discusses the application of linear models to nonlinear dynamical systems.
Section 8: Extensions Description 8: This section explores extensions to standard linear models including long memory processes, volatility models, and nonlinear models.
Section 9: The Nonlinear Dynamics Approach Description 9: This section discusses approaches to time series analysis inspired by nonlinear dynamics, including the Takens Embedding Theorem.
Section 10: Filtering and State Estimation Description 10: This section covers techniques for estimating the state of a system from observed data, including linear and nonlinear filters.
Section 11: Symbolic or Categorical Time Series Description 11: This section examines time series composed of discrete symbols, and the application of concepts from automata theory.
Section 12: Hidden Markov Models Description 12: This section introduces Hidden Markov Models as a generalization of Markov chains to the state-space picture.
Section 13: Variable-Length Markov Models Description 13: This section explains context trees or probabilistic suffix trees and their application to symbolic dynamics.
Section 14: Cellular Automata Description 14: This section provides an overview of cellular automata as models of complex systems, including their use in pattern formation and other applications.
Section 15: A Basic Explanation of CAs Description 15: This section gives a basic explanation of how cellular automata work, including their structure and rules (a minimal update-rule sketch follows this outline).
Section 16: Cellular Automata as Parallel Computers Description 16: This section discusses the computational power of cellular automata and their equivalence to Turing machines.
Section 17: Cellular Automata as Discrete Field Theories Description 17: This section explores the use of cellular automata to simulate classical field theories and physical phenomena.
Section 18: AGENT-BASED MODELS Description 18: This section introduces agent-based models as a way to simulate the behavior of individual agents and their interactions within a system.
Section 19: Computational Implementation: Agents are Objects Description 19: This section discusses the implementation of agent-based models using object-oriented programming.
Section 20: Three Things Which Are Not Agent-Based Models Description 20: This section clarifies misconceptions about what constitutes an agent-based model.
Section 21: The Simplicity of Complex Systems Models Description 21: This section discusses the reasons for using simple models in complex systems science.
Section 22: EVALUATING MODELS OF COMPLEX SYSTEMS Description 22: This section covers methods for evaluating models of complex systems, including simulation and comparison to real-world data.
Section 23: Simulation Description 23: This section discusses the use of direct simulation to understand the behavior of complex system models.
Section 24: Monte Carlo Methods Description 24: This section explains how Monte Carlo methods can be used to estimate properties of complex systems.
Section 25: Analytical Techniques Description 25: This section explores analytical techniques for studying complex systems, particularly those based on Markov processes.
Section 26: General issues Description 26: This section talks about the general issues in comparing models to real-world data.
Section 27: Two Stories and Some Morals Description 27: This section provides two examples illustrating fundamental points about model evaluation in complex systems science.
Section 28: Comparing Macro-data and Micro-models Description 28: This section discusses how to compare individual-level models to aggregate data.
Section 29: Comparison to Other Models Description 29: This section discusses methods to compare different models of complex systems.
Section 30: INFORMATION THEORY Description 30: This section covers the basic concepts and applications of information theory in complex systems science.
Section 31: Basic Definitions Description 31: This section provides definitions of key information-theoretic quantities such as entropy and mutual information.
Section 32: Optimal Coding Description 32: This section covers the basics of coding theory and its relevance to information theory.
Section 33: Applications of Information Theory Description 33: This section discusses the applications of information theory in various scientific domains.
Section 34: COMPLEXITY MEASURES Description 34: This section discusses different measures of complexity and their applications to complex systems.
Section 35: Algorithmic Complexity Description 35: This section introduces the concept of Kolmogorov complexity and its limitations.
Section 36: Refinements of Algorithmic Complexity Description 36: This section explores modifications of algorithmic complexity such as logical depth and algorithmic statistics.
Section 37: Statistical Measures of Complexity Description 37: This section describes statistical approaches to measuring complexity, including the minimum description length principle.
Section 38: Stochastic Complexity and the Minimum Description Length Description 38: This section explains how to quantify complexity using the minimum description length principle.
Section 39: Complexity via Prediction I: Forecast Complexity and Predictive Information Description 39: This section discusses the concept of forecast complexity and its relationship to predictive information.
Section 40: Complexity via Prediction II: The Crutchfield-Young Statistical Complexity Description 40: This section details the Crutchfield-Young method for measuring statistical complexity based on causal states.
Section 41: Power Law Distributions Description 41: This section discusses the misconception that power law distributions are indicative of complexity.
Section 42: Other Measures of Complexity Description 42: This section surveys other measures of complexity not covered in the main text.
Section 43: Relevance of Complexity Measures Description 43: This section discusses the importance of choosing relevant complexity measures and their applications.
Section 44: GUIDE TO FURTHER READING Description 44: This section provides references for further reading on the topics covered in the chapter.
Section 45: General Description 45: This section lists general references and surveys on complex systems science.
Section 46: Data Mining and Statistical Learning Description 46: This section provides reading recommendations on statistical learning and data mining.
Section 47: Time Series Description 47: This section lists recommended readings on time series analysis.
Section 48: Filtering Description 48: This section provides references on filtering methods and state estimation.
Section 49: Symbolic Dynamics and Hidden Markov Models Description 49: This section discusses further reading on symbolic dynamics and hidden Markov models.
Section 50: Cellular Automata Description 50: This section lists recommended readings on cellular automata, including applications and theory.
Section 51: Agent-Based Modeling Description 51: This section provides references for further reading on agent-based modeling techniques and applications.
Section 52: Evaluating Models of Complex Systems Description 52: This section suggests references on evaluating and comparing models of complex systems.
Section 53: Monte Carlo Description 53: This section lists further reading on Monte Carlo methods.
Section 54: Experimental design Description 54: This section provides references on the principles and methodologies of experimental design.
Section 55: Information Theory Description 55: This section suggests further reading on information theory and its applications.
Section 56: Complexity Measures Description 56: This section details references and surveys on different measures of complexity.
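The cellular-automata sections outlined above (Sections 14-17 and 50) all build on the same elementary update rule; as referenced in Section 15, here is a minimal sketch of a one-dimensional, radius-1 binary cellular automaton. The choice of Wolfram rule 110, the lattice size, and the single-seed initial condition are arbitrary illustrative assumptions, not taken from the surveyed works.

```python
import numpy as np

def eca_step(state, rule=110):
    """One synchronous update of an elementary cellular automaton with
    periodic boundaries.  'rule' is the Wolfram rule number (0-255)."""
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    left = np.roll(state, 1)      # left neighbor of each cell
    right = np.roll(state, -1)    # right neighbor of each cell
    neighborhood = 4 * left + 2 * state + right   # local pattern as an index 0..7
    return rule_bits[neighborhood]

# 200 cells, a single seed in the middle, 100 time steps
state = np.zeros(200, dtype=np.uint8)
state[100] = 1
history = [state]
for _ in range(100):
    state = eca_step(state, rule=110)
    history.append(state)
```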
An overview of the recent wideband transcutaneous wireless communication techniques
6
--- paper_title: An RFID-Based Closed-Loop Wireless Power Transmission System for Biomedical Applications paper_content: This brief presents a standalone closed-loop wireless power transmission system that is built around a commercial off-the-shelf (COTS) radio-frequency identification (RFID) reader (TRF7960) operating at 13.56 MHz. It can be used for inductively powering implantable biomedical devices in a closed loop. Any changes in the distance and misalignment between transmitter and receiver coils in near-field wireless power transmission can cause a significant change in the received power, which can cause either a malfunction or excessive heat dissipation. RFID circuits are often used in an open loop. However, their back telemetry capability can be utilized to stabilize the received voltage on the implant. Our measurements showed that the delivered power to the transponder was maintained at 11.2 mW over a range of 0.5 to 2 cm, while the transmitter power consumption changed from 78 mW to 1.1 W. The closed-loop system can also oppose voltage variations as a result of sudden changes in the load current. --- paper_title: Using Pulse Width Modulation for Wireless Transmission of Neural Signals in Multichannel Neural Recording Systems paper_content: We have used a well-known technique in wireless communication, pulse width modulation (PWM) of time division multiplexed (TDM) signals, within the architecture of a novel wireless integrated neural recording (WINeR) system. We have evaluated the performance of the PWM-based architecture and indicated its accuracy and potential sources of error through detailed theoretical analysis, simulations, and measurements on a setup consisting of a 15-channel WINeR prototype as the transmitter and two types of receivers; an Agilent 89600 vector signal analyzer and a custom wideband receiver, with 36 and 75 MHz of maximum bandwidth, respectively. Furthermore, we present simulation results from a realistic MATLAB-Simulink model of the entire WINeR system to observe the system behavior in response to changes in various parameters. We have concluded that the 15-ch WINeR prototype, which is fabricated in a 0.5-μm standard CMOS process and consumes 4.5 mW from ±1.5 V supplies, can acquire and wirelessly transmit up to 320 k-samples/s to a 75-MHz receiver with 8.4 bits of resolution, which is equivalent to a wireless data rate of ~ 2.56 Mb/s. --- paper_title: Optimization of Data Coils in a Multiband Wireless Link for Neuroprosthetic Implantable Devices paper_content: We have presented the design methodology along with detailed simulation and measurement results for optimizing a multiband transcutaneous wireless link for high-performance implantable neuroprosthetic devices. We have utilized three individual carrier signals and coil/antenna pairs for power transmission, forward data transmission from outside into the body, and back telemetry in the opposite direction. Power is transmitted at 13.56 MHz through a pair of printed spiral coils (PSCs) facing each other. Two different designs have been evaluated for forward data coils, both of which help to minimize power carrier interference in the received data carrier. One is a pair of perpendicular coils that are wound across the diameter of the power PSCs. The other design is a pair of planar figure-8 coils that are in the same plane as the power PSCs. We have compared the robustness of each design against horizontal misalignments and rotations in different directions.
Simulation and measurements are also conducted on a miniature spiral antenna, designed to operate with impulse-radio ultra-wideband (IR-UWB) circuitry for back telemetry. --- paper_title: Brain–machine interfaces: past, present and future paper_content: Since the original demonstration that electrical activity generated by ensembles of cortical neurons can be employed directly to control a robotic manipulator, research on brain–machine interfaces (BMIs) has experienced an impressive growth. Today BMIs designed for both experimental and clinical studies can translate raw neuronal signals into motor commands that reproduce arm reaching and hand grasping movements in artificial actuators. Clearly, these developments hold promise for the restoration of limb mobility in paralyzed subjects. However, as we review here, before this goal can be reached several bottlenecks have to be passed. These include designing a fully implantable biocompatible recording device, further developing real-time computational algorithms, introducing a method for providing the brain with sensory feedback from the actuators, and designing and building artificial prostheses that can be controlled directly by brain-derived signals. By reaching these milestones, future BMIs will be able to drive and control revolutionary prostheses that feel and act like the human arm. --- paper_title: An Inductively Powered Scalable 32-Channel Wireless Neural Recording System-on-a-Chip for Neuroscience Applications paper_content: We present an inductively powered 32-channel wireless integrated neural recording (WINeR) system-on-a-chip (SoC) to be ultimately used for one or more small freely behaving animals. The inductive powering is intended to relieve the animals from carrying bulky batteries used in other wireless systems, and enables long recording sessions. The WINeR system uses time-division multiplexing along with a novel power scheduling method that reduces the current in unused low-noise amplifiers (LNAs) to cut the total SoC power consumption. In addition, an on-chip high-efficiency active rectifier with optimized coils help improve the overall system power efficiency, which is controlled in a closed loop to supply stable power to the WINeR regardless of the coil displacements. The WINeR SoC has been implemented in a 0.5-μ m standard complementary metal-oxide semiconductor process, measuring 4.9×3.3 mm2 and consuming 5.85 mW at ±1.5 V when 12 out of 32 LNAs are active at any time by power scheduling. Measured input-referred noise for the entire system, including the receiver located at 1.2 m, is 4.95 μVrms in the 1 Hz~10 kHz range when the system is inductively powered with 7-cm separation between aligned coils. --- paper_title: Design and Optimization of Printed Spiral Coils for Efficient Transcutaneous Inductive Power Transmission paper_content: The next generation of implantable high-power neuroprosthetic devices such as visual prostheses and brain computer interfaces are going to be powered by transcutaneous inductive power links formed between a pair of printed spiral coils (PSC) that are batch-fabricated using micromachining technology. Optimizing the power efficiency of the wireless link is imperative to minimize the size of the external energy source, heating dissipation in the tissue, and interference with other devices. 
Previous design methodologies for coils made of 1-D filaments are not comprehensive and accurate enough to consider all geometrical aspects of PSCs with planar 3-D conductors as well as design constraints imposed by implantable device application and fabrication technology. We have outlined the theoretical foundation of optimal power transmission efficiency in an inductive link, and combined it with semi-empirical models to predict parasitic components in PSCs. We have used this foundation to devise an iterative PSC design methodology that starts with a set of realistic design constraints and ends with the optimal PSC pair geometries. We have executed this procedure on two design examples at 1 and 5 MHz achieving power transmission efficiencies of 41.2% and 85.8%, respectively, at 10-mm spacing. All results are verified with simulations using a commercial field solver (HFSS) as well as measurements using PSCs fabricated on printed circuit boards. --- paper_title: Toward the development of a cortically based visual neuroprosthesis paper_content: Motivated by the success of cochlear implants for deaf patients, we are now facing the goal of creating a visual neuroprosthesis designed to interface with the occipital cortex as a means through which a limited but useful sense of vision could be restored in profoundly blind patients. We review the most important challenges regarding this neuroprosthetic approach and emphasize the need for basic human psychophysical research on the best way of presenting complex stimulating patterns through multiple microelectrodes. Continued research will hopefully lead to the development of and design specifications for the first generation of a cortically based visual prosthesis system. --- paper_title: An advanced multiple channel cochlear implant paper_content: In the hearing prosthesis, stimulation is presented through an array of 20 electrodes located in the scala tympani. Any two electrodes can be configured as a bipolar pair to conduct a symmetrical, biphasic, constant-current pulsatile stimulus. Up to three stimuli can be presented in rapid succession or effectively simultaneously. For simultaneous stimulation, a novel time-division current multiplexing technique has been developed to obviate electrode interactions that may compromise safety. The stimuli are independently controllable in current amplitude, duration, and onset time. Groups of three stimuli can be generated at a rate of typically 500 Hz. Stimulus control data and power are conveyed to the implant through a single transcutaneous inductive link. The device incorporates a telemetry system that enables electrode voltage waveforms to be monitored externally in real time. The electronics of the implant are contained almost entirely on a custom designed integrated circuit. Preliminary results obtained with the first patient to receive the advanced implant are included. > --- paper_title: Design and Optimization of Printed Spiral Coils for Efficient Transcutaneous Inductive Power Transmission paper_content: The next generation of implantable high-power neuroprosthetic devices such as visual prostheses and brain computer interfaces are going to be powered by transcutaneous inductive power links formed between a pair of printed spiral coils (PSC) that are batch-fabricated using micromachining technology. Optimizing the power efficiency of the wireless link is imperative to minimize the size of the external energy source, heating dissipation in the tissue, and interference with other devices. 
Previous design methodologies for coils made of 1-D filaments are not comprehensive and accurate enough to consider all geometrical aspects of PSCs with planar 3-D conductors as well as design constraints imposed by implantable device application and fabrication technology. We have outlined the theoretical foundation of optimal power transmission efficiency in an inductive link, and combined it with semi-empirical models to predict parasitic components in PSCs. We have used this foundation to devise an iterative PSC design methodology that starts with a set of realistic design constraints and ends with the optimal PSC pair geometries. We have executed this procedure on two design examples at 1 and 5 MHz achieving power transmission efficiencies of 41.2% and 85.8%, respectively, at 10-mm spacing. All results are verified with simulations using a commercial field solver (HFSS) as well as measurements using PSCs fabricated on printed circuit boards. --- paper_title: An Overview of Near Field UHF RFID paper_content: In this paper, an overview of near field UHF RFID is presented. This technology recently received attention because of its possible use for item-level tagging where LF/HF RFID has traditionally been used. We review the relevant literature, discuss basic theory of near and far field antenna coupling in application to RFID, and present some experimental measurements. --- paper_title: An Inductively Powered Scalable 32-Channel Wireless Neural Recording System-on-a-Chip for Neuroscience Applications paper_content: We present an inductively powered 32-channel wireless integrated neural recording (WINeR) system-on-a-chip (SoC) to be ultimately used for one or more small freely behaving animals. The inductive powering is intended to relieve the animals from carrying bulky batteries used in other wireless systems, and enables long recording sessions. The WINeR system uses time-division multiplexing along with a novel power scheduling method that reduces the current in unused low-noise amplifiers (LNAs) to cut the total SoC power consumption. In addition, an on-chip high-efficiency active rectifier with optimized coils help improve the overall system power efficiency, which is controlled in a closed loop to supply stable power to the WINeR regardless of the coil displacements. The WINeR SoC has been implemented in a 0.5-μ m standard complementary metal-oxide semiconductor process, measuring 4.9×3.3 mm2 and consuming 5.85 mW at ±1.5 V when 12 out of 32 LNAs are active at any time by power scheduling. Measured input-referred noise for the entire system, including the receiver located at 1.2 m, is 4.95 μVrms in the 1 Hz~10 kHz range when the system is inductively powered with 7-cm separation between aligned coils. --- paper_title: An ultra low power, high performance Medical Implant Communication System (MICS) transceiver for implantable devices paper_content: A 402-405 MHz MICS band transceiver has been developed for implantable medical applications. The transceiver offers exceptionally low power consumption whilst providing a high data rate. The circuit features a unique ultra low power wakeup system enabling an average sleep current of less than 250 nA. The transmit and receive current is less than 5 mA when operating at a data rate of up to 800 kbps. System integration is high and only 3 external components (crystal and 2 decoupling capacitors) and a matching network are required. The transceiver can also operate in the 433 MHz ISM band. 
The key system design features and performance of this transceiver are presented in this paper. --- paper_title: A wireless neural/EMG telemetry system for freely moving insects paper_content: We have developed a miniature telemetry system that captures neural, EMG, and acceleration signals from a freely moving insect and transmits the data wirelessly to a remote digital receiver. The system is based on a custom low-power integrated circuit that amplifies and digitizes four biopotential signals as well as three acceleration signals from an off-chip MEMS accelerometer, and transmits this information over a wireless 920-MHz telemetry link. The unit weighs 0.79 g and runs for two hours on two small batteries. We have used this system to monitor neural and EMG signals in jumping and flying locusts. --- paper_title: Modeling and Optimization of Printed Spiral Coils in Air, Saline, and Muscle Tissue Environments paper_content: Printed spiral coils (PSCs) are viable candidates for near-field wireless power transmission to the next generation of high-performance neuroprosthetic devices with extreme size constraints, which will target intraocular and intracranial spaces. Optimizing the PSC geometries to maximize the power transfer efficiency of the wireless link is imperative to reduce the size of the external energy source, heating of the tissue, and interference with other devices. Implantable devices need to be hermetically sealed in biocompatible materials and placed in a conductive environment with high permittivity (tissue), which can affect the PSC characteristics. We have constructed a detailed model that includes the effects of the surrounding environment on the PSC parasitic components and eventually on the power transfer efficiency. We have combined this model with an iterative design method that starts with a set of realistic design constraints and ends with the optimal PSC geometries. We applied our design methodology to optimize the wireless link of a 1-cm 2 implantable device example, operating at 13.56 MHz. Measurement results showed that optimized PSC pairs, coated with 0.3 mm of silicone, achieved 72.2%, 51.8%, and 30.8% efficiencies at a face-to-face relative distance of 10 mm in air, saline, and muscle, respectively. The PSC, which was optimized for air, could only bear 40.8% and 21.8% efficiencies in saline and muscle, respectively, showing that by including the PSC tissue environment in the design process the result can be more than a 9% improvement in the power transfer efficiency. --- paper_title: Optimization of Data Coils in a Multiband Wireless Link for Neuroprosthetic Implantable Devices paper_content: We have presented the design methodology along with detailed simulation and measurement results for optimizing a multiband transcutaneous wireless link for high-performance implantable neuroprosthetic devices. We have utilized three individual carrier signals and coil/antenna pairs for power transmission, forward data transmission from outside into the body, and back telemetry in the opposite direction. Power is transmitted at 13.56 MHz through a pair of printed spiral coils (PSCs) facing each other. Two different designs have been evaluated for forward data coils, both of which help to minimize power carrier interference in the received data carrier. One is a pair of perpendicular coils that are wound across the diameter of the power PSCs. The other design is a pair of planar figure-8 coils that are in the same plane as the power PSCs. 
We have compared the robustness of each design against horizontal misalignments and rotations in different directions. Simulation and measurements are also conducted on a miniature spiral antenna, designed to operate with impulse-radio ultra-wideband (IR-UWB) circuitry for back telemetry. --- paper_title: A Non-Coherent DPSK Data Receiver With Interference Cancellation for Dual-Band Transcutaneous Telemetries paper_content: A dual-band telemetry, which has different carrier frequencies for power and data signals, is used to maximize both power transfer efficiency and data rate for transcutaneous implants. However, in such a system, the power signal interferes with the data transmission due to the multiple magnetic coupling paths within the inductive coils. Since the power level of the transmitted power signal is significantly larger than that of the data signal, it usually requires a high-order filter to suppress the interference. This paper presents a non-coherent DPSK receiver without a high-order filter that is robust to the interference caused by the power carrier signal. The proposed scheme uses differential demodulation in the analog domain to cancel the interference signal for a dual-band configuration. The data demodulation also uses subsampling to avoid carrier synchronization circuits such as PLLs. The experimental results show that the demodulator can recover 1 and 2 Mb/s data rates at a 20 MHz carrier frequency, and it is able to cancel an interference signal that is 12 dB larger than the data signal without using complex filters. The demodulator is fabricated in a 0.35 μm CMOS process, with a power consumption of 6.2 mW and an active die area of 2.6×1.7 mm². --- paper_title: Design and Optimization of Printed Spiral Coils for Efficient Transcutaneous Inductive Power Transmission paper_content: The next generation of implantable high-power neuroprosthetic devices such as visual prostheses and brain computer interfaces are going to be powered by transcutaneous inductive power links formed between a pair of printed spiral coils (PSC) that are batch-fabricated using micromachining technology. Optimizing the power efficiency of the wireless link is imperative to minimize the size of the external energy source, heating dissipation in the tissue, and interference with other devices. Previous design methodologies for coils made of 1-D filaments are not comprehensive and accurate enough to consider all geometrical aspects of PSCs with planar 3-D conductors as well as design constraints imposed by implantable device application and fabrication technology. We have outlined the theoretical foundation of optimal power transmission efficiency in an inductive link, and combined it with semi-empirical models to predict parasitic components in PSCs. We have used this foundation to devise an iterative PSC design methodology that starts with a set of realistic design constraints and ends with the optimal PSC pair geometries. We have executed this procedure on two design examples at 1 and 5 MHz achieving power transmission efficiencies of 41.2% and 85.8%, respectively, at 10-mm spacing. All results are verified with simulations using a commercial field solver (HFSS) as well as measurements using PSCs fabricated on printed circuit boards.
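The printed-spiral-coil abstracts above repeatedly optimize the power transfer efficiency of a two-coil inductive link. As a rough point of reference only, the sketch below evaluates a textbook closed-form efficiency for a resonant two-coil link with a series-tuned, series-loaded secondary; both the formula and the example values of k, Q1, Q2, and QL are standard-idealization assumptions, not the specific link models or numbers of the cited papers.

```python
def link_efficiency(k, q1, q2, q_load):
    """Power transfer efficiency of a resonant two-coil inductive link with a
    series-tuned, series-loaded secondary (a textbook idealization).
    k: coupling coefficient, q1/q2: unloaded coil quality factors,
    q_load: load quality factor Q_L = omega * L2 / R_L."""
    q2l = q2 * q_load / (q2 + q_load)                  # loaded secondary Q
    eta_primary = (k**2 * q1 * q2l) / (1.0 + k**2 * q1 * q2l)
    eta_secondary = q2 / (q2 + q_load)                 # fraction of secondary power reaching the load
    return eta_primary * eta_secondary

# e.g. k = 0.1, Q1 = Q2 = 100, Q_L = 10  ->  roughly 82% link efficiency
print(link_efficiency(0.1, 100.0, 100.0, 10.0))
```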
--- paper_title: A wideband power-efficient inductive wireless link for implantable microelectronic devices using multiple carriers paper_content: This paper presents a novel inductive link for wireless transmission of power and data to biomedical implantable microelectronic devices using multiple carrier frequencies. Achieving higher data bandwidth without compromising the power efficiency is the driving force to use two separate carriers. Two separate pairs of coils have been utilized for inductive power and forward data transmission. One major challenge, however, is to minimize the interference among these carriers especially on the implantable side, where size and power are highly limited. Planar power coils with spiral shape are optimized in geometry to provide maximum coupling coefficient, k. The data coils are designed rectangular in shape and wound across the power coils diameter to be oriented perpendicular to the power coil planes. The goal is to maximize data coils direct coupling, while minimize their cross-coupling with the power coils. The effects of coils geometry, relative distance, and misalignments on the coupling coefficients have been modeled and experimentally measured. --- paper_title: High-Speed OQPSK and Efficient Power Transfer Through Inductive Link for Biomedical Implants paper_content: Biomedical implants require wireless power and bidirectional data transfer. We pursue our previous work on a novel topology for a multiple carrier inductive link by presenting the fabricated coils. We show that the coplanar geometry approach is better suited for displacement tolerance. We provide a theoretical analysis of the efficiency of power transfer and phase-shift-keying communications through an inductive link. An efficiency of up to 61% has been achieved experimentally for power transfer and a data rate of 4.16 Mb/s with a bit-error rate of less than 2 × 10-6 has been obtained with our fabricated offset quadrature phase-shift keying modules due to the inductive link optimization presented in this paper. --- paper_title: A Wireless Implantable Multichannel Microstimulating System-on-a-Chip With Modular Architecture paper_content: A 64-site wireless current microstimulator chip (Interestim-2B) and a prototype implant based on the same chip have been developed for neural prosthetic applications. Modular standalone architecture allows up to 32 chips to be individually addressed and operated in parallel to drive up to 2048 stimulating sites. The only off-chip components are a receiver inductive-capac- itive (LC) tank, a capacitive low-pass filter for ripple rejection, and arrays of microelectrodes for interfacing with the neural tissue. The implant receives inductive power up to 50 mW and data at 2.5 Mb/s from a frequency shift keyed (FSK) 5/10 MHZ carrier to generate up to 65 800 stimulus pulses/s. Each Interestim-2B chip contains 16 current drivers with 270 muA full-scale current, 5-bit (32-steps) digital-to-analog converter (DAC) resolution, 100 MOmega output impedance, and a voltage compliance that extends within 150 and 250 mV of the 5 V supply and ground rails, respectively. It can generate any arbitrary current waveform and supports a variety of monopolar and bipolar stimulation protocols. A common analog line provides access to each site potential, and exhausts residual stimulus charges for charge balancing. 
The chip has site potential measurement and in situ site impedance measurement capabilities, which help its users indicate defective sites or characteristic shifts in chronic stimulations. Interestim-2B chip is fabricated in the AMI 1.5 μm standard complementary metal-oxide-semiconductor (CMOS) process and measures 4.6 x 4.6 x 0.5 mm. The prototype implant size including test connectors is 19 x 14 x 6 mm, which can be shrunk down to <0.5 CC. This paper also summarizes some of the in vitro and in vivo experiments performed using the Interestim-2B prototype implant. --- paper_title: A 10.2 Mbps Pulse Harmonic Modulation Based Transceiver for Implantable Medical Devices paper_content: A low power wireless transceiver has been presented for near-field data transmission across inductive telemetry links, which operates based on pulse harmonic modulation (PHM). This PHM transceiver uses on-off keying (OOK) of a pattern of pulses to suppress inter-symbol interference (ISI), and its characteristics are suitable for low-power high-bandwidth telemetry in implantable neuroprosthetic devices. To transmit each bit across a pair of high-Q LC-tank circuits, the PHM transmitter generates a string of narrow pulses with specific amplitudes and timing. Each pulse generates a decaying oscillation at the harmonic frequency that the receiver LC-tank is tuned at, which is then superimposed with other oscillations across the receiver at the same frequency, to minimize the ISI. This allows for reaching high data rates without reducing the inductive link quality factor (to extend its bandwidth), which significantly improves the range and selectivity of the link. The PHM receiver architecture is based on non-coherent energy detection with programmable bandwidth and adjustable gain. The PHM transceiver was fabricated in a 0.5-μm standard CMOS process, occupying 1.8 mm². The transceiver achieved a measured 10.2 Mbps data rate with a bit error rate (BER) of 6.3×10⁻⁸ at 1 cm distance using planar implant sized (1 cm²) figure-8 coils. The PHM transmitter power consumption was 345 pJ/bit and 8.85 pJ/bit at 1 cm and zero link distances, respectively. The receiver dissipates 3 mW at 3.3 V supply voltage. --- paper_title: Wideband Near-Field Data Transmission Using Pulse Harmonic Modulation paper_content: This paper introduces a new modulation technique, called pulse harmonic modulation (PHM), for wideband, low power data transmission across inductive telemetry links that operate in the near-field domain. The use of sharp and narrow pulses, similar to impulse-radio ultrawideband (IR-UWB) in the far-field domain, leads to significant reduction in the transmitter power consumption.
It may also be used in short-range proximity-based digital communications with high-throughput wireless devices. This paper describes the PHM theoretical foundation and demonstrates its operation with a proof-of-concept prototype setup, which achieves a data rate of 5.2 Mbps at 1 cm coil separation with a BER of 10⁻⁶. --- paper_title: A wideband frequency-shift keying wireless link for inductively powered biomedical implants paper_content: A high data-rate frequency-shift keying (FSK) modulation protocol, a wideband inductive link, and three demodulator circuits have been developed with a data-rate-to-carrier-frequency ratio of up to 67%. The primary application of this novel FSK modulation/demodulation technique is to send data to inductively powered wireless biomedical implants at data rates in excess of 1 Mbps, using comparable carrier frequencies. This method can also be used in other applications such as radio-frequency identification tags and contactless smartcards by adding a back telemetry link. The inductive link utilizes a series-parallel inductive-capacitance tank combination on the transmitter side to provide more than 5 MHz of bandwidth. The demodulator circuits detect data bits by directly measuring the duration of each received FSK carrier cycle, as well as derive a constant frequency clock, which is used to sample the data bits. One of the demodulator circuits, digital FSK, occupies 0.29 mm² in the AMI 1.5-μm, 2M/2P, standard CMOS process, and consumes 0.38 mW at 5 V. This circuit is simulated up to 4 Mbps, and experimentally tested up to 2.5 Mbps with a bit error rate of 10⁻⁵, while receiving a 5/10-MHz FSK carrier signal. It is also used in a wireless implantable neural microstimulation system. --- paper_title: A Tri-State FSK Demodulator for Asynchronous Timing of High-Rate Stimulation Pulses in Wireless Implantable Microstimulators paper_content: A tri-state FSK modulation protocol and demodulator circuit have been developed and explained in this paper for wireless data rates as high as the carrier frequency. This method is used for asynchronous timing of high-rate stimulation pulses in wireless implantable microstimulators to improve the timing resolution of the stimulation pulses from one data-frame period to only one carrier cycle period. The demodulator circuit is used in a 16-site wireless active stimulating microprobe fabricated in the University of Michigan 3-μm, 1-metal, 2-poly, N-epi, BiCMOS process, occupying 1.17 mm² of the probe active area. The FSK demodulator circuit is simulated up to 2.5 Mega bits per second (b/s) and tested up to 300 kb/s. --- paper_title: IMES - implantable myoElectric sensor system: Designing standardized ASICs paper_content: As a component of the RP2009 project, the IMES system has emerged as a strong candidate for extracting naturally-occurring control signals to be used for providing functional control of an upper body artificial limb. In earlier publications, we described various elements of this system as they were being researched and developed. Presently, the system has matured to a level for which it is now appropriate to consider application-specific-integrated circuits (ASIC) that are of a standardized form, and are suitable for clinical deployment of the IMES system. Here we describe one of our emerging ASIC designs that addresses the design challenges of the extracorporal transmitter controller.
Although this ASIC is used in the IMES system, it may also be used for any command protocol that requires FSK modulation of a Class E converter. --- paper_title: Power-Efficient Impedance-Modulation Wireless Data Links for Biomedical Implants paper_content: We analyze the performance of wireless data telemetry links for implanted biomedical systems. An experimental realization of a bidirectional half-duplex link that uses near-field inductive coupling between the implanted system and an external transceiver is described. Our system minimizes power consumption in the implanted system by using impedance modulation to transmit high-bandwidth information in the uplink direction, i.e., from the implanted to the external system. We measured a data rate of 2.8 Mbps at a bit error rate (BER) of <10⁻⁶ (we could not measure error rates below 10⁻⁶) and a data rate of 4.0 Mbps at a BER of 10⁻³. Experimental results also demonstrate data transfer rates up to 300 kbps in the opposite, i.e., downlink direction. We also perform a theoretical analysis of the bit error rate performance. An important effect regarding the asymmetry of rising and falling edges that is inherent to impedance modulation is predicted by theory and confirmed by experiment. The link dissipates 2.5 mW in the external system and only 100 μW in the implanted system, making it among the most power-efficient inductive data links reported. Our link is compatible with FCC regulations on radiated emissions. --- paper_title: A 10.8 mW Body Channel Communication/MICS Dual-Band Transceiver for a Unified Body Sensor Network Controller paper_content: With the increasing number of portable and implantable personal health care devices, there is a strong demand to control their communication in a single wireless network. Recently, the IEEE 802.15 WBAN task group has discussed the combining of wearable and implantable body sensor networks (BSNs) [1], but no real chip implementation has been reported. In this paper, the implementation of a unified BSN as shown in Fig. 24.9.1 is described. The unified BSN combines low-power body-channel communication (BCC) [2] and versatile medical implant communication service (MICS) [3] using a network controller located on the human body. This unified BSN has 2 main advantages over the conventional BSNs. First, the MICS band antenna shared with the BCC electrode can be attached directly to human skin to shorten the communication distances among the controller and implanted radios, relaxing their sensitivity and selectivity requirements. Second, due to low path loss of the human body channel, low-power communication is possible among the wearable devices [4]. In addition, the on-body sensors do not need external antennas because the bio-signal sensing electrode functions as the interface for data transmission. --- paper_title: Optimization of Data Coils in a Multiband Wireless Link for Neuroprosthetic Implantable Devices paper_content: We have presented the design methodology along with detailed simulation and measurement results for optimizing a multiband transcutaneous wireless link for high-performance implantable neuroprosthetic devices. We have utilized three individual carrier signals and coil/antenna pairs for power transmission, forward data transmission from outside into the body, and back telemetry in the opposite direction. Power is transmitted at 13.56 MHz through a pair of printed spiral coils (PSCs) facing each other.
Two different designs have been evaluated for forward data coils, both of which help to minimize power carrier interference in the received data carrier. One is a pair of perpendicular coils that are wound across the diameter of the power PSCs. The other design is a pair of planar figure-8 coils that are in the same plane as the power PSCs. We have compared the robustness of each design against horizontal misalignments and rotations in different directions. Simulation and measurements are also conducted on a miniature spiral antenna, designed to operate with impulse-radio ultra-wideband (IR-UWB) circuitry for back telemetry. --- paper_title: A wideband power-efficient inductive wireless link for implantable microelectronic devices using multiple carriers paper_content: This paper presents a novel inductive link for wireless transmission of power and data to biomedical implantable microelectronic devices using multiple carrier frequencies. Achieving higher data bandwidth without compromising the power efficiency is the driving force to use two separate carriers. Two separate pairs of coils have been utilized for inductive power and forward data transmission. One major challenge, however, is to minimize the interference among these carriers, especially on the implantable side, where size and power are highly limited. Planar power coils with spiral shape are optimized in geometry to provide maximum coupling coefficient, k. The data coils are designed rectangular in shape and wound across the power coils' diameter to be oriented perpendicular to the power coil planes. The goal is to maximize the data coils' direct coupling while minimizing their cross-coupling with the power coils. The effects of coil geometry, relative distance, and misalignments on the coupling coefficients have been modeled and experimentally measured. --- paper_title: An AC-powered optical receiver consuming 270μW for transcutaneous 2Mb/s data transfer paper_content: Improving communication with implantable systems remains an important topic of research due to the limitations in power dissipation and the simultaneous need for high data rates. Neural recorders generate well above 10 Mb/s of data [1], which needs to be transmitted out-of-body. Multichannel stimulators, such as epiretinal implants, need control data in the range of several Mb/s for the into-body link [2]. Up to now, RF has been the dominant form of transcutaneous communication. One major issue is the crosstalk between the RF power link and the data signal. Therefore, dual-band telemetry is common in order to spectrally separate the data and power transfer. The standards range from UWB transmitters [1] and the MICS band [3] to customized RF receivers [2], often using sophisticated digital encoding. Also, orthogonal alignment has been used for the data and power coils to suppress crosstalk. Such RF communication needs a 2nd pair of coils, and the state-of-the-art power consumption ranges from 1.5 to 3 nJ/b at rates of 120 kb/s to 2.5 Mb/s [3]. --- paper_title: Wideband Near-Field Data Transmission Using Pulse Harmonic Modulation paper_content: This paper introduces a new modulation technique, called pulse harmonic modulation (PHM), for wideband, low power data transmission across inductive telemetry links that operate in the near-field domain. The use of sharp and narrow pulses, similar to impulse-radio ultrawideband (IR-UWB) in the far-field domain, leads to significant reduction in the transmitter power consumption.
However, unlike IR-UWB, where all pulses are the same, in PHM each bit consists of a pattern of pulses with specific time delays and amplitudes, which minimize the intersymbol interference (ISI) across the receiver coil. This helps achieve a high data rate without reducing the inductive link quality factor and selectivity, which are necessary to block interferers. The received signal consists of an oscillation pattern that is amplitude modulated by the amplitude and timing of the successively transmitted pulses to facilitate data demodulation with low bit-error rate (BER). The main application of the PHM is expected to be in neuroprostheses, such as brain-computer interfaces (BCIs) or cochlear/retinal implants, which need to transfer large volumes of data across the skin. It may also be used in short-range proximity-based digital communications with high-throughput wireless devices. This paper describes the PHM theoretical foundation and demonstrates its operation with a proof-of-concept prototype setup, which achieves a data rate of 5.2 Mbps at 1 cm coil separation with a BER of 10^-6. --- paper_title: Listening to Brain Microcircuits for Interfacing With External World—Progress in Wireless Implantable Microelectronic Neuroengineering Devices paper_content: Acquiring neural signals at high spatial and temporal resolution directly from brain microcircuits and decoding their activity to interpret commands and/or prior planning activity, such as motion of an arm or a leg, is a prime goal of modern neurotechnology. Its practical aims include assistive devices for subjects whose normal neural information pathways are not functioning due to physical damage or disease. On the fundamental side, researchers are striving to decipher the code of multiple neural microcircuits which collectively make up nature's amazing computing machine, the brain. By implanting biocompatible neural sensor probes directly into the brain, in the form of microelectrode arrays, it is now possible to extract information from interacting populations of neural cells with spatial and temporal resolution at the single cell level. With parallel advances in the application of statistical and mathematical tools for deciphering the neural code from extracted populations of correlated neurons, significant understanding has been achieved of those brain commands that control, e.g., the motion of an arm in a primate (monkey or a human subject). These developments are accelerating the work on neural prosthetics where brain-derived signals may be employed to bypass, e.g., an injured spinal cord. One key element in achieving the goals for practical and versatile neural prostheses is the development of fully implantable wireless microelectronic "brain-interfaces" within the body, a point of special emphasis of this paper. --- paper_title: An advanced multiple channel cochlear implant paper_content: In the hearing prosthesis, stimulation is presented through an array of 20 electrodes located in the scala tympani. Any two electrodes can be configured as a bipolar pair to conduct a symmetrical, biphasic, constant-current pulsatile stimulus. Up to three stimuli can be presented in rapid succession or effectively simultaneously. For simultaneous stimulation, a novel time-division current multiplexing technique has been developed to obviate electrode interactions that may compromise safety. The stimuli are independently controllable in current amplitude, duration, and onset time.
Groups of three stimuli can be generated at a rate of typically 500 Hz. Stimulus control data and power are conveyed to the implant through a single transcutaneous inductive link. The device incorporates a telemetry system that enables electrode voltage waveforms to be monitored externally in real time. The electronics of the implant are contained almost entirely on a custom-designed integrated circuit. Preliminary results obtained with the first patient to receive the advanced implant are included. --- paper_title: Hybrid RF/IR transcutaneous telemetry for power and high-bandwidth data paper_content: As neuroprosthetic control systems continue to advance and increase in channel density, there will be a constant need to deliver data at higher bandwidths in and out of the body. Currently, RF telemetry and inductive coupling are the most commonly used methods for transmitting power and electronic data between implants and external systems, and state-of-the-art systems can deliver data rates up to hundreds of kilobits per second. However, it is difficult to operate implanted medical RF links at higher data rates due to electromagnetic compatibility (EMC) constraints. In this study, we investigate the potential for hybrid telemetry systems that use constant-frequency RF inductive links for power and transcutaneous infrared (IR) signals for data. We show that with commercially available infrared communication components, data rates of up to 40 Mbits per second can be transmitted out across 5 mm of skin with an internal device power dissipation under 100 mW. ---
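The wideband FSK reference above recovers data bits by directly measuring the duration of each received carrier cycle. The sketch below is only a software illustration of that cycle-duration idea; the cited work implements it as analog/mixed-signal demodulator circuits, and the one-bit-per-cycle framing, the 200 MHz sample rate, and all names here are assumptions made for this example, not details taken from the papers (only the 5/10 MHz carrier pair is borrowed from the abstract).

import numpy as np

# Hypothetical parameters for the illustration.
FS = 200e6                    # sample rate of the "received" waveform, Hz
F_BIT0, F_BIT1 = 5e6, 10e6    # FSK carrier frequencies used for bits 0 and 1

def modulate(bits):
    # One full carrier cycle per bit, each cycle starting at phase zero.
    chunks = []
    for b in bits:
        f = F_BIT1 if b else F_BIT0
        n = int(round(FS / f))            # samples in one carrier cycle
        t = np.arange(n) / FS
        chunks.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(chunks)

def demodulate(x):
    # Detect rising zero crossings, measure the duration of every cycle, and
    # classify short cycles as bit 1 and long cycles as bit 0.
    rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0] + 1
    edges = np.concatenate(([0], rising, [len(x)]))
    periods = np.diff(edges) / FS
    threshold = 0.5 * (1 / F_BIT0 + 1 / F_BIT1)
    return [1 if p < threshold else 0 for p in periods]

tx_bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(demodulate(modulate(tx_bits)))      # recovers [1, 0, 1, 1, 0, 0, 1, 0]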
Title: An Overview of the Recent Wideband Transcutaneous Wireless Communication Techniques
Section 1: Introduction
Description 1: Outline the challenges and importance of transcutaneous data telemetry in implantable medical devices, highlighting the need for wireless communication techniques for patient comfort and safety.
Section 2: Reactive vs. Radiative
Description 2: Explain the division of the area around a radio frequency transmitter into regions, and discuss the implications for transcutaneous communications, including the use of Industrial-Scientific-Medical (ISM) bands and limitations of current data rates.
Section 3: Single Carrier vs. Multi-Carrier
Description 3: Discuss the power consumption issues in neuroprosthetic devices, advantages and disadvantages of using single carrier signals for power and data transmission, and the challenges of implementing multiple carrier signals within limited space.
Section 4: Carrier Based vs. Pulse Based
Description 4: Review various modulation techniques for transcutaneous data telemetry, with a focus on phase-coherent Frequency Shift Keying (pcFSK) and Pulse Harmonic Modulation (PHM) for improving data rates and minimizing interference.
Section 5: Electromagnetic, Optical, and Body Channel
Description 5: Explore alternative methods to RF electromagnetic fields for transcutaneous communication, including optical links and using the human body as the channel, along with their respective advantages and limitations.
Section 6: Conclusion
Description 6: Summarize the challenges and proposed solutions for designing wideband transcutaneous links, emphasizing the importance of decoupling data carrier from power and optimizing coil and antenna designs.
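One worked number behind the "Reactive vs. Radiative" distinction above, added here by the reviewer (it is a standard rule of thumb, one of several conventions for the near/far-field boundary, not a figure from the cited papers): for the 13.56 MHz ISM carrier used by several of the links referenced above,

\[
r_{\text{near}} \approx \frac{\lambda}{2\pi} = \frac{c}{2\pi f}
= \frac{3\times 10^{8}\ \text{m/s}}{2\pi \times 13.56\times 10^{6}\ \text{Hz}} \approx 3.5\ \text{m},
\]

so an inductive link operating across a centimetre or two of skin sits deep inside the reactive near field, which is why such links are analysed in terms of coil coupling rather than radiated fields.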
Survey on Coarse Grained Reconfigurable Architectures
16
--- paper_title: Design and implementation of a coarse-grained dynamically reconfigurable hardware architecture paper_content: This paper presents the hardware structure and application of a coarse-grained dynamically reconfigurable hardware architecture dedicated to wireless communication systems. The application-tailored architecture, called DReAM (Dynamically Reconfigurable Hardware Architecture for Mobile Communication Systems), is a research project at the Darmstadt University of Technology. It covers the complete design process from analyzing the requirements for the dedicated application field, through the specification and VHDL implementation of the architecture, up to the physical layout for the final chip. In the following we provide an overview of the major design stages, starting with a motivation for choosing the concept of distributed arithmetic in reconfigurable computing. --- paper_title: A dynamically reconfigurable system-on-a-chip architecture for future mobile digital signal processing paper_content: The evolution of current and future broadband access techniques into the wireless domain introduces new and flexible network architectures with difficult and interesting challenges. The system designers are faced with a challenging set of problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper first presents the major challenges in realizing flexible microelectronic system solutions for digital baseband signal processing in future mobile communication applications. Based thereupon, the architecture design of flexible system-on-a-chip solutions is discussed. The focus of the paper is the introduction of a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. Its performance issues and potential are discussed by the implementation of a flexible and computation-intensive component of future mobile terminals. --- paper_title: Reconfigurable computing: a new business model-and its impact on SoC design paper_content: Making gate arrays obsolete, FPGAs are successfully proceeding from niche to mainstream. Like microprocessor usage, FPGA application is RAM-based, but by structural programming (also called "(re)configuration") instead of procedural programming. Now both host and accelerator are RAM-based and as such also available on the same chip: a new approach to SoC design. Now the accelerator definition may also be, at least partly, conveyed from the vendor site to the customer site. A new business model is needed. But this paradigm switch is still ignored: FPGAs do not repeat the RAM-based success story of the software industry. There is not yet a configware industry, since mapping applications onto FPGAs mainly uses hardware synthesis methods. From a decade of world-wide research on Reconfigurable Computing, another breed of reconfigurable platforms is emerging as a future competitor to FPGAs. Supporting roughly single-bit-wide configurable logic blocks (CLBs), the mapping tools are mainly based on gate-level methods, similar to CAD for hardware logic. In contrast to these fine-grained devices, arrays of coarse-grained reconfigurable datapath units (rDPUs) come with drastically reduced reconfigurability overhead and can directly configure high-level parallelism.
But the "von Neumann" paradigm does not support soft datapaths because "instruction fetch" is not done at run time, and, since most reconfigurable computing arrays do not run parallel processes, but multiple pipe networks instead. --- paper_title: Architecture generation of customized reconfigurable hardware paper_content: Reconfigurable hardware is ideal for use in systems-on-a-chip (SoCs), achieving hardware speeds but also flexibility not available with more traditional custom circuitry. Traditional FPGA structures can be used in an SoC, but they suffer from significant overhead due to their generic nature. Alternatively, for cases when the application domain of the SoC is known, the reconfigurable hardware can be optimized for that domain. The Totem Project focuses on the automatic creation of customized reconfigurable architectures, including high-level design, VLSI layout, and associated custom place and route tools. ::: This thesis focuses on the high-level design phase, or “Architecture Generation”. Two distinct categories of reconfigurable architectures can be created: highly optimized near-ASIC designs with a very low degree of reconfigurability, and flexible architectures with a one-dimensional segmented routing structure. Each of these design methods shows significant improvements through tailoring the architectures to the given application area. The cASIC designs are on average up to 12.3x smaller than an FPGA solution with embedded multipliers and 2.2x smaller than a standard cell implementation. The more flexible architectures, able to support a wider variety of circuits, are on average up to 5.5x smaller than the FPGA solution, and close in area to standard cells. --- paper_title: MorphoSys: a reconfigurable architecture for multimedia applications paper_content: We describe the MorphoSys reconfigurable system, which combines a reconfigurable array of processor cells with a RISC processor core and a high bandwidth memory interface unit. We introduce the array architecture, its configuration memory, inter-connection network, role of the control processor and related components. Architecture implementation is described in brief and the efficacy of MorphoSys is demonstrated through simulation of video compression (MPEG-2) and target-recognition applications. Comparison with other implementations illustrates that MorphoSys achieves higher performance by up to 10X. --- paper_title: A reconfigurable arithmetic array for multimedia applications paper_content: In this paper we describe a reconfigurable architecture optimised for media processing, and based on 4-bit ALUs and interconnect. --- paper_title: Fast Communication Mechanisms in Coarse-grained Dynamically Reconfigurable Array Architectures paper_content: The paper focuses on coarse-grained dynamically reconfigurable array architectures promising performance and flexibility for different challenging application areas, e. g. future broadband mobile communication systems. Here, new and flexible microelectronic architectures are required solving various problems that stem from access mechanisms, energy conservation, error rate, transmission speed characteristics of the wireless links and mobility aspects. This paper sketches first the major motivation for developing flexible microelectronic System-onChip (SoC) solutions for the digital baseband processing in future mobile radio devices. The paper introduces a new parallel and dynamically reconfigurable hardware architecture tailored to this application area. 
The focus of this contribution is the efficient realization of communication and dynamic reconfiguration for such reconfigurable array architectures, which is crucial for their overall performance and flexibility. --- paper_title: Evaluating memory architectures for media applications on Coarse-grained Reconfigurable Architectures paper_content: Reconfigurable ALU Array (RAA) architectures – representing a popular class of Coarse-grained Reconfigurable Architectures – are gaining in popularity especially for media applications due to their flexibility, regularity, and efficiency. In such architectures, memory is critical not only for configuration data but also for the heavy data traffic required by the application. In this paper, we offer a scheme for system designers to quickly estimate the performance of media applications on RAA architectures. Our experimental results demonstrate the flexibility of our memory architecture evaluation scheme as well as the varying effects of the memory architectures on the application performance. --- paper_title: A coarse-grained Dynamically Reconfigurable MAC Processor for power-sensitive multi-standard devices paper_content: We have designed a coarse-grained, dynamically reconfigurable architecture, specifically for implementing the wireless MAC layer in consumer hand-held devices. The dynamically reconfigurable MAC Processor is a SoC architecture that uses a reconfigurable hardware co-processor to delegate critical tasks. The co-processor can reconfigure packet-by-packet, handling up to 3 data streams of different protocols concurrently. We present results of simulations involving transmission and reception of packets, showing that the platform concurrently handles three protocol streams, reconfigures dynamically, yet meets and exceeds the protocol timing constraints, all at a moderate frequency. Thus we show that this architecture is capable of replacing up to three MAC processors in a wireless device. Its heterogeneous and coarse-grained functional units, requirements of limited connectivity between these units, and the idle time of hardware resources promise a very modest power consumption, suitable for mobile devices. --- paper_title: SmartCell: An Energy Efficient Coarse-Grained Reconfigurable Architecture for Stream-Based Applications paper_content: This paper presents SmartCell, a novel coarse-grained reconfigurable architecture, which tiles a large number of processor elements with reconfigurable interconnection fabrics on a single chip. SmartCell is able to provide high performance and energy efficient processing for stream-based applications. It can be configured to operate in various modes, such as SIMD, MIMD, and systolic array. This paper describes the SmartCell architecture design, including processing element, reconfigurable interconnection fabrics, instruction and control process, and configuration scheme. The SmartCell prototype with 64 PEs is implemented using 0.13 µm CMOS standard cell technology. The core area is about 8.5 mm^2, and the power consumption is about 1.6 mW/MHz. The performance is evaluated through a set of benchmark applications, and then compared with FPGA, ASIC, and two well-known reconfigurable architectures including RaPiD and Montium. The results show that the SmartCell can bridge the performance and flexibility gap between ASIC and FPGA. It is also about 8% and 69% more energy efficient than the Montium and RaPiD systems for the evaluated benchmarks.
Meanwhile, SmartCell can achieve 4 and 2 times higher throughput when compared with Montium and RaPiD, respectively. It is concluded that the SmartCell system is a promising reconfigurable and energy efficient architecture for stream processing. --- paper_title: FloRA: Coarse-grained reconfigurable architecture with floating-point operation capability paper_content: This paper demonstrates a chip implementation of a coarse-grained reconfigurable architecture named FloRA. The two-dimensional array of integer processing elements in FloRA is configured at run time to perform integer functions as well as floating-point functions. FloRA is implemented in a Dongbu HiTek 130 nm process and evaluated by running applications including a physics engine and a JPEG decoder. --- paper_title: Implementation of floating-point operations for 3D graphics on a coarse-grained reconfigurable architecture paper_content: With the increasing requirements for more flexibility and higher performance in embedded systems design, reconfigurable computing has become more popular. There have been many coarse-grained reconfigurable architectures proposed and/or commercialized. But most of the existing architectures cannot be used for applications that require floating-point operations, since they have only integer units. In this paper, we present how we can perform various floating-point operations on a coarse-grained reconfigurable array of integer processing elements. We demonstrate the effectiveness of our approach through the implementation of various floating-point operations for 3D graphics and their performance analysis. ---
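The references above share the same basic picture: an array of processing elements, each configured with a word-level ALU operation and operand routing rather than bit-level logic. The toy model below is the reviewer's illustration of that configuration idea only; it does not model any of the cited architectures (MorphoSys, RaPiD, FloRA, ...), and all names and the feed-forward scheduling assumption are invented for the example.

from dataclasses import dataclass
from typing import Dict, List, Tuple

# Word-level operations a PE can be configured with.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "pass": lambda a, b: a,
}

@dataclass
class PEConfig:
    op: str                   # ALU operation selected for this PE
    src_a: Tuple[str, str]    # ("in", input_port) or ("pe", producer_pe)
    src_b: Tuple[str, str]

def run_context(config: Dict[str, PEConfig], inputs: Dict[str, int],
                order: List[str]) -> Dict[str, int]:
    # Evaluate one configuration context over the array; `order` is a
    # feed-forward schedule of PE names (no cyclic routing in this toy model).
    values: Dict[str, int] = {}
    def fetch(src):
        kind, name = src
        return inputs[name] if kind == "in" else values[name]
    for pe in order:
        cfg = config[pe]
        values[pe] = OPS[cfg.op](fetch(cfg.src_a), fetch(cfg.src_b))
    return values

# Example context: a 3-PE slice computing (x0 + x1) * (x0 - x1).
context = {
    "pe0": PEConfig("add", ("in", "x0"), ("in", "x1")),
    "pe1": PEConfig("sub", ("in", "x0"), ("in", "x1")),
    "pe2": PEConfig("mul", ("pe", "pe0"), ("pe", "pe1")),
}
print(run_context(context, {"x0": 7, "x1": 3}, ["pe0", "pe1", "pe2"]))
# -> {'pe0': 10, 'pe1': 4, 'pe2': 40}

Swapping in a different context dictionary is the software analogue of dynamic reconfiguration: the datapath changes while the PE fabric stays fixed.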
Title: Survey on Coarse Grained Reconfigurable Architectures
Section 1: INTRODUCTION
Description 1: This section introduces the background and motivation for developing coarse-grained reconfigurable architectures, highlighting the advantages over fine-grained FPGAs and ASICs.
Section 2: COARSE GRAINED RECONFIGURABLE ARCHITECTURES
Description 2: This section describes the basic architecture of coarse-grained reconfigurable architectures, including data-paths, processing elements, and interconnection networks.
Section 3: CGRA DEVELOPMENT
Description 3: This section outlines the development history and various approaches of different coarse-grained reconfigurable architectures, providing an overview of key architectures in the field.
Section 4: The Reconfigurable Pipelined Datapath (RaPiD) Architecture
Description 4: This sub-section discusses the RaPiD architecture, its components, and its target applications, specifically DSP tasks.
Section 5: MorphoSys Architecture
Description 5: This sub-section explores the MorphoSys architecture, focused on multimedia applications, and its combination of coarse and fine grain reconfiguration techniques.
Section 6: The CHESS Architecture
Description 6: This sub-section examines the CHESS architecture, tailored for multimedia and motion estimation applications, and discusses its computation components and routing strategies.
Section 7: Dynamically Reconfigurable Architecture for Mobile Systems (DReAM)
Description 7: This sub-section details the DReAM architecture, designed for mobile devices, and its capabilities in supporting future generations of wireless communication systems.
Section 8: Dynamically Reconfigurable ALU Array (DRAA)
Description 8: This sub-section introduces the DRAA, a generic architecture template with a focus on efficient memory interface for media applications.
Section 9: ADRES (Architecture for Dynamically Reconfigurable Embedded System)
Description 9: This sub-section describes the ADRES architecture, highlighting its VLIW processor and array of configurable processing cells for embedded applications.
Section 10: MORA Architecture
Description 10: This sub-section provides an overview of the MORA architecture, its scalable reconfigurable cells, and its application in multimedia.
Section 11: Dynamically Reconfigurable MAC Processor (DRMP)
Description 11: This sub-section covers the DRMP, designed for wireless MAC layer implementation in hand-held devices, and its power-efficient functional units.
Section 12: PACT XPP-III
Description 12: This sub-section focuses on the XPP-III architecture, its heterogeneous processor types, and its application in stream-based processing.
Section 13: SYSCORE Architecture
Description 13: This sub-section highlights the SYSCORE architecture, aimed at biosignal processing with low power consumption features.
Section 14: SmartCell
Description 14: This sub-section discusses the SmartCell architecture targeted for high data throughput and computationally intensive applications.
Section 15: Floating-point Reconfigurable Array (FloRA)
Description 15: This sub-section outlines the FloRA architecture, capable of performing both integer and floating-point operations, optimized for mobile applications.
Section 16: CONCLUSION
Description 16: This section summarizes the survey, emphasizing the performance and flexibility of coarse-grained reconfigurable architectures in SoC design.
A survey on MAC protocols for ad hoc networks with directional antennas
7
--- paper_title: On the performance of ad hoc networks with beamforming antennas paper_content: Beamforming antennas have the potential to provide a fundamental breakthrough in ad hoc network capacity. We present a broad-based examination of this potential, focusing on exploiting the longer ranges as well as the reduced interference that beamforming antennas can provide. We consider a number of enhancements to a conventional ad hoc network system, and evaluate the impact of each enhancement using simulation. Such enhancements include "aggressive" and "conservative" channel access models for beamforming antennas, link power control, and directional neighbor discovery. Our simulations are based on detailed modeling of steered as well as switched beams using antenna patterns of varying gains, and a realistic radio and propagation model. For the scenarios studied, our results show that beamforming can yield a 28% to 118% (depending upon the density) improvement in throughput, and up to a factor-of-28 reduction in delay. Our study also tells us which mechanisms are likely to be more effective and under what conditions, which in turn identifies areas where future research is needed. --- paper_title: The capacity of wireless networks paper_content: When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance. --- paper_title: Wireless medium access control protocols paper_content: Technological advances, coupled with the flexibility and mobility of wireless systems, are the driving force behind the Anyone, Anywhere, Anytime paradigm of networking. At the same time, we see a convergence of the telephone, cable and data networks into a unified network that supports multimedia and real-time applications like voice and video in addition to data. Medium access control protocols define rules for orderly access to the shared medium and play a crucial role in the efficient and fair sharing of scarce wireless bandwidth. The nature of the wireless channel brings new issues like location-dependent carrier sensing, time varying channel and burst errors.
Low power requirements and half duplex operation of the wireless systems add to the challenge. Wireless MAC protocols have been heavily researched and a plethora of protocols have been proposed. Protocols have been devised for different types of architectures, different applications and different media. This survey discusses the challenges in the design of wireless MAC protocols, classifies them based on architecture and mode of operation, and describes their relative performance and application domains in which they are best deployed. --- paper_title: Application of Antenna Arrays to Mobile Communications, Part II: Beam-Forming and Direction-of-Arrival Considerations paper_content: Array processing involves manipulation of signals induced on various antenna elements. Its capabilities of steering nulls to reduce cochannel interferences and pointing independent beams toward various mobiles, as well as its ability to provide estimates of directions of radiating sources, make it attractive to a mobile communications system designer. Array processing is expected to play an important role in fulfilling the increased demands of various mobile communications services. Part I of this paper showed how an array could be utilized in different configurations to improve the performance of mobile communications systems, with references to various studies where the feasibility of an array system for mobile communications is considered. This paper provides a comprehensive and detailed treatment of different beam-forming schemes, adaptive algorithms to adjust the required weighting on antennas, direction-of-arrival estimation methods, including their performance comparison, and effects of errors on the performance of an array system, as well as schemes to alleviate them. This paper brings together almost all aspects of array signal processing. --- paper_title: Using Directional Antennas for Medium Access Control in Ad Hoc Networks paper_content: A composition for use in the treatment for developing seedless fleshy berry of grapes. By treating the flower bunches of a grape tree with a composition containing gibberellin and cyclic 3',5'-adenylic acid in the form of an aqueous solution, it became possible to make seedless fleshy berry from grape trees belonging to varieties other than Delaware, namely belonging to Campbell-Arley, Berry A, Niagara, Kyoho, etc., from which seedless fleshy berry cannot be made by the conventional treatment with gibberellin. --- paper_title: Deafness: a MAC problem in ad hoc networks when using directional antennas paper_content: This work addresses deafness - a problem that appears when MAC protocols are designed using directional antennas. Briefly, deafness is caused when a transmitter fails to communicate to its intended receiver, because the receiver is beamformed towards a direction away from the transmitter.
Existing CSMA/CA protocols rely on the assumption that congestion is the predominant cause of communication failure, and adopt backoff schemes to handle congestion. While this may be appropriate for omnidirectional antennas, for directional antennas, both deafness and congestion can be the reason for communication failures. An appropriate directional MAC protocol needs to classify the actual cause of failure, and react accordingly. This paper quantifies the impact of deafness on directional medium access control, and proposes a tone-based mechanism as one way of addressing deafness. The tone-based mechanism, ToneDMAC, assumes congestion as the default reason for communication failures, and applies a corrective measure whenever the cause is deafness. Simulation results indicate that ToneDMAC can alleviate deafness, and perform better than existing directional MAC protocols. --- paper_title: Medium access control protocols using directional antennas in ad hoc networks paper_content: Using directional antennas can be beneficial for wireless ad hoc networks consisting of a collection of wireless hosts. To best utilize directional antennas, a suitable medium access control (MAC) protocol must be designed. Current MAC protocols, such as the IEEE 802.11 standard, do not benefit when using directional antennas, because these protocols have been designed for omnidirectional antennas. In this paper, we attempt to design new MAC protocols suitable for ad hoc networks based on directional antennas. --- paper_title: CSMA/CA with Beam Forming Antennas in Multi-hop Packet Radio paper_content: Low cost, reliable and easily deployed ad hoc rural-area wireless networks are needed for both civilian and military communications. In order to fulfill the requirement of easy deployment the network needs to be autonomous, self-organising and self-healing. Multi-hop Packet Radio Networks are a suitable solution to fulfill this requirement.
In this type of network all nodes use the same frequency for transmission as well as using a store and forward procedure which enables communications between nodes that are out of direct radio range. There are a variety of multiple access protocols applicable to this type of system. Spatial Time Division Multiple Access (STDMA) has been studied and found to be efficient and fair; the drawback is that it may not offer good peak rates for bursty data traffic. The Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) protocol promises high efficiency and the ability to provide high peak data rates. Although CSMA/CA is not new, there is very little work considering the rural-area multi-hop environment. This paper presents the analysis of CSMA/CA in the rural-area multi-hop environment with and without adaptive antennas. An explanation of the system design with adaptive antennas is given as well as simulation results showing the expected performance gain. --- paper_title: A novel MAC layer protocol for space division multiple access in wireless ad hoc networks paper_content: Recently, MAC protocols using directional antennas for wireless ad hoc networks that are based on and similar to IEEE 802.11 type WLAN have been proposed. These protocols, however, are unable to attain substantial performance improvements because they do not enable the nodes to perform multiple simultaneous transmissions/receptions. In this paper, we propose a MAC layer protocol that exploits space division multiple access, thus using the property of directional reception to receive more than one packet from spatially separated transmitter nodes (equipped with smart antenna systems). Our simulation results show that drastic throughput improvements may be achieved through this scheme. --- paper_title: A busy-tone based directional MAC protocol for ad hoc networks paper_content: In mobile wireless ad hoc networking environments, such as the future combat system (FCS), the shared wireless communication medium is an inherently limited resource and is collision prone. In this paper, we propose to adapt the dual busy tone multiple access (DBTMA) protocol for use with directional antennas, which further increases effective channel capacity. In contrast to other directional antenna based MAC protocols, our protocol, termed DBTMA/DA, is capable of reserving channel capacity in finer grain without relying on extra locationing support. A simulation study is performed to demonstrate the better network performance of DBTMA/DA over DBTMA and the IEEE 802.11a MAC protocols. --- paper_title: Using Directional Antennas for Medium Access Control in Ad Hoc Networks paper_content: A composition for use in the treatment for developing seedless fleshy berry of grapes. By treating the flower bunches of a grape tree with a composition containing gibberellin and cyclic 3',5'-adenylic acid in the form of an aqueous solution, it became possible to make seedless fleshy berry from grape trees belonging to varieties other than Delaware, namely belonging to Campbell-Arley, Berry A, Niagara, Kyoho, etc., from which seedless fleshy berry cannot be made by the conventional treatment with gibberellin. --- paper_title: A MAC protocol for full exploitation of directional antennas in ad-hoc wireless networks paper_content: Directional antennas in ad hoc networks offer many benefits compared with classical omnidirectional antennas.
The most important include significant increase of spatial reuse, coverage range and subsequently network capacity as a whole. On the other hand, the use of directional antennas requires new approach in the design of a MAC protocol to fully exploit these benefits. Unfortunately, directional transmissions increase the hidden terminal problem, the problem of deafness and the problem of determination of neighbors' location. In this paper we propose a new MAC protocol that deals effectively with these problems while it exploits in an efficient way the advantages of the directional antennas. We evaluate our work through simulation study. Numerical results show that our protocol offers significant improvement compared to the performance of omni transmissions. --- paper_title: Directional virtual carrier sensing for directional antennas in mobile ad hoc networks paper_content: This paper presents a new carrier sensing mechanism called DVCS (Directional Virtual Carrier Sensing) for wireless communication using directional antennas. DVCS does not require specific antenna configurations or external devices. Instead it only needs information on AOA (Angle of Arrival) and antenna gain for each signal from the underlying physical device, both of which are commonly used for the adaptation of antenna pattern. DVCS also supports interoperability of directional and omni-directional antennas. In this study, the performance of DVCS for mobile ad hoc networks is evaluated using simulation with a realistic directional antenna model and the full IP protocol stack. The experimental results showed that compared with omni-directional communication, DVCS improved network capacity by a factor of 3 to 4 for a 100 node ad hoc network. --- paper_title: A MAC protocol for mobile ad hoc networks using directional antennas paper_content: We propose a medium access control (MAC) protocol for an ad hoc network of mobile wireless terminals that are equipped with multiple directional antennas. Use of directional antennas in ad hoc networks can largely reduce the radio interference, thereby improving the packet throughput. However, the main problem of using directional antennas in such networks is due to the dynamic nature of the network caused by frequent node movements. This gives rise to problems such as locating and tracking during random channel access. The MAC protocol presented in this paper proposes a solution to these problems without the help of additional hardware. Results obtained from detailed computer simulations demonstrate the performance improvement obtained with the proposed scheme. --- paper_title: A network-aware MAC and routing protocol for effective load balancing in ad hoc wireless networks with directional antenna paper_content: Use of directional antenna in the context of ad hoc wireless networks can largely reduce radio interference, thereby improving the utilization of wireless medium. Our major contribution in this paper is to devise a routing strategy, along with a MAC protocol, that exploits the advantages of directional antenna in ad hoc networks for improved system performance. In this paper, we have illustrated a MAC and routing protocol for ad hoc networks using directional antenna with the objective of effective load balancing through the selection of maximally zone disjoint routes. Zone-disjoint routes would minimize the effect of route coupling by selecting routes in such a manner that data communication over one route will minimally interfere with data communication over the others. 
In our MAC protocol, each node keeps certain neighborhood status information dynamically in order that each node is aware of its neighborhood and communications going on in its neighborhood at that instant of time. This status information from each node is propagated periodically throughout the network. This would help each node to capture the approximate network status periodically, which helps each node to become topology-aware and aware of communications going on in the network, although in an approximate manner. With this status information, each intermediate node adaptively computes routes towards the destination. The performance of the proposed framework has been evaluated on the QualNet Network Simulator with DSR (as in QualNet) as a benchmark. Our proposed mechanism shows four to five times performance improvement over DSR, thus demonstrating the effectiveness of this proposal. --- paper_title: Smart-802.11b MAC protocol for use with smart antennas paper_content: Smart antennas enable a receiver to determine the direction of arrival (DOA) of multiple transmissions as well as to form nulls in some number of directions to maximize SINR (Signal to Interference and Noise Ratio) of the received signal. We utilize the benefits of these capabilities to develop a simple modified version of the popular 802.11b protocol. This protocol exhibits high throughput under a variety of network conditions and is fair. The performance of the protocol is examined exhaustively using joint simulation in OPNET and Matlab. --- paper_title: A MAC protocol based on adaptive beamforming for ad hoc networks paper_content: This paper presents a novel slotted MAC (medium access control) protocol for nodes equipped with adaptive antenna arrays in ad hoc networks. The protocol relies on the ability of the antenna to use DOA (direction-of-arrival) information to beamform by placing nulls in the direction of interferers, thus maximizing SINR (signal to interference and noise ratio) at the receiver. We studied the performance of the protocol using joint simulation in OPNET and Matlab. We studied the impact of a variable number of antenna elements, the DOA algorithm, and nulling. The performance of our new protocol is compared against one of the recent directional MAC protocols [R.R.N.H.V. Romit Roy Choudhury, Xue Yang, 2002]. We observe that despite the simplicity of our protocol it achieves high throughput. --- paper_title: Transmission scheduling in ad hoc networks with directional antennas paper_content: Directional antennas can adaptively select radio signals of interest in specific directions, while filtering out unwanted interference from other directions. Although a couple of medium access protocols based on random access schemes have been proposed for networks with directional antennas, they suffer from high probability of collisions because of their dependence on omnidirectional mode for the transmission or reception of control packets in order to establish directional links. We propose a distributed receiver-oriented multiple access (ROMA) channel access scheduling protocol for ad hoc networks with directional antennas, each of which can form multiple beams and commence several simultaneous communication sessions. Unlike random access schemes that use on-demand handshakes or signal scanning to resolve communication targets, ROMA determines a number of links for activation in every time slot using only two-hop topology information.
It is shown that significant improvements in network throughput and delay can be achieved by exploiting the multi-beam forming capability of directional antennas in both transmission and reception. The performance of ROMA is studied by simulations, and compared with a well-known static scheduling scheme that is based on global topology information. ---
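Several of the protocols surveyed above (DVCS and the various directional RTS/CTS schemes) replace the single NAV of 802.11 with deferral that depends on direction. The sketch below illustrates that directional-NAV bookkeeping in the abstract; the class name, the sector-overlap test, and all angles are assumptions made for the illustration and are not taken from any specific protocol cited above.

from dataclasses import dataclass
from typing import List

@dataclass
class DnavEntry:
    direction_deg: float   # centre of the reserved sector
    width_deg: float       # angular width of the reserved sector
    expires_at: float      # time (s) at which the reservation ends

def angular_distance(a: float, b: float) -> float:
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

class Dnav:
    def __init__(self):
        self.entries: List[DnavEntry] = []

    def reserve(self, direction_deg, width_deg, duration_s, now):
        # Record an overheard reservation (e.g., from a directional RTS/CTS).
        self.entries.append(DnavEntry(direction_deg, width_deg, now + duration_s))

    def may_transmit(self, beam_direction_deg, beam_width_deg, now):
        # Allow a directional transmission only if its beam does not overlap
        # any still-active reserved sector.
        self.entries = [e for e in self.entries if e.expires_at > now]
        for e in self.entries:
            if (angular_distance(beam_direction_deg, e.direction_deg)
                    < 0.5 * (beam_width_deg + e.width_deg)):
                return False
        return True

dnav = Dnav()
dnav.reserve(direction_deg=90, width_deg=60, duration_s=2e-3, now=0.0)
print(dnav.may_transmit(100, 45, now=1e-3))   # False: beam overlaps the reservation
print(dnav.may_transmit(250, 45, now=1e-3))   # True: spatially separated sector
print(dnav.may_transmit(100, 45, now=3e-3))   # True: the reservation has expired

The second call is the point: a reservation toward one sector does not silence transmissions toward other sectors, which is the spatial-reuse gain the surveyed protocols aim for.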
Title: A survey on MAC protocols for ad hoc networks with directional antennas
Section 1: INTRODUCTION AND MOTIVATION
Description 1: Write a comprehensive introduction highlighting the importance of ad hoc networking, the role of directional antennas, challenges in MAC protocol design, and an overview of what the paper will cover.
Section 2: MEDIUM ACCESS PROBLEMS USING DIRECTIONAL ANTENNAS
Description 2: Summarize the key issues that arise when using directional antennas with conventional MAC protocols, including neighbor location, extended transmission range, side lobe pattern, directional carrier sensing, new hidden terminals, deafness, and antenna array rotation.
Section 3: MAC PROTOCOLS FOR DIRECTIONAL ANTENNAS
Description 3: Discuss various MAC protocols designed for directional antennas, focusing on how they extend or differ from the 802.11 DCF protocol. Include a detailed breakdown of both 802.11-type and non-802.11-type protocols.
Section 4: 802.11-type MAC protocols for directional antennas
Description 4: Provide a detailed survey of MAC protocols based on the IEEE 802.11 DCF protocol, their mechanisms, advantages, and performance impacts.
Section 5: "Non-802.11"-type MAC protocols for directional antennas
Description 5: Explore alternative MAC protocols that do not rely on the IEEE 802.11 RTS/CTS scheme, focusing on scheduled approaches and time-slotted protocols.
Section 6: COMPARISON OF MAC PROTOCOLS FOR DIRECTIONAL ANTENNAS
Description 6: Compare and contrast the different MAC protocols discussed, emphasizing how they manage spatial reuse, interference, and protocol overhead.
Section 7: CONCLUSIONS
Description 7: Summarize the key findings of the survey, discuss the potential benefits and challenges of implementing directional antennas in ad hoc networks, and suggest areas for future research.
A review on linear mixed models for longitudinal data, possibly subject to dropout
11
--- paper_title: Measuring the quality of life of cancer patients: the Functional Living Index-Cancer: development and validation. paper_content: The classical criteria for the evaluation of clinical trials in cancer reflect alterations in physical well-being, but are insensitive to other important factors, such as psychosocial state, sociability, and somatic sensation that may play a critical role in determining the patients' functional response to their illness and its treatment. The Functional Living Index-Cancer is designed for easy, repeated patient self-administration. It is a 22-item questionnaire that has been validated on 837 patients in two cities over a three-year period. Criteria for validity include stability of factor analysis, concurrent validation studies against the Karnofsky, Beck Depression, Spielberger State and Trait Anxiety, and Katz Activities of Daily Living scales, as well as the scaled version of The General Health Questionnaire and The McGill/ Melzack Pain Index. The index is uncontaminated by social desirability issues. The validation studies demonstrate the lack of correlation between traditional measures of patient response and other significant functional factors such as depression and anxiety (r = 0.33), sociability and family interaction, and nausea. These findings elucidate the frequently observed discrepancies between traditional assessments of clinical response and overall functional patient outcome. The index is proposed as an adjunct to clinical trials assessment and may provide additional patient functional information on which to analyse the outcome of clinical trials or offer specific advice to individual patients. --- paper_title: Mixed-effects regression models for studying the natural history of prostate disease paper_content: Although prostate cancer and benign prostatic hyperplasia are major health problems in U.S. men, little is known about the early stages of the natural history of prostate disease. A molecular biomarker called prostate specific antigen (PSA), together with a unique longitudinal bank of frozen serum, now allows a historic prospective study of changes in PSA levels for decades prior to the diagnosis of prostate disease. Linear mixed-effects regression models were used to test whether rates of change in PSA were different in men with and without prostate disease. In addition, since the prostate cancer cases developed their tumours at different (and unknown) times during their periods of follow-up, a piece-wise non-linear mixed-effects regression model was used to estimate the time when rapid increases in PSA were first observable beyond the background level of PSA change. These methods have a wide range of applications in biomedical research utilizing repeated measures data such as pharmacokinetic studies, crossover trials, growth and development studies, aging studies, and disease detection. --- paper_title: Normal Human Aging: The Baltimore Longitudinal Study of Aging paper_content: The two outstanding longitudinal studies in the United States are the Framingham Study of Cardiovascular Disease and the Baltimore Longitudinal Study of Aging (BLSA). The latter deserves to be better known. Hopefully, this volume, a bargain at $18, will serve as an introduction to the BLSA. Started in 1958 with a group of volunteers, the BLSA admittedly lacks the breadth one would like to see in an epidemiologic study. 
The volunteers were all male (females were not admitted until 1978) and mostly middle-class and professional, and thus the study has a restricted socioeconomic base. However, with this restricted study population, the BLSA has served as a useful antidote to the many studies that have been based on such skewed samples as persons in nursing homes or old patients from hospitals mostly serving the indigent. It is apparent that only a select group would have volunteered for this kind of study. --- paper_title: Consistency of the Maximum Likelihood Estimator in the Presence of Infinitely Many Incidental Parameters paper_content: θ and αi. The parameter θ, upon which all the distributions depend, is called "structural"; the parameters {αi} are called "incidental". Throughout this paper we shall assume that the Xij are independently distributed when θ, α1, ..., αn are given, and shall consider the problem of consistently estimating θ (as n → ∞). The chance variables {Xij} and the parameters θ and {αi} may be vectors. However, for simplicity of exposition we shall throughout this paper, except in Example 2, assume that they are scalars. Obvious changes will suffice to treat the vector case. Very many interesting problems are subsumed under the above formulation. Among these is the following: --- paper_title: Nonparametric Maximum Likelihood Estimation of a Mixing Distribution paper_content: The nonparametric maximum likelihood estimate of a mixing distribution is shown to be self-consistent, a property which characterizes the nonparametric maximum likelihood estimate of a distribution function in incomplete data problems. Under various conditions the estimate is a step function, with a finite number of steps. Its computation is illustrated with a small example. --- paper_title: Flexible Modelling of the Covariance Matrix in a Linear Random Effects Model paper_content: A flexible approach is proposed for modelling the covariance matrix of a linear mixed model for longitudinal data. The method combines parametric modelling of the random effects part with flexible modelling of the serial correlation component. The approach is exemplified on weight gain data and on the evolution of height of children in their first year of life in the Jimma Infant Survival Study, an Ethiopian cohort study. The analyses show the usefulness of the approach. --- paper_title: A Linear Mixed-Effects Model with Heterogeneity in the Random-Effects Population paper_content: This article investigates the impact of the normality assumption for random effects on their estimates in the linear mixed-effects model. It shows that if the distribution of random effects is a finite mixture of normal distributions, then the random effects may be badly estimated if normality is assumed, and the current methods for inspecting the appropriateness of the model assumptions are not sound. Further, it is argued that a better way to detect the components of the mixture is to build this assumption into the model and then "compare" the fitted model with the Gaussian model. All of this is illustrated on two practical examples.
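For orientation, the references above all work within the linear mixed model; the notation below follows the standard Laird-Ware formulation and is the reviewer's summary, not text from the cited papers. For subject i,

\[
Y_i = X_i\beta + Z_i b_i + \varepsilon_i, \qquad
b_i \sim N(0, D), \qquad
\varepsilon_i \sim N(0, \Sigma_i), \qquad
b_i \perp \varepsilon_i ,
\]

so that marginally \(Y_i \sim N\!\left(X_i\beta,\; Z_i D Z_i^{\top} + \Sigma_i\right)\). The heterogeneity model of the last abstract relaxes the normality assumption on the random effects by drawing them from a finite mixture of normals; one common parameterization is

\[
b_i \sim \sum_{k=1}^{K} \pi_k\, N(\mu_k, D), \qquad
\sum_{k=1}^{K} \pi_k = 1, \qquad
\sum_{k=1}^{K} \pi_k \mu_k = 0 ,
\]

which keeps the marginal mean structure \(X_i\beta\) while allowing the random-effects distribution to be multimodal.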
--- paper_title: Semiparametric Estimation in the Rasch Model and Related Exponential Response Models, Including a Simple Latent Class Model for Item Analysis paper_content: The Rasch model for item analysis is an important member of the class of exponential response models in which the number of nuisance parameters increases with the number of subjects, leading to the failure of the usual likelihood methodology. Both conditional-likelihood methods and mixture-model techniques have been used to circumvent these problems. In this article, we show that these seemingly unrelated analyses are in fact closely linked to each other, despite dramatic structural differences between the classes of models implied by each approach. We show that the finite-mixture model for J dichotomous items having T latent classes gives the same estimates of item parameters as conditional likelihood on a set whose probability approaches one if T ≥ (J + 1)/2. Unconditional maximum likelihood estimators for the finite-mixture model can be viewed as Kiefer-Wolfowitz estimators for the random-effects version of the Rasch model. Latent-class versions of the model are especially attractive when T is... --- paper_title: A Smooth Nonparametric Estimate of a Mixing Distribution Using Mixtures of Gaussians paper_content: We propose a method of estimating mixing distributions using maximum likelihood over the class of arbitrary mixtures of Gaussians subject to the constraint that the component variances be greater than or equal to some minimum value h. This approach can lead to estimates of many shapes, with smoothness controlled by parameter h. We show that the resulting estimate will always be a finite mixture of Gaussians, each having variance h. The nonparametric maximum likelihood estimate can be viewed as a special case, with h = 0. The method can be extended to estimate multivariate mixing distributions. Examples and the results of a simulation study are presented. --- paper_title: The detection of residual serial correlation in linear mixed models paper_content: Diggle (1988) described how the empirical semi-variogram of ordinary least squares residuals can be used to suggest an appropriate serial correlation structure in stationary linear mixed models.
In this paper, this approach is extended to non-stationary models which include random effects other than intercepts, and will be applied to prostate cancer data, taken from the Baltimore Longitudinal Study of Aging. A simulation study demonstrates the effectiveness of this extended variogram for improving the covariance structure of the linear mixed model used to describe the prostate data. © 1998 John Wiley & Sons, Ltd. --- paper_title: Conditional Linear Mixed Models paper_content: The main advantage of longitudinal studies is that they can distinguish changes over time within individuals (longitudinal effects) from differences among subjects at the start of the study (cross-sectional effects). In observational studies, however, longitudinal changes need to be studied after correction for potential important cross-sectional differences between subjects. It will be shown that, in the context of linear mixed models, the estimation of longitudinal effects may be highly influenced by the assumptions about cross-sectional effects. Furthermore, aspects from conditional and mixture inference will be combined, yielding so-called conditional linear mixed models that allow estimation of longitudinal effects (average trends as well as subject-specific trends), independent of any cross-sectional assumptions. These models will be introduced and justified, and extensively illustrated in the analysis of longitudinal data from 680 participants in the Baltimore Longitudinal Study of Aging. --- paper_title: The Effect of Drop-Out on the Efficiency of Longitudinal Experiments paper_content: It is shown that drop-out often reduces the efficiency of longitudinal experiments considerably. In the framework of linear mixed models, a general, computationally simple method is provided, for designing longitudinal studies when drop-out is to be expected, such that there is little risk of large losses of efficiency due to the missing data. All the results are extensively illustrated using data from a randomized experiment with rats. --- paper_title: Small sample inference for fixed effects from restricted maximum likelihood. paper_content: Restricted maximum likelihood (REML) is now well established as a method for estimating the parameters of the general Gaussian linear model with a structured covariance matrix, in particular for mixed linear models. Conventionally, estimates of precision and inference for fixed effects are based on their asymptotic distribution, which is known to be inadequate for some small-sample problems. In this paper, we present a scaled Wald statistic, together with an F approximation to its sampling distribution, that is shown to perform well in a range of small sample settings. The statistic uses an adjusted estimator of the covariance matrix that has reduced small sample bias. This approach has the advantage that it reproduces both the statistics and F distributions in those settings where the latter is exact, namely for Hotelling T2 type statistics and for analysis of variance F-ratios. The performance of the modified statistics is assessed through simulation studies of four different REML analyses and the methods are illustrated using three examples. --- paper_title: Recovery of inter-block information when block sizes are unequal paper_content: SUMMARY A method is proposed for estimating intra-block and inter-block weights in the analysis of incomplete block designs with block sizes not necessarily equal. 
The method consists of maximizing the likelihood, not of all the data, but of a set of selected error contrasts. When block sizes are equal results are identical with those obtained by the method of Nelder (1968) for generally balanced designs. Although mainly concerned with incomplete block designs the paper also gives in outline an extension of the modified maximum likelihood procedure to designs with a more complicated block structure. In this paper we consider the estimation of weights to be used in the recovery of interblock information in incomplete block designs with possibly unequal block sizes. The problem can also be thought of as one of estimating constants and components of variance from data arranged in a general two-way classification when the effects of one classification are regarded as fixed and the effects of the second classification are regarded as random. Nelder (1968) described the efficient estimation of weights in generally balanced designs, in which the blocks are usually, although not always, of equal size. Lack of balance resulting from unequal block sizes is, however, common in some experimental work, for example in animal breeding experiments. The maximum likelihood procedure described by Hartley & Rao (1967) can be used but does not give the same estimates as Nelder's method in the balanced case. As will be shown, the two methods in effect use the same weighted sums of squares of residuals but assign different expectations. In the maximum likelihood approach, expectations are taken over a conditional distribution with the treatment effects fixed at their estimated values. In contrast Nelder uses unconditional expectations. The difference between the two methods is analogous to the well-known difference between two methods of estimating the variance o2 of a normal distribution, given a random sample of n values. Both methods use the same total sum of squares of deviations. But --- paper_title: Synthesis of Variance paper_content: The distribution of a linear combination of two statistics distributed as is Chi-square is studied. The degree of approximation involved in assuming a Chi-square distribution is illustrated for several representative cases. It is concluded that the approximation is sufficiently accurate to use in many practical applications. Illustrations are given of its use in extending the Chi-square, the Student “t” and the Fisher “z” tests to a wider range of problems. --- paper_title: Maximum Likelihood Approaches to Variance Component Estimation and to Related Problems paper_content: Abstract Recent developments promise to increase greatly the popularity of maximum likelihood (ml) as a technique for estimating variance components. Patterson and Thompson (1971) proposed a restricted maximum likelihood (reml) approach which takes into account the loss in degrees of freedom resulting from estimating fixed effects. Miller (1973) developed a satisfactory asymptotic theory for ml estimators of variance components. There are many iterative algorithms that can be considered for computing the ml or reml estimates. The computations on each iteration of these algorithms are those associated with computing estimates of fixed and random effects for given values of the variance components. --- paper_title: Likelihood Ratio Tests for Fixed Model Terms using Residual Maximum Likelihood paper_content: Likelihood ratio tests for fixed model terms are proposed for the analysis of linear mixed models when using residual maximum likelihood estimation. 
Bartlett‐type adjustments, using an approximate decomposition of the data, are developed for the test statistics. A simulation study is used to compare properties of the test statistics proposed, with or without adjustment, with a Wald test. A proposed test statistic constructed by dropping fixed terms from the full fixed model is shown to give a better approximation to the asymptotic χ2‐distribution than the Wald test for small data sets. Bartlett adjustment is shown to improve the χ2‐approximation for the proposed tests substantially. --- paper_title: Bayesian Data Analysis paper_content: FUNDAMENTALS OF BAYESIAN INFERENCE Probability and Inference Single-Parameter Models Introduction to Multiparameter Models Asymptotics and Connections to Non-Bayesian Approaches Hierarchical Models FUNDAMENTALS OF BAYESIAN DATA ANALYSIS Model Checking Evaluating, Comparing, and Expanding Models Modeling Accounting for Data Collection Decision Analysis ADVANCED COMPUTATION Introduction to Bayesian Computation Basics of Markov Chain Simulation Computationally Efficient Markov Chain Simulation Modal and Distributional Approximations REGRESSION MODELS Introduction to Regression Models Hierarchical Linear Models Generalized Linear Models Models for Robust Inference Models for Missing Data NONLINEAR AND NONPARAMETRIC MODELS Parametric Nonlinear Models Basic Function Models Gaussian Process Models Finite Mixture Models Dirichlet Process Models APPENDICES A: Standard Probability Distributions B: Outline of Proofs of Asymptotic Theorems C: Computation in R and Stan Bibliographic Notes and Exercises appear at the end of each chapter. --- paper_title: Bayesian Inference in Statistical Analysis paper_content: Nature of Bayesian Inference Standard Normal Theory Inference Problems Bayesian Assessment of Assumptions: Effect of Non-Normality on Inferences About a Population Mean with Generalizations Bayesian Assessment of Assumptions: Comparison of Variances Random Effect Models Analysis of Cross Classification Designs Inference About Means with Information from More than One Source: One-Way Classification and Block Designs Some Aspects of Multivariate Analysis Estimation of Common Regression Coefficients Transformation of Data Tables References Indexes. --- paper_title: Bayesian Data Analysis paper_content: FUNDAMENTALS OF BAYESIAN INFERENCE Probability and Inference Single-Parameter Models Introduction to Multiparameter Models Asymptotics and Connections to Non-Bayesian Approaches Hierarchical Models FUNDAMENTALS OF BAYESIAN DATA ANALYSIS Model Checking Evaluating, Comparing, and Expanding Models Modeling Accounting for Data Collection Decision Analysis ADVANCED COMPUTATION Introduction to Bayesian Computation Basics of Markov Chain Simulation Computationally Efficient Markov Chain Simulation Modal and Distributional Approximations REGRESSION MODELS Introduction to Regression Models Hierarchical Linear Models Generalized Linear Models Models for Robust Inference Models for Missing Data NONLINEAR AND NONPARAMETRIC MODELS Parametric Nonlinear Models Basic Function Models Gaussian Process Models Finite Mixture Models Dirichlet Process Models APPENDICES A: Standard Probability Distributions B: Outline of Proofs of Asymptotic Theorems C: Computation in R and Stan Bibliographic Notes and Exercises appear at the end of each chapter. --- paper_title: Empirical Bayes estimation of individual growth-curve parameters and their relationship to covariates. 
paper_content: The analysis of growth curves has long been important in biostatistics. Work has focused on two problems: the estimation of individual curves based on many data points, and the estimation of the mean growth curve for a group of individuals. This paper extends a recent approach that seeks to combine data from a group of individuals in order to improve the estimates of individual growth parameters. Growth is modeled as polynomial in time, and the group model is also linear, incorporating growth-related covariates into the model. The estimation used is empirical Bayes. The estimation formulas are illustrated with a set of data on rat growth, originally presented by Box (1950, Biometrics 6, 362-389). --- paper_title: A Linear Mixed-Effects Model with Heterogeneity in the Random-Effects Population paper_content: Abstract This article investigates the impact of the normality assumption for random effects on their estimates in the linear mixed-effects model. It shows that if the distribution of random effects is a finite mixture of normal distributions, then the random effects may be badly estimated if normality is assumed, and the current methods for inspecting the appropriateness of the model assumptions are not sound. Further, it is argued that a better way to detect the components of the mixture is to build this assumption in the model and then “compare” the fitted model with the Gaussian model. All of this is illustrated on two practical examples. --- paper_title: Mixed-effects regression models for studying the natural history of prostate disease paper_content: Although prostate cancer and benign prostatic hyperplasia are major health problems in U.S. men, little is known about the early stages of the natural history of prostate disease. A molecular biomarker called prostate specific antigen (PSA), together with a unique longitudinal bank of frozen serum, now allows a historic prospective study of changes in PSA levels for decades prior to the diagnosis of prostate disease. Linear mixed-effects regression models were used to test whether rates of change in PSA were different in men with and without prostate disease. In addition, since the prostate cancer cases developed their tumours at different (and unknown) times during their periods of follow-up, a piece-wise non-linear mixed-effects regression model was used to estimate the time when rapid increases in PSA were first observable beyond the background level of PSA change. These methods have a wide range of applications in biomedical research utilizing repeated measures data such as pharmacokinetic studies, crossover trials, growth and development studies, aging studies, and disease detection. --- paper_title: The effect of misspecifying the random-effects distribution in linear mixed models for longitudinal data paper_content: Maximum likelihood estimators for fixed effects and variance components in linear mixed models, obtained under the assumption of normally distributed random effects, are shown to be consistent and asymptotically normally distributed, even when the random-effects distribution is not normal. However, a sandwich-type correction to the inverse Fisher information matrix is then needed in order to get the correct asymptotic covariance matrix. Extensive simulations show that the so-obtained corrected standard errors are clearly superior to the naive uncorrected ones, especially for the parameters in the random-effects covariance matrix, even in moderate samples. 
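The empirical Bayes (shrinkage) estimates of subject-specific effects mentioned in the growth-curve abstract above have a closed form once the fixed effects and variance components have been estimated. The sketch below is illustrative only: the function name, the toy design matrices and the parameter values are assumptions, not quantities taken from the cited studies.

```python
import numpy as np

def empirical_bayes_effects(y_i, X_i, Z_i, beta, D, sigma2):
    """Empirical Bayes (BLUP) estimate of the random effects for one subject:
    b_i_hat = D Z_i' V_i^{-1} (y_i - X_i beta), with V_i = Z_i D Z_i' + sigma2 I."""
    V_i = Z_i @ D @ Z_i.T + sigma2 * np.eye(len(y_i))  # marginal covariance of y_i
    resid = y_i - X_i @ beta                           # marginal residual
    return D @ Z_i.T @ np.linalg.solve(V_i, resid)     # shrunken subject-level effect

# Toy usage: random intercept and slope for one subject measured at four occasions.
t = np.array([0.0, 1.0, 2.0, 3.0])
X_i = np.column_stack([np.ones(4), t])   # fixed-effects design: intercept + time
Z_i = X_i.copy()                         # same design for the random effects
y_i = np.array([2.1, 2.9, 4.2, 4.8])
b_hat = empirical_bayes_effects(y_i, X_i, Z_i,
                                beta=np.array([2.0, 1.0]),
                                D=np.diag([0.5, 0.1]),
                                sigma2=0.25)
print(b_hat)  # deviation of this subject's intercept and slope from the average
```

The factor D Z_i' V_i^{-1} is what produces the shrinkage towards the population average that these references exploit.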
--- paper_title: The analysis of incomplete data. paper_content: A non-woven mesh reflector for radio waves comprises two or more parallelly positioned layers of electrically conductive high modulus, high yield strength, inextensible fibers. The fibers in each layer extend parallel to each other and the fibers of one layer extend in a direction different from the direction of the fibers of the other layer. Examples of fibers include wires of beryllium, aluminum, stainless steel type 304, CHROMEL R, INVAR 36, and other alloys of the stainless steel INVAR type. --- paper_title: Missing Observations in Multivariate Statistics I. Review of the Literature paper_content: Abstract In this paper we review the literature on the problem of handling multivariate data with observations missing on some or all of the variables under study. We examine the ways that statisticians have devised to estimate means, variances, correlations and linear regression functions from such data and refer to specific computer programs for carrying out the estimation. We show how the estimation problems can be simplified if the missing data follows certain patterns. Finally, we outline the statistical properties of the various estimators. --- paper_title: Pattern-Mixture Models for Multivariate Incomplete Data paper_content: Consider a random sample on variables X1, …, Xv with some values of Xv missing. Selection models specify the distribution of X1 , …, XV over respondents and nonrespondents to Xv , and the conditional distribution that Xv is missing given X1 , …, Xv . In contrast, pattern-mixture models specify the conditional distribution of X 1, …, Xv given that XV is observed or missing respectively and the marginal distribution of the binary indicator for whether or not Xv is missing. For multivariate data with a general pattern of missing values, the literature has tended to adopt the selection-modeling approach (see for example Little and Rubin); here, pattern-mixture models are proposed for this more general problem. Pattern-mixture models are chronically underidentified; in particular for the case of univariate nonresponse mentioned above, there are no data on the distribution of Xv given X1 , …, XV–1 , in the stratum with Xv missing. Thus the models require restrictions or prior information to identify the paramet... --- paper_title: Modeling the drop-out mechanism in repeated-measures studies paper_content: Abstract Subjects often drop out of longitudinal studies prematurely, yielding unbalanced data with unequal numbers of measures for each subject. Modern software programs for handling unbalanced longitudinal data improve on methods that discard the incomplete cases by including all the data, but also yield biased inferences under plausible models for the drop-out process. This article discusses methods that simultaneously model the data and the drop-out process within a unified model-based framework. Models are classified into two broad classes—random-coefficient selection models and random-coefficient pattern-mixture models—depending on how the joint distribution of the data and drop-out mechanism is factored. Inference is likelihood-based, via maximum likelihood or Bayesian methods. A number of examples in the literature are placed in this framework, and possible extensions outlined. Data collection on the nature of the drop-out process is advocated to guide the choice of model. In cases where the drop-... 
--- paper_title: The Calculation of Posterior Distributions by Data Augmentation paper_content: Abstract The idea of data augmentation arises naturally in missing value problems, as exemplified by the standard ways of filling in missing cells in balanced two-way tables. Thus data augmentation refers to a scheme of augmenting the observed data so as to make it more easy to analyze. This device is used to great advantage by the EM algorithm (Dempster, Laird, and Rubin 1977) in solving maximum likelihood problems. In situations when the likelihood cannot be approximated closely by the normal likelihood, maximum likelihood estimates and the associated standard errors cannot be relied upon to make valid inferential statements. From the Bayesian point of view, one must now calculate the posterior distribution of parameters of interest. If data augmentation can be used in the calculation of the maximum likelihood estimate, then in the same cases one ought to be able to use it in the computation of the posterior distribution. It is the purpose of this article to explain how this can be done. The basic idea ... --- paper_title: Parametric models for incomplete continuous and categorical longitudinal data paper_content: This paper reviews models for incomplete continuous and categorical longitudinal data. In terms of Rubin's classification of missing value processes we are specifically concerned with the problem of nonrandom missingness. A distinction is drawn between the classes of selection and pattern-mixture models and, using several examples, these approaches are compared and contrasted. The central roles of identifiability and sensitivity are emphasized throughout. --- paper_title: Pattern-Mixture Models for Multivariate Incomplete Data paper_content: Consider a random sample on variables X1, …, Xv with some values of Xv missing. Selection models specify the distribution of X1 , …, XV over respondents and nonrespondents to Xv , and the conditional distribution that Xv is missing given X1 , …, Xv . In contrast, pattern-mixture models specify the conditional distribution of X 1, …, Xv given that XV is observed or missing respectively and the marginal distribution of the binary indicator for whether or not Xv is missing. For multivariate data with a general pattern of missing values, the literature has tended to adopt the selection-modeling approach (see for example Little and Rubin); here, pattern-mixture models are proposed for this more general problem. Pattern-mixture models are chronically underidentified; in particular for the case of univariate nonresponse mentioned above, there are no data on the distribution of Xv given X1 , …, XV–1 , in the stratum with Xv missing. Thus the models require restrictions or prior information to identify the paramet... --- paper_title: MIXTURE MODELS FOR THE JOINT DISTRIBUTION OF REPEATED MEASURES AND EVENT TIMES paper_content: Many long-term clinical trials collect both a vector of repeated measurements and an event time on each subject; often, the two outcomes are dependent. One example is the use of surrogate markers to predict disease onset or survival. Another is longitudinal trials which have outcome-related dropout. We describe a mixture model for the joint distribution which accommodates incomplete repeated measures and right-censored event times, and provide methods for full maximum likelihood estimation. 
The methods are illustrated through analysis of data from a clinical trial for a new schizophrenia therapy; in the trial, dropout time is closely related to outcome, and the dropout process differs between treatments. The parameter estimates from the model are used to make a treatment comparison after adjusting for the effects of dropout. An added benefit of the analysis is that it permits using the repeated measures to increase efficiency of estimates of the event time distribution. © 1997 by John Wiley & Sons, Ltd. --- paper_title: The Muscatine children’s obesity data reanalysed using pattern mixture models paper_content: A set of longitudinal binary, partially incomplete, data on obesity among children in the USA is reanalysed. The multivariate Bernoulli distribution is parameterized by the univariate marginal probabilities and dependence ratios of all orders, which together support maximum likelihood inference. The temporal association of obesity is strong and complex but stationary. We fit a saturated model for the distribution of response patterns and find that non-response is missing completely at random for boys but that the probability of obesity is consistently higher among girls who provided incomplete records than among girls who provided complete records. We discuss the statistical and substantive features of, respectively, pattern mixture and selection models for this data set. --- paper_title: Modeling the drop-out mechanism in repeated-measures studies paper_content: Abstract Subjects often drop out of longitudinal studies prematurely, yielding unbalanced data with unequal numbers of measures for each subject. Modern software programs for handling unbalanced longitudinal data improve on methods that discard the incomplete cases by including all the data, but also yield biased inferences under plausible models for the drop-out process. This article discusses methods that simultaneously model the data and the drop-out process within a unified model-based framework. Models are classified into two broad classes—random-coefficient selection models and random-coefficient pattern-mixture models—depending on how the joint distribution of the data and drop-out mechanism is factored. Inference is likelihood-based, via maximum likelihood or Bayesian methods. A number of examples in the literature are placed in this framework, and possible extensions outlined. Data collection on the nature of the drop-out process is advocated to guide the choice of model. In cases where the drop-... --- paper_title: Modelling progression of CD4-lymphocyte count and its relationship to survival time. paper_content: The purpose of this article is to model the progression of CD4-lymphocyte count and the relationship between different features of this progression and survival time. The complicating factors in this analysis are that the CD4-lymphocyte count is observed only at certain fixed times and with a high degree of measurement error, and that the length of the vector of observations is determined, in part, by the length of survival. If probability of death depends on the true, unobserved CD4-lymphocyte count, then the survival process must be modelled. Wu and Carroll (1988, Biometrics 44, 175-188) proposed a random effects model for two-sample longitudinal data in the presence of informative censoring, in which the individual effects included only slopes and intercepts. 
We propose methods for fitting a broad class of models of this type, in which both the repeated CD4-lymphocyte counts and the survival time are modelled using random effects. These methods permit us to estimate parameters describing the progression of CD4-lymphocyte count as well as the effect of differences in the CD4 trajectory on survival. We apply these methods to results of AIDS clinical trials. --- paper_title: Inference from Nonrandomly Missing Categorical Data: An Example from a Genetic Study on Turner's Syndrome paper_content: Abstract The process leading to partial classification with categorical data is sometimes nonrandom. A particular model accounting for incomplete data, which allows the probability of uncertain classification to depend on category identity, is utilized for an analysis of data obtained from a genetic study on Turner's syndrome. Estimates of population proportions are obtained from maximum likelihood. A method for handling nonrandomly missing data arrayed in contingency tables is discussed. Sensitivity analyses incorporating parameters related to the missing-data mechanism are recommended for estimation and testing. --- paper_title: Pattern-Mixture Models for Multivariate Incomplete Data paper_content: Consider a random sample on variables X1, …, Xv with some values of Xv missing. Selection models specify the distribution of X1 , …, XV over respondents and nonrespondents to Xv , and the conditional distribution that Xv is missing given X1 , …, Xv . In contrast, pattern-mixture models specify the conditional distribution of X 1, …, Xv given that XV is observed or missing respectively and the marginal distribution of the binary indicator for whether or not Xv is missing. For multivariate data with a general pattern of missing values, the literature has tended to adopt the selection-modeling approach (see for example Little and Rubin); here, pattern-mixture models are proposed for this more general problem. Pattern-mixture models are chronically underidentified; in particular for the case of univariate nonresponse mentioned above, there are no data on the distribution of Xv given X1 , …, XV–1 , in the stratum with Xv missing. Thus the models require restrictions or prior information to identify the paramet... --- paper_title: Estimation and comparison of changes in the presence of informative right censoring by modeling the censoring process paper_content: Abstract: In estimating and comparing the rates of change of a continuous variable between two groups, the unweighted averages of individual simple least squares estimates from each group are often used. Under a linear random effects model, when all individuals have completed observations at identical time points these statistics are maximum likelihood estimates for the expected rates of change. However, with censored or missing data, these estimates are no longer efficient when compared to generalized least squares estimates. When, in addition, the right censoring process is dependent upon the individual rates of change (i.e., informative right censoring), the generalized least squares estimates will be biased. A likelihood ratio test for informativeness of the censoring process and maximum likelihood estimates for the expected rates of change and the parameters of the right censoring process are developed under a linear random effects model with a probit model for the right censoring process.
In realistic situations, we illustrate that the bias in estimating group rate of change and the reduction of power in comparing group difference could be substantial when strong dependency of the right censoring process on individual rates of change is ignored. --- paper_title: Sensitivity Analysis for Nonrandom Dropout: A Local Influence Approach paper_content: Diggle and Kenward (1994, Applied Statistics 43, 49-93) proposed a selection model for continuous longitudinal data subject to nonrandom dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions on which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. This paper presents a formal and flexible approach to such a sensitivity assessment based on local influence (Cook, 1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The influence of perturbing a missing-at-random dropout model in the direction of nonrandom dropout is explored. The method is applied to data from a randomized experiment on the inhibition of testosterone production in rats. --- paper_title: Strategies to fit pattern-mixture models. paper_content: Whereas most models for incomplete longitudinal data are formulated within the selection model framework, pattern-mixture models have gained considerable interest in recent years (Little, 1993, 1994). In this paper, we outline several strategies to fit pattern-mixture models, including the so-called identifying restrictions strategy. Multiple imputation is used to apply this strategy to realistic settings, such as quality-of-life data from a longitudinal study on metastatic breast cancer patients. --- paper_title: Logistic regression with incompletely observed categorical covariates--investigating the sensitivity against violation of the missing at random assumption. paper_content: Missing values in the covariates are a widespread complication in the statistical inference of regression models. The maximum likelihood principle requires specification of the distribution of the covariates, at least in part. For categorical covariates, log-linear models can be used. Additionally, the missing at random assumption is necessary, which excludes a dependence of the occurrence of missing values on the unobserved covariate values. This assumption is often highly questionable. We present a framework to specify alternative missing value mechanisms such that maximum likelihood estimation of the regression parameters under a specified alternative is possible. This allows investigation of the sensitivity of a single estimate against violations of the missing at random assumption. The possible results of a sensitivity analysis are illustrated by artificial examples. The practical application is demonstrated by the analysis of two case-control studies. --- paper_title: Modeling the drop-out mechanism in repeated-measures studies paper_content: Abstract Subjects often drop out of longitudinal studies prematurely, yielding unbalanced data with unequal numbers of measures for each subject. Modern software programs for handling unbalanced longitudinal data improve on methods that discard the incomplete cases by including all the data, but also yield biased inferences under plausible models for the drop-out process.
This article discusses methods that simultaneously model the data and the drop-out process within a unified model-based framework. Models are classified into two broad classes—random-coefficient selection models and random-coefficient pattern-mixture models—depending on how the joint distribution of the data and drop-out mechanism is factored. Inference is likelihood-based, via maximum likelihood or Bayesian methods. A number of examples in the literature are placed in this framework, and possible extensions outlined. Data collection on the nature of the drop-out process is advocated to guide the choice of model. In cases where the drop-... --- paper_title: Regression Models for Longitudinal Binary Responses with Informative Drop‐Outs paper_content: This paper reviews both likelihood-based and non-likelihood (generalized estimating equations) regression models for longitudinal binary responses when there are drop-outs. Throughout, it is assumed that the regression parameters for the marginal expectations of the binary responses are of primary scientific interest. The association or time dependence between the responses is largely regarded as a nuisance characteristic of the data. The performance of the methods is compared, in terms of asymptotic bias, under mis-specification of the association between the responses and the missing data mechanism or drop-out process. --- paper_title: Parametric models for incomplete continuous and categorical longitudinal data paper_content: This paper reviews models for incomplete continuous and categorical longitudinal data. In terms of Rubin's classification of missing value processes we are specifically concerned with the problem of nonrandom missingness. A distinction is drawn between the classes of selection and pattern-mixture models and, using several examples, these approaches are compared and contrasted. The central roles of identifiability and sensitivity are emphasized throughout. --- paper_title: Local influence in linear mixed models. paper_content: The linear mixed model has become an important tool in modelling, partially due to the introduction of the SAS procedure MIXED, which made the method widely available to practising statisticians. Its growing popularity calls for data-analytic methods to check the underlying assumptions and robustness. Here, the problem of detecting influential subjects in the context of longitudinal data is considered, following the approach of local influence proposed by Cook (1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The diagnostics are illustrated on a prostate cancer data set. --- paper_title: Sensitivity Analysis for Nonrandom Dropout: A Local Influence Approach paper_content: Diggle and Kenward (1994, Applied Statistics 43, 49-93) proposed a selection model for continuous longitudinal data subject to nonrandom dropout. It has provoked a large debate about the role for such models. The original enthusiasm was followed by skepticism about the strong but untestable assumptions on which this type of model invariably rests. Since then, the view has emerged that these models should ideally be made part of a sensitivity analysis. This paper presents a formal and flexible approach to such a sensitivity assessment based on local influence (Cook, 1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The influence of perturbing a missing-at-random dropout model in the direction of nonrandom dropout is explored. 
The method is applied to data from a randomized experiment on the inhibition of testosterone production in rats. --- paper_title: Assessment of Local Influence paper_content: SUMMARY Statistical models usually involve some degree of approximation and therefore are nearly always wrong. Because of this inexactness, an assessment of the influence of minor perturbations of the model is important. We discuss a method for carrying out such an assessment. The method is not restricted to linear regression models, and it seems to provide a relatively simple, unified approach for handling a variety of problems. --- paper_title: Selection models for repeated measurements with non-random dropout: an illustration of sensitivity paper_content: The outcome-based selection model of Diggle and Kenward for repeated measurements with non-random dropout is applied to a very simple example concerning the occurrence of mastitis in dairy cows, in which the occurrence of mastitis can be modelled as a dropout process. It is shown through sensitivity analysis how the conclusions concerning the dropout mechanism depend crucially on untestable distributional assumptions. This example is exceptional in that from a simple plot of the data two outlying observations can be identified that are the source of the apparent evidence for non-random dropout and also provide an explanation of the behaviour of the sensitivity analysis. It is concluded that a plausible model for the data does not require the assumption of non-random dropout. --- paper_title: Local influence in linear mixed models. paper_content: The linear mixed model has become an important tool in modelling, partially due to the introduction of the SAS procedure MIXED, which made the method widely available to practising statisticians. Its growing popularity calls for data-analytic methods to check the underlying assumptions and robustness. Here, the problem of detecting influential subjects in the context of longitudinal data is considered, following the approach of local influence proposed by Cook (1986, Journal of the Royal Statistical Society, Series B 48, 133-169). The diagnostics are illustrated on a prostate cancer data set. --- paper_title: The Analysis of Designed Experiments and Longitudinal Data by Using Smoothing Splines paper_content: In designed experiments and in particular longitudinal studies, the aim may be to assess the effect of a quantitative variable such as time on treatment effects. Modelling treatment effects can be complex in the presence of other sources of variation. Three examples are presented to illustrate an approach to analysis in such cases. The first example is a longitudinal experiment on the growth of cows under a factorial treatment structure where serial correlation and variance heterogeneity complicate the analysis. The second example involves the calibration of optical density and the concentration of a protein DNase in the presence of sampling variation and variance heterogeneity. The final example is a multienvironment agricultural field experiment in which a yield-seeding rate relationship is required for several varieties of lupins. Spatial variation within environments, heterogeneity between environments and variation between varieties all need to be incorporated in the analysis. In this paper, the cubic smoothing spline is used in conjunction with fixed and random effects, random coefficients and variance modelling to provide simultaneous modelling of trends and covariance structure. 
The key result that allows coherent and flexible empirical model building in complex situations is the linear mixed model representation of the cubic smoothing spline. An extension is proposed in which trend is partitioned into smooth and nonsmooth components. Estimation and inference, the analysis of the three examples and a discussion of extensions and unresolved issues are also presented. --- paper_title: A Linear Mixed-Effects Model with Heterogeneity in the Random-Effects Population paper_content: Abstract This article investigates the impact of the normality assumption for random effects on their estimates in the linear mixed-effects model. It shows that if the distribution of random effects is a finite mixture of normal distributions, then the random effects may be badly estimated if normality is assumed, and the current methods for inspecting the appropriateness of the model assumptions are not sound. Further, it is argued that a better way to detect the components of the mixture is to build this assumption in the model and then “compare” the fitted model with the Gaussian model. All of this is illustrated on two practical examples. --- paper_title: Assessment of Local Influence paper_content: SUMMARY Statistical models usually involve some degree of approximation and therefore are nearly always wrong. Because of this inexactness, an assessment of the influence of minor perturbations of the model is important. We discuss a method for carrying out such an assessment. The method is not restricted to linear regression models, and it seems to provide a relatively simple, unified approach for handling a variety of problems. --- paper_title: A Smooth Nonparametric Estimate of a Mixing Distribution Using Mixtures of Gaussians paper_content: Abstract We propose a method of estimating mixing distributions using maximum likelihood over the class of arbitrary mixtures of Gaussians subject to the constraint that the component variances be greater than or equal to some minimum value h. This approach can lead to estimates of many shapes, with smoothness controlled by parameter h. We show that the resulting estimate will always be a finite mixture of Gaussians, each having variance h. The nonparametric maximum likelihood estimate can be viewed as a special case, with h = 0. The method can be extended to estimate multivariate mixing distributions. Examples and the results of a simulation study are presented. --- paper_title: The detection of residual serial correlation in linear mixed models paper_content: Diggle (1988) described how the empirical semi-variogram of ordinary least squares residuals can be used to suggest an appropriate serial correlation structure in stationary linear mixed models. In this paper, this approach is extended to non-stationary models which include random effects other than intercepts, and will be applied to prostate cancer data, taken from the Baltimore Longitudinal Study of Aging. A simulation study demonstrates the effectiveness of this extended variogram for improving the covariance structure of the linear mixed model used to describe the prostate data. © 1998 John Wiley & Sons, Ltd. --- paper_title: Conditional Linear Mixed Models paper_content: The main advantage of longitudinal studies is that they can distinguish changes over time within individuals (longitudinal effects) from differences among subjects at the start of the study (cross-sectional effects). 
In observational studies, however, longitudinal changes need to be studied after correction for potential important cross-sectional differences between subjects. It will be shown that, in the context of linear mixed models, the estimation of longitudinal effects may be highly influenced by the assumptions about cross-sectional effects. Furthermore, aspects from conditional and mixture inference will be combined, yielding so-called conditional linear mixed models that allow estimation of longitudinal effects (average trends as well as subject-specific trends), independent of any cross-sectional assumptions. These models will be introduced and justified, and extensively illustrated in the analysis of longitudinal data from 680 participants in the Baltimore Longitudinal Study of Aging. ---
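As a practical illustration of the class of models these references describe, the following is a minimal, hypothetical sketch of fitting a random-intercept-and-slope model with the statsmodels MixedLM implementation. The file name, column names and grouping variable are placeholders and are not taken from the Baltimore Longitudinal Study of Aging data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per visit.
# Assumed columns: id, log_psa, age_at_visit, group.
df = pd.read_csv("psa_long.csv")

# Random intercept and random slope in age for each subject, REML estimation.
model = smf.mixedlm("log_psa ~ age_at_visit * group", df,
                    groups=df["id"], re_formula="~age_at_visit")
result = model.fit(reml=True)

print(result.summary())                         # fixed effects and variance components
print(list(result.random_effects.values())[0])  # empirical Bayes effects, first subject
```

Conditional or pattern-mixture extensions of the kind discussed in these references are not provided by this call; the sketch covers only the basic marginal model.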
<format> Title: A Review on Linear Mixed Models for Longitudinal Data, Possibly Subject to Dropout Section 1: Introduction Description 1: Introduce the use of longitudinal studies in medical science and present examples; explain the motivation behind the review and outline the paper's structure. Section 2: The Linear Mixed Model Description 2: Provide a detailed explanation of the linear mixed model formulation, including fixed effects, random effects, and model assumptions. Section 3: Inference for the Marginal Model Description 3: Discuss methods for deriving inferences for the marginal model, including maximum likelihood estimation and restricted maximum likelihood estimation (REML). Section 4: Inference for the Random Effects Description 4: Explore the estimation of random effects and their interpretation, including empirical Bayes estimates and the concept of shrinkage. Section 5: The Missing Data Problem Description 5: Address general issues related to missing data in longitudinal studies, classify different types of missing data, and introduce conceptual approaches for handling them. Section 6: Selection Models Description 6: Review selection models for handling dropout in longitudinal data, focusing on methods such as the Tobit model and the Diggle and Kenward selection model. Section 7: Pattern-Mixture Models Description 7: Explain pattern-mixture models as an alternative framework to handle dropout; illustrate with applications to practical data sets. Section 8: Sensitivity Analysis Description 8: Highlight the importance of sensitivity analysis when dealing with incomplete longitudinal data; outline different strategies and tools for conducting such analyses. Section 9: Local Influence Description 9: Discuss the concept of local influence and how small model perturbations can impact parameter estimates; introduce methods for detecting influential data points. Section 10: Computational Issues Description 10: Discuss computational challenges in fitting linear mixed models, including starting values, convergence problems, and the need for advanced numerical procedures. Section 11: Conclusion Description 11: Summarize the key points discussed in the paper; emphasize the practical importance and current research directions related to linear mixed models for longitudinal data. </format>
Semantic web service discovery approaches: overview and limitations
7
--- paper_title: Efficient and adaptive discovery techniques of Web Services handling large data sets paper_content: Attempts have been made concerning the search and finding of a Web Service based on keywords and descriptions. However, no work has been done concerning the efficient selection of the appropriate Web Service instance in terms of quality and performance factors at the moment of the Web Service consumption attempt. Such factors may include execution time and response time. The proposed approach adaptively selects the most efficient WS among possible different alternatives with real-time, optimized and countable factors-parameters. Implementation issues and case study experiments are presented along with the corresponding results. Additionally, an optimal selection algorithm for series of Web Services requests is proposed. Finally, conclusions and future steps are discussed. --- paper_title: Coupled Signature and Specification Matching for Automatic Service Binding paper_content: Matching of semantic service descriptions is the key to automatic service discovery and binding. Existing approaches split the matchmaking process in two step: signature and specification matching. However, this leads to the problem that offers are not found although they are functionally suitable if their signature is not fitting the requested one. Therefore, in this paper, we propose a matching algorithm that does not use a separated and explicit signature matching step, but derives the necessary messages from the comparison of pre- and postconditions. As a result, the algorithm not only finds all functionally suitable services even if their signatures do not match, but also is able to derive the messages needed for an automatic invocation. --- paper_title: Imprecise RDQL: towards generic retrieval in ontologies using similarity joins paper_content: Traditional semantic web query languages support a logic-based access to the semantic web. They offer a retrieval (or reasoning) of data based on facts. On the traditional web and in databases, however, exact querying often provides an incomplete answer as queries are overspecified or the mix of multiple ontologies/modelling differences requires "interpretational flexibility." Therefore, similarity measures or ranking approaches are frequently used to extend the reach of a query. This paper extends this idea to the semantic web. It introduces iRDQL---a semantic web query language with support for similarity joins. It is an extension of traditional RDQL (RDF Data Query Language) that enables the users to query for similar resources ranking the results using a similarity measure. We show how iRDQL allows to extend the reach of a query by finding additional results. We quantitatively evaluated four similarity measures for their usefulness in iRDQL in the context of an OWL-S semantic web service retrieval test collection and compared the results to a specialized OWL-S matchmaker. Initial results of iRDQL indicate that it is indeed useful for extending the reach of queries and that it is able to improve recall without overly sacrificing precision. We also found that our generic iRDQL approach was only slightly outperformed by the specialized algorithm. --- paper_title: Semantic Service Discovery with DIANE Service Descriptions paper_content: In this paper, we introduce the DIANE Service Description (DSD) and show how it has been used to solve the discovery problems stated in the scenarios of the SWSChallenge 1 . 
We provide a consolidated description of our approach that we presented at the first four SWS-Challenge workshops and briefly discuss its strengths and drawbacks. --- paper_title: CASCOM: Intelligent Service Coordination in the Semantic Web paper_content: This book presents the design, implementation and validation of a value-added supportive infrastructure for Semantic Web based business application services across mobile and fixed networks, applied to an emergency healthcare application. This infrastructure has been realized by the CASCOM European research project. For end users, the CASCOM framework provides seamless access to semantic Web services anytime, anywhere, by using any mobile computing device. For service providers, CASCOM offers an innovative development platform for intelligent and mobile business application services in the Semantic Web. The essential approach of CASCOM is the innovative inter-disciplinary combination of intelligent agent, Semantic Web, peer-to-peer, and mobile computing technology. Conventional peer-to-peer computing environments are extended with components for mobile and wireless communication. Semantic Web services are provided by peer software agents, which exploit the coordination infrastructure to efficiently operate in highly dynamic environments. The generic coordination support infrastructure includes efficient communication means, support for context-aware adaptation techniques, as well as flexible, resource-efficient service discovery, execution, and composition planning. The book has three main parts. First, the state-of-the-art is reviewed in related research fields. Then, a full proof-of-concept design and implementation of the generic infrastructure is presented. Finally, quantitative and qualitative analysis is presented on the basis of the field trials of the emergency application. --- paper_title: What is needed for semantic service descriptions? A proposal for suitable language constructs paper_content: The big promise of service-oriented computing is the ability to form agile networks. Agile networks are networks of loosely coupled participants that cooperate by dynamically discovering and invoking each other's services at run-time. The major prerequisite for this promise to be fulfilled is an appropriate semantic service description. In this paper, we identify requirements towards such a service description language and show that neither of the two main current approaches, OWL-S and WSMO, is able to fully meet these requirements. We then proceed to suggest additional language constructs and a prototypical language, the DIANE Service Description (DSD), which implements these constructs. We explain how service offers and requests can be described and matched using DSD. --- paper_title: Large-Scale Information Retrieval with Latent Semantic Indexing paper_content: Abstract As the amount of electronic information increases, traditional lexical (or Boolean) information retrieval techniques will become less useful. Large, heterogeneous collections will be difficult to search since the sheer volume of unranked documents returned in response to a query will overwhelm the user. Vector-space approaches to information retrieval, on the other hand, allow the user to search for concepts rather than specific words, and rank the results of the search according to their relative similarity to the query.
One vector-space approach, Latent Semantic Indexing (LSI), has achieved up to 30% better retrieval performance than lexical searching techniques by employing a reduced-rank model of the term-document space. However, the original implementation of LSI lacked the execution efficiency required to make LSI useful for large data sets. A new implementation of LSI, LSI++, seeks to make LSI efficient, extensible, portable, and maintainable. The LSI++ Application Programming Interface (API) allows applications to immediately use LSI without knowing the implementation details of the underlying system. LSI++ supports both serial and distributed searching of large data sets, providing the same programming interface regardless of the implementation actually executing. In addition, a World Wide Web interface was created to allow simple, intuitive searching of document collections using LSI++. Timing results indicate that the serial implementation of LSI++ searches up to six times faster than the original implementation of LSI, while the parallel implementation searches nearly 180 times faster on large document collections. --- paper_title: DIANE: A Matchmaking-Centered Framework for Automated Service Discovery, Composition, Binding, and Invocation on the Web paper_content: Service-oriented computing will allow for the automatic discovery, composition, binding, and invocation of Web services. The single most important component of this goal is appropriate matchmaking. This paper presents a service-description language and its associated matchmaking algorithms. Together they precisely capture requester preferences through fuzzy sets, express and use instance information for matchmaking, and deal efficiently with multiple effects. The approach described here has been extensively evaluated both in experiments and in the 2006 Semantic Web Services Challenge. --- paper_title: Jensen-Shannon divergence and Hilbert space embedding paper_content: This paper describes the Jensen-Shannon divergence (JSD) and Hilbert space embedding. With natural definitions making these considerations precise, one finds that the general Jensen-Shannon divergence related to the mixture is the minimum redundancy, which can be achieved by the observer. The set of distributions with the metric /spl radic/JSD can even be embedded isometrically into Hilbert space and the embedding can be identified. --- paper_title: TOP-K cosine similarity interesting pairs search paper_content: Recent years have witnessed an increased interest in computing cosine similarities between documents (or commodities). Most previous studies require the specification of a minimum similarity threshold to perform cosine similarity search. However, it is usually difficult for users to provide an appropriate threshold in practice. Instead, in this paper, we propose to search top-K strongly related pairs of objects as measured by the cosine similarity. Specifically, we first define the cosine similarity measure from the association analysis point of view and identify the monotone property of an upper bound of the cosine measure, then exploit a diagonal traversal strategy for developing the TOP-DATA and TOP-DATA-R algorithms. Finally, experimental results demonstrate the computational efficiencies of above algorithms. --- paper_title: Agent approach for service discovery and utilization paper_content: There is an extensive set of published and usable services on the Internet. 
Human-based approaches to discovering and utilizing these services are not only time consuming, but also require continuous user interaction. This paper describes an agent approach for service discovery and utilization (AASDU), which focuses on using lightweight autonomous agents built into a multi-agent referral community, and Web service standards (namely UDDI, SOAP, WSDL and XML). The AASDU approach proposes to use agents that interact with end users by accepting their queries to discover services, and efficiently manage service invocation. AASDU uses intrinsic multi-agent properties to allow agents to communicate and cooperate with one another. Each agent conforms to a communication protocol that allows it to send and receive messages from another agent, without needing to know the address of the receiving agent. There is a need for effective and efficient communication among components in a multi-disciplinary, cross-organizational architecture. This has resulted in a proliferation of communication building blocks, or middleware, for distributed scientific computing. The most recent of these are network-based services, referred to as Web services. This paper also discusses the interoperability that can be achieved between software components through the use of Web service standards and protocols in the context of AASDU. --- paper_title: Two-phase Web Service Discovery based on Rich Functional Descriptions paper_content: Discovery is a central reasoning task in service-oriented architectures, concerned with detecting Web services that are usable for solving a given request. This paper presents two extensions in continuation of previous works towards goal-based Web service discovery with sophisticated semantic matchmaking. At first, we distinguish goal templates as generic objective descriptions and goal instances that denote concrete requests as an instantiation of a goal template. Secondly, we formally describe requested and provided functionalities on the level of state transitions that denote executions of Web services, respectively solutions for goals. Upon this, we specify a two-phase discovery procedure along with semantic matchmaking techniques that allow one to accurately determine the usability of a Web service. The techniques are defined in the Abstract State Space model that supports several languages for describing Web services. --- paper_title: CASCOM: Intelligent Service Coordination in the Semantic Web paper_content: This book presents the design, implementation and validation of a value-added supportive infrastructure for Semantic Web based business application services across mobile and fixed networks, applied to an emergency healthcare application. This infrastructure has been realized by the CASCOM European research project. For end users, the CASCOM framework provides seamless access to semantic Web services anytime, anywhere, by using any mobile computing device. For service providers, CASCOM offers an innovative development platform for intelligent and mobile business application services in the Semantic Web. The essential approach of CASCOM is the innovative inter-disciplinary combination of intelligent agent, Semantic Web, peer-to-peer, and mobile computing technology. Conventional peer-to-peer computing environments are extended with components for mobile and wireless communication. Semantic Web services are provided by peer software agents, which exploit the coordination infrastructure to efficiently operate in highly dynamic environments.
The generic coordination support infrastructure includes efficient communication means, support for context-aware adaptation techniques, as well as flexible, resource-efficient service discovery, execution, and composition planning. The book has three main parts. First, the state-of-the-art is reviewed in related research fields. Then, a full proof-of-concept design and implementation of the generic infrastructure is presented. Finally, quantitative and qualitative analysis is presented on the basis of the field trials of the emergency application. --- paper_title: A System Architecture for Context-Aware Service Discovery paper_content: Recent technological advances have enabled both the consumption and provision of mobile services (m-services) by small, portable, handheld devices. However, mobile devices still have restricted capabilities with respect to processing, storage space, energy consumption, stable connectivity, bandwidth availability. In order to address these shortcomings, a potential solution is context-awareness (by context we refer to the implicit information related both to the requesting user and service provider that can affect the usefulness of the returned results). Context plays the role of a filtering mechanism, allowing only transmission of relevant data and services back to the device, thus saving bandwidth and reducing processing costs. In this paper, we present an architecture for context-aware service discovery. We describe in detail the system implementation and we present the system evaluation as a tradeoff between a) the increase of the quality of service discovery when context-awareness is taken into account and b) the extra cost/burden imposed by context management. --- paper_title: Evaluating Semantic Web Service Matchmaking Effectiveness Based on Graded Relevance paper_content: Semantic web services (SWS) promise to take service oriented computing to a new level by allowing to semi-automate time-consuming programming tasks. At the core of SWS are solutions to the problem of SWS matchmaking, i.e., the problem of comparing semantic goal descriptions with semantic offer descriptions to determine services able to fulfill a given request. Approaches to this problem have so far been evaluated based on binary relevance despite the fact that virtually all SWS matchmakers support more fine-grained levels of match. In this paper, a solution to this discrepancy is presented. A graded relevance scale for SWS matchmaking is proposed as are measures to evaluate SWS matchmakers based on such graded relevance scales. The feasibility of the approach is shown by means of a preliminary evaluation of two hybrid OWL-S matchmakers based on the proposed measures. --- paper_title: Semantic Matching of Web Services Capabilities paper_content: The Web is moving from being a collection of pages toward a collection of services that interoperate through the Internet. The first step toward this interoperation is the location of other services that can help toward the solution of a problem. In this paper we claim that location of web services should be based on the semantic match between a declarative description of the service being sought, and a description of the service being offered.
Furthermore, we claim that this match is outside the representation capabilities of registries such as UDDI and languages such as WSDL. We propose a solution based on DAML-S, a DAML-based language for service description, and we show how service capabilities are presented in the Profile section of a DAML-S description and how a semantic match between advertisements and requests is performed. --- paper_title: Automatic location of services paper_content: The automatic location of services that fulfill a given need is a key step towards dynamic and scalable integration. In this paper we present a model for the automatic location of services that considers the static and dynamic aspects of service descriptions and identifies what notions and techniques are useful for the matching of both. Our model presents three important features: ease of use for the requester, efficient pre-filtering of relevant services, and accurate contracting of services that fulfill a given requester goal. We further elaborate previous work and results on Web service discovery by analyzing what steps and what kinds of descriptions are necessary for efficient and usable automatic service location. Furthermore, we analyze intuitive and formal notions of match that are of interest for locating services that fulfill a given goal. Although having a formal underpinning, the proposed model does not impose any restrictions on how to implement it for specific applications, but proposes some useful formalisms for providing such implementations. --- paper_title: Semantic Web Service Discovery in the OWL-S IDE paper_content: The increasing availability of web services necessitates an efficient discovery and execution framework. The use of xml at various levels of web services standards poses challenges to the above process. OWL-S is a service ontology and language, whose semantics are based on OWL. The semantics provided by OWL support greater automation of service selection, invocation, translation of message content between heterogeneous services, and service composition. The development and consumption of an OWL-S based web service is time consuming and error prone. OWL-S IDE assists developers in the semantic web service development, deployment and consumption processes. In order to achieve this the OWL-S IDE uses and extends existing web service tools. In this paper we will look in detail at the support for discovery for semantic web services. We also present the matching schemes, the implementation and the results of performance evaluation. --- paper_title: Towards P2P-Based Semantic Web Service Discovery with QoS Support paper_content: The growing number of web services advocates distributed discovery infrastructures which are semantics-enabled and support quality of service (QoS). In this paper, we introduce a novel approach for semantic discovery of web services in P2P-based registries taking into account QoS characteristics. We distribute (semantic) service advertisements among available registries such that it is possible to quickly identify the repositories containing the best probable matching services. Additionally, we represent the information relevant for the discovery process using Bloom filters and pre-computed matching information such that search efforts are minimized when querying for services with a certain functional/QoS profile. Query results can be ranked and users can provide feedbacks on the actual QoS provided by a service.
To evaluate the credibility of these user reports when predicting service quality, we include a robust trust and reputation management mechanism. --- paper_title: Making the Difference: A Subtraction Operation for Description Logics paper_content: We define a new operation in description logics, the difference operation or subtraction operation. This operation allows to remove from a description as much as possible of the information contained in another description. We define the operation independently of a specific description logic. Then we consider its implementation in several specific logics. Finally we describe practical applications of the operation. --- paper_title: Coupled Signature and Specification Matching for Automatic Service Binding paper_content: Matching of semantic service descriptions is the key to automatic service discovery and binding. Existing approaches split the matchmaking process in two steps: signature and specification matching. However, this leads to the problem that offers are not found although they are functionally suitable if their signature is not fitting the requested one. Therefore, in this paper, we propose a matching algorithm that does not use a separated and explicit signature matching step, but derives the necessary messages from the comparison of pre- and postconditions. As a result, the algorithm not only finds all functionally suitable services even if their signatures do not match, but also is able to derive the messages needed for an automatic invocation. --- paper_title: The Field Matching Problem: Algorithms and Applications paper_content: To combine information from heterogeneous sources, equivalent data in the multiple sources must be identified. This task is the field matching problem. Specifically, the task is to determine whether or not two syntactic values are alternative designations of the same semantic entity. For example the addresses Dept. of Comput. Sci. and Eng., University of California, San Diego, 9500 Gilman Dr. Dept. 0114, La Jolla. CA 92093 and UCSD, Computer Science and Engineering Department, CA 92093-0114 do designate the same department. This paper describes three field matching algorithms, and evaluates their performance on real-world datasets. One proposed method is the well-known Smith-Waterman algorithm for comparing DNA and protein sequences. Several applications of field matching in knowledge discovery are described briefly, including WEBFIND, which is a new software tool that discovers scientific papers published on the worldwide web. WEBFIND uses external information sources to guide its search for authors and papers. Like many other worldwide web tools, WEBFIND needs to solve the field matching problem in order to navigate between information sources. --- paper_title: WSMO-MX: A Logic Programming Based Hybrid Service Matchmaker paper_content: In this paper, we present an approach to hybrid semantic web service matching based on both logic programming, and syntactic similarity measurement. The implemented matchmaker, called WSMO-MX, applies different matching filters to retrieve WSMO-oriented service descriptions that are semantically relevant to a given query with respect to seven degrees of hybrid matching. These degrees are recursively computed by aggregated valuations of ontology based type matching, logical constraint and relation matching, and syntactic similarity as well.
--- paper_title: Semantic Process Retrieval with iSPARQL paper_content: The vision of semantic business processes is to enable the integration and inter-operability of business processes across organizational boundaries. Since different organizations model their processes differently, the discovery and retrieval of similar semantic business processes is necessary in order to foster inter-organizational collaborations. This paper presents our approach of using iSPARQL --- our imprecise query engine based on SPARQL --- to query the OWL MIT Process Handbook --- a large collection of over 5000 semantic business processes. We particularly show how easy it is to use iSPARQL to perform the presented process retrieval task. Furthermore, since choosing the best performing similarity strategy is a non-trivial, data-, and context-dependent task, we evaluate the performance of three simple and two human-engineered similarity strategies. In addition, we conduct machine learning experiments to learn similarity measures showing that complementary information contained in the different notions of similarity strategies provide a very high retrieval accuracy. Our preliminary results indicate that iSPARQL is indeed useful for extending the reach of queries and that it, therefore, is an enabler for inter- and intra-organizational collaborations. --- paper_title: Semantic Web Service Selection with SAWSDL-MX paper_content: In this paper, we present an approach to hybrid semantic Web service selection of semantic services in SAWSDL based on logic-based matching as well as text retrieval strategies. We discuss the principles of semantic Web service description in SAWSDL and selected problems for service matching implied by its specification. Based on the result of this discussion, we present different variants of hybrid semantic selection of SAWSDL services implemented by our matchmaker called SAWSDL-MX together with preliminary results of its performance in terms of recall/precision and average query response time. For experimental evaluation we created a first version of a SAWSDL service retrieval test collection called SAWSDL-TC. --- paper_title: Interleaving Execution and Planning for Nondeterministic, Partially Observable Domains paper_content: Methods that interleave planning and execution are a practical solution to deal with complex planning problems in non-deterministic domains under partial observability. However, most of the existing approaches do not tackle in a principled way the important issue of termination of the planning-execution loop, or only do so considering specific assumptions over the domains. ::: ::: In this paper, we tackle the problem of interleaving planning and execution relying on a general framework, which is able to deal with nondeterministic, partially observable planning domains. We propose a new, general planning algorithm that guarantees the termination of the interleaving of planning and execution: either the goal is achieved, or the system detects that there is no longer a guarantee to progress toward it. ::: ::: Our experimental analysis shows that our algorithm can efficiently solve planning problems that cannot be tackled with a state of the art off-line planner for nondeterministic domains under partial observability, MBP. Moreover, we show that our algorithm can efficiently detect situations where progress toward the goal can no longer be guaranteed.
--- paper_title: Automatic location of services paper_content: The automatic location of services that fulfill a given need is a key step towards dynamic and scalable integration. In this paper we present a model for the automatic location of services that considers the static and dynamic aspects of service descriptions and identifies what notions and techniques are useful for the matching of both. Our model presents three important features: ease of use for the requester, efficient pre-filtering of relevant services, and accurate contracting of services that fulfill a given requester goal. We further elaborate previous work and results on Web service discovery by analyzing what steps and what kinds of descriptions are necessary for efficient and usable automatic service location. Furthermore, we analyze intuitive and formal notions of match that are of interest for locating services that fulfill a given goal. Although having a formal underpinning, the proposed model does not impose any restrictions on how to implement it for specific applications, but proposes some useful formalisms for providing such implementations. --- paper_title: CASCOM: Intelligent Service Coordination in the Semantic Web paper_content: This book presents the design, implementation and validation of a value-added supportive infrastructure for Semantic Web based business application services across mobile and fixed networks, applied to an emergency healthcare application. This infrastructure has been realized by the CASCOM European research project. For end users, the CASCOM framework provides seamless access to semantic Web services anytime, anywhere, by using any mobile computing device. For service providers, CASCOM offers an innovative development platform for intelligent and mobile business application services in the Semantic Web. The essential approach of CASCOM is the innovative inter-disciplinary combination of intelligent agent, Semantic Web, peer-to-peer, and mobile computing technology. Conventional peer-to-peer computing environments are extended with components for mobile and wireless communication. Semantic Web services are provided by peer software agents, which exploit the coordination infrastructure to efficiently operate in highly dynamic environments. The generic coordination support infrastructure includes efficient communication means, support for context-aware adaptation techniques, as well as flexible, resource-efficient service discovery, execution, and composition planning. The book has three main parts. First, the state-of-the-art is reviewed in related research fields. Then, a full proof-of-concept design and implementation of the generic infrastructure is presented. Finally, quantitative and qualitative analysis is presented on the basis of the field trials of the emergency application. --- paper_title: Semantic-Driven Matchmaking of Web Services Using Case-Based Reasoning paper_content: With the rapid proliferation of Web services as the medium of choice to securely publish application services beyond the firewall, the importance of accurate, yet flexible matchmaking of similar services gains importance both for the human user and for dynamic composition engines. In this paper, we present a novel approach that utilizes the case based reasoning methodology for modelling dynamic Web service discovery and matchmaking. Our framework considers Web services execution experiences in the decision making process and is highly adaptable to the service requester constraints.
The framework also utilises OWL semantic descriptions extensively for implementing both the components of the CBR engine and the matchmaking profile of the Web services. --- paper_title: Review of Web services description approaches paper_content: The WS (Web services) description is an important step in the services consumption's cycle. Semantic WS extends the capabilities of a WS by associating semantic concepts in order to enable better search, discovery, selection, composition and integration. Several WS description approaches have been proposed to present a detailed description exceeding the limitations of the syntactic standard WSDL. Our analysis of these works leads us to group them into two classes: Annotations Based Description Approaches and Semantic Language Based Approaches. In this paper, we provide a comparative evaluation of these approaches based on a set of criteria that can be qualified as performance indicators. The identified comparison criteria are the following: ontology dependence, service description adaptation, expressiveness and the capacity of the description. --- paper_title: Semantic-Driven Matchmaking of Web Services Using Case-Based Reasoning paper_content: With the rapid proliferation of Web services as the medium of choice to securely publish application services beyond the firewall, the importance of accurate, yet flexible matchmaking of similar services gains importance both for the human user and for dynamic composition engines . In this paper, we present a novel approach that utilizes the case based reasoning methodology for modelling dynamic Web service discovery and matchmaking. Our framework considers Web services execution experiences in the decision making process and is highly adaptable to the service requester constraints. The framework also utilises OWL semantic descriptions extensively for implementing both the components of the CBR engine and the matchmaking profile of the Web services. ---
Title: Semantic Web Service Discovery Approaches: Overview and Limitations Section 1: Introduction Description 1: This section provides an introduction to Web services, their importance in web engineering, and the motivation for Web service discovery. It also outlines the structure of the paper. Section 2: Background and Motivation Description 2: This section discusses the fundamental concepts of Web service discovery, its challenges, and the motivation behind developing new discovery approaches. Section 3: Survey and Classification Description 3: This section presents a detailed classification of existing Web service discovery approaches into algebraic, deductive, and hybrid categories, and describes some representative works for each category. Section 4: Comparative Study of WS Discovery Approaches Description 4: This section provides a comparative evaluation of the different Web service discovery approaches based on various criteria such as matching type, matching objects, and alignment with standards. Section 5: Reuse of Experience and CBR-based Approaches for WS Discovery Description 5: This section explores the concept of reuse of experience, specifically focusing on Case-Based Reasoning (CBR) approaches for improving Web service discovery. Section 6: Comparative Study of CBR-based WS Discovery Approaches Description 6: This section presents a comparative analysis of CBR-based Web service discovery approaches, evaluating their characteristics based on specific criteria relevant to CBR. Section 7: Synthesis and Conclusion Description 7: This section provides a synthesis of the survey and comparative studies, summarizing the advantages and limitations of existing approaches, and concluding with recommendations for future research.
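Several of the matchmakers referenced above (for instance, the Paolucci et al. work on semantic matching of Web service capabilities, and hybrid systems such as WSMO-MX and SAWSDL-MX) rank candidate services by a graded degree of match between requested and advertised concepts, typically some ordering of exact, plug-in, subsumes, and fail. The minimal Python sketch below illustrates one common reading of such a subsumption-based ranking over a toy concept hierarchy; the ontology, service names, scoring policy, and helper functions are all hypothetical, and the code is an illustration rather than any of the cited systems' implementations.

```python
# Illustrative sketch only: a subsumption-based degree-of-match ranking in the
# spirit of the matchmakers surveyed above. The concept hierarchy, service
# advertisements, and aggregation policy below are hypothetical.

# Toy concept hierarchy: child -> parent (single inheritance for simplicity).
ONTOLOGY = {
    "SportsCar": "Car",
    "Car": "Vehicle",
    "Vehicle": "Thing",
    "Price": "Thing",
}

def superconcepts(concept):
    """Return the set of strict ancestors of a concept in the toy hierarchy."""
    ancestors = set()
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        ancestors.add(concept)
    return ancestors

# Degrees of match, from best to worst.
EXACT, PLUG_IN, SUBSUMES, FAIL = 3, 2, 1, 0

def degree_of_match(advertised, requested):
    """Compare one advertised output concept with one requested output concept."""
    if advertised == requested:
        return EXACT
    if advertised in superconcepts(requested):
        return PLUG_IN    # advertised concept is more general than the request
    if requested in superconcepts(advertised):
        return SUBSUMES   # advertised concept is more specific than the request
    return FAIL

def score(advertised_outputs, requested_outputs):
    """Conservative aggregate: every requested output must be covered, and the
    overall score is the weakest per-output degree."""
    per_output = [
        max((degree_of_match(a, r) for a in advertised_outputs), default=FAIL)
        for r in requested_outputs
    ]
    return min(per_output) if per_output else FAIL

if __name__ == "__main__":
    request = ["Car", "Price"]
    advertisements = {                   # hypothetical service advertisements
        "svcA": ["Vehicle", "Price"],    # more general than requested
        "svcB": ["SportsCar", "Price"],  # more specific than requested
        "svcC": ["Price"],               # does not cover "Car" at all
    }
    for name, outs in sorted(advertisements.items(),
                             key=lambda kv: score(kv[1], request), reverse=True):
        print(name, score(outs, request))
```

Real matchmakers of course work over description-logic reasoners rather than a hand-written parent map, and several of the surveyed systems layer syntactic similarity or QoS scores on top of such a logic-based core.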
A Review of Data Mining Techniques
7
--- paper_title: Predictive Data Mining: A Practical Guide paper_content: 1 What is Data Mining? 2 Statistical Evaluation for Big Data 3 Preparing the Data 4 Data Reduction 5 Looking for Solutions 6 What's Best for Data Reduction and Mining? 7 Art or Science? Case Studies in Data Mining --- paper_title: Knowledge Discovery in Databases paper_content: From the Publisher: ::: Knowledge Discovery in Databases brings together current research on the exciting problem of discovering useful and interesting knowledge in databases. It spans many different approaches to discovery, including inductive learning, bayesian statistics, semantic query optimization, knowledge acquisition for expert systems, information theory, and fuzzy sets. ::: The rapid growth in the number and size of databases creates a need for tools and techniques for intelligent data understanding. Relationships and patterns in data may enable a manufacturer to discover the cause of a persistent disk failure or the reason for consumer complaints. But today's databases hide their secrets beneath a cover of overwhelming detail. The task of uncovering these secrets is called "discovery in databases." This loosely defined subfield of machine learning is concerned with discovery from large amounts of possible uncertain data. Its techniques range from statistics to the use of domain knowledge to control search. ::: Following an overview of knowledge discovery in databases, thirty technical chapters are grouped in seven parts which cover discovery of quantitative laws, discovery of qualitative laws, using knowledge in discovery, data summarization, domain specific discovery methods, integrated and multi-paradigm systems, and methodology and application issues. An important thread running through the collection is reliance on domain knowledge, starting with general methods and progressing to specialized methods where domain knowledge is built in. ::: Gregory Piatetski-Shapiro is Senior Member of Technical Staff and Principal Investigator of the Knowledge Discovery Project at GTE Laboratories. William Frawley is Principal Member of Technical Staff at GTE and Principal Investigator of the Learning in Expert Domains Project. --- paper_title: Data Mining: An Overview from a Database Perspective paper_content: Mining information and knowledge from large databases has been recognized by many researchers as a key research topic in database systems and machine learning, and by many industrial companies as an important area with an opportunity of major revenues. Researchers in many different fields have shown great interest in data mining. Several emerging applications in information providing services, such as data warehousing and on-line services over the Internet, also call for various data mining techniques to better understand user behavior, to improve the service provided, and to increase the business opportunities. In response to such a demand, this article is to provide a survey, from a database researcher's point of view, on the data mining techniques developed recently. A classification of the available data mining techniques is provided and a comparative study of such techniques is presented. Index Terms | Data mining, knowledge discovery, association rules, classification, data clustering, pattern matching algorithms, data generalization and characterization, data cubes, multiple-dimensional databases. J.
Han was supported in part by the research grant NSERC-A3723 from the Natural Sciences and Engineering Research Council of Canada, the research grant NCE:IRIS/Precarn-HMI5 from the Networks of Centres of Excellence of Canada, and research grants from MPR Teltech Ltd. and Hughes Research Laboratories. --- paper_title: Data Mining: An Overview from a Database Perspective paper_content: Mining information and knowledge from large databases has been recognized by many researchers as a key research topic in database systems and machine learning, and by many industrial companies as an important area with an opportunity of major revenues. Researchers in many different fields have shown great interest in data mining. Several emerging applications in information providing services, such as data warehousing and on-line services over the Internet, also call for various data mining techniques to better understand user behavior, to improve the service provided, and to increase the business opportunities. In response to such a demand, this article is to provide a survey, from a database researcher's point of view, on the data mining techniques developed recently. A classification of the available data mining techniques is provided and a comparative study of such techniques is presented. Index Terms | Data mining, knowledge discovery, association rules, classification, data clustering, pattern matching algorithms, data generalization and characterization, data cubes, multiple-dimensional databases. J. Han was supported in part by the research grant NSERC-A3723 from the Natural Sciences and Engineering Research Council of Canada, the research grant NCE:IRIS/Precarn-HMI5 from the Networks of Centres of Excellence of Canada, and research grants from MPR Teltech Ltd. and Hughes Research Laboratories. --- paper_title: Mining association rules between sets of items in large databases paper_content: We are given a large database of customer transactions. Each transaction consists of items purchased by a customer in a visit. We present an efficient algorithm that generates all significant association rules between items in the database. The algorithm incorporates buffer management and novel estimation and pruning techniques. We also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm. --- paper_title: The World-Wide Web: quagmire or gold mine? paper_content: An efficient continuous wave solid state laser is described. An array of semiconductor diode lasers is employed to generate a pump light source at an absorption wavelength for a solid state laser such as Nd:YAG laser. The array of diode lasers is pulsed at a high repetition rate to produce narrow light pulses which are directed upon an end surface of the solid state laser rod. The pump pulses rate is selected sufficiently high to establish CW operation of the solid state laser while the duty cycle for the pump-light source is sufficiently low to avoid excessive junction heating of the diode lasers, which would substantially reduce their output power. --- paper_title: MINE OVER MATTER paper_content: There are obscene amounts of data in corporate coffers that could be used to reinvent marketing strategies. Data mining is one way to find the information that counts. ---
Title: A Review of Data Mining Techniques Section 1: Introduction Description 1: Provide an introduction to data mining, its importance, and its role in extracting useful information from large datasets. Section 2: Current trends on data mining Description 2: Discuss the recent trends and developments in data mining, including the growing interest in the field and its applications in various industries. Section 3: Requirements and challenges of DM Description 3: Outline the main requirements and challenges associated with data mining, including handling different types of data, algorithm efficiency, and data security. Section 4: Data mining steps Description 4: Describe the general steps involved in the data mining process, including data preparation, data reduction, and information extraction. Section 5: Classifying DM techniques Description 5: Explain how data mining techniques can be classified based on the type of database, the knowledge to be discovered, and the techniques employed. Section 6: Major DM techniques Description 6: Review and discuss the major data mining techniques, including statistics, transactional/relational database mining, AI techniques, decision trees, genetic algorithms, and visualization. Section 7: Conclusion Description 7: Summarize the key points discussed in the paper and highlight the future prospects and unresolved issues in the field of data mining.
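Among the techniques referenced for this survey, association-rule mining (as in the Agrawal et al. paper on mining association rules between sets of items) is the easiest to illustrate concretely. The short Python sketch below shows a level-wise, Apriori-style pass over a handful of made-up transactions; the transaction data, thresholds, and function names are all hypothetical, and the code is a didactic illustration rather than the original algorithm's implementation.

```python
# Illustrative sketch only: frequent itemsets and association rules over a toy
# transaction set. Data and thresholds are hypothetical.
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
MIN_SUPPORT = 0.4      # minimum fraction of transactions
MIN_CONFIDENCE = 0.7   # minimum rule confidence

def support(itemset):
    itemset = frozenset(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def frequent_itemsets():
    """Level-wise candidate generation: a k-itemset can only be frequent if
    all of its (k-1)-subsets are frequent (the Apriori pruning property)."""
    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items if support([i]) >= MIN_SUPPORT]
    frequent = list(current)
    k = 2
    while current:
        frequent_set = set(frequent)
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # prune candidates that have an infrequent (k-1)-subset
        candidates = [c for c in candidates
                      if all(frozenset(s) in frequent_set
                             for s in combinations(c, k - 1))]
        current = [c for c in candidates if support(c) >= MIN_SUPPORT]
        frequent.extend(current)
        k += 1
    return frequent

def rules(frequent):
    """Emit rules X -> Y with confidence = support(X u Y) / support(X)."""
    out = []
    for itemset in frequent:
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                rhs = itemset - lhs
                conf = support(itemset) / support(lhs)
                if conf >= MIN_CONFIDENCE:
                    out.append((set(lhs), set(rhs), round(conf, 2)))
    return out

if __name__ == "__main__":
    for lhs, rhs, conf in rules(frequent_itemsets()):
        print(lhs, "->", rhs, "confidence", conf)
```

Support is the fraction of transactions containing an itemset; confidence is the conditional frequency of the consequent given the antecedent. The same skeleton only scales to the large retail datasets discussed in the referenced work with the buffer-management, estimation, and pruning techniques that the original paper introduces.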
Ethernet Topology Discovery: A Survey
6
--- paper_title: A scalable content-addressable network paper_content: Hash tables - which map "keys" onto "values" - are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation. --- paper_title: Topology discovery in heterogeneous IP networks: the NetInventory system paper_content: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either 1) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored, or 2) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of the NetInventory topology-discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology with reasonably small running-time requirements even for fairly large network configurations. --- paper_title: Discovering Network Topology of Large Multisubnet Ethernet Networks paper_content: In this paper we investigate the problem of finding the physical layer network topology of large, heterogeneous multisubnet Ethernet networks that may include uncooperative network elements. Our approach utilizes only generic MIB information and does not require any hardware or software modifications of the underlying network elements. We propose here the first O(n^3) algorithm that guarantees discovering a topology that is compatible with the given set of input MIBs, provided that the input is complete. We prove the correctness of the algorithms and the necessary and sufficient conditions for the uniqueness of the restored topology. Finally, we demonstrate the application of the algorithm on several examples. --- paper_title: Chord: a scalable peer-to-peer lookup protocol for internet applications paper_content: A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node.
Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes. --- paper_title: Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design paper_content: Despite recent excitement generated by the peer-to-peer (P2P) paradigm and the surprisingly rapid deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. The open architecture, achieved scale, and self-organizing structure of the Gnutella network make it an interesting P2P architecture to study. Like most other P2P applications, Gnutella builds, at the application level, a virtual network with its own routing mechanisms. The topology of this virtual network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We have built a "crawler" to extract the topology of Gnutella's application level network. In this paper we analyze the topology graph and evaluate generated network traffic. Our two major findings are that: (1) although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure, and (2) the Gnutella virtual network topology does not match well the underlying Internet topology, hence leading to ineffective use of the physical networking infrastructure. These findings guide us to propose changes to the Gnutella protocol and implementations that may bring significant performance and scalability improvements. We believe that our findings as well as our measurement and analysis techniques have broad applicability to P2P systems and provide unique insights into P2P system design tradeoffs. --- paper_title: Characterizing unstructured overlay topologies in modern P2P file-sharing systems paper_content: In recent years, peer-to-peer (P2P) file-sharing systems have evolved to accommodate growing numbers of participating peers. In particular, new features have changed the properties of the unstructured overlay topologies formed by these peers. Little is known about the characteristics of these topologies and their dynamics in modern file-sharing applications, despite their importance. This paper presents a detailed characterization of P2P overlay topologies and their dynamics, focusing on the modern Gnutella network. We present Cruiser, a fast and accurate P2P crawler, which can capture a complete snapshot of the Gnutella network of more than one million peers in just a few minutes, and show how inaccuracy in snapshots can lead to erroneous conclusions-such as a power-law degree distribution. Leveraging recent overlay snapshots captured with Cruiser, we characterize the graph-related properties of individual overlay snapshots and overlay dynamics across slices of back-to-back snapshots. Our results reveal that while the Gnutella network has dramatically grown and changed in many ways, it still exhibits the clustering and short path lengths of a small world network. Furthermore, its overlay topology is highly resilient to random peer departure and even systematic attacks. 
More interestingly, overlay dynamics lead to an "onion-like" biased connectivity among peers where each peer is more likely connected to peers with higher uptime. Therefore, long-lived peers form a stable core that ensures reachability among peers despite overlay dynamics. --- paper_title: Ethernet topology discovery without network assistance paper_content: This work addresses the problem of layer 2 topology discovery. Current techniques concentrate on using SNMP to query information from Ethernet switches. In contrast, we present a technique that infers the Ethernet (layer 2) topology without assistance from the network elements by injecting suitable probe packets from the end-systems and observing where they are delivered. We describe the algorithm, formally characterize its correctness and completeness, and present our implementation and experimental results. Performance results show that although originally aimed at the home and small office the techniques scale to much larger networks. --- paper_title: Topology discovery for large ethernet networks paper_content: Accurate network topology information is important for both network management and application performance prediction. Most topology discovery research has focused on wide-area networks and examined topology only at the IP router level, ignoring the need for LAN topology information. Recent work has demonstrated that bridged Ethernet topology can be determined using standard SNMP MIBs; however, these algorithms require each bridge to learn about all other bridges in the network. Our approach to Ethernet topology discovery can determine the connection between a pair of the bridges that share forwarding entries for only three hosts. This minimal knowledge requirement significantly expands the size of the network that can be discovered. We have implemented the new algorithm, and it has accurately determined the topology of several different networks using a variety of hardware and network configurations. Our implementation requires access to only one endpoint to perform the queries needed for topology discovery. --- paper_title: Capturing accurate snapshots of the Gnutella network paper_content: A common approach for measurement-based characterization of peer-to-peer (P2P) systems is to capture overlay snapshots using a crawler. The accuracy of captured snapshots by P2P crawlers directly depends on both the crawling speed and the fraction of unreachable peers. This in turn affects the accuracy of the conducted characterization based on these captured snapshots. Prior studies frequently rely on crawling the network over an hour or more, during which time the overlay may change substantially. Moreover, none of the previous measurement-based studies on P2P systems have examined the accuracy of their captured snapshots or the impact on conducted characterization. --- paper_title: Topology discovery in heterogeneous IP networks paper_content: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary.
Earlier work has typically concentrated on either: (a) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored; or (b) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of a topology discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology in time that is roughly quadratic in the number of network elements. --- paper_title: Measuring ISP topologies with rocketfuel paper_content: To date, realistic ISP topologies have not been accessible to the research community, leaving work that depends on topology on an uncertain footing. In this paper, we present new Internet mapping techniques that have enabled us to directly measure router-level ISP topologies. Our techniques reduce the number of required traces compared to a brute-force, all-to-all approach by three orders of magnitude without a significant loss in accuracy. They include the use of BGP routing tables to focus the measurements, exploiting properties of IP routing to eliminate redundant measurements, better alias resolution, and the use of DNS to divide each map into POPs and backbone. We collect maps from ten diverse ISPs using our techniques, and find that our maps are substantially more complete than those of earlier Internet mapping efforts. We also report on properties of these maps, including the size of POPs, distribution of router outdegree, and the inter-domain peering structure. As part of this work, we release our maps to the community. --- paper_title: Ethernet Topology Discovery for Networks with Incomplete Information paper_content: In this paper we investigate the problem of finding a layer-2 network topology when the information available from SNMP MIB is incomplete. We prove that finding a network topology in this case is NP-hard. We further prove that deciding whether the given information defines a unique network topology is a co-NP-hard problem. We show that if there is a single node r such that every other network node sees it, then the network topology can be discovered in polynomial (in the number of network ports) time. Finally, we design a polynomial time heuristic algorithm to discover a topology when the information available from SNMP MIB is incomplete and conduct extensive experiments with it to determine how often the algorithm succeeds in finding topology. Our results indicate that our algorithm discovers the network topology in close to 100% of all test cases. --- paper_title: Heuristics for Internet map discovery paper_content: Mercator is a program that uses hop-limited probes-the same primitive used in traceroute-to infer an Internet map. It uses informed random address probing to carefully exploring the IP address space when determining router adjacencies, uses source-route capable routers wherever possible to enhance the fidelity of the resulting map, and employs novel mechanisms for resolving aliases (interfaces belonging to the same router). 
This paper describes the design of these heuristics and our experiences with Mercator, and presents some preliminary analysis of the resulting Internet map. --- paper_title: In search of path diversity in ISP networks paper_content: Internet Service Providers (ISPs) can exploit path diversity to balance load and improve robustness. Unfortunately, it is difficult to evaluate the potential impact of these approaches without routing and topological data, which are confidential. In this paper, we characterize path diversity in the real Sprint network. We then characterize path diversity in ISP topologies inferred using the Rocketfuel tool. Comparing the real Sprint topology to the one inferred by Rocketfuel, we find that the Rocketfuel topology has significantly higher apparent path diversity. (As a metric, path diversity is particularly sensitive to the presence of false or missing links, both of which are artifacts of active measurement techniques.) We evaluate heuristics that improve the accuracy of the inferred Rocketfuel topologies. Finally, we discuss limitations of active measurement techniques to capture topological properties such as path diversity. --- paper_title: Characterization of Layer-2 Unique Topologies in Multisubnet Local Area Networks paper_content: Obtaining network connectivity information (or, alternatively, network topology) at layer-2 of the ISO hierarchy is critical to an effective management of large local area networks (LANs) that include hundreds of layer-2 network elements. Despite the importance of getting a layer-2 network topology, there are several major difficulties in tracking this information --- paper_title: Characterization of layer-2 unique topologies paper_content: In this paper we study a layer-2 topology restoration for multisubnet networks. We design a new algorithm for generating such topologies and prove a criterion on a set of input data that guarantees a unique layer-2 topology. Our criterion is easily verifiable in O(n^2) time, where n is the number of internal network nodes. --- paper_title: Mapping the Internet paper_content: How can you determine what the Internet or even an intranet looks like? The answer is, of course, to draw it on screen. Once you can see the data succinctly, it becomes much easier to understand. The drawing itself can help locate bottlenecks and possible points of failure. Where is that newly acquired subsidiary connected? Which business units have connections to business partners? More important, visual displays of networks have another dimension-color. Color is an easy way to display link use, status, ownership, and network changes. --- paper_title: Checkmate network security modeling paper_content: Effective reasoning about system attacks and responses requires a comprehensive model that covers all aspects of the system being analyzed, from network topology and configuration, to specific vulnerabilities, to possible adversary capabilities and possible attacks. A comprehensive model can be used as the basis for real-time attack/response simulations, "what if" course of action analysis, policy simulation and debugging, and more. This paper describes the Checkmate security model and illustrates how this model can be used as the basis for a tool that performs effective security analysis on real-world networks.
--- paper_title: Topology discovery in heterogeneous IP networks: the NetInventory system paper_content: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either 1) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored, or 2) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of the NetInventory topology-discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology with reasonably small running-time requirements even for fairly large network configurations. --- paper_title: On realistic network topologies for simulation paper_content: Simulations are an important tool in network research. As the selected topology often influences the outcome of the simulation, realistic topologies are needed to produce realistic simulation results. We first discuss the different types of topologies and present our collection of real-world topologies that can be used for simulation. We then define several similarity metrics to compare artificially generated topologies with real world topologies. We use them to find out what the input parameter range of the topology generators of BRITE, TIERS and GTITM are to create realistic topologies. These parameters can act as a valuable starting point for researchers that have to generate artificial topologies. --- paper_title: Composable tools for network discovery and security analysis paper_content: Security analysis should take advantage of a reliable knowledge base that contains semantically-rich information about a protected network. This knowledge is provided by network mapping tools. These tools rely on models to represent the entities of interest, and they leverage off network discovery techniques to populate the model structure with the data that is pertinent to a specific target network. Unfortunately, existing tools rely on incomplete data models. Networks are complex systems and most approaches oversimplify their target models in an effort to limit the problem space. In addition, the techniques used to populate the models are limited in scope and are difficult to extend. This paper presents NetMap, a security tool for network modeling, discovery, and analysis. NetMap relies on a comprehensive network model that is not limited to a specific network level; it integrates network information throughout the layers. The model contains information about topology, infrastructure, and deployed services. 
In addition, the relationships among different entities in different layers of the model are made explicit. The modeled information is managed by using a suite of composable network tools that can determine various aspects of network configurations through scanning techniques and heuristics. Tools in the suite are responsible for a single, well-defined task. --- paper_title: Ethernet topology discovery without network assistance paper_content: This work addresses the problem of layer 2 topology discovery. Current techniques concentrate on using SNMP to query information from Ethernet switches. In contrast, we present a technique that infers the Ethernet (layer 2) topology without assistance from the network elements by injecting suitable probe packets from the end-systems and observing where they are delivered. We describe the algorithm, formally characterize its correctness and completeness, and present our implementation and experimental results. Performance results show that although originally aimed at the home and small office the techniques scale to much larger networks. --- paper_title: Topology discovery for large ethernet networks paper_content: Accurate network topology information is important for both network management and application performance prediction. Most topology discovery research has focused on wide-area networks and examined topology only at the IP router level, ignoring the need for LAN topology information. Recent work has demonstrated that bridged Ethernet topology can be determined using standard SNMP MIBs; however, these algorithms require each bridge to learn about all other bridges in the network. Our approach to Ethernet topology discovery can determine the connection between a pair of the bridges that share forwarding entries for only three hosts. This minimal knowledge requirement significantly expands the size of the network that can be discovered. We have implemented the new algorithm, and it has accurately determined the topology of several different networks using a variety of hardware and network configurations. Our implementation requires access to only one endpoint to perform the queries needed for topology discovery. --- paper_title: Topology Discovery for Public IPv6 Networks paper_content: This paper presents Atlas, a system that facilitates the automated capture of IPv6 network topology information from a single probing host. It describes the Atlas infrastructure and its data collection processes. We also present some initial results from our probing of the 6Bone, currently the largest public IPv6 network. The results illustrate the effectiveness of the probing algorithm and also identify some trends in prefix allocation and routing policy. --- paper_title: Topology discovery in heterogeneous IP networks paper_content: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either: (a) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored; or (b) proprietary solutions targeting specific product families.
In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of a topology discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology in time that is roughly quadratic in the number of network elements. --- paper_title: Distributed agent-based real time network intrusion forensics system architecture design paper_content: Network forensics is a new approach for the network security, because the firewall and IDS cannot always stop and discover the misuse in the network. Once the system is compromised, the forensics and investigation always after the attacks and lose some useful instant evidence. The integrated analysis of the log and audit system and network traffic can lead to an efficient navigation of the traffic. The current network forensics approaches only focus on the network traffic capture and traffic replay, which always result in the performance bottleneck or forensics analysis difficulties. However, the adaptive capture without lose the potential sensitive traffic and real time investigation are seldom discussed. In this paper, we discuss the frameworks of distributed agent-based real time network intrusion forensics system, which is deployed in local area network environment. Some novel approaches for network forensics are discussed for the first time, such as network forensics server, network forensics database, network forensics agents, forensics data integration and active real time network forensic. --- paper_title: Network forensics analysis paper_content: Many tools let you view traffic in real time, but real-time monitoring at any level requires significant human and hardware resources, and doesn't scale to networks larger than a single workgroup. It is generally more practical to archive all traffic and analyze subsets as necessary. This process is known as reconstructive traffic analysis, or network forensics. In practice, it is often limited to data collection and packet-level inspection; however, a network forensics analysis tool can provide a richer view of the data collected, allowing you to inspect the traffic from further up the protocol stack? The IT industry's ever-growing concern with security is the primary motivation for network forensics. A network that has been prepared for forensic analysis is easy to monitor, and security vulnerabilities and configuration problems can be conveniently identified. It also allows the best possible analysis of security violations. Most importantly, analyzing a complete record of your network traffic with the appropriate reconstructive tools provides context for other breach-related events. --- paper_title: Ethernet topology discovery without network assistance paper_content: This work addresses the problem of layer 2 topology discovery. Current techniques concentrate on using SNMP to query information from Ethernet switches. 
In contrast, we present a technique that infers the Ethernet (layer 2) topology without assistance from the network elements by injecting suitable probe packets from the end-systems and observing where they are delivered. We describe the algorithm, formally characterize its correctness and completeness, and present our implementation and experimental results. Performance results show that although originally aimed at the home and small office the techniques scale to much larger networks. --- paper_title: A scalable content-addressable network paper_content: Hash tables - which map "keys" onto "values" - are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation. --- paper_title: Characterizing unstructured overlay topologies in modern P2P file-sharing systems paper_content: In recent years, peer-to-peer (P2P) file-sharing systems have evolved to accommodate growing numbers of participating peers. In particular, new features have changed the properties of the unstructured overlay topologies formed by these peers. Little is known about the characteristics of these topologies and their dynamics in modern file-sharing applications, despite their importance. This paper presents a detailed characterization of P2P overlay topologies and their dynamics, focusing on the modern Gnutella network. We present Cruiser, a fast and accurate P2P crawler, which can capture a complete snapshot of the Gnutella network of more than one million peers in just a few minutes, and show how inaccuracy in snapshots can lead to erroneous conclusions-such as a power-law degree distribution. Leveraging recent overlay snapshots captured with Cruiser, we characterize the graph-related properties of individual overlay snapshots and overlay dynamics across slices of back-to-back snapshots. Our results reveal that while the Gnutella network has dramatically grown and changed in many ways, it still exhibits the clustering and short path lengths of a small world network. Furthermore, its overlay topology is highly resilient to random peer departure and even systematic attacks. More interestingly, overlay dynamics lead to an "onion-like" biased connectivity among peers where each peer is more likely connected to peers with higher uptime. Therefore, long-lived peers form a stable core that ensures reachability among peers despite overlay dynamics. --- paper_title: Capturing accurate snapshots of the Gnutella network paper_content: A common approach for measurement-based characterization of peer-to-peer (P2P) systems is to capture overlay snapshots using a crawler. The accuracy of captured snapshots by P2P crawlers directly depends on both the crawling speed and the fraction of unreachable peers. This in turn affects the accuracy of the conducted characterization based on these captured snapshots. Prior studies frequently rely on crawling the network over an hour or more, during which time the overlay may change substantially. Moreover, none of the previous measurement-based studies on P2P systems have examined the accuracy of their captured snapshots or the impact on conducted characterization.
--- paper_title: Mapping the Internet paper_content: How can you determine what the Internet or even an intranet looks like? The answer is, of course, to draw it on screen. Once you can see the data succinctly, it becomes much easier to understand. The drawing itself can help locate bottlenecks and possible points of failure. Where is that newly acquired subsidiary connected? Which business units have connections to business partners? More important, visual displays of networks have another dimension-color. Color is an easy way to display link use, status, ownership, and network changes. --- paper_title: Topology discovery in heterogeneous IP networks: the NetInventory system paper_content: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either 1) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored, or 2) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of the NetInventory topology-discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology with reasonably small running-time requirements even for fairly large network configurations. --- paper_title: Discovering Network Topology of Large Multisubnet Ethernet Networks paper_content: In this paper we investigate the problem of finding the physical layer network topology of large, heterogeneous multisubnet Ethernet networks that may include uncooperative network elements. Our approach utilizes only generic MIB information and does not require any hardware or software modifications of the underlying network elements. We propose here the first O(n^3) algorithm that guarantees discovering a topology that is compatible with the given set of input MIBs, provided that the input is complete. We prove the correctness of the algorithms and the necessary and sufficient conditions for the uniqueness of the restored topology. Finally, we demonstrate the application of the algorithm on several examples.
--- paper_title: Topology discovery for large ethernet networks paper_content: Accurate network topology information is important for both network management and application performance prediction. Most topology discovery research has focused on wide-area networks and examined topology only at the IP router level, ignoring the need for LAN topology information. Recent work has demonstrated that bridged Ethernet topology can be determined using standard SNMP MIBs; however, these algorithms require each bridge to learn about all other bridges in the network. Our approach to Ethernet topology discovery can determine the connection between a pair of the bridges that share forwarding entries for only three hosts. This minimal knowledge requirement significantly expands the size of the network that can be discovered. We have implemented the new algorithm, and it has accurately determined the topology of several different networks using a variety of hardware and network configurations. Our implementation requires access to only one endpoint to perform the queries needed for topology discovery. --- paper_title: Topology discovery in heterogeneous IP networks paper_content: Knowledge of the up-to-date physical topology of an IP network is crucial to a number of critical network management tasks, including reactive and proactive resource management, event correlation, and root-cause analysis. Given the dynamic nature of today's IP networks, keeping track of topology information manually is a daunting (if not impossible) task. Thus, effective algorithms for automatically discovering physical network topology are necessary. Earlier work has typically concentrated on either: (a) discovering logical (i.e., layer-3) topology, which implies that the connectivity of all layer-2 elements (e.g., switches and bridges) is ignored; or (b) proprietary solutions targeting specific product families. In this paper, we present novel algorithms for discovering physical topology in heterogeneous (i.e., multi-vendor) IP networks. Our algorithms rely on standard SNMP MIB information that is widely supported by modern IP network elements and require no modifications to the operating system software running on elements or hosts. We have implemented the algorithms presented in this paper in the context of a topology discovery tool that has been tested on Lucent's own research network. The experimental results clearly validate our approach, demonstrating that our tool can consistently discover the accurate physical network topology in time that is roughly quadratic in the number of network elements. --- paper_title: Ethernet Topology Discovery for Networks with Incomplete Information paper_content: In this paper we investigate the problem of finding a layer-2 network topology when the information available from SNMP MIB is incomplete. We prove that finding a network topology in this case is NP-hard. We further prove that deciding whether the given information defines a unique network topology is a co-NP-hard problem. We show that if there is a single node r such that every other network node sees it, then the network topology can be discovered in polynomial (in the number of network ports) time.
Finally, we design a polynomial time heuristic algorithm to discover a topology when the information available from SNMP MIB is incomplete and conduct extensive experiments with it to determine how often the algorithm succeeds in finding the topology. Our results indicate that our algorithm discovers the network topology in close to 100% of all test cases. --- paper_title: Characterization of Layer-2 Unique Topologies in Multisubnet Local Area Networks paper_content: Obtaining network connectivity information (or, alternatively, network topology) at layer-2 of the ISO hierarchy is critical to an effective management of large local area networks (LANs) that include hundreds of layer-2 network elements. Despite the importance of getting a layer-2 network topology, there are several major difficulties in tracking this information. --- paper_title: Characterization of layer-2 unique topologies paper_content: In this paper we study layer-2 topology restoration for multisubnet networks. We design a new algorithm for generating such topologies and prove a criterion on a set of input data that guarantees a unique layer-2 topology. Our criterion is easily verifiable in O(n^2) time, where n is the number of internal network nodes. ---
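Several of the SNMP-based studies cited above infer physical links from the address forwarding tables (AFTs) that switches expose through standard MIBs. The sketch below is a loose restatement of the underlying connection test under the assumption of complete AFTs on a single subnet: two ports can be directly linked only if their forwarding sets are disjoint and together cover every known address. Switch names, port numbers, and table contents are invented for illustration, not taken from any of the papers:

# Minimal sketch of the AFT-based connection test used by SNMP topology
# discovery. All identifiers below are made-up example data.

def directly_connected(aft_a_port, aft_b_port, all_nodes):
    """Condition for a direct link between two switch ports, assuming
    complete AFTs (sets of MAC/host identifiers learned per port)."""
    return (aft_a_port.isdisjoint(aft_b_port) and
            aft_a_port | aft_b_port == all_nodes)

# Toy network: hosts h1..h4, switches S1 and S2 (two ports each),
# with S1 port 2 cabled to S2 port 1. Switch addresses are included
# in the universe, as they also appear in learned forwarding tables.
all_nodes = {"h1", "h2", "h3", "h4", "S1", "S2"}
aft = {
    ("S1", 1): {"h1", "h2"},          # hosts hanging off S1 port 1
    ("S1", 2): {"h3", "h4", "S2"},    # everything reached through port 2
    ("S2", 1): {"h1", "h2", "S1"},    # everything reached through port 1
    ("S2", 2): {"h3", "h4"},          # hosts hanging off S2 port 2
}

for (sw_a, pa), fa in aft.items():
    for (sw_b, pb), fb in aft.items():
        if sw_a < sw_b and directly_connected(fa, fb, all_nodes):
            print(f"candidate link: {sw_a}/port{pa} <-> {sw_b}/port{pb}")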
Title: Ethernet Topology Discovery: A Survey
Section 1: INTRODUCTION
Description 1: Provide an overview of the focus of the survey, the importance of Ethernet network topology discovery, and a summary of different types of network topologies.
Section 2: MOTIVATIONS
Description 2: Discuss the various applications and benefits of Ethernet network topology information in network administration, performance prediction, protocol design, simulation, and security.
Section 3: NETWORK NODE DISCOVERY
Description 3: Explain the process of identifying unique network nodes in Ethernet networks, differentiating between active and passive nodes, and the importance of discovering dumb network devices.
Section 4: NETWORK TOPOLOGY INFERENCE
Description 4: Describe the methods and algorithms used to infer network topologies, the challenges associated with different network environments, and details of the SNMP and non-SNMP based topology discovery techniques.
Section 5: LIMITATIONS AND ISSUES
Description 5: Outline the limitations and issues related to current Ethernet topology discovery methods, including network topology changes, VLANs, and integration of wireless and mobile nodes.
Section 6: CONCLUSION
Description 6: Summarize the significance of network topology discovery, the advancements made in this field, and the ongoing challenges and potential future directions for research.
A Review of Cell Equalization Methods for Lithium Ion and Lithium Polymer Battery Systems
7
--- paper_title: Improved charge algorithms for valve regulated lead acid batteries paper_content: The cycle life obtained from valve-regulated lead-acid (VRLA) batteries is strongly influenced by the manner in which they have been charged over their lifetime. Although VRLA batteries initially behave similarly to their flooded counterparts, that behavior changes as the batteries age and the oxygen generation/recombination cycle begins to dominate at near 100% full charge. This means that an increasing portion of the applied charge is consumed in the recombination cycle and that more and more overcharge must be applied to maintain full capacity. The overall result is that the battery heats up because of increased overcharge and oxygen generation. Conventional charge approaches attempt to deal with rising temperatures by lowering the current during the overcharge phase. However, this approach does not ultimately prevent capacity loss, and a battery charged thusly typically will yield 200-300 cycles to 50% of initial capacity. The main failure mode appears to be undercharging of the negative plate, not positive-plate corrosion. Two approaches, called partial state of recharge (PSOR) and current interrupt (CI) were successful in extending battery life. PSOR uses nine limited recharge cycles followed by a tenth cycle using 120% charge return. The best PSOR cycle life to date is 1160 cycles to 50% and 800 cycles to 80%. CI uses a high current in the overcharge applied discontinuously to control battery temperature. CI effectively maintains negative-plate capacity, with an Optima group 34 deep-cycle battery yielding 415 cycles to 80% initial capacity and 760 cycles to 50%. ---
Title: A Review of Cell Equalization Methods for Lithium Ion and Lithium Polymer Battery Systems
Section 1: INTRODUCTION
Description 1: Introduce the need for cell equalization for lithium-based batteries in series configurations and outline the challenges and importance of balancing methods.
Section 2: END-OF-CHARGE CELL BALANCING METHODS
Description 2: Discuss the cell-balancing methods used specifically at the end of the charging process, including their applicability to electric vehicle batteries.
Section 3: CHARGE SHUNTING
Description 3: Explain the charge-shunting cell balancing method, its operation, advantages, and disadvantages.
Section 4: ACTIVE CELL BALANCING METHODS
Description 4: Describe active cell balancing methods, including charge shuttling and energy converting techniques, along with examples and practical considerations.
Section 5: CHARGE SHUTTLING
Description 5: Detail the specifics of charge shuttling mechanisms for cell balancing, including the flying capacitor method and its variations, with advantages and challenges.
Section 6: DISSIPATIVE RESISTORS
Description 6: Explore the simplest and most cost-effective cell balancing method using dissipative resistors, including its operation, benefits, and drawbacks.
Section 7: CONCLUSIONS
Description 7: Summarize the key findings and applications of various cell equalization methods for electric and hybrid electric vehicles.
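As a rough illustration of the charge-shunting/dissipative-resistor balancing described in the outline above, the sketch below turns on a cell's bleed resistor when that cell sits above the pack minimum by more than a tolerance near the end of charge. The voltage thresholds, readings, and shunt interface are placeholder assumptions, not values taken from the review:

# Minimal sketch of a dissipative-resistor ("charge shunting") balancing rule.
# Thresholds and readings below are illustrative placeholders only.

BALANCE_START_V = 4.10   # only balance near top of charge (per-cell volts)
TOLERANCE_V = 0.010      # allowed cell-to-cell spread before shunting

def update_shunts(cell_voltages):
    """Return booleans: True = close that cell's bleed-resistor switch."""
    v_min = min(cell_voltages)
    return [
        (v >= BALANCE_START_V) and (v - v_min > TOLERANCE_V)
        for v in cell_voltages
    ]

# Example: an 8-cell string late in constant-voltage charging.
voltages = [4.18, 4.19, 4.15, 4.21, 4.18, 4.17, 4.20, 4.16]
for i, (v, on) in enumerate(zip(voltages, update_shunts(voltages)), start=1):
    print(f"cell {i}: {v:.3f} V  shunt {'ON' if on else 'off'}")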
Assessing autism at its social and developmental roots: A review of Autism Spectrum Disorder studies using functional near‐infrared spectroscopy
13
--- paper_title: Reduced functional connectivity within and between ‘social’ resting state networks in autism spectrum conditions paper_content: Individuals with autism spectrum conditions (ASC) have difficulties in social interaction and communication, which is reflected in hypoactivation of brain regions engaged in social processing, such as medial prefrontal cortex (mPFC), amygdala and insula. Resting state studies in ASC have identified reduced connectivity of the default mode network (DMN), which includes mPFC, suggesting that other resting state networks incorporating 'social' brain regions may also be abnormal. Using seed-based connectivity and group independent component analysis (ICA) approaches, we looked at resting functional connectivity in ASC between specific 'social' brain regions, as well as within and between whole networks incorporating these regions. We found reduced functional connectivity within the DMN in individuals with ASC, using both ICA and seed-based approaches. Two further networks identified by ICA, the salience network, incorporating the insula and a medial temporal lobe network, incorporating the amygdala, showed reduced inter-network connectivity. This was underlined by reduced seed-based connectivity between the insula and amygdala. The results demonstrate significantly reduced functional connectivity within and between resting state networks incorporating 'social' brain regions. This reduced connectivity may result in difficulties in communication and integration of information across these networks, which could contribute to the impaired processing of social signals in ASC. --- paper_title: A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology paper_content: This year marks the 20th anniversary of functional near-infrared spectroscopy and imaging (fNIRS/fNIRI). As the vast majority of commercial instruments developed until now are based on continuous wave technology, the aim of this publication is to review the current state of instrumentation and methodology of continuous wave fNIRI. For this purpose we provide an overview of the commercially available instruments and address instrumental aspects such as light sources, detectors and sensor arrangements. Methodological aspects, algorithms to calculate the concentrations of oxy- and deoxyhemoglobin and approaches for data analysis are also reviewed. From the single-location measurements of the early years, instrumentation has progressed to imaging initially in two dimensions (topography) and then three (tomography). The methods of analysis have also changed tremendously, from the simple modified Beer-Lambert law to sophisticated image reconstruction and data analysis methods used today. Due to these advances, fNIRI has become a modality that is widely used in neuroscience research and several manufacturers provide commercial instrumentation. It seems likely that fNIRI will become a clinical tool in the foreseeable future, which will enable diagnosis in single subjects. --- paper_title: A failure of left temporal cortex to specialize for language is an early emerging and fundamental property of autism paper_content: Failure to develop normal language comprehension is an early warning sign of autism, but the neural mechanisms underlying this signature deficit are unknown. This is because of an almost complete absence of functional studies of the autistic brain during early development. 
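The continuous-wave fNIRS review cited above notes that oxy- and deoxyhemoglobin concentration changes are recovered from optical density changes via the modified Beer-Lambert law. The sketch below shows that two-wavelength inversion in its simplest form; the extinction coefficients, differential pathlength factors, optode separation, and example optical-density values are placeholder numbers, not calibrated constants:

# Minimal sketch of the modified Beer-Lambert law (MBLL) conversion from
# optical density changes to hemoglobin concentration changes. All numeric
# constants are placeholder assumptions for illustration.
import numpy as np

# Rows = wavelengths (e.g., ~760 nm, ~850 nm); columns = [HbO2, HbR].
extinction = np.array([[1.4866, 3.8437],    # placeholder, 1/(mM*cm)
                       [2.5264, 1.7986]])   # placeholder, 1/(mM*cm)
source_detector_cm = 3.0                    # assumed optode separation
dpf = np.array([6.0, 5.5])                  # assumed differential pathlengths

def mbll(delta_od):
    """Solve dOD(l) = (eps_HbO*dHbO + eps_HbR*dHbR) * d * DPF(l) for the
    concentration changes (returned in mM)."""
    a = extinction * (source_detector_cm * dpf)[:, None]
    return np.linalg.solve(a, np.asarray(delta_od))

d_hbo, d_hbr = mbll([0.012, 0.018])         # example dOD at the two wavelengths
print(f"dHbO2 = {d_hbo*1000:+.3f} uM, dHbR = {d_hbr*1000:+.3f} uM")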
Using functional magnetic resonance imaging, we previously observed a trend for abnormally lateralized temporal responses to language (i.e. greater activation on the right, rather than the expected left) in a small sample ( n = 12) of sleeping 2–3 year olds with autism in contrast to typically developing children, a finding also reported in autistic adults and adolescents. It was unclear, however, if findings of atypical laterality would be observed in a larger sample, and at even earlier ages in autism, such as around the first birthday. Answers to these questions would provide the foundation for understanding how neurofunctional defects of autism unfold, and provide a foundation for studies using patterns of brain activation as a functional early biomarker of autism. To begin to examine these issues, a prospective, cross-sectional design was used in which brain activity was measured in a large sample of toddlers ( n = 80) during the presentation of a bedtime story during natural sleep. Forty toddlers with autism spectrum disorder and 40 typically developing toddlers ranging in age between 12–48 months participated. Any toddler with autism who participated in the imaging experiment prior to final diagnosis was tracked and diagnoses confirmed at a later age. Results indicated that at-risk toddlers later diagnosed as autistic display deficient left hemisphere response to speech sounds and have abnormally right-lateralized temporal cortex response to language; this defect worsens with age, becoming most severe in autistic 3- and 4-year-olds. Typically developing children show opposite developmental trends with a tendency towards greater temporal cortex response with increasing age and maintenance of left-lateralized activation with age. We have now demonstrated lateralized abnormalities of temporal cortex processing of language in autism across two separate samples, including a large sample of young infants who later are diagnosed with autism, suggesting that this pattern may reflect a fundamental early neural developmental pathology in autism. Abbreviations: ADOS: autism diagnostic observation schedule; ASD: autism spectrum disorder. --- paper_title: An information theoretical approach to prefrontal executive function paper_content: The prefrontal cortex subserves executive control – that is, the ability to select actions or thoughts in relation to internal goals. Here, we propose a theory that draws upon concepts from information theory to describe the architecture of executive control in the lateral prefrontal cortex. Supported by evidence from brain imaging in human subjects, the model proposes that action selection is guided by hierarchically ordered control signals, processed in a network of brain regions organized along the anterior–posterior axis of the lateral prefrontal cortex. The theory clarifies how executive control can operate as a unitary function, despite the requirement that information be integrated across multiple distinct, functionally specialized prefrontal regions. --- paper_title: A new research trend in social neuroscience: Towards an interactive‐brain neuroscience paper_content: The ability to flexibly modulate our behaviors in social contexts and to successfully interact with other persons is a fundamental, but pivotal, requirement for human survival.
Although previous social neuroscience research with single individuals has contributed greatly to our understanding of the basic mechanisms underlying social perception and social emotions, much of the dynamic nature of interactions between persons remains beyond the reach of single-brain studies. This has led to a growing argument for a shift to the simultaneous measurement of the brain activity of two or more individuals in realistic social interactions-an approach termed "hyperscanning." Although this approach offers important promise in unlocking the brain's role in truly social situations, there are multiple procedural and theoretical questions that require review and analysis. In this paper we discuss this research trend from four aspects: hyperscanning apparatus, experimental task, quantification method, and theoretical interpretation. We also give four suggestions for future research: (a) electroencephalography and near-infrared spectroscopy are useful tools by which to explore the interactive brain in more ecological settings; (b) games are an appropriate method to simulate daily life interactions; (c) transfer entropy may be an important method by which to quantify directed exchange of information between brains; and (d) more explanation is needed of the results of interbrain synchronization itself. --- paper_title: White matter structure in autism: preliminary evidence from diffusion tensor imaging paper_content: Abstract Background Individuals with autism have severe difficulties in social communication and relationships. Prior studies have suggested that abnormal connections between brain regions important for social cognition may contribute to the social deficits seen in autism. Methods In this study, we used diffusion tensor imaging to investigate white matter structure in seven male children and adolescents with autism and nine age-, gender-, and IQ-matched control subjects. Results Reduced fractional anisotropy (FA) values were observed in white matter adjacent to the ventromedial prefrontal cortices and in the anterior cingulate gyri as well as in the temporoparietal junctions. Additional clusters of reduced FA values were seen adjacent to the superior temporal sulcus bilaterally, in the temporal lobes approaching the amygdala bilaterally, in occipitotemporal tracts, and in the corpus callosum. Conclusions Disruption of white matter tracts between regions implicated in social functioning may contribute to impaired social cognition in autism. --- paper_title: Neural Signatures of Autism Spectrum Disorders: Insights into Brain Network Dynamics paper_content: Neuroimaging investigations of autism spectrum disorders (ASDs) have advanced our understanding of atypical brain function and structure, and have recently converged on a model of altered network-level connectivity. Traditional task-based functional magnetic resonance imaging (MRI) and volume-based structural MRI studies have identified widespread atypicalities in brain regions involved in social behavior and other core ASD-related behavioral deficits. More recent advances in MR-neuroimaging methods allow for quantification of brain connectivity using diffusion tensor imaging, functional connectivity, and graph theoretic methods. These newer techniques have moved the field toward a systems-level understanding of ASD etiology, integrating functional and structural measures across distal brain regions. 
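The resting-state and connectivity abstracts above rely on seed-based correlation and graph-theoretic summaries of region-by-region coupling. The sketch below reproduces only the bare arithmetic of that pipeline on synthetic time series (a correlation matrix, a seed map, and a thresholded degree count); the data, region count, and the 0.3 threshold are illustrative assumptions, not parameters from the studies:

# Minimal sketch of seed-based functional connectivity and a simple graph
# summary on synthetic time series. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 6, 240
shared = rng.standard_normal(n_timepoints)           # common fluctuation
ts = rng.standard_normal((n_regions, n_timepoints))
ts[:3] += 0.8 * shared                               # regions 0-2 co-fluctuate

corr = np.corrcoef(ts)                               # region-by-region matrix

seed = 0                                             # e.g., a hypothetical mPFC seed
print("seed correlations:", np.round(corr[seed], 2))

adjacency = (np.abs(corr) > 0.3) & ~np.eye(n_regions, dtype=bool)
print("node degree at |r| > 0.3:", adjacency.sum(axis=1))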
Neuroimaging findings in ASD as a whole have been mixed and at times contradictory, likely due to the vast genetic and phenotypic heterogeneity characteristic of the disorder. Future longitudinal studies of brain development will be crucial to yield insights into mechanisms of disease etiology in ASD sub-populations. Advances in neuroimaging methods and large-scale collaborations will also allow for an integrated approach linking neuroimaging, genetics, and phenotypic data. --- paper_title: A specific hypoactivation of right temporo-parietal junction/posterior superior temporal sulcus in response to socially awkward situations in autism paper_content: People with autism spectrum disorders (ASD) often have difficulty comprehending social situations in the complex, dynamic contexts encountered in the real world. To study the social brain under conditions which approximate naturalistic situations, we measured brain activity with fMRI while participants watched a full-length episode of the sitcom The Office. Having quantified the degree of social awkwardness at each moment of the episode, as judged by an independent sample of controls, we found that both individuals with ASD and control participants showed reliable activation of several brain regions commonly associated with social perception and cognition (e.g., those comprising the “mentalizing network”) during the more awkward moments. However, individuals with ASD showed less activity than controls in a region near right temporo-parietal junction (RTPJ) extending into the posterior end of the right superior temporal sulcus (RSTS). Further analyses suggested that, despite the free-form nature of the experimental design, this group difference was specific to this RTPJ/RSTS area of the mentalizing network; other regions of interest showed similar activity across groups with respect to both location and magnitude. These findings add support to a body of evidence suggesting that RTPJ/RSTS plays a special role in social processes across modalities and may function atypically in individuals with ASD navigating the social world. --- paper_title: Longitudinal Volumetric Brain Changes in Autism Spectrum Disorder Ages 6–35 Years paper_content: Since the impairments associated with autism spectrum disorder (ASD) tend to persist or worsen from childhood into adulthood, it is of critical importance to examine how the brain develops over this growth epoch. We report initial findings on whole and regional longitudinal brain development in 100 male participants with ASD (226 high-quality magnetic resonance imaging [MRI] scans; mean inter-scan interval 2.7 years) compared to 56 typically developing controls (TDCs) (117 high-quality scans; mean inter-scan interval 2.6 years) from childhood into adulthood, for a total of 156 participants scanned over an 8-year period. This initial analysis includes between one and three high-quality scans per participant that have been processed and segmented to date, with 21% having one scan, 27% with two scans, and 52% with three scans in the ASD sample; corresponding percentages for the TDC sample are 30%, 30%, and 40%. The proportion of participants with multiple scans (79% of ASDs and 68% of TDCs) was high in comparison to that of large longitudinal neuroimaging studies of typical development. 
We provide volumetric growth curves for the entire brain, total gray matter (GM), frontal GM, temporal GM, parietal GM, occipital GM, total cortical white matter (WM), corpus callosum, caudate, thalamus, total cerebellum, and total ventricles. Mean volume of cortical WM was reduced significantly. Mean ventricular volume was increased in the ASD sample relative to the TDCs across the broad age range studied. Decreases in regional mean volumes in the ASD sample most often were due to decreases during late adolescence and adulthood. The growth curve of whole brain volume over time showed increased volumes in young children with autism, and subsequently decreased during adolescence to meet the TDC curve between 10 and 15 years of age. The volume of many structures continued to decline atypically into adulthood in the ASD sample. The data suggest that ASD is a dynamic disorder with complex changes in whole and regional brain volumes that change over time from childhood into adulthood. Autism Res 2015, 8: 82–93. © 2014 International Society for Autism Research, Wiley Periodicals, Inc. --- paper_title: EEG evidence for mirror neuron dysfunction in autism spectrum disorders paper_content: Abstract Autism spectrum disorders (ASD) are largely characterized by deficits in imitation, pragmatic language, theory of mind, and empathy. Previous research has suggested that a dysfunctional mirror neuron system may explain the pathology observed in ASD. Because EEG oscillations in the mu frequency (8–13 Hz) over sensorimotor cortex are thought to reflect mirror neuron activity, one method for testing the integrity of this system is to measure mu responsiveness to actual and observed movement. It has been established that mu power is reduced (mu suppression) in typically developing individuals both when they perform actions and when they observe others performing actions, reflecting an observation/execution system which may play a critical role in the ability to understand and imitate others' behaviors. This study investigated whether individuals with ASD show a dysfunction in this system, given their behavioral impairments in understanding and responding appropriately to others' behaviors. Mu wave suppression was measured in ten high-functioning individuals with ASD and ten age- and gender-matched control subjects while watching videos of (1) a moving hand, (2) a bouncing ball, and (3) visual noise, or (4) moving their own hand. Control subjects showed significant mu suppression to both self and observed hand movement. The ASD group showed significant mu suppression to self-performed hand movements but not to observed hand movements. These results support the hypothesis of a dysfunctional mirror neuron system in high-functioning individuals with ASD. --- paper_title: The Neuroscience of Human Relationships: Attachment and the Developing Social Brain paper_content: As human beings, we cherish our individuality yet we know that we live in constant relationship to others, and that other people play a significant part in regulating our emotional and social behavior. Although this interdependence is a reality of our existence, we are just beginning to understand that we have evolved as social creatures with interwoven brains and biologies. The human brain itself is a social organ and to truly understand being human, we must understand not only how we as whole people exist with others, but how our brains, themselves, exist in relationship to other brains. 
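The EEG study summarized above quantifies mirror-system engagement as mu-band (8-13 Hz) power suppression relative to baseline. The sketch below computes the usual log-ratio index on synthetic signals; the sampling rate, trial length, and simulated rhythms are illustrative assumptions rather than the study's recording parameters:

# Minimal sketch of a mu-suppression index: log ratio of 8-13 Hz power
# during a condition relative to baseline (negative = suppression).
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)               # 30 s per condition
rng = np.random.default_rng(1)

def mu_power(x):
    """Mean power spectral density in the 8-13 Hz mu band (Welch's method)."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    band = (f >= 8) & (f <= 13)
    return pxx[band].mean()

# Baseline: strong 10 Hz rhythm; condition: attenuated rhythm (suppression).
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
condition = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

suppression_index = np.log(mu_power(condition) / mu_power(baseline))
print(f"mu suppression index: {suppression_index:+.2f}")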
The first edition of this book tackled these important questions of interpersonal neurobiology-that the brain is a social organ built through experience-using poignant case examples from the author's years of clinical experience. Brain drawings and elegant explanations of social neuroscience wove together emerging findings from the research literature to bring neuroscience to the stories of our lives. Since the publication of the first edition in 2006, the field of social neuroscience has grown at a mind-numbing pace. Technical advances now provide more windows into our inner neural universe and terms like attachment, empathy, compassion, and mindfulness have begun to appear in the scientific literature. Overall, there has been a deepening appreciation for the essential interdependence of brain and mind. More and more parents, teachers, and therapists are asking how brains develop, grow, connect, learn, and heal. The new edition of this book organizes this cutting-edge, abundant research and presents its compelling insights, reflecting a host of significant developments in social neuroscience. Our understanding of mirror neurons and their significance to human relationships has continued to expand and deepen and is discussed here. Additionally, this edition reflects the gradual shift in focus from individual brain structures to functional neural systems-an important and necessary step forward. A great deal of neural overlap has been discovered in brain activation when we are thinking about others and ourselves. This raises many questions including how we come to know others and whether the notion of an "individual self" is anything more than an evolutionary strategy to support our interconnection. In short, we are just beginning to see the larger implications of all neurological processes-how the architecture of the brain can help us to better understand individuals and our relationships. This book gives readers a deeper appreciation of how and why relationships have the power to reshape our brains throughout our life. --- paper_title: Autism Spectrum Disorders According to DSM-IV-TR and Comparison With DSM-5 Draft Criteria: An Epidemiological Study paper_content: Objective The latest definitions of autism spectrum disorders (ASDs) were specified in DSM-IV-TR in 2000. DSM-5 criteria are planned for 2013. Here, we estimated the prevalence of ASDs and autism according to DSM-IV-TR, clarified confusion concerning diagnostic criteria, and evaluated DSM-5 draft criteria for ASD posted by the American Psychiatry Association (APA) in February 2010. Method This was an epidemiological study of 5,484 eight-year-old children in Finland, 4,422 (81%) of them rated via the Autism Spectrum Screening Questionnaire by parents and/or teachers, and 110 examined by using a structured interview, semi-structured observation, IQ measurement, school-day observation, and patient records. Diagnoses were assigned according to DSM-IV-TR criteria and DSM-5 draft criteria in children with a full-scale IQ (FSIQ) ≥50. Patient records were evaluated in children with an FSIQ <50. Results The prevalence of ASDs was 8.4 in 1,000 and that of autism 4.1 in 1,000 according to DSM-IV-TR. Of the subjects with ASDs and autism, 65% and 61% were high-functioning (FSIQ ≥70), respectively. The prevalence of pervasive developmental disorder not otherwise specified was not estimated because of inconsistency in DSM-IV-TR criteria.
DSM-5 draft criteria were shown to be less sensitive in regard to identification of subjects with ASDs, particularly those with Asperger's syndrome and some high-functioning subjects with autism. Conclusions DSM-IV-TR helps with the definition of ASDs only up to a point. We suggest modifications to five details of DSM-5 draft criteria posted by the APA in February 2010. Completing revision of DSM criteria for ASDs is a challenging task. --- paper_title: Global Prevalence of Autism and Other Pervasive Developmental Disorders paper_content: We provide a systematic review of epidemiological surveys of autistic disorder and pervasive developmental disorders (PDDs) worldwide. A secondary aim was to consider the possible impact of geographic, cultural/ethnic, and socioeconomic factors on prevalence. --- paper_title: Eye tracking in early autism research paper_content: Eye tracking has the potential to characterize autism at a unique intermediate level, with links 'down' to underlying neurocognitive networks, as well as 'up' to everyday function and dysfunction. Because it is non-invasive and does not require advanced motor responses or language, eye tracking is particularly important for the study of young children and infants. In this article, we review eye tracking studies of young children with autism spectrum disorder (ASD) and children at risk for ASD. Reduced looking time at people and faces, as well as problems with disengagement of attention, appear to be among the earliest signs of ASD, emerging during the first year of life. In toddlers with ASD, altered looking patterns across facial parts such as the eyes and mouth have been found, together with limited orienting to biological motion. We provide a detailed discussion of these and other key findings and highlight methodological opportunities and challenges for eye tracking research of young children with ASD. We conclude that eye tracking can reveal important features of the complex picture of autism. --- paper_title: Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders paper_content: To examine mirror neuron abnormalities in autism, high-functioning children with autism and matched controls underwent fMRI while imitating and observing emotional expressions. Although both groups performed the tasks equally well, children with autism showed no mirror neuron activity in the inferior frontal gyrus (pars opercularis). Notably, activity in this area was inversely related to symptom severity in the social domain, suggesting that a dysfunctional 'mirror neuron system' may underlie the social deficits observed in autism. --- paper_title: Compared to What? Early Brain Overgrowth in Autism and the Perils of Population Norms paper_content: Background Early brain overgrowth (EBO) in autism spectrum disorder (ASD) is among the best replicated biological associations in psychiatry. Most positive reports have compared head circumference (HC) in ASD (an excellent proxy for early brain size) with well-known reference norms. We sought to reappraise evidence for the EBO hypothesis given 1) the recent proliferation of longitudinal HC studies in ASD, and 2) emerging reports that several of the reference norms used to define EBO in ASD may be biased toward detecting HC overgrowth in contemporary samples of healthy children. Methods Systematic review of all published HC studies in children with ASD.
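The eye-tracking review above summarizes findings in terms of where and how long young children look (for example, at the eyes versus the mouth). A minimal dwell-time calculation over rectangular areas of interest is sketched below; the fixation coordinates, durations, and AOI boxes are made-up examples, not data from any cited study:

# Minimal sketch of an area-of-interest (AOI) dwell-time proportion.
# Fixations and AOI rectangles are made-up example values.

# Each fixation: (x, y, duration in ms) in screen pixels.
fixations = [(410, 305, 220), (455, 312, 180), (430, 540, 260),
             (900, 120, 150), (445, 310, 300), (425, 548, 200)]

# AOIs as (x0, y0, x1, y1) rectangles.
aois = {"eyes": (380, 280, 500, 340), "mouth": (390, 510, 490, 580)}

def dwell_proportions(fixations, aois):
    """Share of total fixation time falling inside each AOI."""
    total = sum(d for _, _, d in fixations)
    out = {}
    for name, (x0, y0, x1, y1) in aois.items():
        inside = sum(d for x, y, d in fixations
                     if x0 <= x <= x1 and y0 <= y <= y1)
        out[name] = inside / total
    return out

for aoi, share in dwell_proportions(fixations, aois).items():
    print(f"{aoi}: {share:.1%} of looking time")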
Comparison of 330 longitudinally gathered HC measures between birth and 18 months from male children with autism ( n = 35) and typically developing control subjects ( n = 22). Results In systematic review, comparisons with locally recruited control subjects were significantly less likely to identify EBO in ASD than norm-based studies ( p n ~ 75,000). Controlling for known HC norm biases leaves inconsistent support for a subtle, later emerging and subgroup specific pattern of EBO in clinically ascertained ASD versus community control subjects. Conclusions The best-replicated aspects of EBO reflect generalizable HC norm biases rather than disease-specific biomarkers. The potential HC norm biases we detail are not specific to ASD research but apply throughout clinical and academic medicine. --- paper_title: Towards a neuroanatomy of autism: A systematic review and meta-analysis of structural magnetic resonance imaging studies paper_content: Abstract Background Structural brain abnormalities have been described in autism but studies are often small and contradictory. We aimed to identify which brain regions can reliably be regarded as different in autism compared to healthy controls. Method A systematic search was conducted for magnetic resonance imaging studies of regional brain size in autism. Data were extracted and combined using random effects meta-analysis. The modifying effects of age and IQ were investigated using meta-regression. Results The total brain, cerebral hemispheres, cerebellum and caudate nucleus were increased in volume, whereas the corpus callosum area was reduced. There was evidence for a modifying effect of age and IQ on the cerebellar vermal lobules VI–VII and for age on the amygdala. Conclusions Autism may result from abnormalities in specific brain regions and a global lack of integration due to brain enlargement. Inconsistencies in the literature partly relate to differences in the age and IQ of study populations. Some regions may show abnormal growth trajectories. --- paper_title: EEG source imaging paper_content: Abstract Objective: Electroencephalography (EEG) is an important tool for studying the temporal dynamics of the human brain's large-scale neuronal circuits. However, most EEG applications fail to capitalize on all of the data's available information, particularly that concerning the location of active sources in the brain. Localizing the sources of a given scalp measurement is only achieved by solving the so-called inverse problem. By introducing reasonable a priori constraints, the inverse problem can be solved and the most probable sources in the brain at every moment in time can be accurately localized. Methods and Results: Here, we review the different EEG source localization procedures applied during the last two decades. Additionally, we detail the importance of those procedures preceding and following source estimation that are intimately linked to a successful, reliable result. We discuss (1) the number and positioning of electrodes, (2) the varieties of inverse solution models and algorithms, (3) the integration of EEG source estimations with MRI data, (4) the integration of time and frequency in source imaging, and (5) the statistical analysis of inverse solution results. 
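Among the inverse-solution families surveyed in the EEG source-imaging review above, a Tikhonov-regularized minimum-norm estimate is one of the simplest linear solutions. The sketch below applies that estimator to a random stand-in lead field; a real analysis would use a head-model-derived lead field, principled regularization, and noise whitening:

# Minimal sketch of a regularized minimum-norm inverse estimate,
# J_hat = L^T (L L^T + lambda*I)^(-1) m, on random stand-in data.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 500
L = rng.standard_normal((n_sensors, n_sources))   # stand-in lead field

# Simulate a single active source and its (noisy) scalp measurement.
j_true = np.zeros(n_sources)
j_true[123] = 1.0
m = L @ j_true + 0.05 * rng.standard_normal(n_sensors)

def minimum_norm(L, m, lam=1.0):
    """Regularized minimum-norm source estimate."""
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, m)

j_hat = minimum_norm(L, m)
# The peak is typically near the simulated source, with the spatial blur
# characteristic of minimum-norm solutions.
print("strongest estimated source index:", int(np.argmax(np.abs(j_hat))))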
Conclusions and Significance: We show that modern EEG source imaging simultaneously details the temporal and spatial dimensions of brain activity, making it an important and affordable tool to study the properties of cerebral, neural networks in cognitive and clinical neurosciences. --- paper_title: Head circumference and brain size in autism spectrum disorder: A systematic review and meta-analysis paper_content: Macrocephaly and brain overgrowth have been associated with autism spectrum disorder. We performed a systematic review and meta-analysis to provide an overall estimate of effect size and statistical significance for both head circumference and total brain volume in autism. Our literature search strategy identified 261 and 391 records, respectively; 27 studies defining percentages of macrocephalic patients and 44 structural brain imaging studies providing total brain volumes for patients and controls were included in our meta-analyses. Head circumference was significantly larger in autistic compared to control individuals, with 822/5225 (15.7%) autistic individuals displaying macrocephaly. Structural brain imaging studies measuring brain volume estimated effect size. The effect size is higher in low functioning autistics compared to high functioning and ASD individuals. Brain overgrowth was recorded in 142/1558 (9.1%) autistic patients. Finally, we found a significant interaction between age and total brain volume, resulting in larger head circumference and brain size during early childhood. Our results provide conclusive effect sizes and prevalence rates for macrocephaly and brain overgrowth in autism, confirm the variation of abnormal brain growth with age, and support the inclusion of this endophenotype in multi-biomarker diagnostic panels for clinical use. --- paper_title: Why the frontal cortex in autism might be talking only to itself: local over-connectivity but long-distance disconnection paper_content: Although it has long been thought that frontal lobe abnormality must play an important part in generating the severe impairment in higher-order social, emotional and cognitive functions in autism, only recently have studies identified developmentally early frontal lobe defects. At the microscopic level, neuroinflammatory reactions involving glial activation, migration defects and excess cerebral neurogenesis and/or defective apoptosis might generate frontal neural pathology early in development. It is hypothesized that these abnormal processes cause malformation and thus malfunction of frontal minicolumn microcircuitry. It is suggested that connectivity within frontal lobe is excessive, disorganized and inadequately selective, whereas connectivity between frontal cortex and other systems is poorly synchronized, weakly responsive and information impoverished. Increased local but reduced long-distance cortical–cortical reciprocal activity and coupling would impair the fundamental frontal function of integrating information from widespread and diverse systems and providing complex context-rich feedback, guidance and control to lower-level systems. --- paper_title: Autism and the Social Brain: The First-Year Puzzle paper_content: The atypical features of social perception and cognition observed in individuals with a diagnosis of autism have been explained in two different ways. First, domain-specific accounts are based on the assumption that these end-state symptoms result from specific impairments within component structures of the social brain network. 
Second, domain-general accounts hypothesize that rather than being localized, atypical brain structure and function are widespread, or hypothesize that the apparent social brain differences are the consequence of adaptations to earlier occurring widespread changes in brain function. Critical evidence for resolving this basic issue comes from prospective longitudinal studies of infants at risk for later diagnosis. We highlight selected studies from the newly emerging literature on infants at familial risk for autism to shed light on this issue. Despite multiple reports of possible alterations in brain function in the first year of life, overt behavioral symptoms do not emerge until the second year. Our review reveals only mixed support, within this very early period, for localized deficits in social brain network systems and instead favors the view that atypical development involving perceptual, attentional, motor, and social systems precede the emerging autism phenotype. --- paper_title: fNIRS in the developmental sciences paper_content: With the introduction of functional near-infrared spectroscopy (fNIRS) into the experimental setting, developmental scientists have, for the first time, the capacity to investigate the functional activation of the infant brain in awake, engaged participants. The advantages of fNIRS clearly outweigh the limitations, and a description of how this technology is implemented in infant populations is provided. Most fNIRS research falls into one of three content domains: object processing, processing of biologically and socially relevant information, and language development. Within these domains, there are ongoing debates about the origins and development of human knowledge, making early neuroimaging particularly advantageous. The use of fNIRS has allowed investigators to begin to identify the localization of early object, social, and linguistic knowledge in the immature brain and the ways in which this changes with time and experience. In addition, there is a small but growing body of research that provides insight into the neural mechanisms that support and facilitate learning during the first year of life. At the same time, as with any emerging field, there are limitations to the conclusions that can be drawn on the basis of current findings. We offer suggestions as to how to optimize the use of this technology to answer questions of theoretical and practical importance to developmental scientists. WIREs Cogn Sci 2015, 6:263–283. doi: 10.1002/wcs.1343. --- paper_title: The superior temporal sulcus performs a common function for social and speech perception: Implications for the emergence of autism paper_content: Abstract Within the cognitive neuroscience literature, discussion of the functional role of the superior temporal sulcus (STS) has traditionally been divided into two domains; one focuses on its activity during language processing while the other emphasizes its role in biological motion and social attention, such as eye gaze processing. I will argue that a common process underlying both of these functional domains is performed by the STS, namely analyzing changing sequences of input, either in the auditory or visual domain, and interpreting the communicative significance of those inputs.
From a developmental perspective, the fact that these two domains share an anatomical substrate suggests the acquisition of social and speech perception may be linked. In addition, I will argue that because of the STS’ role in interpreting social and speech input, impairments in STS function may underlie many of the social and language abnormalities seen in autism. --- paper_title: A systematic review and meta-analysis of the fMRI investigation of autism spectrum disorders paper_content: Abstract Recent years have seen a rapid increase in the investigation of autism spectrum disorders (ASD) through the use of functional magnetic resonance imaging (fMRI). We carried out a systematic review and ALE meta-analysis of fMRI studies of ASD. A disturbance to the function of social brain regions is among the most well replicated finding. Differences in social brain activation may relate to a lack of preference for social stimuli as opposed to a primary dysfunction of these regions. Increasing evidence points towards a lack of effective integration of distributed functional brain regions and disruptions in the subtle modulation of brain function in relation to changing task demands in ASD. Limitations of the literature to date include the use of small sample sizes and the restriction of investigation to primarily high functioning males with autism. --- paper_title: Involvement of the anterior thalamic radiation in boys with high functioning autism spectrum disorders: A Diffusion Tensor Imaging study paper_content: Abstract Background: Autism has been hypothesized to reflect neuronal disconnection. Several recent reports implicate the key thalamic relay nuclei and cortico-thalamic connectivity in the pathophysiology of autism. Accordingly, we aimed to focus on evaluating the integrity of the thalamic radiation and sought to replicate prior white matter findings in Korean boys with high-functioning autism spectrum disorders (ASD) using Diffusion Tensor Imaging (DTI). Methods: We compared fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD) and radial diffusivity (RD) in 17 boys with ASD and 17 typically developing controls in the anterior thalamic radiation (ATR), superior thalamic radiation (STR), posterior thalamic radiation (PTR), corpus callosum (CC), uncinate fasciculus (UF) and inferior longitudinal fasciculus (ILF). Results: The two groups were group-matched on age, IQ, handedness and head circumference. In whole-brain voxel-wise analyses, FA was significantly reduced and MD was significantly increased in the right ATR, CC, and left UF in subjects with ASD (p --- paper_title: Developmental pathways to autism: A review of prospective studies of infants at risk paper_content: Autism Spectrum Disorders (ASDs) are neurodevelopmental disorders characterized by impairments in social interaction and communication, and the presence of restrictive and repetitive behaviors. Symptoms of ASD likely emerge from a complex interaction between pre-existing neurodevelopmental vulnerabilities and the child's environment, modified by compensatory skills and protective factors. Prospective studies of infants at high familial risk for ASD (who have an older sibling with a diagnosis) are beginning to characterize these developmental pathways to the emergence of clinical symptoms. Here, we review the range of behavioral and neurocognitive markers for later ASD that have been identified in high-risk infants in the first years of life. 
We discuss theoretical implications of emerging patterns, and identify key directions for future work, including potential resolutions to several methodological challenges for the field. Mapping how ASD unfolds from birth is critical to our understanding of the developmental mechanisms underlying this disorder. A more nuanced understanding of developmental pathways to ASD will help us not only to identify children who need early intervention, but also to improve the range of interventions available to them. --- paper_title: MR Diffusion Tensor Spectroscopy and Imaging 259 paper_content: This paper describes a new NMR imaging modality--MR diffusion tensor imaging. It consists of estimating an effective diffusion tensor, Deff, within a voxel, and then displaying useful quantities derived from it. We show how the phenomenon of anisotropic diffusion of water (or metabolites) in anisotropic tissues, measured noninvasively by these NMR methods, is exploited to determine fiber tract orientation and mean particle displacements. Once Deff is estimated from a series of NMR pulsed-gradient, spin-echo experiments, a tissue's three orthotropic axes can be determined. They coincide with the eigenvectors of Deff, while the effective diffusivities along these orthotropic directions are the eigenvalues of Deff. Diffusion ellipsoids, constructed in each voxel from Deff, depict both these orthotropic axes and the mean diffusion distances in these directions. Moreover, the three scalar invariants of Deff, which are independent of the tissue's orientation in the laboratory frame of reference, reveal useful information about molecular mobility reflective of local microstructure and anatomy. Inherently tensors (like Deff) describing transport processes in anisotropic media contain new information within a macroscopic voxel that scalars (such as the apparent diffusivity, proton density, T1, and T2) do not. --- paper_title: Atypical development of white matter microstructure in adolescents with autism spectrum disorders paper_content: Abstract Diffusion tensor imaging (DTI) studies in adolescents with autism spectrum disorders (ASD) indicate aberrant neurodevelopment of frontal white matter (WM), potentially underlying abnormal social cognition and communication in ASD. Here, we further use tract-based spatial statistics (TBSS) to examine the developmental change of WM skeleton (i.e., the most compact whole-brain WM) during adolescence in ASD. This whole-brain DTI used TBSS measures fractional anisotropy (FA) and longitudinal and radial diffusivities in fifty adolescents, 25 ASD and 25 controls. Results show that adolescents with ASD versus controls had significantly reduced FA in the right posterior limb of internal capsule (increased radial diffusivity distally and reduced longitudinal diffusivity centrally). Adolescents with ASD versus controls (covarying for age and IQ) had significantly greater FA in the frontal lobe (reduced radial diffusivity), right cingulate gyrus (reduced radial diffusivity), bilateral insula (reduced radial diffusivity and increased longitudinal diffusivity), right superior temporal gyrus (reduced radial diffusivity), and bilateral middle cerebellar peduncle (reduced radial diffusivity). Notably, a significant interaction with age by group was found in the right paracentral lobule and bilateral superior frontal gyrus as indicated by an age-related FA gain in the controls whilst an age-related FA loss in the ASD. 
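The diffusion-tensor abstract above defines the voxel-wise tensor whose eigenvalues yield the scalar invariants (FA, MD, axial and radial diffusivity) reported throughout the neighboring DTI studies. The sketch below computes those quantities for a single illustrative tensor; the numerical values are placeholders in units of 10^-3 mm^2/s:

# Minimal sketch of DTI scalar invariants from a voxel's diffusion tensor.
# The example tensor is an illustrative placeholder.
import numpy as np

def dti_scalars(d_eff):
    """Return FA, MD, AD, RD for a symmetric 3x3 diffusion tensor."""
    lam = np.sort(np.linalg.eigvalsh(d_eff))[::-1]      # l1 >= l2 >= l3
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    ad = lam[0]                    # axial (longitudinal) diffusivity
    rd = lam[1:].mean()            # radial diffusivity
    return fa, md, ad, rd

# A voxel with strongly anisotropic diffusion (e.g., coherent white matter).
d_eff = np.array([[1.7, 0.0, 0.0],
                  [0.0, 0.3, 0.0],
                  [0.0, 0.0, 0.3]])
fa, md, ad, rd = dti_scalars(d_eff)
print(f"FA={fa:.2f}  MD={md:.2f}  AD={ad:.2f}  RD={rd:.2f}")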
To our knowledge, this is the first study to use TBSS to examine WM in individuals with ASD. Our findings indicate that the frontal lobe exhibits abnormal WM microstructure as well as an aberrant neurodevelopment during adolescence in ASD, which support the frontal disconnectivity theory of autism. --- paper_title: The Autism Diagnostic Observation Schedule-Generic: A Standard Measure of Social and Communication Deficits Associated with the Spectrum of Autism paper_content: The Autism Diagnostic Observation Schedule-Generic (ADOS-G) is a semistructured, standardized assessment of social interaction, communication, play, and imaginative use of materials for individuals suspected of having autism spectrum disorders. The observational schedule consists of four 30-minute modules, each designed to be administered to different individuals according to their level of expressive language. Psychometric data are presented for 223 children and adults with Autistic Disorder (autism), Pervasive Developmental Disorder Not Otherwise Specified (PDDNOS) or nonspectrum diagnoses. Within each module, diagnostic groups were equivalent on expressive language level. Results indicate substantial interrater and test-retest reliability for individual items, excellent interrater reliability within domains and excellent internal consistency. Comparisons of means indicated consistent differentiation of autism and PDDNOS from nonspectrum individuals, with some, but less consistent, differentiation of autism from PDDNOS. A priori operationalization of DSM-IV/ICD-10 criteria, factor analyses, and ROC curves were used to generate diagnostic algorithms with thresholds set for autism and broader autism spectrum/PDD. Algorithm sensitivities and specificities for autism and PDDNOS relative to nonspectrum disorders were excellent, with moderate differentiation of autism from PDDNOS. --- paper_title: Autism: A Very Short Introduction paper_content: 1. The autism spectrum 2. Causes of autism 3. Explaining the social impairment 4. Explaining the communication impairment 5. Explaining islets of ability 6. Explaining problems in everyday living 7. Can we explain everything at once? --- paper_title: The use of near-infrared spectroscopy in the study of typical and atypical development paper_content: The use of functional near infrared spectroscopy (fNIRS) has grown exponentially over the past decade, particularly among investigators interested in early brain development. The use of this neuroimaging technique has begun to shed light on the development of a variety of sensory, perceptual, linguistic, and social-cognitive functions. Rather than cast a wide net, in this paper we first discuss typical development, focusing on joint attention, face processing, language, and sensorimotor development. We then turn our attention to infants and children whose development has been compromised or who are at risk for atypical development. We conclude our review by critiquing some of the methodological issues that have plagued the extant literature as well as offer suggestions for future research. --- paper_title: The phenotype and neural correlates of language in autism: An integrative review paper_content: Although impaired communication is one of the defining criteria in autism, linguistic functioning is highly variable among people with this disorder.
Accumulating evidence shows that language impairments in autism are more extensive than commonly assumed and described by formal diagnostic criteria and are apparent at various levels. Phenotypically, most people with autism have semantic, syntactic and pragmatic deficits, a smaller number are known to have phonological deficits. Neurophysiologically, abnormal processing of low-level linguistic information points to perceptual difficulties. Also, abnormal high-level linguistic processing of the frontal and temporal language association cortices indicates more self-reliant and less connected neural subsystems. Early sensory impairments and subsequent atypical neural connectivity are likely to play a part in abnormal language acquisition in autism. This paper aims to review the available data on the phenotype of language in autism as well as a number of structural, electrophysiological and functional brain-imaging studies to provide a more integrated view of the linguistic phenotype and its underlying neural deficits, and to provide new directions for research and therapeutic and experimental applications. --- paper_title: Subgrouping the Autism “Spectrum": Reflections on DSM-5 paper_content: The biology of autism cannot yet be used diagnostically, and so—like most psychiatric conditions—autism is defined by behavior [Rett syndrome (Rett's disorder) is diagnosed by incorporating biology, but it has been moved out of the “Autism Spectrum Disorder" category in DSM-5]. The two international psychiatric classification systems (the Diagnostic and Statistical Manual of Mental Disorders [DSM] and the International Classification of Diseases [ICD]) remain useful for making clinical diagnoses, but each time these classification systems are revised, the new definitions inevitably subtly change the nature of how the conditions are construed. While acknowledging concerns about issues such as diagnostic inflation [1] and financial conflicts of interest [2], DSM-5 is now “set in stone" and will be published in May 2013. Although this manual is primarily designed for creating a common language for clinical practice, it is also often used in research settings to define the conditions to be studied. Here we reflect on what the revision may mean for research, and for understanding the nature of autism. ::: ::: New in DSM-5 is the explicit recognition of the “spectrum" nature of autism, subsuming and replacing the DSM-IV Pervasive Developmental Disorder (PDD) categorical subgroups of “autistic disorder," “Asperger's disorder," “pervasive developmental disorder not otherwise specified," and “childhood disintegrative disorder" into a single umbrella term “Autism Spectrum Disorder" (ASD). [Here and throughout we use the term “ASD" because this is what is used in DSM-5. However, in our publications over many years we have opted for the more neutral term “ASC" (Autism Spectrum Conditions) to signal that this is a biomedical diagnosis in which the individual needs support, and which leaves room for areas of strength as well as difficulty, without the somewhat negative overtones of the term “disorder," which implies something is “broken."] DSM-5 characterizes ASD in two behavioral domains (difficulties in social communication and social interaction, and unusually restricted, repetitive behaviors and interests) and is accompanied by a severity scale to capture the “spectrum" nature of ASD. ::: ::: Also new in DSM-5, language development/level is treated as separate from ASD. 
This means an individual can have ASD with or without a language disorder. Finally, DSM-5 proposes a more inclusive age-of-onset criterion, recognizing that although symptoms should present in early childhood, they may not fully manifest until social demands exceed the capacity of the individual to cope with them. The major rationale behind these changes is to improve reliability [3]. The DSM-5 field trial in North America has shown that ASD diagnosis has reasonable test-retest reliability, with an intraclass Kappa (a statistical measure of reliability) of 0.69 (95% CI 0.58–0.79) [4]. ::: ::: There have been concerns that the DSM-5 criteria may be more stringent than DSM-IV, such that some individuals who qualified for PDD will not meet the new ASD criteria. A series of studies testing the initial [5] and revised draft ASD criteria [6]–[12] showed increased specificity but decreased sensitivity of the DSM-5 draft compared to DSM-IV, and suggested relaxation of the threshold (e.g., fewer numbers of required symptom subdomains) to achieve reasonable sensitivity. However, most of these studies suffer from the weakness of using retrospective datasets and tools developed earlier that may not satisfactorily capture symptoms now included in DSM-5 [13]. One prospective study tested both DSM-IV and DSM-5 criteria against the gold standard of “best-estimate clinical diagnoses" and agreed with the “too-stringent" conclusion in a clinical sample [14]. However, another substantially large retrospective study using data from three existing datasets found few differences between the two systems in sensitivity [15]. ::: ::: In brief, these studies all show that DSM-5 provides better specificity (so reducing false-positive diagnoses), but at the expense of potentially reduced sensitivity, especially for older children, adolescents and adults, individuals without intellectual disability, and individuals who previously met criteria for diagnoses of DSM-IV “Asperger's disorder" or “pervasive developmental disorder not otherwise specified." It remains to be seen in real-life settings how diagnostic practice, service delivery, and prevalence estimates will be affected by applying DSM-5 ASD criteria. In particular, one major nosological issue is to what extent individuals fitting DSM-IV PDD but not DSM-5 ASD diagnoses will end up falling into the newly created diagnosis of “Social (Pragmatic) Communication Disorder" [12],[16]–[18]. Clearly more research needs to be done to provide a thorough and fair evaluation of this revision. ::: ::: Highlighting the dimensional nature of the two cardinal behavioral domains of ASD, as well as the improved organization of symptom descriptions, are excellent features of DSM-5. A unitary label of “ASD" accompanied by individualized assessment of needs for support will likely be useful in clinical settings, especially to guarantee the required levels of support for all individuals “on the spectrum" who will benefit from educational, occupational, social, mental health, and medical interventions (even if they are etiologically, developmentally, and clinically heterogeneous). However, this approach is not useful for research in general, given the known massive heterogeneity within such an omnibus label. Within autism there is a huge variability in terms of behavior (symptom severity and combination), cognition (the range of deficits and assets), and biological mechanisms. 
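The field-trial result quoted above summarizes diagnostic agreement with a kappa statistic. As a simplified, hypothetical illustration of how such an agreement coefficient is obtained, the sketch below computes Cohen's kappa for two binary (ASD versus non-ASD) classifications from a made-up 2x2 agreement table; the reported DSM-5 figure is an intraclass kappa, so this is an analogy rather than a reproduction of that analysis.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa for a square inter-rater confusion matrix (rows: rating A, cols: rating B)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical agreement table for ASD vs. non-ASD classifications at two time points.
table = [[40,  5],
         [ 7, 48]]
print(f"kappa = {cohens_kappa(table):.2f}")
```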
Acknowledging heterogeneity has led to the idea that there are many “autisms," with partially distinct etiologies, nested within the umbrella term of “ASD" [19]. Therefore, two critical issues need to be addressed: a clarification of the meaning(s) of the term “spectrum"; and the need for subgrouping. --- paper_title: Structural MRI in Autism Spectrum Disorder paper_content: Magnetic resonance (MR) examination provides a powerful tool for investigating brain structural changes in children with autism spectrum disorder (ASD). We review recent advances in the understanding of structural MR correlates of ASD. We summarize findings from studies based on voxel-based morphometry, surface-based morphometry, tensor-based morphometry, and diffusion-tensor imaging. Finally, we discuss diagnostic models of ASD based on MR-derived features. --- paper_title: Comparison between diagnostic instruments for identifying high-functioning children with autism paper_content: Two instruments for identifying autism in children and adolescents with intellectual abilities in the normal range were compared. Diagnostic tools consisted of the Autism Behavior Checklist (ABC) and the Autism Diagnostic Interview (ADI). The sample was composed of 18 children who were all diagnosed as having either infantile autism or infantile autism, residual state based on DSM-III criteria by a clinical team using observations, parental interviews, and interactions with the children. Only 4 of the children met diagnostic cutoffs for autism on the current ABC but all met criteria for diagnosis on the ABC using parental recall of the child's behavior at 3–5 years of age. The ADI had somewhat greater specificity in that 3 children did not meet criteria for diagnosis although 2 of these children also received ABC scores based on parental recollection that were in the borderline range. --- paper_title: Review of neuroimaging in autism spectrum disorders: what have we learned and where we go from here paper_content: Autism spectrum disorder (ASD) refers to a syndrome of social communication deficits and repetitive behaviors or restrictive interests. It remains a behaviorally defined syndrome with no reliable biological markers. The goal of this review is to summarize the available neuroimaging data and examine their implication for our understanding of the neurobiology of ASD.Although there is variability in the literature on structural magnetic resonance literature (MRI), there is evidence of volume abnormalities in both grey and white matter, with a suggestion of some region-specific differences. Early brain overgrowth is probably the most replicated finding in a subgroup of people with ASD, and new techniques, such as cortical-thickness measurements and surface morphometry have begun to elucidate in more detail the patterns of abnormalities as they evolve with age, and are implicating specific neuroanatomical or neurodevelopmental processes. Functional MRI and diffusion tensor imaging techniques suggest that such volume abnormalities are associated with atypical functional and structural connectivity in the brain, and researchers have begun to use magnetic resonance spectroscopy (MRS) techniques to explore the neurochemical substrate of such abnormalities. The data from multiple imaging methods suggests that ASD is associated with an atypically connected brain. 
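The structural-MRI review above mentions diagnostic models of ASD built from MR-derived features. A minimal sketch of that general workflow, assuming scikit-learn and a simulated feature matrix (no real participant data), is shown below; the classifier choice, feature count, and sample size are arbitrary placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical data: 60 participants x 20 MR-derived features (e.g., regional volumes, FA values).
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)          # 0 = control, 1 = ASD (simulated labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With random labels the accuracy hovers near chance, which is the point of the cross-validated design: any claimed diagnostic signal must exceed this baseline.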
We now need to further clarify such atypicalities, and start interpreting them in the context of what we already know about typical neurodevelopmental processes including migration and organization of the cortex. Such an approach will allow us to relate imaging findings not only to behavior, but also to genes and their expression, which may be related to such processes, and to further our understanding of the nature of neurobiologic abnormalities in ASD. --- paper_title: Network inefficiencies in autism spectrum disorder at 24 months paper_content: Autism spectrum disorder (ASD) is a developmental disorder defined by behavioral symptoms that emerge during the first years of life. Associated with these symptoms are differences in the structure of a wide array of brain regions, and in the connectivity between these regions. However, the use of cohorts with large age variability and participants past the generally recognized age of onset of the defining behaviors means that many of the reported abnormalities may be a result of cascade effects of developmentally earlier deviations. This study assessed differences in connectivity in ASD at the age at which the defining behaviors first become clear. There were 113 24-month-old participants at high risk for ASD, 31 of whom were classified as ASD, and 23 typically developing 24-month-old participants at low risk for ASD. Utilizing diffusion data to obtain measures of the length and strength of connections between anatomical regions, we performed an analysis of network efficiency. Our results showed significantly decreased local and global efficiency over temporal, parietal and occipital lobes in high-risk infants classified as ASD, relative to both low- and high-risk infants not classified as ASD. The frontal lobes showed only a reduction in global efficiency in Broca's area. In addition, these same regions showed an inverse relation between efficiency and symptom severity across the high-risk infants. The results suggest delay or deficits in infants with ASD in the optimization of both local and global aspects of network structure in regions involved in processing auditory and visual stimuli, language and nonlinguistic social stimuli. --- paper_title: Research Review: Constraining heterogeneity: the social brain and its development in autism spectrum disorder paper_content: The expression of autism spectrum disorder (ASD) is highly heterogeneous, owing to the complex interactions between genes, the brain, and behavior throughout development. Here we present a model of ASD that implicates an early and initial failure to develop the specialized functions of one or more of the set of neuroanatomical structures involved in social information processing (i.e., the ‘social brain’). From this early and primary disruption, abnormal brain development is canalized because the individual with an ASD must develop in a highly social world without the specialized neural systems that would ordinarily allow him or her to partake in the fabric of social life, which is woven from the thread of opportunities for social reciprocity and the tools of social engagement. This brain canalization gives rise to other characteristic behavioral deficits in ASD including deficits in communication, restricted interests, and repetitive behaviors. 
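The network-inefficiency study above characterizes connectivity with local and global efficiency computed over anatomical regions. A minimal sketch, assuming networkx and a simulated, thresholded connectivity matrix, is given below; the region count and threshold are placeholders.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_regions = 20
# Hypothetical symmetric connectivity matrix (e.g., streamline counts or correlation strengths).
W = rng.random((n_regions, n_regions))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

# Binarize at a chosen threshold, then compute efficiency on the resulting undirected graph.
A = (W > 0.6).astype(int)
G = nx.from_numpy_array(A)
print(f"global efficiency: {nx.global_efficiency(G):.3f}")
print(f"local efficiency:  {nx.local_efficiency(G):.3f}")
```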
We propose that focused efforts to explore the brain mechanisms underlying the core, pathognomic deficits in the development of mechanisms for social engagement in ASD will greatly elucidate our understanding and treatment of this complex, devastating family of neurodevelopmental disorders. In particular, developmental studies (i.e., longitudinal studies of young children with and without ASD, as well as infants at increased risk for being identified with ASD) of the neural circuitry supporting key aspects of social information processing are likely to provide important insights into the underlying components of the full-syndrome of ASD. These studies could also contribute to the identification of developmental brain endophenotypes to facilitate genetic studies. The potential for this kind of approach is illustrated via examples of functional neuroimaging research from our own laboratory implicating the posterior superior temporal sulcus (STS) as a key player in the set of neural structures giving rise to ASD. Keywords: Social perception, social cognition, autism, functional neuroimaging, social brain. The considerable heterogeneity in the expression and severity of the core and associated symptoms is a challenge that has hindered progress towards understanding autism spectrum disorder (ASD). To illustrate, within autistic disorder, variability in the social domain ranges from a near absence of interest in interacting with others to more subtle difficulties managing complex social interactions that require an understanding of other people’s goals and intentions and other cues of social context. Similarly, stereotyped and repetitive behaviors range from simple motor stereotypies and/or a preference for sameness to much more complex and elaborate rituals, accompanied by emotional dysregulation or ‘meltdowns’ when these rituals are interrupted. Some individuals with ASD lack basic speech abilities, while others can have language deficits that are mild and limited to language pragmatics. While a majority of individuals with ASD exhibit some level of intellectual impairment, intelligence quotients vary from the severe and profoundly impaired range to well above average. --- paper_title: A systematic review of the diagnostic stability of Autism Spectrum Disorder paper_content: Abstract There is debate in the current literature regarding the permanence of an Autism Spectrum Disorder (ASD) diagnosis. We undertook a systematic review of the diagnostic stability of ASD to summarise current evidence. A comprehensive search strategy was used to identify studies. Participants were children with ASD. Risk of bias was assessed by examining the sample selected, recruitment method, completeness of follow up, timing of diagnosis and blinding. Twenty three studies assessed diagnostic stability with a total of 1466 participants. Fifty three to100% of children still had a diagnosis of Autistic Disorder (AD) and 14–100% of children still had a diagnosis of another form of ASD at follow up. There is some evidence that Autistic Disorder is a reasonably stable diagnosis; however a significant minority of children will no longer meet diagnostic criteria after a period of follow up, particularly those diagnosed in the preschool years with cognitive impairment. Other Autism Spectrum Disorders have very variable stability between studies and clinicians when using this diagnosis need inform parents of its instability. This study supports the stricter diagnostic criteria in DSM-V. 
There is a need for long term, large population cohort studies measuring diagnostic stability. --- paper_title: Deviant Functional Magnetic Resonance Imaging Patterns of Brain Activity to Speech in 2–3-Year-Old Children with Autism Spectrum Disorder paper_content: Background: A failure to develop normal language is one of the most common first signs that a toddler might be at risk for autism. Currently the neural bases underlying this failure to develop language are unknown. Methods: In this study, functional magnetic resonance imaging (fMRI) was used to identify the brain regions involved in speech perception in 12 2–3-year-old children with autism spectrum disorder (ASD) during natural sleep. We also recorded fMRI data from two typically developing control groups: a mental age-matched (MA) (n = 11) and a chronological age-matched (CA) (n = 12) group. During fMRI data acquisition, forward and backward speech stimuli were presented with intervening periods of no sound presentation. Results: Direct statistical comparison between groups revealed significant differences in regions recruited to process speech. In comparison with their MA-matched control subjects, the ASD group showed reduced activity in an extended network of brain regions, which are recruited in typical early language acquisition. In comparison with their CA-matched control subjects, ASD participants showed greater activation primarily within right and medial frontal regions. Laterality analyses revealed a trend toward greater recruitment of right hemisphere regions in the ASD group and left hemisphere regions in the CA group during the forward speech condition. Furthermore, correlation analyses revealed a significant positive relationship between right hemisphere frontal and temporal activity to forward speech and receptive language skill. Conclusions: These findings suggest that at 2–3 years, children with ASD might be on a deviant developmental trajectory characterized by a greater recruitment of right hemisphere regions during speech perception. --- paper_title: Online monitoring of the social presence effects in a two-person-like driving video game using near-infrared spectroscopy paper_content: Abstract: We examined how a friend's presence affects a performer's prefrontal activation in daily-life activities using two wireless portable near-infrared spectroscopy (NIRS) devices. Participants played a driving video game either solely in the single group or with a friend in the paired group. The two groups (single and paired) were subdivided according to their game proficiency (low and high). The NIRS data demonstrated a significant interaction of group by proficiency. Low-proficiency players in the paired group showed lower activation than those in the single group, but high-proficiency players did not. In the paired group, high-proficiency players showed higher activation than low-proficiency players, but not in the single group.
These results suggest that NIRS detects social presence effects in everyday situations: decreasing prefrontal activation in low-proficiency performers due to tension reduction and increasing prefrontal activation in high-proficiency performers due to increased arousal. Key words: presence of others, near-infrared spectroscopy (NIRS), online monitoring, prefrontal cortex (PFC), task proficiency. --- paper_title: Prevalence of autism spectrum disorders in an Icelandic birth cohort paper_content: OBJECTIVES: A steady increase in the prevalence of autism spectrum disorders (ASD) has been reported in studies based on different methods, requiring adjustment for participation and missing data. Recent studies with high ASD prevalence rates rarely report on co-occurring medical conditions. The aim of the study was to describe the prevalence of clinically confirmed cases of ASD in Iceland and concomitant medical conditions. DESIGN: The cohort is based on a nationwide database on ASD among children born during 1994-1998. PARTICIPANTS: A total of 267 children were diagnosed with ASD, 197 boys and 70 girls. Only clinically confirmed cases were included. All received physical and neurological examination, standardised diagnostic workup for ASD, as well as cognitive testing. ASD diagnosis was established by interdisciplinary teams. Information on medical conditions and chromosomal testing was obtained by record linkage with hospital registers. SETTING: Two tertiary institutions in Iceland. The population registry recorded 22 229 children in the birth cohort. RESULTS: Prevalence of all ASD was 120.1/10 000 (95% CI 106.6 to 135.3), for boys 172.4/10 000 (95% CI 150.1 to 198.0) and for girls 64.8/10 000 (95% CI 51.3 to 81.8). Prevalence of all medical conditions was 17.2% (95% CI 13.2 to 22.2), including epilepsy of 7.1% (95% CI 4.6 to 10.8). The proportion of ASD cases with cognitive impairment (intellectual quotient <70) was 45.3%, but only 34.1% were diagnosed with intellectual disability (ID). Children diagnosed earlier or later did not differ on mean total score on a standardised interview for autism.
CONCLUSIONS: The number of clinically verified cases is larger than in previous studies, yielding a prevalence of ASD on a similar level as found in recent non-clinical studies. The prevalence of co-occurring medical conditions was high, considering the low proportion of ASD cases that also had ID. Earlier detection is clearly desirable in order to provide counselling and treatment. --- paper_title: Advances in autism genetics: on the threshold of a new neurobiology paper_content: Nature Reviews Genetics 9, 341–355 (2008) The first row in Table 1 on page 344 of this Review was incorrect; the corrected version is shown below. The authors apologize for this error. --- paper_title: Comparing symptoms of autism spectrum disorders using the current DSM-IV-TR diagnostic criteria and the proposed DSM-V diagnostic criteria. paper_content: Abstract The American Psychiatric Association has proposed major revisions for the diagnostic category encompassing Autism Spectrum Disorders (ASD), which will reportedly increase the specificity and maintain the sensitivity of diagnoses. As a result, the aim of the current study was to compare symptoms of ASD in children and adolescents (N = 208) who met criteria for ASD according to only the DSM-IV-TR to those who met criteria according to the forthcoming version of the DSM and to those that were typically developing. Participants comprising the DSM-IV-TR and DSM-V groups did not score significantly different from each other on overall autism symptoms, but both groups scored significantly different from the control group. However, significant differences emerged between the DSM-IV-TR and DSM-V groups in the core domain of nonverbal communication/socialization. Implications of the results and the proposed changes to the ASD diagnostic category are discussed. --- paper_title: The Implications of Brain Connectivity in the Neuropsychology of Autism paper_content: Autism is a neurodevelopmental disorder that has been associated with atypical brain functioning. Functional connectivity MRI (fcMRI) studies examining neural networks in autism have seen an exponential rise over the last decade. Such investigations have led to the characterization of autism as a distributed neural systems disorder. Studies have found widespread cortical underconnectivity, local overconnectivity, and mixed results suggesting disrupted brain connectivity as a potential neural signature of autism. In this review, we summarize the findings of previous fcMRI studies in autism with a detailed examination of their methodology, in order to better understand its potential and to delineate the pitfalls. We also address how a multimodal neuroimaging approach (incorporating different measures of brain connectivity) may help characterize the complex neurobiology of autism at a global level. Finally, we also address the potential of neuroimaging-based markers in assisting neuropsychological assessment of autism. The quest for a neural marker for autism is still ongoing, yet new findings suggest that aberrant brain connectivity may be a promising candidate. --- paper_title: Functional connectivity in the first year of life in infants at-risk for autism: a preliminary near-infrared spectroscopy study paper_content: Background: Autism spectrum disorder (ASD) has been called a "developmental disconnection syndrome," however the majority of the research examining connectivity in ASD has been conducted exclusively with older children and adults.
Yet, prior ASD research suggests that perturbations in neurodevelopmental trajectories begin as early as the first year of life. Prospective longitudinal studies of infants at risk for ASD may provide a window into the emergence of these aberrant patterns of connectivity. The current study employed functional connectivity near-infrared spectroscopy (NIRS) in order to examine the development of intra- and inter-hemispheric functional connectivity in high- and low-risk infants across the first year of life. Methods: NIRS data were collected from 27 infants at high risk for autism (HRA) and 37 low-risk comparison (LRC) infants who contributed a total of 116 data sets at 3-, 6-, 9-, and 12-months. At each time point, HRA and LRC groups were matched on age, sex, head circumference, and Mullen Scales of Early Learning scores. Regions of interest (ROI) were selected from anterior and posterior locations of each hemisphere. The average time course for each ROI was calculated and correlations for each ROI pair were computed. Differences in functional connectivity were examined in a cross-sectional manner. Results: At 3-months, HRA infants showed increased overall functional connectivity compared to LRC infants. This was the result of increased connectivity for intra- and inter-hemispheric ROI pairs. No significant differences were found between HRA and LRC infants at 6- and 9-months. However, by 12-months, HRA infants showed decreased connectivity relative to LRC infants. --- paper_title: Brain-to-brain coupling: a mechanism for creating and sharing a social world paper_content: Cognition materializes in an interpersonal space. The emergence of complex behaviors requires the coordination of actions among individuals according to a shared set of rules. Despite the central role of other individuals in shaping one's mind, most cognitive studies focus on processes that occur within a single individual. We call for a shift from a single-brain to a multi-brain frame of reference. We argue that in many cases the neural processes in one brain are coupled to the neural processes in another brain via the transmission of a signal through the environment. Brain-to-brain coupling constrains and shapes the actions of each individual in a social network, leading to complex joint behaviors that could not have emerged in isolation. --- paper_title: Early brain enlargement and elevated extra-axial fluid in infants who develop autism spectrum disorder paper_content: Prospective studies of infants at risk for autism spectrum disorder have provided important clues about the early behavioural symptoms of autism spectrum disorder. Diagnosis of autism spectrum disorder, however, is not currently made until at least 18 months of age. There is substantially less research on potential brain-based differences in the period between 6 and 12 months of age. Our objective in the current study was to use magnetic resonance imaging to identify any consistently observable brain anomalies in 6–9 month old infants who would later develop autism spectrum disorder. We conducted a prospective infant sibling study with longitudinal magnetic resonance imaging scans at three time points (6–9, 12–15, and 18–24 months of age), in conjunction with intensive behavioural assessments. 
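The infant NIRS study above builds functional connectivity by averaging the time course within each region of interest (ROI) and correlating every ROI pair. A minimal sketch of that step, assuming numpy and simulated ROI time courses, is shown below; the ROI names and sampling length are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
roi_names = ["L-anterior", "L-posterior", "R-anterior", "R-posterior"]  # hypothetical ROIs
n_samples = 600                      # e.g., a few minutes of data at a few Hz

# Hypothetical ROI-averaged oxy-hemoglobin time courses (rows = ROIs).
timecourses = rng.normal(size=(len(roi_names), n_samples))

# Pairwise Pearson correlations between ROI time courses.
corr = np.corrcoef(timecourses)
for i in range(len(roi_names)):
    for j in range(i + 1, len(roi_names)):
        print(f"{roi_names[i]} - {roi_names[j]}: r = {corr[i, j]:+.2f}")
```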
Fifty-five infants (33 ‘high-risk’ infants having an older sibling with autism spectrum disorder and 22 ‘low-risk’ infants having no relatives with autism spectrum disorder) were imaged at 6–9 months; 43 of these (27 high-risk and 16 low-risk) were imaged at 12–15 months; and 42 (26 high-risk and 16 low-risk) were imaged again at 18–24 months. Infants were classified as meeting criteria for autism spectrum disorder, other developmental delays, or typical development at 24 months or later (mean age at outcome: 32.5 months). Compared with the other two groups, infants who developed autism spectrum disorder ( n = 10) had significantly greater extra-axial fluid at 6–9 months, which persisted and remained elevated at 12–15 and 18–24 months. Extra-axial fluid is characterized by excessive cerebrospinal fluid in the subarachnoid space, particularly over the frontal lobes. The amount of extra-axial fluid detected as early as 6 months was predictive of more severe autism spectrum disorder symptoms at the time of outcome. Infants who developed autism spectrum disorder also had significantly larger total cerebral volumes at both 12–15 and 18–24 months of age. This is the first magnetic resonance imaging study to prospectively evaluate brain growth trajectories from infancy in children who develop autism spectrum disorder. The presence of excessive extra-axial fluid detected as early as 6 months and the lack of resolution by 24 months is a hitherto unreported brain anomaly in infants who later develop autism spectrum disorder. This is also the first magnetic resonance imaging evidence of brain enlargement in autism before age 2. These findings raise the potential for the use of structural magnetic resonance imaging to aid in the early detection of children at risk for autism spectrum disorder or other neurodevelopmental disorders. ::: ::: * Abbreviations ::: : ADOS ::: : Autism Diagnostic Observation Schedule ::: ASD ::: : autism spectrum disorder --- paper_title: Centrality of Social Interaction in Human Brain Function paper_content: People are embedded in social interaction that shapes their brains throughout lifetime. Instead of emerging from lower-level cognitive functions, social interaction could be the default mode via which humans communicate with their environment. Should this hypothesis be true, it would have profound implications on how we think about brain functions and how we dissect and simulate them. We suggest that the research on the brain basis of social cognition and interaction should move from passive spectator science to studies including engaged participants and simultaneous recordings from the brains of the interacting persons. --- paper_title: Hypersensitivity to acoustic change in children with autism: Electrophysiological evidence of left frontal cortex dysfunctioning paper_content: Exaggerated reactions to even small changes in the environment and abnormal behaviors in response to auditory stimuli are frequently observed in children with autism (CWA). Brain mechanisms involved in the automatic detection of auditory frequency change were studied using scalp potential and scalp current density (SCD) mapping of mismatch negativity (MMN) in 15 CWA matched with 15 healthy children. Compared with the response in controls, MMN recorded at the Fz site in CWA showed significantly shorter latency and was followed by a P3a wave. Mapping of potentials indicated significant intergroup differences. 
Moreover, SCD mapping demonstrated the dynamics of the different MMN generators: Although temporal component was evidenced bilaterally in both groups, it occurred earlier on the left hemisphere in CWA, preceded by an abnormal early left frontal component. The electrophysiological pattern reported here emphasized a left frontal cortex dysfunctioning that might also be implicated in cognitive and behavioral impairment characteristic, of this complex neurodevelopmental disorder. --- paper_title: Autism, Language Disorder, and Social (Pragmatic) Communication Disorder: DSM-V and Differential Diagnoses paper_content: 1. Mark D. Simms, MD, MPH* ::: 2. Xing Ming Jin, MD† ::: ::: ::: ::: ::: 1. *Department of Pediatrics, Medical College of Wisconsin, Milwaukee, WI. ::: ::: 2. †Department of Pediatrics, Jiao Tong University School of Medicine, Shanghai, Peoples Republic of China. ::: ::: The recent revision of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-V) included refinements to the diagnostic criteria for autism spectrum disorders and language disorders and introduced a new entity, social (pragmatic) communication disorder. Clinicians should become familiar with these changes and understand how to apply this new knowledge in clinical practice. ::: ::: After completing this article, readers should be able to: ::: ::: 1. Know the revised criteria for autistic spectrum disorders and language disorders and the diagnostic criteria for social (pragmatic) communication disorder. ::: ::: 2. Understand the clinical similarities and difference of these disorders. ::: ::: 3. Know the differences in the long-term prognosis of these disorders. ::: ::: 4. Be familiar with some relatively common “nonspecific” behaviors that should not be confused with specific developmental disorders. ::: ::: The past decade has witnessed an explosion in public and professional awareness of autism and autistic spectrum disorders (ASDs). Once considered to be a rare disorder, ASD now has a reported prevalence rate of slightly more than 1% among United States children. (1) Although the cause of this increased prevalence is not certain, greater awareness has likely resulted in improved recognition. This has been accompanied by increased research on autism focused on its cause and effective interventions for young children. Autism treatment programs are now widely available in school and community settings. ::: ::: At the same time, childhood language disorders, which are more common than ASDs, have remained relatively unknown publicly and professionally. At kindergarten entry, approximately 7% to 8% of children have evidence of a language impairment (2) and are at significant risk for difficulty with language-based learning tasks and social adaptation as they progress through school. The most … --- paper_title: In search of biomarkers for autism: scientific, social and ethical challenges paper_content: There is widespread hope that the discovery of valid biomarkers for autism will both reveal the causes of autism and enable earlier and more targeted methods for diagnosis and intervention. However, growing enthusiasm about recent advances in this area of autism research needs to be tempered by an awareness of the major scientific challenges and the important social and ethical concerns arising from the development of biomarkers and their clinical application. 
Collaborative approaches involving scientists and other stakeholders must combine the search for valid, clinically useful autism biomarkers with efforts to ensure that individuals with autism and their families are treated with respect and understanding. --- paper_title: Connectivity in Autism: A Review of MRI Connectivity Studies. paper_content: Learning ObjectiveAfter participating in this activity, learners should be better able to:Assess the resting state and diffusion tensor imaging connectivity literature regarding subjects with autism spectrum disorder.AbstractAutism spectrum disorder (ASD) affects 1 in 50 children between the ages of --- paper_title: Neural bases of gaze and emotion processing in children with autism spectrum disorders paper_content: Abnormal eye contact is a core symptom of autism spectrum disorders (ASD), though little is understood of the neural bases of gaze processing in ASD. Competing hypotheses suggest that individuals with ASD avoid eye contact due to the anxiety-provoking nature of direct eye gaze or that eye-gaze cues hold less interest or significance to children with ASD. The current study examined the effects of gaze direction on neural processing of emotional faces in typically developing (TD) children and those with ASD. While undergoing functional magnetic resonance imaging (fMRI), 16 high-functioning children and adolescents with ASD and 16 TD controls viewed a series of faces depicting emotional expressions with either direct or averted gaze. Children in both groups showed significant activity in visual-processing regions for both direct and averted gaze trials. However, there was a significant group by gaze interaction such that only TD children showed reliably greater activity in ventrolateral prefrontal cortex for direct versus averted gaze. The ASD group showed no difference between direct and averted gaze in response to faces conveying negative emotions. These results highlight the key role of eye gaze in signaling communicative intent and suggest altered processing of the emotional significance of direct gaze in children with ASD. --- paper_title: Autism as a disorder of neural information processing: directions for research and targets for therapy* paper_content: The broad variation in phenotypes and severities within autism spectrum disorders suggests the involvement of multiple predisposing factors, interacting in complex ways with normal developmental courses and gradients. Identification of these factors, and the common developmental path into which they feed, is hampered by the large degrees of convergence from causal factors to altered brain development, and divergence from abnormal brain development into altered cognition and behaviour. Genetic, neurochemical, neuroimaging, and behavioural findings on autism, as well as studies of normal development and of genetic syndromes that share symptoms with autism, offer hypotheses as to the nature of causal factors and their possible effects on the structure and dynamics of neural systems. Such alterations in neural properties may in turn perturb activity-dependent development, giving rise to a complex behavioural syndrome many steps removed from the root causes. Animal models based on genetic, neurochemical, neurophysiological, and behavioural manipulations offer the possibility of exploring these developmental processes in detail, as do human studies addressing endophenotypes beyond the diagnosis itself. 
--- paper_title: The neural basis of functional brain imaging signals paper_content: The haemodynamic responses to neural activity that underlie the blood-oxygen-level-dependent (BOLD) signal used in functional magnetic resonance imaging (fMRI) of the brain are often assumed to be driven by energy use, particularly in presynaptic terminals or glia. However, recent work has suggested that most brain energy is used to power postsynaptic currents and action potentials rather than presynaptic or glial activity and, furthermore, that haemodynamic responses are driven by neurotransmitter-related signalling and not directly by the local energy needs of the brain. A firm understanding of the BOLD response will require investigation to be focussed on the neural signalling mechanisms controlling blood flow rather than on the locus of energy use. --- paper_title: Prevalence of Autism Spectrum Disorder Among Children Aged 8 Years — Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2016 paper_content: PROBLEM/CONDITION ::: Autism spectrum disorder (ASD). ::: ::: ::: PERIOD COVERED ::: 2016. ::: ::: ::: DESCRIPTION OF SYSTEM ::: The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active surveillance program that provides estimates of the prevalence of ASD among children aged 8 years whose parents or guardians live in 11 ADDM Network sites in the United States (Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin). Surveillance is conducted in two phases. The first phase involves review and abstraction of comprehensive evaluations that were completed by medical and educational service providers in the community. In the second phase, experienced clinicians who systematically review all abstracted information determine ASD case status. The case definition is based on ASD criteria described in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. ::: ::: ::: RESULTS ::: For 2016, across all 11 sites, ASD prevalence was 18.5 per 1,000 (one in 54) children aged 8 years, and ASD was 4.3 times as prevalent among boys as among girls. ASD prevalence varied by site, ranging from 13.1 (Colorado) to 31.4 (New Jersey). Prevalence estimates were approximately identical for non-Hispanic white (white), non-Hispanic black (black), and Asian/Pacific Islander children (18.5, 18.3, and 17.9, respectively) but lower for Hispanic children (15.4). Among children with ASD for whom data on intellectual or cognitive functioning were available, 33% were classified as having intellectual disability (intelligence quotient [IQ] ≤70); this percentage was higher among girls than boys (40% versus 32%) and among black and Hispanic than white children (47%, 36%, and 27%, respectively). Black children with ASD were less likely to have a first evaluation by age 36 months than were white children with ASD (40% versus 45%). The overall median age at earliest known ASD diagnosis (51 months) was similar by sex and racial and ethnic groups; however, black children with IQ ≤70 had a later median age at ASD diagnosis than white children with IQ ≤70 (48 months versus 42 months). ::: ::: ::: INTERPRETATION ::: The prevalence of ASD varied considerably across sites and was higher than previous estimates since 2014. Although no overall difference in ASD prevalence between black and white children aged 8 years was observed, the disparities for black children persisted in early evaluation and diagnosis of ASD. 
Hispanic children also continue to be identified as having ASD less frequently than white or black children. ::: ::: ::: PUBLIC HEALTH ACTION ::: These findings highlight the variability in the evaluation and detection of ASD across communities and between sociodemographic groups. Continued efforts are needed for early and equitable identification of ASD and timely enrollment in services. --- paper_title: Three-dimensional probabilistic anatomical cranio-cerebral correlation via the international 10–20 system oriented for transcranial functional brain mapping paper_content: Abstract The recent advent of multichannel near-infrared spectroscopy (NIRS) has expanded its technical potential for human brain mapping. However, NIRS measurement has a technical drawback in that it measures cortical activities from the head surface without anatomical information of the object to be measured. This problem is also found in transcranial magnetic stimulation (TMS) that transcranially activates or inactivates the cortical surface. To overcome this drawback, we examined cranio-cerebral correlation using magnetic resonance imaging (MRI) via the guidance of the international 10–20 system for electrode placement, which had originally been developed for electroencephalography. We projected the 10–20 standard cranial positions over the cerebral cortical surface. After examining the cranio-cerebral correspondence for 17 healthy adults, we normalized the 10–20 cortical projection points of the subjects to the standard Montreal Neurological Institute (MNI) and Talairach stereotactic coordinates and obtained their probabilistic distributions. We also expressed the anatomical structures for the 10–20 cortical projection points probabilistically. Next, we examined the distance between the cortical surface and the head surface along the scalp and created a cortical surface depth map. We found that the locations of 10–20 cortical projection points in the standard MNI or Talairach space could be estimated with an average standard deviation of 8 mm. This study provided an initial step toward establishing a three-dimensional probabilistic anatomical platform that enables intra- and intermodal comparisons of NIRS and TMS brain imaging data. --- paper_title: A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology paper_content: This year marks the 20th anniversary of functional near-infrared spectroscopy and imaging (fNIRS/fNIRI). As the vast majority of commercial instruments developed until now are based on continuous wave technology, the aim of this publication is to review the current state of instrumentation and methodology of continuous wave fNIRI. For this purpose we provide an overview of the commercially available instruments and address instrumental aspects such as light sources, detectors and sensor arrangements. Methodological aspects, algorithms to calculate the concentrations of oxy- and deoxyhemoglobin and approaches for data analysis are also reviewed. From the single-location measurements of the early years, instrumentation has progressed to imaging initially in two dimensions (topography) and then three (tomography). The methods of analysis have also changed tremendously, from the simple modified Beer-Lambert law to sophisticated image reconstruction and data analysis methods used today. Due to these advances, fNIRI has become a modality that is widely used in neuroscience research and several manufacturers provide commercial instrumentation. 
It seems likely that fNIRI will become a clinical tool in the foreseeable future, which will enable diagnosis in single subjects. --- paper_title: Functional near‐infrared optical imaging: Utility and limitations in human brain mapping paper_content: Although near-infrared spectroscopy (NIRS) was developed as a tool for clinical monitoring of tissue oxygenation, it also has potential for neuroimaging. A wide range of different NIRS instruments have been developed, and instruments for continuous intensity measurements with fixed spacing [continuous wave (CW)-type instruments], which are most readily available commercially, allow us to see dynamic changes in regional cerebral blood flow in real time. However, quantification, which is necessary for imaging of brain functions, is impossible with these CW-type instruments. Over the past 20 years, many different approaches to quantification have been tried, and several multichannel time-resolved and frequency-domain instruments are now in common use for imaging. Although there are still many problems with this technique, such as incomplete knowledge of how light propagates through the head, NIRS will not only open a window on brain physiology for subjects who have rarely been examined until now, but also provide a new direction for functional mapping studies. --- paper_title: Time domain functional NIRS imaging for human brain mapping paper_content: This review is aimed at presenting the state-of-the-art of time domain (TD) functional near-infrared spectroscopy (fNIRS). We first introduce the physical principles, the basics of modeling and data analysis. Basic instrumentation components (light sources, detection techniques, and delivery and collection systems) of a TD fNIRS system are described. A survey of past, existing and next generation TD fNIRS systems used for research and clinical studies is presented. Performance assessment of TD fNIRS systems and standardization issues are also discussed. Main strengths and weakness of TD fNIRS are highlighted, also in comparison with continuous wave (CW) fNIRS. Issues like quantification of the hemodynamic response, penetration depth, depth selectivity, spatial resolution and contrast-to-noise ratio are critically examined, with the help of experimental results performed on phantoms or in vivo. Finally we give an account on the technological developments that would pave the way for a broader use of TD fNIRS in the neuroimaging community. --- paper_title: Frontal Lobe Activation during Object Permanence: Data from Near-Infrared Spectroscopy paper_content: Abstract The ability to create and hold a mental schema of an object is one of the milestones in cognitive development. Developmental scientists have named the behavioral manifestation of this competence object permanence. Convergent evidence indicates that frontal lobe maturation plays a critical role in the display of object permanence, but methodological and ethical constrains have made it difficult to collect neurophysiological evidence from awake, behaving infants. Near-infrared spectroscopy provides a noninvasive assessment of changes in oxy- and deoxyhemoglobin and total hemoglobin concentration within a prescribed region. The evidence described in this report reveals that the emergence of object permanence is related to an increase in hemoglobin concentration in frontal cortex. 
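The continuous-wave fNIRS reviews above note that changes in oxy- and deoxy-hemoglobin concentration are classically recovered from measured attenuation changes via the modified Beer-Lambert law. A minimal two-wavelength sketch of that conversion is given below; the extinction coefficients, differential pathlength factors, and attenuation values are placeholder numbers for illustration, not calibrated constants.

```python
import numpy as np

def mbll_concentrations(delta_od, extinction, distance_cm, dpf):
    """
    Solve the modified Beer-Lambert law for changes in HbO and HbR concentration.

    delta_od   : attenuation change at each wavelength, shape (2,)
    extinction : extinction coefficients, shape (2, 2) -> rows: wavelengths, cols: [HbO, HbR]
    distance_cm: source-detector separation in cm
    dpf        : differential pathlength factor at each wavelength, shape (2,)
    """
    delta_od = np.asarray(delta_od, dtype=float)
    L = np.asarray(dpf, dtype=float) * distance_cm        # effective pathlength per wavelength
    A = np.asarray(extinction, dtype=float) * L[:, None]  # system matrix
    return np.linalg.solve(A, delta_od)                   # [delta_HbO, delta_HbR]

# Placeholder values for illustration only (not calibrated constants).
extinction = [[0.9, 1.8],   # shorter wavelength: [HbO, HbR]
              [2.3, 1.1]]   # longer wavelength:  [HbO, HbR]
d_hbo, d_hbr = mbll_concentrations(delta_od=[0.012, 0.020],
                                   extinction=extinction,
                                   distance_cm=3.0,
                                   dpf=[6.0, 5.0])
print(f"delta HbO = {d_hbo:+.4f}, delta HbR = {d_hbr:+.4f} (arbitrary concentration units)")
```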
--- paper_title: A brief review on the history of human functional near-infrared spectroscopy (fNIRS) development and fields of application paper_content: This review is aimed at celebrating the upcoming 20th anniversary of the birth of human functional near-infrared spectroscopy (fNIRS). After the discovery in 1992 that the functional activation of the human cerebral cortex (due to oxygenation and hemodynamic changes) can be explored by NIRS, human functional brain mapping research has gained a new dimension. fNIRS or optical topography, or near-infrared imaging or diffuse optical imaging is used mainly to detect simultaneous changes in optical properties of the human cortex from multiple measurement sites and displays the results in the form of a map or image over a specific area. In order to place current fNIRS research in its proper context, this paper presents a brief historical overview of the events that have shaped the present status of fNIRS. In particular, technological progresses of fNIRS are highlighted (i.e. from single-site to multi-site functional cortical measurements (images)), introduction of the commercial multi-channel systems, recent commercial wireless instrumentation and more advanced prototypes. --- paper_title: Beyond the Visible—Imaging the Human Brain with Light paper_content: Optical approaches to investigate cerebral function and metabolism have long been applied in invasive studies. From the neuron cultured in vitro to the exposed cortex in the human during neurosurgical procedures, high spatial resolution can be reached and several processes such as membrane potential, cell swelling, metabolism of mitochondrial chromophores, and vascular response can be monitored, depending on the respective preparation. The authors focus on an extension of optical methods to the noninvasive application in the human. Starting with the pioneering work of Jobsis 25 years ago, near-infrared spectroscopy (NIRS) has been used to investigate functional activation of the human cerebral cortex. Recently, several groups have started to use imaging systems that allow the generation of images of a larger area of the subject's head and, thereby, the production of maps of cortical oxygenation changes. Such images have a much lower spatial resolution compared with the invasively obtained optical images. The noninvasive NIRS images, however, can be obtained in undemanding set-ups that can be easily combined with other functional methods, in particular EEG. Moreover, NIRS is applicable to bedside use. The authors briefly review some of the abundant literature on intrinsic optical signals and the NIRS imaging studies of the past few years. The weaknesses and strengths of the approach are critically discussed. The authors conclude that NIRS imaging has two major advantages: it can address issues concerning neurovascular coupling in the human adult and can extend functional imaging approaches to the investigation of the diseased brain.
--- paper_title: Sustained decrease in oxygenated hemoglobin during video games in the dorsal prefrontal cortex: A NIRS study of children paper_content: Abstract Traditional neuroimaging studies have mainly focused on brain activity derived from a simple stimulus and task. Therefore, little is known about brain activity during daily operations. In this study, we investigated hemodynamic changes in the dorsal prefrontal cortex (DPFC) during video games as one of daily amusements, using near infrared spectroscopy technique. It was previously reported that oxygenated hemoglobin (oxyHb) in adults' DPFC decreased during prolonged game playing time. In the present study, we examined whether similar changes were observed in children. Twenty children (7–14 years old) participated in our study, but only 13 of them were eventually subject to analysis. They played one or two commercially available video games; namely a fighting and a puzzle game, for 5 min. We used changes in concentration of oxyHb as an indicator of brain activity and consequently, most of the children exhibited a sustained game-related oxyHb decrease in DPFC.
Decrease patterns of oxyHb in children during video game playing time did not differ from those in adults. There was no significant correlation between ages or game performances and changes in oxyHb. These findings suggest that game-related oxyHb decrease in DPFC is a common phenomenon to adults and children at least older than 7 years old, and we suggest that this probably results from attention demand from the video games rather than from subject's age and performance. --- paper_title: Motion artifacts in functional near-infrared spectroscopy: a comparison of motion correction techniques applied to real cognitive data. paper_content: Motion artifacts are a significant source of noise in many functional near-infrared spectroscopy (fNIRS) experiments. Despite this, there is no well-established method for their removal. Instead, functional trials of fNIRS data containing a motion artifact are often rejected completely. However, in most experimental circumstances the number of trials is limited, and multiple motion artifacts are common, particularly in challenging populations. Many methods have been proposed recently to correct for motion artifacts, including principle component analysis, spline interpolation, Kalman filtering, wavelet filtering and correlation-based signal improvement. The performance of different techniques has been often compared in simulations, but only rarely has it been assessed on real functional data. Here, we compare the performance of these motion correction techniques on real functional data acquired during a cognitive task, which required the participant to speak aloud, leading to a low-frequency, low-amplitude motion artifact that is correlated with the hemodynamic response. To compare the efficacy of these methods, objective metrics related to the physiology of the hemodynamic response have been derived. Our results show that it is always better to correct for motion artifacts than reject trials, and that wavelet filtering is the most effective approach to correcting this type of artifact, reducing the area under the curve where the artifact is present in 93% of the cases. Our results therefore support previous studies that have shown wavelet filtering to be the most promising and powerful technique for the correction of motion artifacts in fNIRS data. The analyses performed here can serve as a guide for others to objectively test the impact of different motion correction algorithms and therefore select the most appropriate for the analysis of their own fNIRS experiment. --- paper_title: Interpretation of near-infrared spectroscopy signals: a study with a newly developed perfused rat brain model. paper_content: Using a newly developed perfused rat brain model, we examined direct effects of each change in cerebral blood flow (CBF) and oxygen metabolic rate on cerebral hemoglobin oxygenation to interpret near-infrared spectroscopy signals. Changes in CBF and total hemoglobin (tHb) were in parallel, although tHb showed no change when changes in CBF were small (< or =10%). Increasing CBF caused an increase in oxygenated hemoglobin (HbO(2)) and a decrease in deoxygenated hemoglobin (deoxy-Hb). Decreasing CBF was accompanied by a decrease in HbO(2), whereas changes in direction of deoxy-Hb were various. Cerebral blood congestion caused increases in HbO(2), deoxy-Hb, and tHb. Administration of pentylenetetrazole without increasing the flow rate caused increases in HbO(2) and tHb with a decrease in deoxy-Hb. 
There were no significant differences in venous oxygen saturation before vs. during seizure. These results suggest that, in activation studies with near-infrared spectroscopy, HbO(2) is the most sensitive indicator of changes in CBF, and the direction of changes in deoxy-Hb is determined by the degree of changes in venous blood oxygenation and volume. --- paper_title: A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology paper_content: This year marks the 20th anniversary of functional near-infrared spectroscopy and imaging (fNIRS/fNIRI). As the vast majority of commercial instruments developed until now are based on continuous wave technology, the aim of this publication is to review the current state of instrumentation and methodology of continuous wave fNIRI. For this purpose we provide an overview of the commercially available instruments and address instrumental aspects such as light sources, detectors and sensor arrangements. Methodological aspects, algorithms to calculate the concentrations of oxy- and deoxyhemoglobin and approaches for data analysis are also reviewed. From the single-location measurements of the early years, instrumentation has progressed to imaging initially in two dimensions (topography) and then three (tomography). The methods of analysis have also changed tremendously, from the simple modified Beer-Lambert law to sophisticated image reconstruction and data analysis methods used today. Due to these advances, fNIRI has become a modality that is widely used in neuroscience research and several manufacturers provide commercial instrumentation. It seems likely that fNIRI will become a clinical tool in the foreseeable future, which will enable diagnosis in single subjects. --- paper_title: A new research trend in social neuroscience: Towards an interactive‐brain neuroscience paper_content: The ability to flexibly modulate our behaviors in social contexts and to successfully interact with other persons is a fundamental, but pivotal, requirement for human survival. Although previous social neuroscience research with single individuals has contributed greatly to our understanding of the basic mechanisms underlying social perception and social emotions, much of the dynamic nature of interactions between persons remains beyond the reach of single-brain studies. This has led to a growing argument for a shift to the simultaneous measurement of the brain activity of two or more individuals in realistic social interactions-an approach termed "hyperscanning." Although this approach offers important promise in unlocking the brain's role in truly social situations, there are multiple procedural and theoretical questions that require review and analysis. In this paper we discuss this research trend from four aspects: hyperscanning apparatus, experimental task, quantification method, and theoretical interpretation. We also give four suggestions for future research: (a) electroencephalography and near-infrared spectroscopy are useful tools by which to explore the interactive brain in more ecological settings; (b) games are an appropriate method to simulate daily life interactions; (c) transfer entropy may be an important method by which to quantify directed exchange of information between brains; and (d) more explanation is needed of the results of interbrain synchronization itself. 
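The continuous-wave fNIRS reviews above repeatedly point to the modified Beer-Lambert law as the step that converts measured light-attenuation changes into oxy- and deoxyhemoglobin concentration changes. The Python sketch below illustrates that conversion for a two-wavelength measurement; the extinction coefficients, differential pathlength factors, and source-detector distance are illustrative assumptions, not values taken from any cited instrument.

```python
# Minimal sketch of the modified Beer-Lambert law (MBLL): optical-density
# changes at two wavelengths are mapped to concentration changes of oxy- and
# deoxyhemoglobin. All numeric constants below are placeholder values.
import numpy as np

def mbll(delta_od, ext_coeffs, distance_cm, dpf):
    """Convert optical-density changes to hemoglobin concentration changes.

    delta_od    : array (n_samples, 2), delta-OD at the two wavelengths
    ext_coeffs  : array (2, 2), rows = wavelengths, cols = [HbO, HbR]
                  extinction coefficients (assumed units: 1/(mM*cm))
    distance_cm : source-detector separation in cm
    dpf         : length-2 differential pathlength factor, one per wavelength
    """
    dpf = np.asarray(dpf, dtype=float)
    pathlength = distance_cm * dpf                    # effective pathlength per wavelength
    # delta-OD(lambda) = [eps_HbO*dC_HbO + eps_HbR*dC_HbR] * d * DPF(lambda);
    # solve the 2x2 system per sample (least squares for numerical robustness).
    design = ext_coeffs * pathlength[:, None]         # shape (2, 2)
    delta_conc, *_ = np.linalg.lstsq(design, delta_od.T, rcond=None)
    return delta_conc.T                               # columns: [dHbO, dHbR]

# Example with synthetic data (placeholder coefficients for ~760 nm and ~850 nm).
ext = np.array([[1.4866, 3.8437],
                [2.5264, 1.7986]])
delta_od = 0.01 * np.random.randn(100, 2)
delta_hb = mbll(delta_od, ext, distance_cm=3.0, dpf=[6.0, 6.0])
print(delta_hb.shape)  # (100, 2)
```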
--- paper_title: Functional near Infrared Spectroscopy in Psychiatry: A Critical Review paper_content: This review deals with the utilisation of functional near infrared (fNIR) spectroscopy for an in vivo assessment of activation changes in brain tissue, which has broadened the range of non-invasive functional imaging methods within the field of neuroscientific research. Due to its simple and quick applicability as well as the absence of side effects, fNIR spectroscopy is particularly well tolerated by psychiatric patients and can hence markedly contribute to the understanding of the neurobiological basis of psychiatric disorders. The optical, light-based method emits near infrared wavelengths of about 700–1000 nm, which are able to penetrate the scalp and skull, into the head. Because near infrared light is distinctively absorbed by the chromophores oxy-haemoglobin (O2Hb) and deoxy-haemoglobin (HHb), the measured relative amount of reflected NIR light can indicate regional oxygenation patterns in cortical brain tissue with a depth resolution of, on average, 1.5 cm and a spatial resolution of about 2–3 cm. Validity and reliability of fNIR spectroscopy measurements to assess task-related cognitive activation have been repeatedly confirmed among healthy subjects. Beyond that, the application of fNIR spectroscopy to detect altered cortical oxygenation in psychiatric patients during cognitive tasks has been greatly intensified over the last two decades. In this context, hypo-frontality, a decrease in frontal lobe activity that is associated with a number of clinical symptoms and psychiatric disorders, has been demonstrated in a wide range of fNIR spectroscopy studies with psychiatric patients. Despite its variety of beneficial properties, the most apparent disadvantages of NIR spectroscopy compared to other imaging techniques are its limited spatial as well as depth resolution and its restriction to cortical areas. Although multimodal approaches based on simultaneous application of NIR spectroscopy combined with other imaging techniques initially revealed promising results, further technical development and a broadened implementation of combined measurements are necessary in order to uncover distinct brain activity alterations in different psychiatric disorders. In addition to the need for further technical improvement of the method, broad and longitudinal applications of fNIR spectroscopy measurements in psychiatric research are required in order to identify robust diagnostic markers which are required to establish NIR spectroscopy as a valid inter-individual screening instrument in psychiatry. --- paper_title: Reduced interhemispheric functional connectivity of children with autism spectrum disorder: evidence from functional near infrared spectroscopy studies paper_content: Autism spectrum disorder (ASD) is a neuro-developmental disorder, which has been associated with atypical neural synchronization. In this study, functional near infrared spectroscopy (fNIRS) was used to study the differences in functional connectivity in bilateral inferior frontal cortices (IFC) and bilateral temporal cortices (TC) between ASD and typically developing (TD) children between 8 and 11 years of age. As the first report of fNIRS study on the resting state functional connectivity (RSFC) in children with ASD, ten children with ASD and ten TD children were recruited in this study for 8 minute resting state measurement. Compared to TD children, children with ASD showed reduced interhemispheric connectivity in TC. 
Children with ASD also showed significantly lower local connectivity in bilateral temporal cortices. In contrast to TD children, children with ASD did not show typical patterns of symmetry in functional connectivity in temporal cortex. These results support the feasibility of using the fNIRS method to assess atypical functional connectivity of cortical responses of ASD and its potential application in diagnosis. --- paper_title: Interpretation of near-infrared spectroscopy signals: a study with a newly developed perfused rat brain model. paper_content: Using a newly developed perfused rat brain model, we examined direct effects of each change in cerebral blood flow (CBF) and oxygen metabolic rate on cerebral hemoglobin oxygenation to interpret near-infrared spectroscopy signals. Changes in CBF and total hemoglobin (tHb) were in parallel, although tHb showed no change when changes in CBF were small (< or =10%). Increasing CBF caused an increase in oxygenated hemoglobin (HbO(2)) and a decrease in deoxygenated hemoglobin (deoxy-Hb). Decreasing CBF was accompanied by a decrease in HbO(2), whereas changes in direction of deoxy-Hb were various. Cerebral blood congestion caused increases in HbO(2), deoxy-Hb, and tHb. Administration of pentylenetetrazole without increasing the flow rate caused increases in HbO(2) and tHb with a decrease in deoxy-Hb. There were no significant differences in venous oxygen saturation before vs. during seizure. These results suggest that, in activation studies with near-infrared spectroscopy, HbO(2) is the most sensitive indicator of changes in CBF, and the direction of changes in deoxy-Hb is determined by the degree of changes in venous blood oxygenation and volume. --- paper_title: fNIRS in the developmental sciences paper_content: With the introduction of functional near-infrared spectroscopy (fNIRS) into the experimental setting, developmental scientists have, for the first time, the capacity to investigate the functional activation of the infant brain in awake, engaged participants. The advantages of fNIRS clearly outweigh the limitations, and a description of how this technology is implemented in infant populations is provided. Most fNIRS research falls into one of three content domains: object processing, processing of biologically and socially relevant information, and language development. Within these domains, there are ongoing debates about the origins and development of human knowledge, making early neuroimaging particularly advantageous. The use of fNIRS has allowed investigators to begin to identify the localization of early object, social, and linguistic knowledge in the immature brain and the ways in which this changes with time and experience. In addition, there is a small but growing body of research that provides insight into the neural mechanisms that support and facilitate learning during the first year of life. At the same time, as with any emerging field, there are limitations to the conclusions that can be drawn on the basis of current findings. We offer suggestions as to how to optimize the use of this technology to answer questions of theoretical and practical importance to developmental scientists. WIREs Cogn Sci 2015, 6:263–283. doi: 10.1002/wcs.1343 
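Several of the studies above, notably the resting-state work on interhemispheric connectivity, describe functional connectivity as correlations between band-limited oxy-Hb time courses from different channels. The sketch below illustrates that kind of computation under assumed choices (a hypothetical 8-channel montage, illustrative filter cutoffs); it is not code from the cited papers.

```python
# Sketch of resting-state functional connectivity for fNIRS: band-pass filter
# each channel's oxy-Hb time course, then correlate channel pairs.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, low=0.009, high=0.08, order=3):
    """Zero-phase band-pass filter applied column-wise (rows = samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=0)

def connectivity_matrix(hbo, fs):
    """Pearson correlations between all channel pairs of band-limited HbO."""
    filtered = bandpass(hbo, fs)
    return np.corrcoef(filtered.T)        # shape (n_channels, n_channels)

# Synthetic example: 8 channels, 8 minutes of data sampled at 10 Hz.
fs = 10.0
hbo = np.random.randn(int(8 * 60 * fs), 8)
fc = connectivity_matrix(hbo, fs)

# Hypothetical pairing: channels 0-3 on the left, 4-7 homologous on the right,
# so the mean of those pairs is a crude interhemispheric-connectivity index.
interhemispheric = np.mean([fc[i, i + 4] for i in range(4)])
print(round(float(interhemispheric), 3))
```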
--- paper_title: Coupled oxygenation oscillation measured by NIRS and intermittent cerebral activation on EEG in premature infants paper_content: Abstract Electroencephalography of premature neonates shows a physiological discontinuity of electrical activity during quiet sleep. Near infrared spectroscopy (NIRS) shows spontaneous oscillations of hemoglobin oxygenation and volume. Similar oscillations are visible in term neonates and adults, with NIRS and other functional imaging techniques (fMRI, Doppler, etc.), but are generally thought to result from vasomotion and to be a physiological artifact of limited interest. The origin and possible relationship to neuronal activity of the baseline changes in the NIRS signal have not been established. We carried out simultaneous EEG–NIRS recordings on six healthy premature neonates and four premature neonates presenting neurological distress, to determine whether changes in the concentration of cerebral oxy- and deoxy- and total hemoglobin were related to the occurrence of spontaneous bursts of cerebral electric activity. Bursts of electroencephalographic activity in neonates during quiet sleep were found to be coupled to a transient stereotyped hemodynamic response involving a decrease in oxy-hemoglobin concentration, sometimes beginning a few seconds before the onset of electroencephalographic activity, followed by an increase, and then a return to baseline. This pattern could be either part of the baseline oscillations or superimposed changes to this baseline, influencing its shape and phase. The temporal patterns of NIRS parameters present an unique configuration, and tend to be different between our healthy and pathological subjects. Studies of physiological activities and of the effects of intrinsic regulation on the NIRS signal should increase our understanding of these patterns and EEG–NIRS studies should facilitate the integration of NIRS into the set of clinical tools used in neurology. --- paper_title: The use of near-infrared spectroscopy in the study of typical and atypical development paper_content: The use of functional near infrared spectroscopy (fNIRS) has grown exponentially over the past decade, particularly among investigators interested in early brain development. The use of this neuroimaging technique has begun to shed light on the development of a variety of sensory, perceptual, linguistic, and social-cognitive functions. Rather than cast a wide net, in this paper we first discuss typical development, focusing on joint attention, face processing, language, and sensorimotor development. We then turn our attention to infants and children whose development has been compromised or who are at risk for atypical development. We conclude our review by critiquing some of the methodological issues that have plagued the extant literature as well as offer suggestions for future research. --- paper_title: Low resolution brain electromagnetic tomography (LORETA) functional imaging in acute, neuroleptic-naive, first-episode, productive schizophrenia paper_content: Functional imaging of brain electrical activity was performed in nine acute, neuroleptic-naive, first-episode, productive patients with schizophrenia and 36 control subjects. Low-resolution electromagnetic tomography (LORETA, three-dimensional images of cortical current density) was computed from 19-channel of electroencephalographic (EEG) activity obtained under resting conditions, separately for the different EEG frequencies. 
Three patterns of activity were evident in the patients: (1) an anterior, near-bilateral excess of delta frequency activity; (2) an anterior-inferior deficit of theta frequency activity coupled with an anterior-inferior left-sided deficit of alpha-1 and alpha-2 frequency activity; and (3) a posterior-superior right-sided excess of beta-1, beta-2 and beta-3 frequency activity. Patients showed deviations from normal brain activity as evidenced by LORETA along an anterior-left-to-posterior-right spatial axis. The high temporal resolution of EEG makes it possible to specify the deviations not only as excess or deficit, but also as inhibitory, normal and excitatory. The patients showed a dis-coordinated brain functional state consisting of inhibited prefrontal/frontal areas and simultaneously overexcited right parietal areas, while left anterior, left temporal and left central areas lacked normal routine activity. Since all information processing is brain-state dependent, this dis-coordinated state must result in inadequate treatment of (externally or internally generated) information. --- paper_title: Near-infrared spectroscopy: A report from the McDonnell infant methodology consortium paper_content: Near-infrared spectroscopy (NIRS) is a new and increasingly widespread brain imaging technique, particularly suitable for young infants. The laboratories of the McDonnell Consortium have contributed to the technological development and research applications of this technique for nearly a decade. The present paper provides a general introduction to the technique as well as a detailed report of the methodological innovations developed by the Consortium. The basic principles of NIRS and some of the existing developmental studies are reviewed. Issues concerning technological improvements, parameter optimization, possible experimental designs and data analysis techniques are discussed and illustrated by novel empirical data. --- paper_title: Time courses of brain activation and their implications for function: A multichannel near-infrared spectroscopy study during finger tapping paper_content: The time courses of brain activation were monitored during a finger tapping task using multichannel near-infrared spectroscopy with a time resolution of 0.1 s in 30 healthy volunteers. Task-induced brain activations were demonstrated as significant increases in oxygenated hemoglobin concentration ([oxy-Hb]) in a broad area around the motor cortex and significant decreases in deoxygenated hemoglobin concentration ([deoxy-Hb]) in a more restricted area, with a large degree of activation in the contralateral hemisphere. The time courses of the [oxy-Hb] changes varied depending on channel location: sustained activation across the task period in the motor cortex, transient activation during the initial segments of the task period in the somatosensory cortex, and accumulating activation along the task period in the frontal lobe. These characteristics are assumed to reflect the functional roles of the brain structures during the task period, that is, the execution, sensory monitoring, and maintenance of finger tapping. --- paper_title: Response Inhibition Impairment in High Functioning Autism and Attention Deficit Hyperactivity Disorder: Evidence from Near-Infrared Spectroscopy Data paper_content: Background: Response inhibition, an important domain of executive function (EF), involves the ability to suppress irrelevant or interfering information and impulses. 
Previous studies have shown impairment of response inhibition in high functioning autism (HFA) and attention deficit hyperactivity disorder (ADHD), but more recent findings have been inconsistent. To date, almost no studies have been conducted using functional imaging techniques to directly compare inhibitory control between children with HFA and those with ADHD. Method: Nineteen children with HFA, 16 age- and intelligence quotient (IQ)-matched children with ADHD, and 16 typically developing (TD) children were imaged using functional near-infrared spectroscopy (NIRS) while performing Go/No-go and Stroop tasks. Results: Compared with the TD group, children in both the HFA and ADHD groups took more time to respond during the No-go blocks, with reaction time longest for HFA and shortest for TD. Children in the HFA and ADHD groups also made a greater number of reaction errors in the No-go blocks than those in the TD group. During the Stroop task, there were no significant differences between these three groups in reaction time and omission errors. Both the HFA and ADHD groups showed a higher level of inactivation in the right prefrontal cortex (PFC) during the No-go blocks, relative to the TD group. However, no significant differences were found between groups in the levels of oxyhemoglobin concentration in the PFC during the Stroop task. Conclusion: Functional brain imaging using NIRS showed reduced activation in the right PFC in children with HFA or ADHD during an inhibition task, indicating that inhibitory dysfunction is a shared feature of both HFA and ADHD. --- paper_title: Neurobehavioral and hemodynamic evaluation of Stroop and reverse Stroop interference in children with attention-deficit/hyperactivity disorder paper_content: Failure of executive function (EF) is a core symptom of attention-deficit/hyperactivity disorder (ADHD). However, various results have been reported and sufficient evidence is lacking. In the present study, we evaluated the characteristics of children with ADHD using the Stroop task (ST) and reverse Stroop task (RST), which reflect the inhibition function of EF. We compared children with ADHD, typically developing children (TDC), and children with autism spectrum disorder (ASD), which is more difficult to discriminate from ADHD. A total of 10 children diagnosed with ADHD, 15 TDC, and 11 children diagnosed with ASD, all matched by age, sex, language ability, and intelligence quotient, participated in this study. While each subject performed computer-based ST and RST with a touch panel, changes in oxygenated hemoglobin (oxy-Hb) were measured in the prefrontal cortex (PFC) by near-infrared spectroscopy (NIRS) to correlate test performance with neural activity. Behavioral performance significantly differed among the three groups during RST but not during ST. The ADHD group showed greater color interference than the TDC group. In addition, there was a negative correlation between right lateral PFC (LPFC) activity and the severity of attention deficit. Children with ADHD exhibit several problems associated with inhibition of color, and this symptom is affected by low activity of the right LPFC. In addition, it is suggested that low hemodynamic activity in this area is correlated with ADHD. --- paper_title: Inhibition and the Validity of the Stroop Task for Children with Autism paper_content: Findings are mixed concerning inhibition in autism. 
Using the classic Stroop, children with autism (CWA) often outperform typically developing children (TDC). A classic Stroop and a chimeric animal Stroop were used to explore the validity of the Stroop task as a test of inhibition for CWA. During the classic Stroop, children ignored the word and named the ink colour, then vice versa. Although CWA showed less interference than TDC when colour naming, both groups showed comparable interference when word reading. During the chimeric animal task, children ignored bodies of animals and named heads, and vice versa; the groups performed comparably. Findings confirm that lower reading comprehension affects Stroop interference in CWA, potentially leading to inaccurate conclusions concerning inhibition in CWA. --- paper_title: Cortical activation during attention to sound in autism spectrum disorders. paper_content: Abstract Individuals with autism spectrum disorders (ASDs) can demonstrate hypersensitivity to sounds as well as a lack of awareness of them. Several functional imaging studies have suggested an abnormal response in the auditory cortex of such subjects, but it is not known whether these subjects have dysfunction in the auditory cortex or are simply not listening. We measured changes in blood oxygenated hemoglobin (OxyHb) in the prefrontal and temporal cortices using near-infrared spectroscopy during various listening and ignoring tasks in 11 ASD and 12 control subjects. Here we show that the auditory cortex in ASD subjects responds to sounds fully during attention. OxyHb in the auditory cortex increased with intentional listening but not with ignoring of the same auditory stimulus in a similar fashion in both groups. Cortical responses differed not in the auditory but in the prefrontal region between the ASD and control groups. Thus, unawareness to sounds in ASD could be interpreted as due to inattention rather than dysfunction of the auditory cortex. Difficulties in attention control may account for the contrary behaviors of hypersensitivity and unawareness to sound in ASD. --- paper_title: Atypicalities in Cortical Structure, Handedness, and Functional Lateralization for Language in Autism Spectrum Disorders paper_content: Language is typically a highly lateralized function, with atypically reduced or reversed lateralization linked to language impairments. Given the diagnostic and prognostic role of impaired language for autism spectrum disorders (ASDs), this paper reviews the growing body of literature that examines patterns of lateralization in individuals with ASDs. Including research from structural and functional imaging paradigms, and behavioral evidence from investigations of handedness, the review confirms that atypical lateralization is common in people with ASDs. The evidence indicates reduced structural asymmetry in fronto-temporal language regions, attenuated functional activation in response to language and pre-linguistic stimuli, and more ambiguous (mixed) hand preferences, in individuals with ASDs. Critically, the evidence emphasizes an intimate relationship between atypical lateralization and language impairment, with more atypical asymmetries linked to more substantive language impairment. Such evidence highlights opportunities for the identification of structural and functional biomarkers of ASDs, affording the potential for earlier diagnosis and intervention implementation. 
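The task studies summarized above (finger tapping, Go/No-go, Stroop) quantify activation as changes in oxy-Hb during task blocks relative to baseline. The sketch below shows a simple block-averaging approach, assuming hypothetical onset times, window lengths, and sampling rate; the cited papers' actual pipelines may differ (for example, GLM-based analyses).

```python
# Minimal block-averaging sketch for a block-design fNIRS task: cut epochs
# around each block onset, subtract the pre-block baseline, and average.
import numpy as np

def block_average(hbo, onsets_s, fs, pre_s=5.0, post_s=30.0):
    """Average baseline-corrected epochs around block onsets.

    hbo      : 1-D oxy-Hb time course for one channel
    onsets_s : block onset times in seconds
    fs       : sampling rate in Hz
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in onsets_s:
        i = int(onset * fs)
        if i - pre < 0 or i + post > len(hbo):
            continue                      # skip blocks without a full window
        epoch = hbo[i - pre:i + post].copy()
        epoch -= epoch[:pre].mean()       # subtract the pre-block baseline
        epochs.append(epoch)
    return np.mean(epochs, axis=0)        # grand-average hemodynamic response

# Synthetic example: 10 Hz data, task blocks every 60 s (illustrative values).
fs = 10.0
hbo = 0.05 * np.random.randn(int(10 * 60 * fs))
onsets = np.arange(30, 570, 60)           # seconds
avg_response = block_average(hbo, onsets, fs)
print(avg_response.shape)                 # (350,) samples: 5 s pre + 30 s post
```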
--- paper_title: Reduced interhemispheric functional connectivity of children with autism spectrum disorder: evidence from functional near infrared spectroscopy studies paper_content: Autism spectrum disorder (ASD) is a neuro-developmental disorder, which has been associated with atypical neural synchronization. In this study, functional near infrared spectroscopy (fNIRS) was used to study the differences in functional connectivity in bilateral inferior frontal cortices (IFC) and bilateral temporal cortices (TC) between ASD and typically developing (TD) children between 8 and 11 years of age. As the first report of fNIRS study on the resting state functional connectivity (RSFC) in children with ASD, ten children with ASD and ten TD children were recruited in this study for 8 minute resting state measurement. Compared to TD children, children with ASD showed reduced interhemispheric connectivity in TC. Children with ASD also showed significantly lower local connectivity in bilateral temporal cortices. In contrast to TD children, children with ASD did not show typical patterns of symmetry in functional connectivity in temporal cortex. These results support the feasibility of using the fNIRS method to assess atypical functional connectivity of cortical responses of ASD and its potential application in diagnosis. --- paper_title: Anterior Prefrontal Hemodynamic Connectivity in Conscious 3- to 7-Year-Old Children with Typical Development and Autism Spectrum Disorder paper_content: Socio-communicative impairments are salient features of autism spectrum disorder (ASD) from a young age. The anterior prefrontal cortex (aPFC), or Brodmann area 10, is a key processing area for social function, and atypical development of this area is thought to play a role in the social deficits in ASD. It is important to understand these brain functions in developing children with ASD. However, these brain functions have not yet been well described under conscious conditions in young children with ASD. In the present study, we focused on the brain hemodynamic functional connectivity between the right and the left aPFC in children with ASD and typically developing (TD) children and investigated whether there was a correlation between this connectivity and social ability. Brain hemodynamic fluctuations were measured non-invasively by near-infrared spectroscopy (NIRS) in 3- to 7-year-old children with ASD (n = 15) and gender- and age-matched TD children (n = 15). The functional connectivity between the right and the left aPFC was assessed by measuring the coherence for low-frequency spontaneous fluctuations (0.01 – 0.10 Hz) during a narrated picture-card show. Coherence analysis demonstrated that children with ASD had a significantly higher inter-hemispheric connectivity with 0.02-Hz fluctuations, whereas a power analysis did not demonstrate significant differences between the two groups in terms of low frequency fluctuations (0.01 – 0.10 Hz). This aberrant higher connectivity in children with ASD was positively correlated with the severity of social deficit, as scored with the Autism Diagnostic Observation Schedule. This is the first study to demonstrate aberrant brain functional connectivity between the right and the left aPFC under conscious conditions in young children with ASD. 
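The preceding study assesses interhemispheric connectivity as coherence between left and right anterior-prefrontal signals within the 0.01–0.10 Hz band of spontaneous fluctuations. The sketch below shows how a band-limited coherence estimate of that kind can be computed with Welch's method; the sampling rate, segment length, and synthetic data are assumptions for illustration, not parameters from the cited study.

```python
# Sketch of low-frequency coherence between two fNIRS channels.
import numpy as np
from scipy.signal import coherence

def low_freq_coherence(left, right, fs, band=(0.01, 0.10), nperseg=1024):
    """Mean magnitude-squared coherence between two channels within a band."""
    freqs, cxy = coherence(left, right, fs=fs, nperseg=nperseg)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band], cxy[in_band], float(cxy[in_band].mean())

# Synthetic example: two 10-minute channels sampled at 10 Hz that share a slow
# 0.02 Hz component, which should raise their low-frequency coherence.
fs = 10.0
t = np.arange(0, 600, 1 / fs)
shared = np.sin(2 * np.pi * 0.02 * t)
left = shared + 0.5 * np.random.randn(t.size)
right = shared + 0.5 * np.random.randn(t.size)
_, _, mean_coh = low_freq_coherence(left, right, fs)
print(round(mean_coh, 3))
```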
--- paper_title: Functional connectivity in the first year of life in infants at-risk for autism: a preliminary near-infrared spectroscopy study paper_content: Background: Autism spectrum disorder (ASD) has been called a “developmental disconnection syndrome,” however the majority of the research examining connectivity in ASD has been conducted exclusively with older children and adults. Yet, prior ASD research suggests that perturbations in neurodevelopmental trajectories begin as early as the first year of life. Prospective longitudinal studies of infants at risk for ASD may provide a window into the emergence of these aberrant patterns of connectivity. The current study employed functional connectivity near-infrared spectroscopy (NIRS) in order to examine the development of intra- and inter-hemispheric functional connectivity in high- and low-risk infants across the first year of life. Methods: NIRS data were collected from 27 infants at high risk for autism (HRA) and 37 low-risk comparison (LRC) infants who contributed a total of 116 data sets at 3-, 6-, 9-, and 12-months. At each time point, HRA and LRC groups were matched on age, sex, head circumference, and Mullen Scales of Early Learning scores. Regions of interest (ROI) were selected from anterior and posterior locations of each hemisphere. The average time course for each ROI was calculated and correlations for each ROI pair were computed. Differences in functional connectivity were examined in a cross-sectional manner. Results: At 3-months, HRA infants showed increased overall functional connectivity compared to LRC infants. This was the result of increased connectivity for intra- and inter-hemispheric ROI pairs. No significant differences were found between HRA and LRC infants at 6- and 9-months. However, by 12-months, HRA infants showed decreased connectivity relative to LRC infants. --- paper_title: Neural Processing of Facial Identity and Emotion in Infants at High-Risk for Autism Spectrum Disorders paper_content: Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7 month-old infants at high risk for developing autism and typically developing controls at low risk, using a face perception task designed to differentiate between the effects of face identity and facial emotions on neural response using functional Near Infrared Spectroscopy (fNIRS). In addition, we employed independent component analysis (ICA), as well as a novel method of condition-related component selection and classification to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms, but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. --- paper_title: The Link between Social Cognition and Self-referential Thought in the Medial Prefrontal Cortex paper_content: The medial prefrontal cortex (mPFC) has been implicated in seemingly disparate cognitive functions, such as understanding the minds of other people and processing information about the self. This functional overlap would be expected if humans use their own experiences to infer the mental states of others, a basic postulate of simulation theory. 
Neural activity was measured while participants attended to either the mental or physical aspects of a series of other people. To permit a test of simulation theory's prediction that inferences based on self-reflection should only be made for similar others, targets were subsequently rated for their degree of similarity to self. Parametric analyses revealed a region of the ventral mPFC—previously implicated in self-referencing tasks—in which activity correlated with perceived self/other similarity, but only for mentalizing trials. These results suggest that self-reflection may be used to infer the mental states of others when they are sufficiently similar to self. --- paper_title: Neuroimaging in autism spectrum disorders: 1H-MRS and NIRS study. paper_content: Using proton magnetic resonance spectroscopy (1H-MRS), we measured chemical metabolites in the left amygdala and the bilateral orbito-frontal cortex (OFC) in children with autism spectrum disorders (ASD). The concentrations of N-acetylaspartate (NAA) in these regions of ASD were significantly decreased compared to those in the control group. In the autistic patients, the NAA concentrations in these regions correlated with their social quotient. These findings suggest the presence of neuronal dysfunction in the amygdala and OFC in ASD. Dysfunction in the amygdala and OFCmay contribute to the pathogenesis of ASD.We performed a near-infrared spectroscopy (NIRS) study to evaluate the mirror neuron system in children with ASD. The concentrations of oxygenated hemoglobin (oxy-Hb) were measured with frontal probes using a 34-channel NIRS machine while the subjects imitated emotional facial expressions. The increments in the concentration of oxy-Hb in the pars opercularis of the inferior frontal gyrus in autistic subjects were significantly lower than those in the controls. However, the concentrations of oxy-Hb in this area were significantly elevated in autistic subjects after they were trained to imitate emotional facial expressions. The results suggest that mirror neurons could be activated by repeated imitation in children with ASD. --- paper_title: Self-face recognition in children with autism spectrum disorders: A near-infrared spectroscopy study paper_content: It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. 
These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness. --- paper_title: Atypical neural self-representation in autism paper_content: The ‘self’ is a complex multidimensional construct deeply embedded and in many ways defined by our relations with the social world. Individuals with autism are impaired in both self-referential and other-referential social cognitive processing. Atypical neural representation of the self may be a key to understanding the nature of such impairments. Using functional magnetic resonance imaging we scanned adult males with an autism spectrum condition and age and IQ-matched neurotypical males while they made reflective mentalizing or physical judgements about themselves or the British Queen. Neurotypical individuals preferentially recruit the middle cingulate cortex and ventromedial prefrontal cortex in response to self compared with other-referential processing. In autism, ventromedial prefrontal cortex responded equally to self and other, while middle cingulate cortex responded more to other-mentalizing than self-mentalizing. These atypical responses occur only in areas where self-information is preferentially processed and does not affect areas that preferentially respond to other-referential information. In autism, atypical neural self-representation was also apparent via reduced functional connectivity between ventromedial prefrontal cortex and areas associated with lower level embodied representations, such as ventral premotor and somatosensory cortex. Furthermore, the magnitude of neural self-other distinction in ventromedial prefrontal cortex was strongly related to the magnitude of early childhood social impairments in autism. Individuals whose ventromedial prefrontal cortex made the largest distinction between mentalizing about self and other were least socially impaired in early childhood, while those whose ventromedial prefrontal cortex made little to no distinction between mentalizing about self and other were the most socially impaired in early childhood. These observations reveal that the atypical organization of neural circuitry preferentially coding for self-information is a key mechanism at the heart of both self-referential and social impairments in autism. --- paper_title: Functional connectivity in the first year of life in infants at-risk for autism: a preliminary near-infrared spectroscopy study paper_content: Background: Autism spectrum disorder (ASD) has been called a “developmental disconnection syndrome,” however the majority of the research examining connectivity in ASD has been conducted exclusively with older children and adults. Yet, prior ASD research suggests that perturbations in neurodevelopmental trajectories begin as early as the first year of life. Prospective longitudinal studies of infants at risk for ASD may provide a window into the emergence of these aberrant patterns of connectivity. The current study employed functional connectivity near-infrared spectroscopy (NIRS) in order to examine the development of intra- and inter-hemispheric functional connectivity in high- and low-risk infants across the first year of life. 
Methods: NIRS data were collected from 27 infants at high risk for autism (HRA) and 37 low-risk comparison (LRC) infants who contributed a total of 116 data sets at 3-, 6-, 9-, and 12-months. At each time point, HRA and LRC groups were matched on age, sex, head circumference, and Mullen Scales of Early Learning scores. Regions of interest (ROI) were selected from anterior and posterior locations of each hemisphere. The average time course for each ROI was calculated and correlations for each ROI pair were computed. Differences in functional connectivity were examined in a cross-sectional manner. Results: At 3-months, HRA infants showed increased overall functional connectivity compared to LRC infants. This was the result of increased connectivity for intra- and inter-hemispheric ROI pairs. No significant differences were found between HRA and LRC infants at 6- and 9-months. However, by 12-months, HRA infants showed decreased connectivity relative to LRC infants. --- paper_title: How Would You Feel versus How Do You Think She Would Feel? A Neuroimaging Study of Perspective-Taking with Social Emotions paper_content: Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy. --- paper_title: Perceptual inconstancy in early infantile autism. The syndrome of early infant autism and its variants including certain cases of childhood schizophrenia. paper_content: WITHIN the decade since the syndrome of early infantile autism was first described by Kanner, 1-2 terms such as childhood schizophrenia, 3 atypical children, 4 children with unusual sensitivities, 5 and symbiotic psychosis 6 were used to conceptualize similar, yet apparently distinctive clinical entities. The tendency to create separate entities was reinforced by a desire for diagnostic specificity and accuracy and etiologic preference. As the symptomatology in these children varies both with the severity of the illness and age, it has been possible to emphasize distinctive clusters of symptoms and relate these to particular theories of causation. 
For instance, the predominance of disturbances of relating coupled with the prevailing belief in the 1940's and 1950's that specific syndromes in children must be outgrowths of specific parental behaviors or attitudes 7 led to attempts to implicate the parents in the development of early infantile autism. The --- paper_title: Neural Processing of Facial Identity and Emotion in Infants at High-Risk for Autism Spectrum Disorders paper_content: Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7 month-old infants at high risk for developing autism and typically developing controls at low risk, using a face perception task designed to differentiate between the effects of face identity and facial emotions on neural response using functional Near Infrared Spectroscopy (fNIRS). In addition, we employed independent component analysis (ICA), as well as a novel method of condition-related component selection and classification to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms, but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. --- paper_title: Usefulness of near-infrared spectroscopy to detect brain dysfunction in children with autism spectrum disorder when inferring the mental state of others paper_content: Aims ::: ::: The purpose of this study was to examine the usefulness of near-infrared spectroscopy (NIRS) for identifying abnormalities in prefrontal brain activity in children with autism spectrum disorders (ASD) as they inferred the mental states of others. ::: ::: ::: ::: Methods ::: ::: The subjects were 16 children with ASD aged between 8 and 14 years and 16 age-matched healthy control children. Oxygenated hemoglobin concentration was measured in the subject's prefrontal brain region on NIRS during tasks expressing a person's mental state (MS task) and expressing an object's characteristics (OC task). ::: ::: ::: ::: Results ::: ::: There was a significant main effect of group (ASD vs control), with the control group having more activity than the ASD group. But there was no significant main effect of task (MS task vs OC task) or hemisphere (right vs left). Significant interactions of task and group were found, with the control group showing more activity than the ASD group during the MS task relative to the OC task. ::: ::: ::: ::: Conclusions ::: ::: NIRS showed that there was lower activity in the prefrontal brain area when children with ASD performed MS tasks. Therefore, clinicians might be able to use NIRS and these tasks for conveniently detecting brain dysfunction in children with ASD related to inferring mental states, in the clinical setting. --- paper_title: Perception of Complex Sounds: Abnormal Pattern of Cortical Activation in Autism paper_content: OBJECTIVE: Bilateral temporal hypoperfusion at rest was recently described in autism. In normal adults, these regions are activated by listening to speech-like sounds. To investigate auditory cortical processing in autism, the authors performed a positron emission tomography activation study. 
METHOD: Regional cerebral blood flow was measured in five autistic adults and eight comparison subjects during rest and while listening to speech-like sounds. RESULTS: Similar to the comparison subjects, autistic patients showed a bilateral activation of the superior temporal gyrus. However, an abnormal pattern of hemispheric activation was observed in the autistic group. The volume of activation was larger on the right side in the autistic patients, whereas the reverse pattern was found in the comparison group. The direct comparison between the two groups showed that the right middle frontal gyrus exhibited significantly greater activation in the autistic group. Conversely, the left temporal areas exhibited less act... --- paper_title: Autistic Traits and Brain Activation during Face-to-Face Conversations in Typically Developed Adults paper_content: Background: Autism spectrum disorders (ASD) are characterized by impaired social interaction and communication, restricted interests, and repetitive behaviours. The severity of these characteristics is posited to lie on a continuum that extends into the general population. Brain substrates underlying ASD have been investigated through functional neuroimaging studies using functional magnetic resonance imaging (fMRI). However, fMRI has methodological constraints for studying brain mechanisms during social interactions (for example, noise, lying on a gantry during the procedure, etc.). In this study, we investigated whether variations in autism spectrum traits are associated with changes in patterns of brain activation in typically developed adults. We used near-infrared spectroscopy (NIRS), a recently developed functional neuroimaging technique that uses near-infrared light, to monitor brain activation in a natural setting that is suitable for studying brain functions during social interactions. Methodology: We monitored regional cerebral blood volume changes using a 52-channel NIRS apparatus over the prefrontal cortex (PFC) and superior temporal sulcus (STS), 2 areas implicated in social cognition and the pathology of ASD, in 28 typically developed participants (14 male and 14 female) during face-to-face conversations. This task was designed to resemble a realistic social situation. We examined the correlations of these changes with autistic traits assessed using the Autism-Spectrum Quotient (AQ). Principal Findings: Both the PFC and STS were significantly activated during face-to-face conversations. AQ scores were negatively correlated with regional cerebral blood volume increases in the left STS during face-to-face conversations, especially in males. Conclusions: Our results demonstrate successful monitoring of brain function during realistic social interactions by NIRS as well as lesser brain activation in the left STS during face-to-face conversations in typically developed participants with higher levels of autistic traits. --- paper_title: Autism, the superior temporal sulcus and social perception paper_content: The most common clinical sign of autism spectrum disorders (ASD) is social interaction impairment, which is associated with communication deficits and stereotyped behaviors. Based on recent brain-imaging results, our hypothesis is that abnormalities in the superior temporal sulcus (STS) are highly implicated in ASD. STS abnormalities are characterized by decreased gray matter concentration, rest hypoperfusion and abnormal activation during social tasks. 
STS anatomical and functional anomalies occurring during early brain development could constitute the first step in the cascade of neural dysfunction underlying ASD. We will focus this review on the STS, which has been highly implicated in social cognition. We will review recent data on the contribution of the STS to normal social cognition and review brain-imaging data implicating this area in ASD. This review is part of the INMED/TINS special issue Nature and nurture in brain development and neurological disorders, based on presentations at the annual INMED/TINS symposium (http://inmednet.com/). --- paper_title: Differences in Neural Correlates of Speech Perception in 3 Month Olds at High and Low Risk for Autism Spectrum Disorder paper_content: In this study, we investigated neural precursors of language acquisition as potential endophenotypes of autism spectrum disorder (ASD) in 3-month-old infants at high and low familial ASD risk. Infants were imaged using functional near-infrared spectroscopy while they listened to auditory stimuli containing syllable repetitions; their neural responses were analyzed over left and right temporal regions. While female low risk infants showed initial neural activation that decreased over exposure to repetition-based stimuli, potentially indicating a habituation response to repetition in speech, female high risk infants showed no changes in neural activity over exposure. This finding may indicate a potential neural endophenotype of language development or ASD specific to females at risk for the disorder. --- paper_title: The superior temporal sulcus performs a common function for social and speech perception: Implications for the emergence of autism paper_content: Abstract Within the cognitive neuroscience literature, discussion of the functional role of the superior temporal sulcus (STS) has traditionally been divided into two domains; one focuses on its activity during language processing while the other emphasizes its role in biological motion and social attention, such as eye gaze processing. I will argue that a common process underlying both of these functional domains is performed by the STS, namely analyzing changing sequences of input, either in the auditory or visual domain, and interpreting the communicative significance of those inputs. From a developmental perspective, the fact that these two domains share an anatomical substrate suggests the acquisition of social and speech perception may be linked. In addition, I will argue that because of the STS’ role in interpreting social and speech input, impairments in STS function may underlie many of the social and language abnormalities seen in autism. --- paper_title: Most genetic risk for autism resides with common variation paper_content: Joseph Buxbaum and colleagues use an epidemiological sample from Sweden to investigate the genetic architecture of autism spectrum disorders. They conclude that most inherited risk for autism is determined by common variation and that rare variation explains a smaller fraction of total heritability. --- paper_title: Research Review: Constraining heterogeneity: the social brain and its development in autism spectrum disorder paper_content: The expression of autism spectrum disorder (ASD) is highly heterogeneous, owing to the complex interactions between genes, the brain, and behavior throughout development. 
Here we present a model of ASD that implicates an early and initial failure to develop the specialized functions of one or more of the set of neuroanatomical structures involved in social information processing (i.e., the ‘social brain’). From this early and primary disruption, abnormal brain development is canalized because the individual with an ASD must develop in a highly social world without the specialized neural systems that would ordinarily allow him or her to partake in the fabric of social life, which is woven from the thread of opportunities for social reciprocity and the tools of social engagement. This brain canalization gives rise to other characteristic behavioral deficits in ASD including deficits in communication, restricted interests, and repetitive behaviors. We propose that focused efforts to explore the brain mechanisms underlying the core, pathognomic deficits in the development of mechanisms for social engagement in ASD will greatly elucidate our understanding and treatment of this complex, devastating family of neurodevelopmental disorders. In particular, developmental studies (i.e., longitudinal studies of young children with and without ASD, as well as infants at increased risk for being identified with ASD) of the neural circuitry supporting key aspects of social information processing are likely to provide important insights into the underlying components of the full-syndrome of ASD. These studies could also contribute to the identification of developmental brain endophenotypes to facilitate genetic studies. The potential for this kind of approach is illustrated via examples of functional neuroimaging research from our own laboratory implicating the posterior superior temporal sulcus (STS) as a key player in the set of neural structures giving rise to ASD. Keywords: Social perception, social cognition, autism, functional neuroimaging, social brain. The considerable heterogeneity in the expression and severity of the core and associated symptoms is a challenge that has hindered progress towards understanding autism spectrum disorder (ASD). To illustrate, within autistic disorder, variability in the social domain ranges from a near absence of interest in interacting with others to more subtle difficulties managing complex social interactions that require an understanding of other people’s goals and intentions and other cues of social context. Similarly, stereotyped and repetitive behaviors range from simple motor stereotypies and/or a preference for sameness to much more complex and elaborate rituals, accompanied by emotional dysregulation or ‘meltdowns’ when these rituals are interrupted. Some individuals with ASD lack basic speech abilities, while others can have language deficits that are mild and limited to language pragmatics. While a majority of individuals with ASD exhibit some level of intellectual impairment, intelligence quotients vary from the severe and profoundly impaired range to well above average. --- paper_title: The neonate brain detects speech structure paper_content: What are the origins of the efficient language learning abilities that allow humans to acquire their mother tongue in just a few years very early in life? Although previous studies have identified different mechanisms underlying the acquisition of auditory and speech patterns in older infants and adults, the earliest sensitivities remain unexplored. 
To address this issue, we investigated the ability of newborns to learn simple repetition-based structures in two optical brain-imaging experiments. In the first experiment, 22 neonates listened to syllable sequences containing immediate repetitions (ABB; e.g., “mubaba,” “penana”), intermixed with random control sequences (ABC; e.g., “mubage,” “penaku”). We found increased responses to the repetition sequences in the temporal and left frontal areas, indicating that the newborn brain differentiated the two patterns. The repetition sequences evoked greater activation than the random sequences during the first few trials, suggesting the presence of an automatic perceptual mechanism to detect repetitions. In addition, over the subsequent trials, activation increased further in response to the repetition sequences but not in response to the random sequences, indicating that recognition of the ABB pattern was enhanced by repeated exposure. In the second experiment, in which nonadjacent repetitions (ABA; e.g., “bamuba,” “napena”) were contrasted with the same random controls, no discrimination was observed. These findings suggest that newborns are sensitive to certain input configurations in the auditory domain, a perceptual ability that might facilitate later language development. --- paper_title: Sibling Recurrence and the Genetic Epidemiology of Autism paper_content: Objective: Although the symptoms of autism exhibit quantitative distributions in nature, estimates of recurrence risk in families have never previously considered or incorporated quantitative characterization of the autistic phenotype among siblings. Method: The authors report the results of quantitative characterization of 2,920 children from 1,235 families participating in a national volunteer register, with at least one child clinically affected by an autism spectrum disorder and at least one full biological sibling. Results: A traditionally defined autism spectrum disorder in an additional child occurred in 10.9% of the families. An additional 20% of nonautism-affected siblings had a history of language delay, one-half of whom exhibited autistic qualities of speech. Quantitative characterization using the Social Responsiveness Scale supported previously reported aggregation of a wide range of subclinical (quantitative) autistic traits among otherwise unaffected children in multiple-incidence families and... --- paper_title: Behavioral manifestations of autism in the first year of life paper_content: In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development.
Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism. --- paper_title: Dissociable Roles of the Superior Temporal Sulcus and the Intraparietal Sulcus in Joint Attention: A Functional Magnetic Resonance Imaging Study paper_content: Previous imaging work has shown that the superior temporal sulcus (STS) region and the intraparietal sulcus (IPS) are specifically activated during the passive observation of shifts in eye gaze [Pelphrey, K. A., Singerman, J. D., Allison, T., & McCarthy, G. Brain activation evoked by perception of gaze shifts: The influence of context. Neuropsychologia, 41, 156–170, 2003; Hoffman, E. A., & Haxby, J. V. Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nature Neuroscience, 3, 80–84, 2000; Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G. Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18, 2188–2199, 1998; Wicker, B., Michel, F., Henaff, M. A., & Decety, J. Brain regions involved in the perception of gaze: A PET study. Neuroimage, 8, 221–227, 1998]. Are the same brain regions also involved in extracting gaze direction in order to establish joint attention? In an event-related functional magnetic resonance imaging experiment, healthy human subjects actively followed the directional cue provided by the eyes of another person toward an object in space or, in the control condition, used a nondirectional symbolic cue to make an eye movement toward an object in space. Our results show that the posterior part of the STS region and the cuneus are specifically involved in extracting and using detailed directional information from the eyes of another person to redirect one's own gaze and establish joint attention. The IPS, on the other hand, seems to be involved in encoding spatial direction and mediating shifts of spatial attention independent of the type of cue that triggers this process. --- paper_title: A new research trend in social neuroscience: Towards an interactive-brain neuroscience paper_content: The ability to flexibly modulate our behaviors in social contexts and to successfully interact with other persons is a fundamental, but pivotal, requirement for human survival.
Although previous social neuroscience research with single individuals has contributed greatly to our understanding of the basic mechanisms underlying social perception and social emotions, much of the dynamic nature of interactions between persons remains beyond the reach of single-brain studies. This has led to a growing argument for a shift to the simultaneous measurement of the brain activity of two or more individuals in realistic social interactions, an approach termed "hyperscanning." Although this approach offers important promise in unlocking the brain's role in truly social situations, there are multiple procedural and theoretical questions that require review and analysis. In this paper we discuss this research trend from four aspects: hyperscanning apparatus, experimental task, quantification method, and theoretical interpretation. We also give four suggestions for future research: (a) electroencephalography and near-infrared spectroscopy are useful tools by which to explore the interactive brain in more ecological settings; (b) games are an appropriate method to simulate daily life interactions; (c) transfer entropy may be an important method by which to quantify directed exchange of information between brains; and (d) more explanation is needed of the results of interbrain synchronization itself. --- paper_title: Cortical activation during attention to sound in autism spectrum disorders. paper_content: Individuals with autism spectrum disorders (ASDs) can demonstrate hypersensitivity to sounds as well as a lack of awareness of them. Several functional imaging studies have suggested an abnormal response in the auditory cortex of such subjects, but it is not known whether these subjects have dysfunction in the auditory cortex or are simply not listening. We measured changes in blood oxygenated hemoglobin (OxyHb) in the prefrontal and temporal cortices using near-infrared spectroscopy during various listening and ignoring tasks in 11 ASD and 12 control subjects. Here we show that the auditory cortex in ASD subjects responds to sounds fully during attention. OxyHb in the auditory cortex increased with intentional listening but not with ignoring of the same auditory stimulus in a similar fashion in both groups. Cortical responses differed not in the auditory but in the prefrontal region between the ASD and control groups. Thus, unawareness to sounds in ASD could be interpreted as due to inattention rather than dysfunction of the auditory cortex. Difficulties in attention control may account for the contrary behaviors of hypersensitivity and unawareness to sound in ASD. --- paper_title: Development of a neurofeedback protocol targeting the frontal pole using near-infrared spectroscopy paper_content: Aim: Neurofeedback has been studied with the aim of controlling cerebral activity. Near-infrared spectroscopy is a non-invasive neuroimaging technique used for measuring hemoglobin concentration changes in cortical surface areas with high temporal resolution. Thus, near-infrared spectroscopy may be useful for neurofeedback, which requires real-time feedback of repeated brain activation measurements. However, no study has specifically targeted neurofeedback, using near-infrared spectroscopy, in the frontal pole cortex. Methods: We developed an original near-infrared spectroscopy neurofeedback system targeting the frontal pole cortex.
Over a single day of testing, each healthy participant (n = 24) received either correct or incorrect (Sham) feedback from near-infrared spectroscopy signals, based on a crossover design. Results: Under correct feedback conditions, significant activation was observed in the frontal pole cortex (P = 0.000073). Additionally, self-evaluation of control and metacognitive beliefs were associated with near-infrared spectroscopy signals (P = 0.006). Conclusion: The neurofeedback system developed in this study might be useful for developing control of frontal pole cortex activation. --- paper_title: Neuroimaging in autism spectrum disorders: 1H-MRS and NIRS study. paper_content: Using proton magnetic resonance spectroscopy (1H-MRS), we measured chemical metabolites in the left amygdala and the bilateral orbito-frontal cortex (OFC) in children with autism spectrum disorders (ASD). The concentrations of N-acetylaspartate (NAA) in these regions of ASD were significantly decreased compared to those in the control group. In the autistic patients, the NAA concentrations in these regions correlated with their social quotient. These findings suggest the presence of neuronal dysfunction in the amygdala and OFC in ASD. Dysfunction in the amygdala and OFC may contribute to the pathogenesis of ASD. We performed a near-infrared spectroscopy (NIRS) study to evaluate the mirror neuron system in children with ASD. The concentrations of oxygenated hemoglobin (oxy-Hb) were measured with frontal probes using a 34-channel NIRS machine while the subjects imitated emotional facial expressions. The increments in the concentration of oxy-Hb in the pars opercularis of the inferior frontal gyrus in autistic subjects were significantly lower than those in the controls. However, the concentrations of oxy-Hb in this area were significantly elevated in autistic subjects after they were trained to imitate emotional facial expressions. The results suggest that mirror neurons could be activated by repeated imitation in children with ASD. --- paper_title: Near-infrared spectroscopy (NIRS) neurofeedback as a treatment for children with attention deficit hyperactivity disorder (ADHD)—a pilot study paper_content: In this pilot study near-infrared spectroscopy (NIRS) neurofeedback was investigated as a new method for the treatment of ADHD. Oxygenated hemoglobin in the prefrontal cortex of children with ADHD was measured and fed back. 12 sessions of NIRS-neurofeedback were compared to the intermediate outcome after 12 sessions of EEG-neurofeedback (slow cortical potentials, SCP) and 12 sessions of EMG-feedback (muscular activity of left and right musculus supraspinatus). The task was either to increase or decrease hemodynamic activity in the prefrontal cortex (NIRS), to produce positive or negative shifts of SCP (EEG) or to increase or decrease muscular activity (EMG). In each group nine children with ADHD, aged 7 to 10 years, took part. Changes in parents’ ratings of ADHD symptoms were assessed before and after the 12 sessions and compared within and between groups. For the NIRS-group additional teachers’ ratings of ADHD symptoms, parents’ and teachers’ ratings of associated behavioral symptoms, children’s self reports on quality of life and a computer based attention task were conducted before, 4 weeks and 6 months after training. As primary outcome, ADHD symptoms decreased significantly 4 weeks and 6 months after the NIRS training, according to parents’ ratings.
In teachers’ ratings of ADHD symptoms there was a significant reduction 4 weeks after the training. The performance in the computer based attention test improved significantly. Within-group comparisons after 12 sessions of NIRS-, EEG- and EMG-training revealed a significant reduction in ADHD symptoms in the NIRS-group and a trend for EEG- and EMG-groups. No significant differences for symptom reduction were found between the groups. Despite the limitations of small groups and the comparison of a completed with two uncompleted interventions, the results of this pilot study are promising. NIRS-neurofeedback could be a time-effective treatment for ADHD and an interesting new option to consider in the treatment of ADHD. --- paper_title: Self-face recognition in children with autism spectrum disorders: A near-infrared spectroscopy study paper_content: It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness. --- paper_title: Weak network efficiency in young children with Autism Spectrum Disorder: Evidence from a functional near-infrared spectroscopy study paper_content: Functional near infrared spectroscopy (fNIRS) is particularly suited for the young population and ecological measurement. However, thus far, not enough effort has been given to the clinical diagnosis of young children with Autism Spectrum Disorder (ASD) by using fNIRS. The current study provided some insights into the quantitative analysis of functional networks in young children (ages 4.8–8.0 years old) with and without ASD and, in particular, investigated the network efficiency and lobe-level connectivity of their functional networks while watching a cartoon.
The main results included that: (i) Weak network efficiency was observed in young children with ASD, even for a wide range of thresholds for the binarization of functional networks; (ii) A maximum classification accuracy rate of 83.3% was obtained for all participants by using the k-means clustering method with network efficiencies as the feature parameters; and (iii) Weak lobe-level inter-region connections were uncovered in the right prefrontal cortex, including its linkages with the left prefrontal cortex and the bilateral temporal cortex. Such results indicate that the right prefrontal cortex might make a major contribution to the psychopathology of young children with ASD at the functional network architecture level, and at the functional lobe-connectivity level, respectively. --- paper_title: Fundamental Components of Attention paper_content: A mechanistic understanding of attention is necessary for the elucidation of the neurobiological basis of conscious experience. This chapter presents a framework for thinking about attention that facilitates the analysis of this cognitive process in terms of underlying neural mechanisms. Four processes are fundamental to attention: working memory, top-down sensitivity control, competitive selection, and automatic bottom-up filtering for salient stimuli. Each process makes a distinct and essential contribution to attention. Voluntary control of attention involves the first three processes (working memory, top-down sensitivity control, and competitive selection) operating in a recurrent loop. Recent results from neurobiological research on attention are discussed within this framework. --- paper_title: Distinctive activation patterns under intrinsically versus extrinsically driven cognitive loads in prefrontal cortex: A near-infrared spectroscopy study using a driving video game paper_content: To investigate the neural bases of intrinsically and extrinsically driven cognitive loads in daily life, we repeatedly measured prefrontal activation in three (one control and two experimental) groups during a driving video game using near-infrared spectroscopy. The control group drove to the goal four times with distinct route-maps illustrating default turning points. In contrast, the memory group drove the memorized default route without a route-map, and the emergency group drove with a route-map, but was instructed to change the default route by an extrinsically given verbal command (turn left or right) as an envisioned emergency. The predictability of a turning point in the route in each group was relatively different: due to extrinsic dictate of others in the emergency group, intrinsic memory in the memory group, and route-map aid in the control group. We analyzed concentration changes of oxygenated hemoglobin (CoxyHb) in the three critical periods (pre-turning, actual-turning, and post-turning). The emergency group showed a significantly increasing pattern of CoxyHb throughout the three periods, and a significant reduction in CoxyHb throughout the repetitive trials, but the memory group did not, even though both experimental groups showed higher activation than the control group in the pre-turning period. These results suggest that the prefrontal cortex differentiates the intrinsically (memory) and the extrinsically (dictate of others) driven cognitive loads according to the predictability of turning behavior, although the two types of cognitive loads commonly show increasing activation in the pre-turning period as the preparation effect.
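The "Weak network efficiency" study summarized a few entries above describes a pipeline of binarizing channel-wise functional connectivity over a range of thresholds, computing network efficiency, and clustering participants on those efficiencies with k-means. The Python sketch below reconstructs that general workflow under assumed settings; the channel count, threshold grid, helper name efficiency_features, and the random data are illustrative and do not correspond to the authors' actual code or parameters.

```python
# Sketch of a graph-theoretic fNIRS analysis: correlation-based connectivity,
# binarization over several thresholds, global efficiency per threshold, and
# k-means clustering of participants on the resulting feature vectors.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def efficiency_features(timeseries, thresholds):
    """timeseries: (n_channels, n_samples) oxy-Hb signals for one participant.
    Returns one global-efficiency value per binarization threshold."""
    corr = np.corrcoef(timeseries)              # channel-by-channel correlation
    np.fill_diagonal(corr, 0.0)
    feats = []
    for t in thresholds:
        adj = (np.abs(corr) >= t).astype(int)   # binarized adjacency matrix
        g = nx.from_numpy_array(adj)
        feats.append(nx.global_efficiency(g))
    return np.array(feats)

# Hypothetical data: 20 participants, 44 channels, 1000 time points each.
rng = np.random.default_rng(0)
data = [rng.standard_normal((44, 1000)) for _ in range(20)]
thresholds = np.arange(0.1, 0.6, 0.1)

X = np.vstack([efficiency_features(ts, thresholds) for ts in data])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # candidate grouping to compare against clinical diagnosis
```

In a real analysis the cluster labels would be compared with diagnostic status to obtain a classification accuracy, and local efficiency or lobe-level connection strengths could be appended to the feature vector in the same way.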
--- paper_title: An Integrative Theory of Prefrontal Cortex Function paper_content: The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals. Its neural basis, however, has remained a mystery. Here, we propose that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task. We review neurophysiological, neurobiological, neuroimaging, and computational studies that support this theory and discuss its implications as well as further issues to be addressed. --- paper_title: Atypical attentional networks and the emergence of autism paper_content: The sociocommunicative impairments that define autism spectrum disorder (ASD) are not present at birth but emerge gradually over the first two years of life. In typical development, basic attentional processes may provide a critical foundation for sociocommunicative abilities. Therefore early attentional dysfunction in ASD may result in atypical development of social communication. Prior research has demonstrated that persons with ASD exhibit early and lifelong impairments in attention. The primary aim of this paper is to provide a review of the extant research on attention in ASD using a framework of functionally independent attentional networks as conceptualized by Posner and colleagues: the alerting, orienting and executive control networks (Posner and Petersen, 1990; Petersen and Posner, 2012). The neural substrates and typical development of each attentional network are briefly discussed, a review of the ASD attention literature is presented, and a hypothesis is proposed that links aberrant attentional mechanisms, specifically impaired disengagement of attention, with the emergence of core ASD symptoms. --- paper_title: Cross-Brain Neurofeedback: Scientific Concept and Experimental Platform paper_content: The present study described a new type of multi-person neurofeedback with the neural synchronization between two participants as the direct regulating target, termed “cross-brain neurofeedback.” As a first step to implement this concept, an experimental platform was built on the basis of functional near-infrared spectroscopy, and was validated with a two-person neurofeedback experiment. This novel concept as well as the experimental platform established a framework for investigation of the relationship between multiple participants' cross-brain neural synchronization and their social behaviors, which could provide new insight into the neural substrate of human social interactions. --- paper_title: Near-infrared spectroscopy based neurofeedback training increases specific motor imagery related cortical activation compared to sham feedback paper_content: In the present study we implemented a real-time feedback system based on multichannel near-infrared spectroscopy (NIRS). Prior studies indicated that NIRS-based neurofeedback can enhance motor imagery related cortical activation. To specify these prior results and to confirm the efficacy of NIRS-based neurofeedback, we examined changes in blood oxygenation level collected in eight training sessions.
One group got real feedback about their own brain activity (N = 9) and one group saw a playback of another person’s feedback recording (N = 8). All participants performed motor imagery of a right hand movement. Real neurofeedback induced specific and focused brain activation over left motor areas. This focal brain activation became even more specific over the eight training sessions. In contrast, sham feedback led to diffuse brain activation patterns over the whole cortex. These findings can be useful when training patients with focal brain lesions to increase activity of specific brain areas for rehabilitation purposes. --- paper_title: Neurofeedback Using Real-Time Near-Infrared Spectroscopy Enhances Motor Imagery Related Cortical Activation paper_content: Accumulating evidence indicates that motor imagery and motor execution share common neural networks. Accordingly, mental practices in the form of motor imagery have been implemented in rehabilitation regimes of stroke patients with favorable results. Because direct monitoring of motor imagery is difficult, feedback of cortical activities related to motor imagery (neurofeedback) could help to enhance efficacy of mental practice with motor imagery. To determine the feasibility and efficacy of a real-time neurofeedback system mediated by near-infrared spectroscopy (NIRS), two separate experiments were performed. Experiment 1 was used in five subjects to evaluate whether real-time cortical oxygenated hemoglobin signal feedback during a motor execution task correlated with reference hemoglobin signals computed off-line. Results demonstrated that the NIRS-mediated neurofeedback system reliably detected oxygenated hemoglobin signal changes in real-time. In Experiment 2, 21 subjects performed motor imagery of finger movements with feedback from relevant cortical signals and irrelevant sham signals. Real neurofeedback induced significantly greater activation of the contralateral premotor cortex and greater self-assessment scores for kinesthetic motor imagery compared with sham feedback. These findings suggested the feasibility and potential effectiveness of a NIRS-mediated real-time neurofeedback system on performance of kinesthetic motor imagery. However, these results warrant further clinical trials to determine whether this system could enhance the effects of mental practice in stroke patients. ---
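Several of the neurofeedback entries above (frontal pole, ADHD, and motor imagery) rest on the same basic mechanism: an oxy-Hb signal from a region of interest is baseline-corrected, smoothed, and mapped onto a visual display in real time. The sketch below is a minimal, generic version of that signal-to-feedback mapping; the class name, window lengths, and gain are illustrative assumptions and do not describe any specific system reported in these studies.

```python
# Minimal signal-to-feedback mapping for NIRS neurofeedback: collect a resting
# baseline, then turn the ongoing baseline-corrected oxy-Hb change into a
# bounded value that could drive an on-screen bar or thermometer.
from collections import deque
import numpy as np

class OxyHbFeedback:
    def __init__(self, baseline_samples, smooth_len=10, gain=50.0):
        self.baseline_buf = []                   # rest-period samples
        self.baseline_samples = baseline_samples
        self.baseline = None
        self.window = deque(maxlen=smooth_len)   # moving average for display
        self.gain = gain                         # arbitrary scaling to display range

    def update(self, oxy_hb_roi):
        """oxy_hb_roi: mean oxy-Hb change over the region-of-interest channels
        for the newest sample. Returns a feedback value in [0, 1], or None
        while the baseline is still being collected."""
        if self.baseline is None:
            self.baseline_buf.append(oxy_hb_roi)
            if len(self.baseline_buf) >= self.baseline_samples:
                self.baseline = float(np.mean(self.baseline_buf))
            return None
        self.window.append(oxy_hb_roi - self.baseline)
        value = self.gain * float(np.mean(self.window))
        return float(np.clip(0.5 + value, 0.0, 1.0))  # 0.5 means no change

# Usage with simulated samples at roughly 10 Hz:
fb = OxyHbFeedback(baseline_samples=100)
rng = np.random.default_rng(1)
for sample in rng.normal(0.0, 0.002, size=300):
    bar_height = fb.update(sample)
    # in a real system, bar_height would update the visual feedback display
```

Sham conditions, as in the studies above, can be produced by feeding the same loop a recorded or irrelevant signal instead of the participant's live region-of-interest data.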
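The hyperscanning review above suggests transfer entropy as one way to quantify directed information exchange between interacting brains, and the cross-brain neurofeedback platform targets inter-brain coupling directly. The following sketch is a minimal plug-in (histogram) estimator of transfer entropy for coarsely discretized signals, e.g., binned oxy-Hb series from two participants; the bin count and history length of 1 are simplifying assumptions, and real analyses would typically use dedicated toolboxes with surrogate testing and bias correction.

```python
# Minimal transfer entropy estimator (history length 1, equal-frequency bins).
# TE(source -> target) = sum p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ]
import numpy as np
from collections import Counter

def transfer_entropy(source, target, n_bins=4):
    """Estimate TE(source -> target) in bits from two 1-D time series."""
    def discretize(x):
        ranks = np.argsort(np.argsort(x))            # 0 .. len(x)-1
        return (ranks * n_bins // len(x)).astype(int)

    s = discretize(np.asarray(source))
    t = discretize(np.asarray(target))

    triples = Counter(zip(t[1:], t[:-1], s[:-1]))    # (target_future, target_past, source_past)
    pairs_tt = Counter(zip(t[1:], t[:-1]))           # (target_future, target_past)
    pairs_ts = Counter(zip(t[:-1], s[:-1]))          # (target_past, source_past)
    singles_t = Counter(t[:-1])                      # target_past
    n = len(t) - 1

    te = 0.0
    for (tf, tp, sp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_ts[(tp, sp)]                  # p(tf | tp, sp)
        p_cond_self = pairs_tt[(tf, tp)] / singles_t[tp]      # p(tf | tp)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Hypothetical example: y is partially driven by the past of x.
rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
y = np.roll(x, 1) * 0.8 + rng.standard_normal(2000) * 0.6
print(transfer_entropy(x, y), transfer_entropy(y, x))  # first value should be larger
```

Comparing the two directions gives a crude asymmetry measure of who is driving whom; significance would normally be assessed against time-shuffled or surrogate data.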
Title: Assessing autism at its social and developmental roots: A review of Autism Spectrum Disorder studies using functional near-infrared spectroscopy
Section 1: Introduction
Description 1: Introduce Autism Spectrum Disorder (ASD), its challenges, and the significance of early diagnosis and treatment. Mention the limitations of current behavioral and neuroimaging methods, and propose fNIRS as a promising alternative.
Section 2: Review: two puzzles in current ASD study
Description 2: Provide an overview of previous brain imaging approaches (fMRI and EEG) and introduce the main targets of current fNIRS studies, focusing on two major puzzles: ASD in the developing baby brain and the social brain and brain-to-brain interactions.
Section 3: Puzzle one: ASD in the developing baby brain
Description 3: Discuss studies investigating cerebral structure and function in children with ASD, highlighting deficiencies in executive control, speech processing, auditory processing, and social cognition.
Section 4: Puzzle two: the social brain and brain-to-brain interactions
Description 4: Explore social difficulties in communication and interaction, emphasizing the need for multi-brain "hyperscanning" paradigms to better understand social cognition and neural coupling in ASD.
Section 5: Basic principles of fNIRS
Description 5: Describe the functional near-infrared spectroscopy (fNIRS) technique, including its operation, key modalities, and advantages over traditional neuroimaging methods.
Section 6: Key modalities in fNIRS application
Description 6: Outline various fNIRS device setups, emission and reception methods, and data assessment techniques used in studies.
Section 7: fNIRS data assessment and focus
Description 7: Explain the data preprocessing steps, typical measures of neural activity assessed in fNIRS studies, and the experimental paradigms used.
Section 8: fNIRS studies on ASD
Description 8: Review current fNIRS studies on ASD, categorized into non-social difficulties, atypical brain connectivity, and social difficulties in interaction and communication.
Section 9: Non-social difficulties
Description 9: Summarize fNIRS studies exploring executive function deficits, sensory perception issues, and auditory processing abnormalities in children with ASD.
Section 10: Atypical brain connectivity
Description 10: Detail fNIRS studies on functional connectivity in the brain, discussing findings from both infant and older children populations.
Section 11: Social difficulties in interaction and communication
Description 11: Discuss research on self-other distinction, face processing, visual and auditory social cue recognition, and theory of mind, highlighting deficits observed in ASD.
Section 12: Communication deficits
Description 12: Examine fNIRS studies investigating communication difficulties in ASD, focusing on speech perception and neural precursors of communication.
Section 13: Discussion and conclusion
Description 13: Summarize the findings of reviewed fNIRS studies, discuss their implications for understanding ASD, and propose future research directions and potential applications of fNIRS in ASD intervention and treatment.
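The outline's "Basic principles of fNIRS" and "fNIRS data assessment and focus" sections concern how raw light-attenuation measurements become oxy- and deoxy-hemoglobin concentration changes. That conversion is conventionally based on the modified Beer-Lambert law, which the sketch below applies for a two-wavelength system; the extinction coefficients and differential pathlength factors shown are placeholder values for illustration only, not tabulated constants.

```python
# Modified Beer-Lambert law for a two-wavelength fNIRS channel:
#   delta_OD(lambda) = (eps_HbO(lambda)*d[HbO] + eps_HbR(lambda)*d[HbR]) * L * DPF(lambda)
# Measuring two wavelengths yields a 2x2 linear system for d[HbO] and d[HbR].
import numpy as np

def mbll(delta_od, ext_coeffs, distance, dpf):
    """delta_od: (2, n_samples) optical-density changes at two wavelengths.
    ext_coeffs: 2x2 matrix, rows = wavelengths, columns = (HbO, HbR).
    distance: source-detector separation; dpf: (2,) pathlength factors.
    Returns (2, n_samples): delta[HbO] and delta[HbR] concentration changes."""
    A = ext_coeffs * (distance * np.asarray(dpf))[:, None]  # effective pathlengths
    return np.linalg.solve(A, delta_od)

# Illustrative (not tabulated) coefficients for a ~760 nm / ~850 nm pair:
ext = np.array([[0.15, 0.39],    # 760 nm: eps_HbO, eps_HbR (placeholder values)
                [0.25, 0.18]])   # 850 nm: eps_HbO, eps_HbR (placeholder values)
dpf = [6.0, 6.0]                 # assumed differential pathlength factors
delta_od = np.array([[0.01], [0.02]])      # one sample at each wavelength
d_hbo, d_hbr = mbll(delta_od, ext, distance=3.0, dpf=dpf)
print(d_hbo, d_hbr)
```

Most of the studies reviewed above report exactly these quantities, oxy-Hb (and sometimes deoxy-Hb) concentration changes per channel, after additional preprocessing such as motion-artifact rejection and band-pass filtering.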